NEP: New Economics Papers on Econometrics
By: | Peter R. Hansen (Stanford University, Department of Economics, 579 Serra Mall, Stanford, CA 94305-6072, USA & CREATES); Asger Lunde (Aarhus University, School of Economics and Management, Bartholins Allé 10, Aarhus, Denmark & CREATES) |
Abstract: | An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized (volatility) measures, such as the realized variance, that are imperfect estimates of actual volatility. In an empirical analysis using realized measures for the DJIA stocks we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our application despite the large sample. Unit root tests based on the IV estimator have better finite sample properties in this context. |
Keywords: | Persistence, Autocorrelation Function, Measurement Error, Instrumental Variables, Realized Variance, Realized Kernel, Volatility |
JEL: | C10 C22 C80 |
Date: | 2010–02–04 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2010-08&r=ecm |
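The core idea is easy to illustrate: when an AR(1) process is observed with additive white measurement noise, OLS on the proxy is attenuated towards zero, while lags dated t-2 and earlier remain valid instruments. A minimal simulated sketch (all parameter values illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a persistent latent AR(1) process and a noisy proxy of it.
T, phi = 5000, 0.98
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(scale=0.1)
y = x + rng.normal(scale=0.5, size=T)      # proxy = latent + measurement error

yc = y - y.mean()
# OLS of y_t on y_{t-1}: attenuated by the noise in the regressor.
phi_ols = (yc[1:] @ yc[:-1]) / (yc[:-1] @ yc[:-1])
# IV with y_{t-2} as instrument: the noise is serially uncorrelated,
# so Cov(y_t, y_{t-2}) / Cov(y_{t-1}, y_{t-2}) = phi.
phi_iv = (yc[2:] @ yc[:-2]) / (yc[1:-1] @ yc[:-2])

print(f"OLS: {phi_ols:.3f}  IV: {phi_iv:.3f}  (true phi = {phi})")
```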
By: | Francq, Christian; Zakoian, Jean-Michel |
Abstract: | We establish the strong consistency and asymptotic normality of the quasi-maximum likelihood estimator of the parameters of a class of multivariate GARCH processes. The conditions are mild and coincide with the minimal ones in the univariate case. In particular, contrary to the current literature on the estimation of multivariate GARCH models, no moment assumption is made on the observed process. Instead, we require strict stationarity, for which a necessary and sufficient condition is established. |
Keywords: | Asymptotic Normality; Conditional Heteroskedasticity; Consistency; Constant Conditional Correlation; Multivariate GARCH; Quasi Maximum Likelihood Estimation; Strict Stationarity Condition |
JEL: | C13 C32 C01 |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:20779&r=ecm |
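For concreteness, the Gaussian quasi-log-likelihood of the simplest member of this class, a bivariate constant-conditional-correlation GARCH(1,1), can be evaluated as follows. A bare-bones sketch with no parameter constraints or stationarity checks, not the authors' estimation code:

```python
import numpy as np

def ccc_garch_neg_qll(params, eps):
    """Negative Gaussian quasi-log-likelihood of a bivariate CCC-GARCH(1,1).

    params = (w1, a1, b1, w2, a2, b2, rho); eps is a (T, 2) array of
    demeaned observations.  H_t = D_t R D_t with D_t = diag(sqrt(h_t)).
    """
    w = np.array([params[0], params[3]])
    a = np.array([params[1], params[4]])
    b = np.array([params[2], params[5]])
    rho = params[6]
    Rinv = np.array([[1.0, -rho], [-rho, 1.0]]) / (1.0 - rho ** 2)
    logdetR = np.log(1.0 - rho ** 2)

    h = eps.var(axis=0).copy()               # start at sample variances
    ll = 0.0
    for e in eps:
        z = e / np.sqrt(h)                   # devolatilized innovations
        ll -= 0.5 * (2.0 * np.log(2.0 * np.pi) + logdetR + np.log(h).sum()
                     + z @ Rinv @ z)
        h = w + a * e ** 2 + b * h           # GARCH(1,1) variance recursions
    return -ll

# QML estimates would come from numerically minimizing this, e.g. with
# scipy.optimize.minimize(ccc_garch_neg_qll, start_values, args=(eps,)).
```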
By: | Marcus J Chambers; Maria Kyriacou |
Abstract: | This paper analyses the properties of jackknife estimators of the first-order autoregressive coefficient when the time series of interest contains a unit root. It is shown that, when the sub-samples do not overlap, the sub-sample estimators have different limiting distributions from the full-sample estimator and, hence, the jackknife estimator in its usual form does not fully eliminate the first-order bias as intended. The joint moment generating function of the numerator and denominator of these limiting distributions is derived and used to calculate the expectations that determine the optimal jackknife weights. Two methods of avoiding this procedure are proposed and investigated, one based on inclusion of an intercept in the regressions, the other based on adjusting the observations in the sub-samples. Extensions to more general augmented Dickey-Fuller (ADF) regressions are also considered. In addition to the theoretical results, extensive simulations reveal the impressive bias reductions that can be obtained with these computationally simple jackknife estimators and highlight the importance of correct lag-length selection in ADF regressions.
Date: | 2010–02–04 |
URL: | http://d.repec.org/n?u=RePEc:esx:essedp:685&r=ecm |
By: | Gordon C.R. Kemp; J.M.C. Santos Silva |
Abstract: | We propose a semi-parametric mode regression estimator for the case in which the variate of interest is continuous and observable over its entire unbounded support. The estimator is semi-parametric in that the conditional mode is specified as a parametric function, but only mild assumptions are made about the nature of the conditional density of interest. We show that the proposed estimator is consistent and has a tractable asymptotic distribution. Simulation results and an empirical illustration are provided to highlight the practicality and usefulness of the estimator.
Date: | 2010–02–25 |
URL: | http://d.repec.org/n?u=RePEc:esx:essedp:686&r=ecm |
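The generic construction behind parametric conditional-mode estimators of this kind maximizes a kernel-smoothed objective around the regression function. A minimal sketch, with a Gaussian kernel and a fixed bandwidth h, both purely illustrative choices rather than the authors' exact estimator (in practice the bandwidth must shrink with the sample size at an appropriate rate):

```python
import numpy as np
from scipy.optimize import minimize

def mode_regression(y, X, h):
    """Estimate beta by maximizing (1/n) sum K((y_i - x_i'beta) / h),
    a kernel-smoothed objective that concentrates mass near the
    conditional mode of y given x."""
    def neg_objective(beta):
        u = (y - X @ beta) / h
        return -np.mean(np.exp(-0.5 * u ** 2))    # Gaussian kernel, constants dropped
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting values
    return minimize(neg_objective, beta0, method="Nelder-Mead").x
```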
By: | Marcus J Chambers |
Abstract: | This paper reports the results of an extensive investigation into the use of the jackknife as a method of estimation in stationary autoregressive models. In addition to providing some general theoretical results concerning jackknife methods, it shows that a method based on non-overlapping sub-intervals works particularly well and is capable of reducing bias and root mean squared error (RMSE) compared to ordinary least squares (OLS), subject to a suitable choice of the number of sub-samples, rules-of-thumb for which are provided. The jackknife estimators also outperform OLS when the distribution of the disturbances departs from normality and when it is subject to autoregressive conditional heteroskedasticity. Furthermore, the jackknife estimators are much closer to being median-unbiased than their OLS counterparts.
Date: | 2010–02–04 |
URL: | http://d.repec.org/n?u=RePEc:esx:essedp:684&r=ecm |
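The non-overlapping sub-sample jackknife the paper builds on has a simple closed form: with m sub-samples, the leading bias term cancels in a weighted combination of the full-sample and sub-sample estimators. A sketch for the AR(1) case without intercept, using the textbook weights for a bias expansion in 1/T (the paper's rules-of-thumb govern the choice of m):

```python
import numpy as np

def ar1_ols(y):
    """OLS estimate of the AR(1) coefficient (no intercept)."""
    return (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])

def jackknife_ar1(y, m):
    """Jackknife AR(1) estimator with m non-overlapping sub-samples:
        rho_J = m/(m-1) * rho_full - 1/(m-1) * mean(rho_sub),
    which removes the O(1/T) bias term under textbook assumptions."""
    rho_full = ar1_ols(y)
    rho_sub = np.mean([ar1_ols(s) for s in np.array_split(y, m)])
    return (m * rho_full - rho_sub) / (m - 1)
```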
By: | Grendar, Marian (Department of mathematics, FPV UMB, Tajovského 40, 974 01 Banská Bystrica, Slovakia. Institute of Mathematics and CS, Slovak Academy of Sciences (SAS) and UMB, Banská Bystrica. Institute of Measurement Sciences SAS, Bratislava, Slovakia.); Judge, George G. (University of California, Berkeley. Dept of agricultural and resource economics) |
Abstract: | Methods, like Maximum Empirical Likelihood (MEL), that operate within the Empirical Estimating Equations (E3) approach to estimation and inference are challenged by the Empty Set Problem (ESP). We propose to return from E3 back to the Estimating Equations, and to use the Maximum Likelihood method. In the discrete case the Maximum Likelihood with Estimating Equations (ML-EE) method avoids ESP. In the continuous case, how to make ML-EE operational is an open question. Instead, we propose a Patched Empirical Likelihood, and demonstrate that it avoids ESP. These methods enjoy, in general, the same asymptotic properties as MEL.
Keywords: | estimation theory, econometrics |
Date: | 2010–01 |
URL: | http://d.repec.org/n?u=RePEc:are:cudare:1094&r=ecm |
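The Empty Set Problem is concrete: the inner MEL problem has no feasible probability vector whenever zero lies outside the convex hull of the estimating-function values. A scalar illustration with a hypothetical moment condition:

```python
import numpy as np

def el_feasible_scalar(g):
    """With a scalar estimating function, the MEL inner problem
    { p >= 0, sum p_i = 1, sum p_i * g_i = 0 } has a solution iff
    zero lies in the convex hull of the g_i, i.e. min g <= 0 <= max g.
    (For vector-valued g the same convex-hull condition applies.)"""
    return g.min() <= 0.0 <= g.max()

# Moment condition E[x - theta] = 0 evaluated at two candidate thetas:
x = np.array([1.0, 2.0, 3.0])
print(el_feasible_scalar(x - 5.0))   # False: ESP, the MEL objective is undefined
print(el_feasible_scalar(x - 2.0))   # True: feasible
```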
By: | Thomas Barrios; Rebecca Diamond; Guido W. Imbens; Michal Kolesar |
Abstract: | It is standard practice in empirical work to allow for clustering in the error covariance matrix if the explanatory variables of interest vary at a more aggregate level than the units of observation. Often, however, the structure of the error covariance matrix is more complex, with correlations varying in magnitude within clusters, and not vanishing between clusters. Here we explore the implications of such correlations for the actual and estimated precision of least squares estimators. We show that, with equal-sized clusters, if the covariate of interest is randomly assigned at the cluster level, accounting only for non-zero covariances at the cluster level, and ignoring correlations between clusters, leads to valid standard errors and confidence intervals. However, in many cases this may not suffice. For example, state policies exhibit substantial spatial correlations. As a result, ignoring spatial correlations in outcomes beyond those accounted for by the clustering at the state level may well bias standard errors. We illustrate our findings using the 5% public use census data. Based on these results we recommend that researchers assess the extent of spatial correlations in explanatory variables beyond state-level clustering, and, if such correlations are present, take into account spatial correlations beyond the clustering correlations typically accounted for.
JEL: | C01 C1 C31 |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:15760&r=ecm |
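The benchmark the paper starts from is the standard cluster-robust (Liang-Zeger) variance estimator, which sums score outer products within clusters and ignores all between-cluster correlation, which is exactly the case the authors warn about when outcomes are spatially correlated across clusters. A compact sketch:

```python
import numpy as np

def cluster_robust_vcov(X, y, cluster):
    """OLS with the standard cluster-robust covariance:
        V = (X'X)^{-1} [ sum_g X_g' e_g e_g' X_g ] (X'X)^{-1},
    where g indexes clusters.  Correlation *between* clusters
    (e.g. spatial correlation across states) is ignored."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster):
        sg = X[cluster == g].T @ e[cluster == g]   # cluster score
        meat += np.outer(sg, sg)
    return beta, XtX_inv @ meat @ XtX_inv
```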
By: | Klarl, Torben |
Abstract: | The aim of this paper is to introduce a new model selection mechanism for cross-sectional spatial models. This method is more flexible than the approach proposed by Florax et al. (2003) since it controls for spatial dependence as well as for spatial heterogeneity. In particular, Bayesian and Maximum-Likelihood (ML) estimation methods are employed for model selection. Furthermore, higher-order spatial influence is considered. The proposed method is then used to identify knowledge spillovers from German NUTS-2 regional data. One key result of the study is that spatial heterogeneity matters. Thus, robust estimation can be achieved by controlling for both phenomena.
Keywords: | Spatial econometrics, Bayesian spatial econometrics, Spatial heterogeneity
JEL: | C11 C31 C52 |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:zbw:zewdip:10005&r=ecm |
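Specification searches in this tradition start from diagnostics for spatial dependence in OLS residuals; Moran's I is the classic one, shown below for reference (the paper's Bayesian/ML selection mechanism, which also handles spatial heterogeneity, is considerably richer):

```python
import numpy as np

def morans_i(e, W):
    """Moran's I for residuals e with spatial weight matrix W:
        I = (n / S0) * (e' W e) / (e' e),  S0 = sum of all weights.
    Values far from its (slightly negative) expectation under
    independence indicate spatial dependence."""
    n, S0 = len(e), W.sum()
    return (n / S0) * (e @ W @ e) / (e @ e)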
By: | Eva Dettmann; Claudia Becker; Christian Schmeißer |
Abstract: | The study contributes to the development of 'standards' for the application of matching algorithms in empirical evaluation studies. The focus is on the first step of the matching procedure, the choice of an appropriate distance function. Supplementary to most former studies, the simulation is strongly based on empirical evaluation situations. This reality orientation induces the focus on small samples. Furthermore, variables with different scale levels must be considered explicitly in the matching process. The choice of the analysed distance functions is determined by the results of former theoretical studies and recommendations in the empirical literature. Thus, in the simulation, two balancing scores (the propensity score and the index score) and the Mahalanobis distance are considered. Additionally, aggregated statistical distance functions not yet used for empirical evaluation are included. The matching outcomes are compared using non-parametric scale-specific tests for identical distributions of the characteristics in the treatment and the control groups. The simulation results show that, in small samples, aggregated statistical distance functions are the better choice for summarising similarities in differently scaled variables compared to the commonly used measures.
Keywords: | distance functions, matching, microeconometric evaluation, propensity score, simulation
JEL: | C14 C15 C52 |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:iwh:dispap:3-10&r=ecm |
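Of the distance functions compared, the Mahalanobis distance is the easiest to make concrete: covariate differences are weighted by the inverse covariance matrix before nearest-neighbour matching. A sketch for continuous covariates only (the paper's point is precisely that mixed scale levels call for other, aggregated measures):

```python
import numpy as np

def mahalanobis_match(X_treat, X_control):
    """Nearest-neighbour matching on the Mahalanobis distance
        d(x, z)^2 = (x - z)' S^{-1} (x - z),
    with S the pooled covariance of the covariates.  Returns, for
    each treated unit, the index of its closest control unit."""
    S_inv = np.linalg.inv(np.cov(np.vstack([X_treat, X_control]).T))
    matches = []
    for x in X_treat:
        d = X_control - x
        matches.append(int(np.argmin(np.einsum("ij,jk,ik->i", d, S_inv, d))))
    return matches
```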
By: | Makram El-Shagi |
Abstract: | We develop an evolutionary algorithm to estimate Threshold Vector Error Correction models (TVECM) with more than two cointegrated variables. Since disregarding a threshold in cointegration models renders standard approaches to the estimation of the cointegration vectors inefficient, TVECM necessitate a simultaneous estimation of the cointegration vector(s) and the threshold. When only two cointegrated variables are considered, this is commonly achieved by a grid search. However, grid search quickly becomes computationally infeasible if more than two variables are cointegrated. Therefore, the likelihood function has to be maximized using heuristic approaches. Depending on the precise problem structure, the evolutionary approach developed in the present paper for this purpose saves 90 to 99 per cent of the computation time of a grid search.
Keywords: | Evolutionary Strategy, Genetic Algorithm, TVECM
JEL: | C61 C32 |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:iwh:dispap:1-10&r=ecm |
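A generic (mu + lambda) evolution strategy conveys the idea: keep an elite of candidate parameter vectors (which would here stack the free cointegration coefficients and the threshold), mutate them with Gaussian noise, and retain the best. The sketch below uses a hypothetical bumpy toy loss in place of the concentrated TVECM likelihood and is not the paper's tailored algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def evolution_strategy(loss, x0, sigma0=0.5, pop=40, elite=8, iters=200):
    """Minimal (mu + lambda) evolution strategy: mutate the elite with
    Gaussian noise, keep the best candidates, shrink the mutation scale."""
    parents = x0 + sigma0 * rng.standard_normal((elite, len(x0)))
    sigma = sigma0
    for _ in range(iters):
        kids = np.repeat(parents, pop // elite, axis=0)
        kids = kids + sigma * rng.standard_normal(kids.shape)
        cand = np.vstack([parents, kids])
        order = np.argsort([loss(c) for c in cand])
        parents, sigma = cand[order[:elite]], sigma * 0.97
    return parents[0]

# Toy usage: a multimodal loss in four dimensions, where a fine grid
# search would already be expensive.
loss = lambda v: np.sum((v - 1.0) ** 2) + 0.1 * np.sum(np.sin(20 * v) ** 2)
print(evolution_strategy(loss, x0=np.zeros(4)))   # approaches [1, 1, 1, 1]
```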
By: | Artem Prokhorov (Concordia University) |
Abstract: | Several recent papers (e.g., Newey et al., 2005; Newey and Smith, 2004; Anatolyev, 2005) derive general expressions for the second-order bias of the GMM estimator and its first-order equivalents such as the EL estimator. Except for some simulation evidence, it is unknown how these compare to the second-order bias of QMLE of covariance structure models. The paper derives the QMLE bias formulas for this general class of models. The bias -- identical to the EL second-order bias under normality -- depends on the fourth moments of data and remains the same as for EL even for non-normal data so long as the condition for equal asymptotic efficiency of QMLE and GMM derived in Prokhorov (2009) is satisfied. |
Keywords: | (Q)MLE, GMM, EL, Covariance structures |
JEL: | C13 |
Date: | 2010–01 |
URL: | http://d.repec.org/n?u=RePEc:crd:wpaper:09009&r=ecm |
By: | Exterkate, P.; Dijk, D.J.C. van; Heij, C.; Groenen, P.J.F. (Erasmus Econometric Institute) |
Abstract: | Various ways of extracting macroeconomic information from a data-rich environment are compared with the objective of forecasting yield curves using the Nelson-Siegel model. Five issues in factor extraction are addressed, namely, selection of a subset of the available information, incorporation of the forecast objective in constructing factors, specification of a multivariate forecast objective, data grouping before constructing factors, and selection of the number of factors in a data-driven way. Our empirical results show that each of these features helps to improve forecast accuracy, especially for the shortest and longest maturities. The data-driven methods perform well in relatively volatile periods, when simpler models do not suffice. |
Keywords: | yield curve prediction; Nelson-Siegel model; factor extraction; variable selection
Date: | 2010–02–23 |
URL: | http://d.repec.org/n?u=RePEc:dgr:eureir:1765018254&r=ecm |
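The forecasting target is the three Nelson-Siegel betas; given the factor loadings, the cross-sectional step is plain OLS. A sketch using the decay parameter popularized by Diebold and Li (2006) and made-up yields (the paper's contribution concerns how macro factors driving the betas are extracted, not this step):

```python
import numpy as np

def nelson_siegel_loadings(tau, lam=0.0609):
    """Nelson-Siegel loadings at maturities tau (in months): level,
    slope and curvature.  lam = 0.0609 is the Diebold-Li (2006) value."""
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau), slope, slope - np.exp(-x)])

# Cross-sectional step: recover the betas by OLS on the loadings.
tau = np.array([3.0, 12, 24, 60, 120])
y = np.array([4.1, 4.3, 4.6, 5.0, 5.3])          # illustrative yields, %
beta = np.linalg.lstsq(nelson_siegel_loadings(tau), y, rcond=None)[0]
print(beta)                                       # level, slope, curvature
```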
By: | Giuseppe De Luca (ISFOL); Luigi Franco Peracchi (Tor Vergata University, EIEF) |
Abstract: | We consider a general sample selection model for a single cross-section that simultaneously takes into account selectivity due to unit and item nonresponse, endogeneity problems, and issues related to flexible specification of the relationship of interest. We estimate both parametric and semiparametric specifications of the model. The parametric specification assumes that the unobservables in the model follow a multivariate Gaussian distribution. The semiparametric specification avoids distributional assumptions about the unobservables. We apply the model to the problem of estimating the food Engel curve using data from the first wave of the Survey of Health, Ageing and Retirement in Europe (SHARE).
JEL: | B12 C14 C31 C34 |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:eie:wpaper:1004&r=ecm |
By: | Proietti, Tommaso |
Abstract: | Seasonality is one of the most important features of economic time series. The possibility to abstract from seasonality for the assessment of economic conditions is a widely debated issue. In this paper we propose a strategy for assessing the role of seasonal adjustment in business cycle measurement. In particular, we provide a method for quantifying the contribution of seasonal adjustment to the unreliability of the estimated cycles extracted by popular filters, such as Baxter and King and Hodrick-Prescott. The main conclusion is that the contribution is larger around the turning points of the series and at the extremes of the sample period; moreover, it is much more sizeable for high-pass filters, like the Hodrick-Prescott filter, which retain to a great extent the high-frequency fluctuations in a time series, the latter being the ones that are more affected by seasonal adjustment. If a band-pass component is considered, the effect is smaller. Finally, we discuss the role of forecast extensions and the prediction of the cycle. For the time series of industrial production considered in the illustration, it is not possible to provide a reliable estimate of the cycle at the end of the sample.
Keywords: | Linear filters; Unobserved Components; Seasonal Adjustment; Reliability
JEL: | E32 C22 |
Date: | 2010–02–21 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:20868&r=ecm |
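The Hodrick-Prescott filter discussed here has a convenient closed form, which makes its high-pass character easy to inspect. A minimal implementation:

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott trend: tau minimizes
        sum (y_t - tau_t)^2 + lam * sum (second difference of tau)^2,
    with closed-form solution tau = (I + lam * D'D)^{-1} y, where D is
    the (T-2) x T second-difference matrix.  lam = 1600 is the usual
    quarterly setting; the cycle is y - tau."""
    T = len(y)
    D = np.diff(np.eye(T), n=2, axis=0)          # second-difference operator
    tau = np.linalg.solve(np.eye(T) + lam * (D.T @ D), y)
    return tau, y - tau                          # trend, cycle
```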
By: | Sarafidis, Vasilis; Weber, Neville |
Abstract: | This paper proposes a partially heterogeneous framework for the analysis of panel data with fixed T, based on the concept of "partitional clustering". In particular, the population of cross-sectional units is grouped into clusters, such that parameter homogeneity is maintained only within clusters. To determine the (unknown) number of clusters we propose an information-based criterion which, as we show, is strongly consistent - i.e. it selects the true number of clusters with probability one as N approaches infinity. Simulation experiments show that the proposed criterion performs well even with moderate N and that the resulting parameter estimates are close to the true values. We apply the method to a panel data set of commercial banks in the US and find four clusters, with significant differences in the slope parameters across clusters.
Keywords: | Partial heterogeneity; partitional clustering; information-based criterion; model selection |
JEL: | C13 C51 C33 |
Date: | 2009–12–08 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:20814&r=ecm |
By: | Picchio, M.; Mussida, C. (Tilburg University, Center for Economic Research) |
Abstract: | Sizeable gender differences in employment rates are observed in many countries. Sample selection into the workforce might therefore be a relevant issue when estimating gender wage gaps. This paper proposes a new semi-parametric estimator of densities in the presence of covariates which incorporates sample selection. We describe a simulation algorithm to implement counterfactual comparisons of densities. The proposed methodology is used to investigate the gender wage gap in Italy. It is found that, when sample selection is taken into account, the gender wage gap widens, especially at the bottom of the wage distribution. Explanations are offered for this empirical finding.
Keywords: | gender wage gap; hazard function; sample selection; glass ceiling; sticky floor
JEL: | C21 C41 J16 J31 J71 |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:dgr:kubcen:201016&r=ecm |
By: | Hess, Wolfgang (Department of Economics, Lund University); Persson, Maria (Department of Economics, Lund University) |
Abstract: | The recent literature on the duration of trade has predominantly analyzed the determinants of trade flow durations using Cox proportional hazards models. The purpose of this paper is to show why it is inappropriate to analyze the duration of trade with continuous-time models such as the Cox model, and to propose alternative discrete-time models which are more suitable for estimation. Briefly, the Cox model has three major drawbacks when applied to large trade data sets. First, it faces problems in the presence of many tied duration times, leading to biased coefficient estimates and standard errors. Second, it is difficult to properly control for unobserved heterogeneity, which can result in spurious duration dependence and parameter bias. Third, the Cox model imposes the restrictive and empirically questionable assumption of proportional hazards. By contrast, with discrete-time models there is no problem handling ties; unobserved heterogeneity can be controlled for without difficulty; and the restrictive proportional hazards assumption can easily be bypassed. By replicating an influential study by Besedeš and Prusa from 2006, but employing discrete-time models as well as the original Cox model, we find empirical support for each of these arguments against the Cox model. Moreover, when comparing estimation results obtained from a Cox model and our preferred discrete-time specification, we find significant differences in both the predicted hazard rates and the estimated effects of explanatory variables on the hazard. In other words, the choice between models affects the conclusions that can be drawn. |
Keywords: | Duration of Trade; Continuous-Time versus Discrete-Time Hazard Models; Proportional Hazards; Unobserved Heterogeneity |
JEL: | C41 F10 F14 |
Date: | 2010–02–19 |
URL: | http://d.repec.org/n?u=RePEc:hhs:lunewp:2010_001&r=ecm |
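The discrete-time alternative the authors advocate amounts to expanding each trade spell into one record per period at risk and fitting a binary-response model for the hazard; ties are then handled automatically, and a frailty term can be added for unobserved heterogeneity. A simulated sketch with made-up spells (logit link for simplicity; a cloglog link gives the grouped proportional-hazards analogue):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Simulate spells with a per-period failure hazard increasing in x,
# censored at 10 periods (all numbers illustrative).
n = 200
x = rng.standard_normal(n)
durations, events = [], []
for xi in x:
    p = 1.0 / (1.0 + np.exp(-(-2.0 + 0.8 * xi)))   # per-period hazard
    t = 1
    while rng.random() > p and t < 10:
        t += 1
    durations.append(t)
    events.append(int(t < 10))                      # 1 = spell ended, 0 = censored

# Person-period expansion: one row per spell-period at risk;
# y = 1 only in the period in which the spell ends.
rows = [{"t": t, "y": int(e and t == d), "x": xi}
        for d, e, xi in zip(durations, events, x)
        for t in range(1, d + 1)]
pp = pd.DataFrame(rows)

# Discrete-time hazard model: logit of failure on duration and covariates.
X = sm.add_constant(pd.DataFrame({"logt": np.log(pp["t"]), "x": pp["x"]}))
fit = sm.Logit(pp["y"], X).fit(disp=0)
print(fit.params)   # coefficient on x should be near 0.8
```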
By: | Tom Engsted (CREATES, University of Aarhus, Building 1326, DK-8000 Aarhus C); Thomas Q. Pedersen (CREATES, University of Aarhus, Building 1326, DK-8000 Aarhus C); Carsten Tanggaard (CREATES, University of Aarhus, Building 1326, DK-8000 Aarhus C) |
Abstract: | Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component in their analysis is not "cashflow news" but "interest rate news", which should not be zero. Consequently, in contrast to what Chen and Zhao claim, their decomposition does not serve as a valid caution against VAR based decompositions. Second, we point out that in order for VAR based decompositions to be valid, the asset price needs to be included as a state variable. In parts of Chen and Zhao's analysis the price does not appear as a state variable, thus rendering those parts of their analysis invalid. Finally, we clarify the intriguing issue of the role of the residual component in equity return decompositions. In a properly specified VAR, it makes no difference whether return news and dividend news are both computed directly or one of them is backed out as a residual.
Keywords: | Return variance decomposition, news components, VAR model, information set, predictive variables, redundant models |
JEL: | C32 G12 |
Date: | 2010–02–01 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2010-09&r=ecm |
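The mechanics at issue are those of the Campbell (1991) VAR return decomposition. A sketch showing how discount-rate news is computed and how cash-flow news can be obtained either directly or backed out as the residual (rho and the variable ordering are illustrative conventions, not the paper's specific setup):

```python
import numpy as np

def return_news(A, u, rho=0.96):
    """Campbell (1991) decomposition from a VAR z_{t+1} = A z_t + u_{t+1},
    with the return ordered first (selection vector e1).  Discount-rate
    news is  e1' rho A (I - rho A)^{-1} u,  and cash-flow news is the
    residual  (e1' + e1' rho A (I - rho A)^{-1}) u.  In a well-specified
    VAR the direct and residual routes to each component coincide."""
    k = A.shape[0]
    e1 = np.zeros(k); e1[0] = 1.0
    lam = e1 @ (rho * A) @ np.linalg.inv(np.eye(k) - rho * A)
    dr_news = lam @ u
    cf_news = (e1 + lam) @ u
    return cf_news, dr_news      # unexpected return = cf_news - dr_news
```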
By: | Matteo Fragetta (University of Salerno); Giovanni Melina (Department of Economics, Mathematics & Statistics, Birkbeck) |
Abstract: | We apply graphical modelling theory to identify fiscal policy shocks in SVAR models of the US economy. Unlike other econometric approaches, which achieve identification by relying on potentially contentious a priori assumptions, graphical modelling is a data-based tool. Our results are in line with Keynesian theoretical models, being also quantitatively similar to those obtained in the recent SVAR literature à la Blanchard and Perotti (2002), and contrast with neoclassical real business cycle predictions. Stability checks confirm that our findings are not driven by sample selection.
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:bbk:bbkefp:1006&r=ecm |