on Econometrics |
By: | B. Jungbacker (VU University Amsterdam); S.J. Koopman (VU University Amsterdam); M. van der Wel (Erasmus University Rotterdam, and CREATES) |
Abstract: | We develop a new model representation for high-dimensional dynamic multi-factor models. It allows the Kalman filter and related smoothing methods to produce optimal estimates in a computationally efficient way in the presence of missing data. We discuss the model in detail together with the implementation of methods for signal extraction and parameter estimation. The computational gains of the new devices are presented based on simulated data-sets with varying numbers of missing entries. |
Keywords: | High-dimensional vector series; Kalman Filter; Maximum likelihood |
JEL: | C33 C43 |
Date: | 2009–02–12 |
URL: | http://d.repec.org/n?u=RePEc:dgr:uvatin:20090010&r=ecm |
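For readers who want a concrete point of reference for the filtering problem described in the abstract above, here is a minimal sketch of a standard Kalman filter that simply drops the missing rows of the observation vector at each update step. It is not the authors' low-dimensional model representation (their paper is precisely about avoiding this brute-force route in high dimensions); the dimensions, loading matrix `Z` and noise variances in the toy usage are made up for illustration.

```python
import numpy as np

def kalman_filter_missing(y, Z, H, T, Q, a0, P0):
    """Standard Kalman filter for y_t = Z a_t + e_t, a_{t+1} = T a_t + eta_t.
    Rows of y that are NaN at time t are dropped from the update step.
    Returns filtered states and the Gaussian log-likelihood."""
    n = y.shape[0]
    a, P = a0.copy(), P0.copy()
    loglik = 0.0
    filtered = np.zeros((n, a0.size))
    for t in range(n):
        obs = ~np.isnan(y[t])                      # indicator of observed entries
        if obs.any():
            Zt, Ht = Z[obs], H[np.ix_(obs, obs)]
            v = y[t, obs] - Zt @ a                 # prediction error
            F = Zt @ P @ Zt.T + Ht                 # prediction error variance
            K = P @ Zt.T @ np.linalg.inv(F)        # Kalman gain
            a = a + K @ v
            P = P - K @ Zt @ P
            _, logdet = np.linalg.slogdet(F)
            loglik += -0.5 * (obs.sum() * np.log(2 * np.pi) + logdet
                              + v @ np.linalg.solve(F, v))
        filtered[t] = a
        a, P = T @ a, T @ P @ T.T + Q              # time update
    return filtered, loglik

# toy usage with made-up dimensions: 20 series, 2 factors, some entries missing
rng = np.random.default_rng(0)
Z = rng.normal(size=(20, 2))
y = rng.normal(size=(60, 20)); y[5:8, :10] = np.nan
f, ll = kalman_filter_missing(y, Z, np.eye(20), 0.9 * np.eye(2), np.eye(2),
                              np.zeros(2), np.eye(2))
```

With many series and missing entries the matrix `F` inverted above becomes large at every step, which, as the abstract indicates, the authors' representation is designed to avoid.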
By: | David Peel; Ivan Paya; E Pavlidis |
Abstract: | The specification of Smooth Transition Regression models consists of a sequence of tests, which are typically based on the assumption of i.i.d. errors. In this paper we examine the impact of conditional heteroskedasticity and investigate the performance of several heteroskedasticity robust versions. Simulation evidence indicates that conventional tests can frequently result in finding spurious nonlinearity. Conversely, when the true process is nonlinear in mean the tests appear to have low size-adjusted power and can lead to the selection of misspecified models. The above deficiencies also hold for tests based on Heteroskedasticity Consistent Covariance Matrix Estimators but not for the Fixed Design Wild Bootstrap. We highlight the importance of robust inference through empirical applications. |
Keywords: | Time Series, Robust Linearity Test, Heteroskedasticity Consistent Covariance Matrix Estimator, Wild Bootstrap, Monte Carlo Simulation |
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:lan:wpaper:005913&r=ecm |
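As a rough illustration of the kind of robust inference the abstract advocates, here is a sketch of a fixed-design wild bootstrap applied to an LM-type linearity test built from a cubic expansion in a transition variable. The auxiliary regression, the Rademacher weights and the statistic are standard textbook choices assumed for illustration; they are not necessarily the exact tests examined in the paper.

```python
import numpy as np

def lm_linearity_stat(y, x, s):
    """LM-type statistic: regress y on (1, x), then test whether powers of the
    transition variable s (interacted with x) add explanatory power."""
    n = len(y)
    X0 = np.column_stack([np.ones(n), x])
    u = y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]       # restricted residuals
    X1 = np.column_stack([X0, x * s, x * s**2, x * s**3])    # cubic expansion
    e = u - X1 @ np.linalg.lstsq(X1, u, rcond=None)[0]
    ssr0, ssr1 = u @ u, e @ e
    return n * (ssr0 - ssr1) / ssr0

def wild_bootstrap_pvalue(y, x, s, n_boot=499, seed=0):
    """Fixed-design wild bootstrap: regressors are held fixed, restricted residuals
    are flipped with Rademacher weights to preserve heteroskedasticity."""
    rng = np.random.default_rng(seed)
    n = len(y)
    X0 = np.column_stack([np.ones(n), x])
    beta0 = np.linalg.lstsq(X0, y, rcond=None)[0]
    u = y - X0 @ beta0
    stat = lm_linearity_stat(y, x, s)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        y_star = X0 @ beta0 + u * rng.choice([-1.0, 1.0], size=n)
        boot[b] = lm_linearity_stat(y_star, x, s)
    return stat, (boot >= stat).mean()
```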
By: | Jacques Huguenin; Florian Pelgrin; Alberto Holly |
Abstract: | In this paper, we develop a new numerical method to estimate a multivariate probit model. To this end, we derive a new decomposition of normal multivariate integrals that has two appealing properties. First, the decomposition may be written as the sum of normal multivariate integrals, in which the highest dimension of the integrands is reduced relative to the initial problem. Second, the domains of integration are bounded and delimited by the correlation coefficients. Application of a Gauss-Legendre quadrature rule to the exact likelihood function of lower dimension allows for a major reduction of computing time while simultaneously obtaining consistent and efficient estimates for both the slope and the scale parameters. A Monte Carlo study shows that the finite sample and asymptotic properties of our method compare extremely favorably to the maximum simulated likelihood estimator in terms of both bias and root mean squared error. |
Keywords: | Multivariate Probit Model, Simulated and Full Information Maximum Likelihood, Multivariate Normal Distribution, Simulations |
JEL: | C1 C3 |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:hem:wpaper:0902&r=ecm |
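The paper's decomposition of multivariate normal integrals is not reproduced here, but the flavour of "reduce the dimension, then apply Gauss-Legendre quadrature" can be illustrated in the bivariate case using Plackett's identity, which expresses the correlation effect as a one-dimensional integral. The function name and node count below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def bvn_cdf_gauss_legendre(h, k, rho, n_nodes=16):
    """P(X <= h, Y <= k) for a standard bivariate normal with correlation rho,
    via Plackett's identity: Phi2(h,k;rho) = Phi(h)Phi(k) + int_0^rho phi2(h,k;r) dr,
    with the one-dimensional integral computed by Gauss-Legendre quadrature."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    r = 0.5 * rho * (nodes + 1.0)            # map [-1, 1] onto [0, rho]
    w = 0.5 * rho * weights
    dens = np.exp(-(h**2 - 2*r*h*k + k**2) / (2*(1 - r**2))) / (2*np.pi*np.sqrt(1 - r**2))
    return norm.cdf(h) * norm.cdf(k) + np.sum(w * dens)

# sanity check against an independent Monte Carlo estimate
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=200_000)
print(bvn_cdf_gauss_legendre(0.3, -0.2, 0.6))
print(np.mean((z[:, 0] <= 0.3) & (z[:, 1] <= -0.2)))
```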
By: | Imbens, Guido W. (Harvard University); Kalyanaraman, Karthik (Harvard University) |
Abstract: | We investigate the problem of optimal choice of the smoothing parameter (bandwidth) for the regression discontinuity estimator. We focus on estimation by local linear regression, which was shown to be rate optimal (Porter, 2003). Investigation of an expected-squared-error-loss criterion reveals the need for regularization. We propose an optimal, data dependent, bandwidth choice rule. We illustrate the proposed bandwidth choice using data previously analyzed by Lee (2008), as well as in a simulation study based on this data set. The simulations suggest that the proposed rule performs well. |
Keywords: | optimal bandwidth selection, local linear regression, regression discontinuity designs |
JEL: | C14 |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp3995&r=ecm |
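The estimator whose bandwidth is being chosen in the abstract above is local linear regression on each side of the cutoff. A minimal sketch of that estimator for a given bandwidth follows; the data-dependent bandwidth rule that is the paper's contribution is not implemented here, and the triangular kernel is an assumption.

```python
import numpy as np

def rd_local_linear(y, x, cutoff=0.0, bandwidth=1.0):
    """Sharp RD estimate: fit weighted local linear regressions on each side of
    the cutoff with a triangular kernel and the given bandwidth, and take the
    difference of the two intercepts at the cutoff."""
    def side_fit(mask):
        xs, ys = x[mask] - cutoff, y[mask]
        w = np.clip(1.0 - np.abs(xs) / bandwidth, 0.0, None)   # triangular kernel
        keep = w > 0
        X = np.column_stack([np.ones(keep.sum()), xs[keep]])
        W = np.diag(w[keep])
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ ys[keep])
        return beta[0]                                          # intercept at cutoff
    return side_fit(x >= cutoff) - side_fit(x < cutoff)
```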
By: | Markku Lanne; Helmut Luetkepohl; Katarzyna Maciejowska |
Abstract: | It is argued that in structural vector autoregressive (SVAR) analysis a Markov regime switching (MS) property can be exploited to identify shocks if the reduced form error covariance matrix varies across regimes. The model setup is formulated and discussed and it is shown how it can be used to test restrictions which are just-identifying in a standard structural vector autoregressive analysis. The approach is illustrated by two SVAR examples from the literature whose features can be accommodated by the MS structure. |
Keywords: | Cointegration, Markov regime switching model, vector error correction model, structural vector autoregression, mixed normal distribution |
JEL: | C32 |
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2009/06&r=ecm |
By: | Vladimir Kuzin; Massimiliano Marcellino; Christian Schumacher |
Abstract: | This paper discusses pooling versus model selection for now- and forecasting in the presence of model uncertainty with large, unbalanced datasets. Empirically, unbalanced data is pervasive in economics and typically due to different sampling frequencies and publication delays. Two model classes suited in this context are factor models based on large datasets and mixed-data sampling (MIDAS) regressions with few predictors. The specification of these models requires several choices related to, amongst others, the factor estimation method and the number of factors, lag length and indicator selection. Thus, there are many sources of mis-specification when selecting a particular model, and an alternative could be pooling over a large set of models with different specifications. We evaluate the relative performance of pooling and model selection for now- and forecasting quarterly German GDP, a key macroeconomic indicator for the largest country in the euro area, with a large set of about one hundred monthly indicators. Our empirical findings provide strong support for pooling over many specifications rather than selecting a specific model. |
Keywords: | nowcasting, forecast combination, forecast pooling, model selection, mixed-frequency data, factor models, MIDAS |
JEL: | E37 C53 |
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2009/13&r=ecm |
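A stripped-down illustration of the pooling-versus-selection comparison: given a panel of candidate one-step-ahead forecasts, either average them (pooling) or re-pick the recently best model each period (selection). This is a generic sketch, not the MIDAS and factor-model specifications evaluated in the paper; the rolling window length is arbitrary.

```python
import numpy as np

def pooled_and_selected_forecasts(forecasts, actuals, window=8):
    """forecasts: (n_periods, n_models) array of one-step-ahead forecasts.
    Returns the equal-weight pool and a 'select the recent best' forecast,
    where the best model is re-chosen each period on a rolling RMSE window."""
    n, m = forecasts.shape
    pooled = forecasts.mean(axis=1)                         # simple average pool
    selected = np.full(n, np.nan)
    for t in range(window, n):
        err = forecasts[t - window:t] - actuals[t - window:t, None]
        best = np.argmin(np.sqrt((err ** 2).mean(axis=0)))  # lowest rolling RMSE
        selected[t] = forecasts[t, best]
    return pooled, selected
```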
By: | Consolo, Agostino; Favero, Carlo A.; Paccagnini, Alessia |
Abstract: | Dynamic Stochastic General Equilibrium (DSGE) models are now considered attractive by the profession not only from the theoretical perspective but also from an empirical standpoint. As a consequence of this development, methods for diagnosing the fit of these models are being proposed and implemented. In this article we illustrate how the concept of statistical identification, that was introduced and used by Spanos (1990) to criticize traditional evaluation methods of Cowles Commission models, could be relevant for DSGE models. We conclude that the recently proposed model evaluation method, based on the DSGE-VAR(λ), might not satisfy the condition for statistical identification. However, our application also shows that the adoption of a FAVAR as a statistically identified benchmark leaves unaltered the support of the data for the DSGE model and that a DSGE-FAVAR can be an optimal forecasting model. |
Keywords: | Bayesian analysis; Dynamic stochastic general equilibrium model; Factor-Augmented Vector Autoregression; Model evaluation |
JEL: | C11 C52 |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:7176&r=ecm |
By: | Jordà, Òscar; Marcellino, Massimiliano |
Abstract: | A path forecast refers to the sequence of forecasts 1 to H periods into the future. A summary of the range of possible paths the predicted variable may follow for a given confidence level requires construction of simultaneous confidence regions that adjust for any covariance between the elements of the path forecast. This paper shows how to construct such regions with the joint predictive density and Scheffé's (1953) S-method. In addition, the joint predictive density can be used to construct simple statistics to evaluate the local internal consistency of a forecasting exercise of a system of variables. Monte Carlo simulations demonstrate that these simultaneous confidence regions provide approximately correct coverage in situations where traditional error bands, based on the collection of marginal predictive densities for each horizon, are vastly off mark. The paper showcases these methods with an application to the most recent monetary episode of interest rate hikes in the U.S. macroeconomy. |
Keywords: | error bands; path forecast; simultaneous confidence region |
JEL: | C32 C52 C53 |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:7009&r=ecm |
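A sketch of the basic idea of simultaneous path-forecast bands: instead of the usual pointwise normal quantile, scale the marginal standard errors by the square root of a chi-squared quantile with H degrees of freedom, so the whole H-step path is covered jointly. This Scheffé-projection-style rectangular band is only a rough stand-in for the construction in the paper, which works with the full joint predictive density.

```python
import numpy as np
from scipy.stats import chi2

def scheffe_path_bands(path_forecast, path_cov, level=0.95):
    """Conservative simultaneous bands for an H-step path forecast: scale each
    marginal standard error by sqrt(chi2_{H,level}) rather than a normal quantile,
    so the whole path lies inside the band with at least the stated probability."""
    H = len(path_forecast)
    c = np.sqrt(chi2.ppf(level, df=H))
    se = np.sqrt(np.diag(path_cov))
    return path_forecast - c * se, path_forecast + c * se
```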
By: | Ackerberg, Daniel; Devereux, Paul J. |
Abstract: | We introduce two simple new variants of the Jackknife Instrumental Variables (JIVE) estimator for overidentified linear models and show that they are superior to the existing JIVE estimator, significantly improving on its small sample bias properties. We also compare our new estimators to existing Nagar (1959) type estimators. We show that, in models with heteroskedasticity, our estimators have superior properties to both the Nagar estimator and the related B2SLS estimator suggested in Donald and Newey (2001). These theoretical results are verified in a set of Monte-Carlo experiments and then applied to estimating the returns to schooling using actual data. |
Keywords: | JIVE; weak instruments |
JEL: | L24 L40 O31 O34 |
Date: | 2008–08 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:6926&r=ecm |
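For context, the baseline estimator that the abstract's new variants improve upon is the jackknife IV estimator, in which each observation's first-stage fitted value is computed with that observation left out. A sketch of the standard JIVE1 form (using the leave-one-out leverage identity) is below; the authors' improved variants are not implemented here.

```python
import numpy as np

def jive1(y, X, Z):
    """Jackknife IV (JIVE1): the first-stage fitted value for observation i is
    computed with observation i left out, removing the own-observation
    correlation that biases 2SLS when instruments are many or weak.
    y: (n,) outcome, X: (n, k) endogenous regressors, Z: (n, l) instruments, l >= k."""
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)                    # projection onto Z
    h = np.diag(P)                                           # leverages
    Xhat = (P @ X - h[:, None] * X) / (1.0 - h[:, None])     # leave-one-out fitted X
    return np.linalg.solve(Xhat.T @ X, Xhat.T @ y)
```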
By: | Alberto Abadie; Guido Imbens |
Abstract: | Matching estimators are widely used in statistical data analysis. However, the distribution of matching estimators has been derived only for particular cases (Abadie and Imbens, 2006). This article establishes a martingale representation for matching estimators. This representation allows the use of martingale limit theorems to derive the asymptotic distribution of matching estimators. As an illustration of the applicability of the theory, we derive the asymptotic distribution of a matching estimator when matching is carried out without replacement, a result previously unavailable in the literature. |
JEL: | C13 C14 C21 |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:14756&r=ecm |
By: | Rangan Gupta (Department of Economics, University of Pretoria); Alain Kabundi (Department of Economics and Econometrics, University of Johannesburg) |
Abstract: | This paper analyzes the ability of principal component regressions and Bayesian regression methods under Gaussian and double-exponential priors in forecasting the real house price of the United States (US), based on a monthly dataset of 112 macroeconomic variables. Using an in-sample period of 1992:01 to 2000:12, Bayesian regressions are used to forecast real US house prices at the twelve-month-ahead forecast horizon over the out-of-sample period of 2001:01 to 2004:10. In terms of the Mean Square Forecast Errors (MSFEs), our results indicate that a principal component regression with only one factor is best-suited for forecasting the real US house price. Amongst the Bayesian models, the regression based on the double-exponential prior outperforms the model with Gaussian assumptions. |
Keywords: | Bayesian Regressions, Principal Components, Large-Cross Sections |
JEL: | C11 C13 C33 C53 |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:pre:wpaper:200907&r=ecm |
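A bare-bones version of the principal-component regression forecast used as the benchmark in the abstract above: extract factors from the standardized panel by SVD and run a direct h-step regression of the target on the factors. The direct-projection setup and single-factor default are assumptions; the Bayesian regressions under Gaussian and double-exponential priors are not sketched.

```python
import numpy as np

def pc_regression_forecast(X, y, h=12, n_factors=1):
    """Principal-component regression forecast: extract factors from a large
    standardized panel X (T x N) and regress y_{t+h} on the current factors.
    Returns the h-step-ahead forecast made with the last available factor values."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)   # principal components
    F = Xs @ Vt[:n_factors].T
    Ft, yt = F[:-h], y[h:]                              # direct h-step projection
    G = np.column_stack([np.ones(len(Ft)), Ft])
    coef = np.linalg.lstsq(G, yt, rcond=None)[0]
    return coef[0] + F[-1] @ coef[1:]
```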
By: | Devereux, Paul J.; Tripathi, Gautam |
Abstract: | We develop a simple semiparametric framework for combining censored and uncensored samples so that the resulting estimators are consistent, asymptotically normal, and use all information optimally. No nonparametric smoothing is required to implement our estimators. To illustrate our results in an empirical setting, we show how to estimate the effect of changes in compulsory schooling laws on age at first marriage, a variable that is censored for younger individuals. We find positive effects of the laws on age at first marriage but the effects are much smaller than would be inferred if one ignored the censoring problem. Results from a small simulation experiment suggest that the estimator proposed in this paper can work very well in finite samples. |
Keywords: | age at first marriage; censored data; compulsory schooling |
JEL: | C34 J12 |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:6990&r=ecm |
By: | Meenagh, David; Minford, Patrick; Wickens, Michael R |
Abstract: | We use the method of indirect inference, using the bootstrap, to test the Smets and Wouters model of the EU against a VAR auxiliary equation describing their data; the test is based on the Wald statistic. We find that their model generates excessive variance compared with the data. But their model passes the Wald test easily if the errors have the properties assumed by SW but scaled down. We compare it with a New Classical version of the model, which also passes the test easily if the error properties are chosen using New Classical priors (notably excluding shocks to preferences). Both versions have (different) difficulties fitting the data if the actual error properties are used. |
Keywords: | Bootstrap; DSGE Model; Indirect inference; Model of EU; VAR model; Wald statistic |
JEL: | C12 C32 |
Date: | 2008–06 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:6838&r=ecm |
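A schematic of the indirect-inference Wald test described above, with a VAR(1) (without intercept) as the auxiliary model: simulate the structural model repeatedly, fit the auxiliary model to each simulated sample, and locate the data-based auxiliary estimates within the simulated distribution. The function `simulate_model` is a hypothetical user-supplied simulator standing in for the structural model; the choice of auxiliary model and the bootstrap p-value construction are illustrative, not the authors' exact procedure.

```python
import numpy as np

def var1_coeffs(data):
    """Auxiliary model: stack the VAR(1) coefficient matrix into a vector."""
    Y, X = data[1:], data[:-1]
    return np.linalg.lstsq(X, Y, rcond=None)[0].ravel()

def indirect_inference_wald(actual_data, simulate_model, n_sims=500, seed=0):
    """Indirect-inference test: simulate the structural model many times, fit the
    auxiliary VAR to each simulated sample, and compare the data-based VAR
    coefficients with the simulated distribution via a Wald statistic.
    simulate_model(rng) must return one artificial sample (T x k array)."""
    rng = np.random.default_rng(seed)
    sims = np.array([var1_coeffs(simulate_model(rng)) for _ in range(n_sims)])
    mean, cov = sims.mean(axis=0), np.cov(sims, rowvar=False)
    d = var1_coeffs(actual_data) - mean
    wald_data = d @ np.linalg.solve(cov, d)
    wald_sims = np.array([(s - mean) @ np.linalg.solve(cov, s - mean) for s in sims])
    return wald_data, (wald_sims >= wald_data).mean()   # statistic and bootstrap p-value
```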
By: | Miguel A. Delgado; Carlos Velasco |
Abstract: | The construction of asymptotically distribution free time series model specification tests using as statistics the estimated residual autocorrelations is considered from a general viewpoint. We focus our attention on Box-Pierce type tests based on the sum of squares of a few estimated residual autocorrelations. Tests of this type belong to the class defined by quadratic forms of weighted residual autocorrelations, where the weights are suitably transformed so that the resulting tests are asymptotically distribution free. The weights can be optimally chosen to maximize the power function when testing in the direction of local alternatives. The optimal test in this class against MA, AR or Bloomfield alternatives is a Box-Pierce type test based on the sum of squares of a few transformed residual autocorrelations. Such transformations are, in fact, the recursive residuals in the projection of the residual autocorrelations on a certain score function. |
Keywords: | Dynamic regression model, Optimal tests, Recursive residuals, Residual autocorrelation function, Specification tests, Time series models |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:cte:werepe:we090904&r=ecm |
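For reference, the classical Box-Pierce statistic that the abstract takes as its starting point is sketched below. As the abstract explains, when the autocorrelations come from estimated residuals this statistic is not asymptotically distribution free unless the autocorrelations are first suitably transformed; that transformation is the paper's subject and is not implemented here.

```python
import numpy as np
from scipy.stats import chi2

def box_pierce(residuals, n_lags=10):
    """Box-Pierce statistic: n times the sum of squared residual autocorrelations
    at lags 1..n_lags, referred to a chi-squared distribution with n_lags degrees
    of freedom (no adjustment for estimated parameters in this sketch)."""
    e = residuals - residuals.mean()
    n = len(e)
    denom = e @ e
    acf = np.array([e[k:] @ e[:-k] for k in range(1, n_lags + 1)]) / denom
    stat = n * np.sum(acf ** 2)
    return stat, chi2.sf(stat, df=n_lags)
```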
By: | Atiq-ur-Rehman, Atiq-ur-Rehman; Zaman, Asad |
Abstract: | In this paper we highlight the need for new criteria for evaluating the performance of unit root tests. We suggest focusing directly on the reasons that create ambiguity in unit root test results. Two reasons for the unsatisfactory properties of unit root tests can be found in the literature: (i) model misspecification and (ii) observational equivalence. Regarding the first reason, there is an immense literature on the various components of model specification, covering specification techniques, the consequences of misspecification and robust methods. However, complete model specification involves multiple decisions, and most studies of the performance of unit root tests do not address these multiple specification decisions simultaneously. Monte Carlo studies are conditional on implicit specification choices which, by construction, are valid within the Monte Carlo design. For real data, however, the implicit decisions are often not true, and the specification decisions need to be endogenized. A closer match with the real case is possible if multiple specification decisions are endogenized, providing a more reliable measure of the performance of unit root tests. The second problem in differentiating trend stationary and difference stationary processes is the observational equivalence between the two. We suggest exploring data generating processes with different long run dynamics but small sample equivalence, so that a researcher has an idea of the other plausible models for a data set for which some model has been estimated. |
Keywords: | Observational equivalence; model specification; trend stationary; difference stationary |
JEL: | C15 C22 C01 |
Date: | 2008–07 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:13489&r=ecm |
By: | Frale, Cecilia; Marcellino, Massimiliano; Mazzi, Gian Luigi; Proietti, Tommaso |
Abstract: | A continuous monitoring of the evolution of the economy is fundamental for the decisions of public and private decision makers. This paper proposes a new monthly indicator of the euro area real Gross Domestic Product (GDP), with several original features. First, it considers both the output side (six branches of the NACE classification) and the expenditure side (the main GDP components) and combines the two estimates with optimal weights reflecting their relative precision. Second, the indicator is based on information at both the monthly and quarterly level, modelled with a dynamic factor specification cast in state-space form. Third, since estimation of the multivariate dynamic factor model can be numerically complex, computational efficiency is achieved by implementing univariate filtering and smoothing procedures. Finally, special attention is paid to chain-linking and its implications, via a multistep procedure that exploits the additivity of the volume measures expressed at the prices of the previous year. |
Keywords: | Chain-linking; Dynamic factor Models; euro area GDP; Kalman filter and smoother; Multivariate State Space Models; Temporal Disaggregation |
JEL: | C53 E32 E37 |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:7007&r=ecm |
By: | Jean-Marie Dufour; Lynda Khalaf; Maral Kichian |
Abstract: | Using identification-robust methods, the authors estimate and evaluate for Canada and the United States various classes of inflation equations based on generalized structural Calvo-type models. The models allow for different forms of frictions and vary in their assumptions regarding the type of price indexation adopted by firms. Point and confidence-set parameter estimates are obtained based on the inversion of identification-robust test statistics. Focus is maintained on the structural aspect of the model with formal imposition of the restrictions that map the theoretical model into the econometric one. The results show that there is some statistical merit to using indexation-based Calvo-type models for inflation. However, some identification difficulties are also uncovered with considerable uncertainty associated with estimated parameter values. In particular, we find that implausibly-high frequency of price re-optimization values cannot be ruled out from our identification-robust confidence sets. |
Keywords: | Inflation and prices; Econometric and statistical methods |
JEL: | C13 C52 E31 |
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:bca:bocawp:09-7&r=ecm |
By: | Chih-nan Chen (Research Analyst, Center for Multicultural Mental Health Research, Harvard University (cchen@chareresearch.org)); Tsutomu Watanabe (Institute of Economic Research and Research Center for Price Dynamics, Hitotsubashi University (E-mail: tsutomu.w@srv.cc.hit-u.ac.jp)); Tomoyoshi Yabu (Assistant Professor, Graduate School of Systems and Information Engineering, University of Tsukuba, and Institute for Monetary and Economic Studies, Bank of Japan (E-mail: tyabu@sk.tsukuba.ac.jp)) |
Abstract: | The monetary authorities react even to intraday changes in the exchange rate; however, in most cases, intervention data is available only at a daily frequency. This temporal aggregation makes it difficult to identify the effects of interventions on the exchange rate. We propose a new method based on Markov Chain Monte Carlo simulations to cope with this endogeneity problem: We use "data augmentation" to obtain intraday intervention amounts and then estimate the efficacy of interventions using the augmented data. Applying this method to Japanese data, we find that an intervention of one trillion yen moves the yen/dollar rate by 1.7 percent, which is more than twice as large as the magnitude reported in previous studies applying OLS to daily observations. This shows the quantitative importance of the endogeneity problem due to temporal aggregation. |
Keywords: | Foreign exchange intervention, Intraday data, Markov-chain Monte Carlo method, Endogeneity problem, Temporal aggregation |
JEL: | C11 C22 F31 F37 |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:ime:imedps:09-e-06&r=ecm |
By: | Busso, Matias (Inter-American Development Bank); DiNardo, John (University of Michigan); McCrary, Justin (University of California, Berkeley) |
Abstract: | Currently available asymptotic results in the literature suggest that matching estimators have higher variance than reweighting estimators. The extant literature comparing the finite sample properties of matching to specific reweighting estimators, however, has concluded that reweighting performs far worse than even the simplest matching estimator. We resolve this puzzle. We show that the findings from the finite sample analyses are not inconsistent with asymptotic analysis, but are very specific to particular choices regarding the implementation of reweighting, and fail to generalize to settings likely to be encountered in actual empirical practice. In the DGPs studied here, reweighting typically outperforms propensity score matching. |
Keywords: | treatment effects, propensity score, semiparametric efficiency |
JEL: | C14 C21 C52 |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp3998&r=ecm |
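A minimal sketch of one of the reweighting estimators at issue: the normalized inverse-probability-weighting estimate of the average treatment effect on the treated, given an estimated propensity score. Normalizing the control weights to sum to one is exactly the kind of implementation choice the abstract flags as consequential; trimming and the propensity-score model itself are omitted.

```python
import numpy as np

def ipw_att(y, d, pscore):
    """Reweighting (IPW) estimate of the average treatment effect on the treated:
    treated outcomes are averaged directly, control outcomes are reweighted by
    the odds p(x)/(1-p(x)), with the control weights normalized to sum to one."""
    w = pscore * (1 - d) / (1 - pscore)          # odds weights (zero for treated)
    return y[d == 1].mean() - np.sum(w[d == 0] * y[d == 0]) / np.sum(w[d == 0])
```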
By: | Atiq-ur-Rehman, Atiq-ur-Rehman; Zaman, Asad |
Abstract: | We study tests for the location parameter of a random draw from a Cauchy density, focusing on point optimal tests. We develop an analytical technique to compute the critical values and the power curve of a point optimal test, and we study the power properties of various point optimal tests. The problem turns out to be distinctive in that the critical value of a test determines its power properties. We find that, for a given size and any point m in the alternative space, if the critical value of the point optimal test is 1, then the test optimal for that point is the most stringent test. |
Keywords: | Cauchy density; Power Envelope; Location Parameter; Stringent Test |
JEL: | A23 |
Date: | 2008–03 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:13492&r=ecm |
By: | Juan Carlos Escanciano, Javier Hualde (Indiana University Bloomington, Universidad de Navarra) |
Abstract: | The purpose of the present paper is to relate two important concepts of time series analysis, namely, nonlinearity and persistence. Traditional measures of persistence are based on correlations or periodograms, which may be inappropriate under nonlinearity and/or non-Gaussianity. This article proves that nonlinear persistence can be characterized by cumulative measures of dependence. The new cumulative measures are nonparametric, simple to estimate and do not require the use of any smoothing user-chosen parameters. In addition, we propose nonparametric estimates of our measures and establish their limiting properties. Finally, we employ our measures to analyze the nonlinear persistence properties of some international stock market indices, where we find a ubiquitous nonlinear persistence in conditional variance that is not accounted for by popular parametric models or by classical linear measures of persistence. This finding has important economic implications in, e.g., asset pricing and hedging. Conditional variance persistence in bull and bear markets is also analyzed and compared. |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:inu:caeprp:2009-003&r=ecm |
By: | Blass, Asher; Lach, Saul; Manski, Charles |
Abstract: | When data on actual choices are not available, researchers studying preferences sometimes pose choice scenarios and ask respondents to state the actions they would choose if they were to face these scenarios. The data on stated choices are then used to estimate random utility models, as if they are data on actual choices. Stated choices may differ from actual ones because researchers typically provide respondents with less information than they would have facing actual choice problems. Elicitation of choice probabilities overcomes this problem by permitting respondents to express uncertainty about their behavior. This paper shows how to use elicited choice probabilities to estimate random utility models with random coefficients and applies the methodology to estimate preferences for electricity reliability in Israel. |
Keywords: | Choice probabilities; stated choices; WTP for electricity reliability |
JEL: | C2 C25 C42 D12 L51 L94 |
Date: | 2008–11 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:7030&r=ecm |
By: | David S. Lee (Princeton University and NBER); Thomas Lemieux (University of British Columbia and NBER) |
Abstract: | This paper provides an introduction and "user guide" to Regression Discontinuity (RD) designs for empirical researchers. It presents the basic theory behind the research design, details when RD is likely to be valid or invalid given economic incentives, explains why it is considered a "quasi-experimental" design, and summarizes different ways (with their advantages and disadvantages) of estimating RD designs and the limitations of interpreting these estimates. Concepts are discussed using examples drawn from the growing body of empirical research using RD. |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:pri:indrel:1118&r=ecm |
By: | Bonnet, Céline; Dubois, Pierre |
Abstract: | We present a methodology for introducing vertical contracting between manufacturers and retailers into the analysis of pricing strategies on a differentiated product market. This contribution allows price-cost margins to be recovered from estimates of demand parameters under both linear pricing models and two part tariffs. In particular, two types of nonlinear pricing relationships are considered: one where resale price maintenance is used with two part tariff contracts, and one where no resale price maintenance is allowed in two part tariff contracts. The methodology then allows different hypotheses on contracting and pricing relationships between manufacturers and retailers in the supermarket industry to be tested, using exogenous variables assumed to shift the marginal costs of production and distribution. The method is applied empirically to the retail market for bottled water in France. Our empirical evidence shows that manufacturers and retailers use nonlinear pricing contracts, and in particular two part tariff contracts with resale price maintenance. Finally, using the estimates of our structural model, we present simulations of counterfactual policy experiments. |
Keywords: | collusion; competition; differentiated products; double marginalization; manufacturers; non nested tests; retailers; two part tariffs; vertical contracts; water |
JEL: | C12 C33 L13 L81 |
Date: | 2008–07 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:6918&r=ecm |
By: | Artem Prokhorov (Concordia University); Peter Schmidt (Michigan State University) |
Date: | 2009–01–12 |
URL: | http://d.repec.org/n?u=RePEc:crd:wpaper:09002&r=ecm |