on Econometrics |
By: | Gary Koop (University of Strathclyde); Roberto Leon-Gonzalez (National Graduate Institute for Policy Studies); Rodney Strachan (The Australian National University) |
Abstract: | This paper considers the instrumental variable regression model when there is uncertainty about the set of instruments, exogeneity restrictions, the validity of identifying restrictions and the set of exogenous regressors. This uncertainty can result in a huge number of models. To avoid statistical problems associated with standard model selection procedures, we develop a reversible jump Markov chain Monte Carlo algorithm that allows us to do Bayesian model averaging. The algorithm is very flexible and can be easily adapted to analyze any of the different priors that have been proposed in the Bayesian instrumental variables literature. We show how to calculate the probability of any relevant restriction (e.g. the posterior probability that over-identifying restrictions hold) and discuss diagnostic checking using the posterior distribution of discrepancy vectors. We illustrate our methods in a returns-to-schooling application. |
Keywords: | Bayesian, endogeneity, simultaneous equations, reversible jump Markov chain Monte Carlo |
JEL: | C11 C30 |
Date: | 2011–01 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:09_11&r=ecm |
By: | Pierre-Andre Chiappori (Columbia University); Ivana Komunjer (University of California San Diego); Dennis Kristensen (Columbia University) |
Abstract: | This paper derives sufficient conditions for nonparametric transformation models to be identified and develops estimators of the identified components. Our nonparametric identification result is global, and is derived under conditions that are substantially weaker than full independence. In particular, we show that a completeness assumption combined with conditional independence with respect to one of the regressors suffices for the model to be identified. The identification result is also constructive in the sense that it yields explicit expressions of the functions of interest. We show how natural estimators can be developed from these expressions, and analyze their theoretical properties. Importantly, it is demonstrated that the proposed estimator of the unknown transformation function converges at the parametric rate. |
Keywords: | nonparametric identification; transformation models; kernel estimation |
Date: | 2011–01 |
URL: | http://d.repec.org/n?u=RePEc:kud:kuieca:2011_01&r=ecm |
By: | Claudio Morana |
Abstract: | In this paper the fractionally integrated heteroskedastic factor vector autoregressive (FI-HF-VAR) model is introduced. The proposed approach is characterized by minimal pretesting requirements and simplicity of implementation, also in very large systems, performing well independently of integration properties and sources of persistence, i.e. deterministic or stochastic, accounting for common features of different kinds, i.e. common integrated (of the fractional or integer type) or non-integrated stochastic factors, also featuring conditional heteroskedasticity, and common deterministic break processes. The proposed approach allows for accurate investigation of economic time series, from persistence and copersistence analysis to impulse responses and forecast error variance decomposition. Monte Carlo results strongly support the proposed methodology. |
Keywords: | long and short memory, structural breaks, fractionally integrated heteroskedastic factor vector autoregressive model |
JEL: | C22 |
Date: | 2010–12 |
URL: | http://d.repec.org/n?u=RePEc:icr:wpmath:36-2010&r=ecm |
By: | Luc Bauwens; Gary Koop; Dimitris Korobilis; Jeroen Rombouts |
Abstract: | This paper compares the forecasting performance of different models which have been proposed for forecasting in the presence of structural breaks. These models differ in their treatment of the break process, the parameters defining the model which applies in each regime and the out-of-sample probability of a break occurring. In an extensive empirical evaluation involving many important macroeconomic time series, we demonstrate the presence of structural breaks and their importance for forecasting in the vast majority of cases. However, we find that no single forecasting model consistently works best in the presence of structural breaks. In many cases, the formal modeling of the break process is important in achieving good forecast performance. However, there are also many cases where simple, rolling OLS forecasts perform well. |
Keywords: | Forecasting, change-points, Markov switching, Bayesian inference |
JEL: | C11 C22 C53 |
Date: | 2011–01–01 |
URL: | http://d.repec.org/n?u=RePEc:cir:cirwor:2011s-13&r=ecm |
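The rolling OLS benchmark mentioned in the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes an AR(1) forecasting equation and a hypothetical window length, whereas the paper's regressions and window choices may differ.

```python
import numpy as np

def rolling_ols_forecast(y, window=60):
    """One-step-ahead forecasts from an AR(1) re-estimated by OLS on a rolling window.

    At each date t the regression y_s = a + b*y_{s-1} + e_s is re-fit using
    only the most recent `window` observations, so pre-break data eventually
    drop out of the estimation sample; the forecast for t is then a + b*y_{t-1}.
    """
    y = np.asarray(y, dtype=float)
    forecasts = []
    for t in range(window, len(y)):
        seg = y[t - window:t]                              # current estimation window
        X = np.column_stack([np.ones(window - 1), seg[:-1]])
        a, b = np.linalg.lstsq(X, seg[1:], rcond=None)[0]  # OLS on the window only
        forecasts.append(a + b * y[t - 1])                 # one-step-ahead forecast
    return np.array(forecasts)
```

Discarding old observations is what makes this simple scheme competitive after a break: the window bounds how long a pre-break regime can contaminate the estimates.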
By: | Stefan Hoderlein (Institute for Fiscal Studies and Brown); Anne Vanhems (Institute for Fiscal Studies and Toulouse Business School) |
Abstract: | This paper proposes a framework to model empirically welfare effects that are associated with a price change in a population of heterogeneous consumers. Individual demands are characterized by a nonseparable model which is nonparametric in the regressors, as well as monotonic in unobserved heterogeneity. In this setup, we first provide and discuss conditions under which the heterogeneous welfare effects are identified, and establish constructive identification. We then propose a sample counterpart estimator, and analyze its large sample properties. For both identification and estimation, we distinguish between the cases when regressors are exogenous and when they are endogenous. Finally, we apply all concepts to measuring the heterogeneous effect of a change in gasoline prices using US consumer data and find very substantial differences in individual effects. |
Date: | 2011–01 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:01/11&r=ecm |
By: | Candelon Bertrand; Hurlin Christophe; Tokpavi Sessi (METEOR) |
Abstract: | Shrinkage estimators of the covariance matrix are known to improve the stability over time of the Global Minimum Variance Portfolio (GMVP), as they are less error-prone. However, the improvement over the empirical covariance matrix is not optimal for small values of n, the estimation sample size. For typical asset allocation problems, with n small, this paper aims to introduce a new framework useful to improve the stability of the GMVP based on shrinkage estimators of the covariance matrix. First, we show analytically that the weights of any GMVP can be shrunk - within the framework of the ridge regression - towards the ones of the equally-weighted portfolio in order to reduce sampling error. Second, Monte Carlo simulations and empirical applications show that applying our methodology to the GMVP based on shrinkage estimators of the covariance matrix leads to more stable portfolio weights, sharp decreases in portfolio turnovers, and often statistically lower (resp. higher) out-of-sample variances (resp. Sharpe ratios). These results illustrate that double shrinkage estimation of the GMVP can be beneficial for realistic small estimation sample sizes. |
Keywords: | monetary economics |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:dgr:umamet:2011002&r=ecm |
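The GMVP and the idea of shrinking its weights toward the equally weighted portfolio can be sketched as follows. This is an illustrative stand-in only: the paper derives a data-driven shrinkage intensity via ridge regression, whereas the `delta` below is a hypothetical fixed intensity and the shrinkage is a plain convex combination.

```python
import numpy as np

def gmvp_weights(cov):
    """Global Minimum Variance Portfolio: w = S^{-1} 1 / (1' S^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # solve S w = 1 rather than inverting S
    return w / w.sum()               # normalize so the weights sum to one

def shrunk_gmvp(cov, delta=0.5):
    """Shrink GMVP weights toward the equally weighted portfolio.

    `delta` in [0, 1] is an assumed shrinkage intensity; this convex
    combination only mimics the ridge-based shrinkage of the paper,
    which selects the intensity from the data.
    """
    n = cov.shape[0]
    return (1.0 - delta) * gmvp_weights(cov) + delta * np.ones(n) / n
```

Because both endpoints sum to one, the shrunk weights remain a fully invested portfolio for any `delta`; larger values pull extreme, estimation-error-driven positions toward 1/n.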
By: | Bernard Fingleton (Department of Economics, University of Strathclyde.); Luisa Corrado (Faculty of Economics, University of Cambridge) |
Abstract: | Spatial econometrics has been criticized by some economists because some model specifications have been driven by data-analytic considerations rather than having a firm foundation in economic theory. In particular this applies to the so-called W matrix, which is integral to the structure of endogenous and exogenous spatial lags, and to spatial error processes, and which are almost the sine qua non of spatial econometrics. Moreover it has been suggested that the significance of a spatially lagged dependent variable involving W may be misleading, since it may be simply picking up the effects of omitted spatially dependent variables, incorrectly suggesting the existence of a spillover mechanism. In this paper we review the theoretical and empirical rationale for network dependence and spatial externalities as embodied in spatially lagged variables, arguing that failing to acknowledge their presence at least leads to biased inference, can be a cause of inconsistent estimation, and leads to an incorrect understanding of true causal processes. |
Keywords: | Spatial econometrics, endogenous spatial lag, exogenous spatial lag, spatially dependent errors, network dependence, externalities, the W matrix, panel data with spatial effects, multilevel models with spatial effects. |
JEL: | C21 C31 R0 |
Date: | 2011–01 |
URL: | http://d.repec.org/n?u=RePEc:str:wpaper:1101&r=ecm |
By: | Smeekes Stephan (METEOR) |
Abstract: | We propose an approach to investigate the stationarity properties of individual units in a panel based on testing user-defined increasing proportions of hypothesized stationary units sequentially. Asymptotically valid critical values are obtained using the block bootstrap. This sequential approach has an advantage over multiple testing approaches, in particular if N is large and T is small, as it can exploit the cross-sectional dimension, which the multiple testing approaches cannot do effectively. A simulation study is conducted to analyze the relative performance of the approach in comparison with multiple testing approaches. The method is also illustrated by two empirical applications, in testing for unit roots in real exchange rates and log earnings data of households. The simulation study and applications demonstrate the usefulness of our method, in particular in panels with large N and small T. |
Keywords: | econometrics |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:dgr:umamet:2011003&r=ecm |
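The block bootstrap used above to obtain critical values resamples blocks of consecutive observations so that short-run dependence survives the resampling. A minimal sketch of one moving-block resample follows; the block length is a tuning choice left hypothetical here, and the paper's bootstrap scheme for panels involves further details this does not cover.

```python
import numpy as np

def block_bootstrap(x, block_length, rng=None):
    """Draw one moving-block bootstrap resample of a time series.

    Overlapping blocks of `block_length` consecutive observations are drawn
    with replacement and concatenated, preserving serial dependence within
    each block; the result is trimmed back to the original length T.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    T = len(x)
    n_blocks = int(np.ceil(T / block_length))
    starts = rng.integers(0, T - block_length + 1, size=n_blocks)  # block start dates
    resample = np.concatenate([x[s:s + block_length] for s in starts])
    return resample[:T]
```

Repeating this many times and re-computing the test statistic on each resample yields the bootstrap distribution from which critical values are taken.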
By: | Tatsuya Kubokawa (Faculty of Economics, University of Tokyo); William E. Strawderman (Department of Statistics, Rutgers University) |
Abstract: | This paper studies minimaxity of estimators of a set of linear combinations of location parameters μi, i = 1, . . . , k, under quadratic loss. When each location parameter is known to be positive, previous results about minimaxity or non-minimaxity are extended from the case of estimating a single linear combination to estimating any number of linear combinations. Necessary and/or sufficient conditions for minimaxity of general estimators are derived. Particular attention is paid to the generalized Bayes estimator with respect to the uniform distribution and to the truncated version of the unbiased estimator (which is the maximum likelihood estimator for symmetric unimodal distributions). A necessary and sufficient condition for minimaxity of the uniform-prior generalized Bayes estimator is particularly simple: if one estimates η = Aμ, where A is a known ℓ × k matrix, the estimator is minimax if and only if (AA′)ij ≤ 0 for all i and j, i ≠ j. This condition is also sufficient (but not necessary) for minimaxity of the MLE. |
Date: | 2011–01 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2011cf786&r=ecm |
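The minimaxity condition quoted in the abstract, (AA′)ij ≤ 0 for all off-diagonal entries, is easy to check numerically. A small sketch (the function name and tolerance are illustrative, not from the paper):

```python
import numpy as np

def satisfies_minimax_condition(A, tol=1e-12):
    """Check (A A')_{ij} <= 0 for all i != j.

    Per the abstract, this is the necessary and sufficient condition for
    minimaxity of the uniform-prior generalized Bayes estimator of the
    linear combinations A @ mu.
    """
    G = A @ A.T                                     # Gram matrix of the rows of A
    off_diag = G[~np.eye(G.shape[0], dtype=bool)]   # entries with i != j
    return bool(np.all(off_diag <= tol))
```

For a single linear combination (ℓ = 1) there are no off-diagonal entries, so the condition holds vacuously, consistent with the single-combination case the paper generalizes.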
By: | Christian Kleiber; Achim Zeileis |
Abstract: | Reproducibility of economic research has attracted considerable attention in recent years. So far, the discussion has focused on reproducibility of empirical analyses. This paper addresses a further aspect of reproducibility, the reproducibility of computational experiments. We examine the current situation in econometrics and derive a set of guidelines from our findings. To illustrate how computational experiments could be conducted and reported we present an example from time series econometrics that explores the finite-sample power of certain structural change tests. |
Keywords: | computational experiment, reproducibility, simulation, software. |
Date: | 2011–02 |
URL: | http://d.repec.org/n?u=RePEc:inn:wpaper:2011-02&r=ecm |
By: | Sessi Tokpavi |
Abstract: | This paper takes a minimax regression approach to incorporate aversion to parameter uncertainty into the mean-variance model. The uncertainty-averse minimax mean-variance portfolio is obtained by minimizing, with respect to the unknown weights, the upper bound of the usual quadratic risk function over a fuzzy ellipsoidal set. Beyond the existing approaches, our methodology offers three main advantages: first, the resulting optimal portfolio can be interpreted as a Bayesian mean-variance portfolio with the least favorable prior density, and this result allows for a comprehensive comparison with traditional uncertainty-neutral Bayesian mean-variance portfolios. Second, the minimax mean-variance portfolio has a shrinkage expression, but its performance does not necessarily lie within those of the two reference portfolios. Third, we provide closed-form expressions for the standard errors of the minimax mean-variance portfolio weights, so that tests of the statistical significance of the optimal portfolio weights can easily be conducted. Empirical applications show that incorporating aversion to parameter uncertainty leads to more stable optimal portfolios that outperform traditional uncertainty-neutral Bayesian mean-variance portfolios. |
Keywords: | Asset allocation, estimation error, aversion to uncertainty, minimax regression, Bayesian mean-variance portfolios, least favorable prior |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:drm:wpaper:2011-1&r=ecm |
By: | Stéphane Chrétien; Juan-Pablo Ortega |
Abstract: | The estimation of multivariate GARCH time series models is a difficult task mainly due to the significant overparameterization exhibited by the problem and usually referred to as the "curse of dimensionality". For example, in the case of the VEC family, the number of parameters involved in the model grows as a polynomial of order four in the dimensionality of the problem. Moreover, these parameters are subject to convoluted nonlinear constraints necessary to ensure, for instance, the existence of stationary solutions and the positive semidefinite character of the conditional covariance matrices used in the model design. So far, this problem has been addressed in the literature only in low-dimensional cases with strong parsimony constraints. In this paper we propose a general formulation of the estimation problem in any dimension and develop a Bregman-proximal trust-region method for its solution. The Bregman-proximal approach allows us to handle the constraints in a very efficient and natural way by staying in the primal space, and the trust-region mechanism stabilizes and speeds up the scheme. Preliminary computational experiments are presented and confirm the very good performance of the proposed approach. |
Date: | 2011–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1101.5475&r=ecm |
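The order-four growth mentioned in the abstract can be made concrete for the standard first-order VEC model, vech(H_t) = c + A vech(e_{t-1} e_{t-1}') + B vech(H_{t-1}). A small sketch of the parameter count (a generic illustration of the VEC family, not the paper's formulation):

```python
def vec_garch_param_count(n):
    """Parameters in an unrestricted VEC(1,1) GARCH model for n series.

    vech stacks the d = n(n+1)/2 distinct entries of a symmetric matrix,
    so the model has a d-vector intercept plus two d x d coefficient
    matrices: d + 2*d**2 parameters, which grows like n**4 / 2.
    """
    d = n * (n + 1) // 2
    return d + 2 * d * d
```

Already at n = 10 this gives 6105 parameters, which is why unrestricted estimation beyond very low dimensions needs either parsimony restrictions or the kind of large-scale constrained optimization the paper develops.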
By: | Ivan Savin; Peter Winker |
Abstract: | Business tendency survey indicators are widely recognized as a key instrument for business cycle forecasting. Their leading indicator property is assessed with regard to forecasting industrial production in Russia and Germany. For this purpose, vector autoregressive (VAR) models are specified and estimated to construct forecasts. As the potential number of lags included is large, we compare fully specified VAR models with subset models obtained using a Genetic Algorithm enabling 'holes' in multivariate lag structures. The problem is complicated by the fact that a structural break and seasonal variation of the indicators have to be taken into account. The models allow for a comparison of the dynamic adjustment and the forecasting performance of the leading indicators for both countries. |
Keywords: | Leading indicators, business cycle forecasts, VAR, model selection, genetic algorithms. |
Date: | 2011–01–27 |
URL: | http://d.repec.org/n?u=RePEc:com:wpaper:046&r=ecm |
By: | Carolin Strobl; Julia Kopf; Achim Zeileis |
Abstract: | Differential item functioning (DIF) can lead to an unfair advantage or disadvantage for certain subgroups in educational and psychological testing. Therefore, a variety of statistical methods has been suggested for detecting DIF in the Rasch model. Most of these methods are designed for the comparison of pre-specified focal and reference groups, such as males and females. Latent class approaches, on the other hand, make it possible to detect previously unknown groups exhibiting DIF. However, this approach provides no straightforward interpretation of the groups with respect to person characteristics. Here we propose a new method for DIF detection based on model-based recursive partitioning that can be considered a compromise between those two extremes. With this approach it is possible to detect groups of subjects exhibiting DIF, which are not prespecified, but result from combinations of observed covariates. These groups are directly interpretable and can thus help understand the psychological sources of DIF. The statistical background and construction of the new method are first introduced by means of an instructive example, and then applied to data from a general knowledge quiz and a teaching evaluation. |
Keywords: | item response theory, IRT, Rasch model, differential item functioning, DIF, structural change, multidimensionality. |
Date: | 2011–01 |
URL: | http://d.repec.org/n?u=RePEc:inn:wpaper:2011-01&r=ecm |
By: | Gerrit Reher; Bernd Wilfling |
Abstract: | In this paper we develop a unifying Markov-switching GARCH model which enables us (1) to specify complex GARCH equations in two distinct Markov-regimes, and (2) to model GARCH equations of different functional forms across the two Markov-regimes. To give a simple example, our flexible Markov-switching approach is capable of estimating an exponential GARCH (EGARCH) specification in the first and a standard GARCH specification in the second Markov-regime. We derive a maximum likelihood estimation framework and apply our general Markov-switching GARCH model to daily excess returns of the German stock market index DAX. Our empirical study has two major findings. First, our estimation results unambiguously indicate that our general model outperforms all conventional Markov-switching GARCH models hitherto estimated in the financial literature. Second, we find significant Markov-switching in the German stock market with substantially differing volatility structures across the regimes. |
Keywords: | Markov-switching models; GARCH models; Dynamics of stock index returns |
JEL: | C5 G10 G15 |
Date: | 2011–01 |
URL: | http://d.repec.org/n?u=RePEc:cqe:wpaper:1711&r=ecm |
By: | Trifon I. Missov (Max Planck Institute for Demographic Research, Rostock, Germany); Maxim S. Finkelstein (Max Planck Institute for Demographic Research, Rostock, Germany) |
Abstract: | Statistical analysis of data on the longest living humans leaves room for speculation whether the human force of mortality is actually leveling off. Based on this uncertainty, we study a mixture failure model, introduced by Finkelstein and Esaulova (2006), that generalizes, among others, the proportional hazards and accelerated failure time models. In this paper we first extend the Abelian theorem of these authors to mixing distributions whose densities are functions of regular variation. In addition, taking into account the asymptotic behavior of the mixture hazard rate prescribed by this Abelian theorem, we prove three Tauberian-type theorems that describe the class of admissible mixing distributions. We illustrate our findings with examples of popular mixing distributions that are used to model unobserved heterogeneity. |
Keywords: | mortality |
JEL: | J1 Z0 |
Date: | 2011–01 |
URL: | http://d.repec.org/n?u=RePEc:dem:wpaper:wp-2011-004&r=ecm |
By: | Edward Herbst; Frank Schorfheide |
Abstract: | This paper develops and applies tools to assess multivariate aspects of Bayesian Dynamic Stochastic General Equilibrium (DSGE) model forecasts and their ability to predict comovements among key macroeconomic variables. The authors construct posterior predictive checks to evaluate the calibration of conditional and unconditional density forecasts, in addition to checks for root-mean-squared errors and event probabilities associated with these forecasts. The checks are implemented on a three-equation DSGE model as well as the Smets and Wouters (2007) model using real-time data. They find that the additional features incorporated into the Smets-Wouters model do not lead to a uniform improvement in the quality of density forecasts and prediction of comovements of output, inflation, and interest rates. |
Keywords: | Econometric models ; Forecasting |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedpwp:11-5&r=ecm |
By: | Rodrigo Peñaloza (Departamento de Economia (Department of Economics) Faculdade de Economia, Administração, Contabilidade e Ciência da Informação e Documentação (FACE) (Faculty of Economics, Administration, Accounting and Information Science) Universidade de Brasília) |
Abstract: | When we deal with two categorical variables, Gini's index of distributional transvariation is a most useful tool to measure the distributional difference between them. By means of a modified transvariation, which we call Euclidean transvariation, we show that our measure of transvariation can be decomposed into the difference of two terms: a measure of categorical separation and the average variability. This decomposition allows us to view the dissimilarities between two categorical variables through three different lenses: distribution, modality, and variability. Finally, by defining a simpler measure of statistical dependence based on Pearson's X², we prove a relationship between statistical dependence and the transvariational impact of one variable onto another. |
Keywords: | nominal variables, transvariation, degree of dependence |
JEL: | C49 |
Date: | 2011–01 |
URL: | http://d.repec.org/n?u=RePEc:brs:wpaper:351&r=ecm |