New Economics Papers on Econometrics
By: | Istvan Barra (VU University Amsterdam); Lennart Hoogerheide (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); Andre Lucas (VU University Amsterdam) |
Abstract: | We propose a new methodology for the Bayesian analysis of nonlinear non-Gaussian state space models with a Gaussian time-varying signal, where the signal is a function of a possibly high-dimensional state vector. The novelty of our approach is the development of proposal densities for the joint posterior density of parameter and state vectors: a mixture of Student's t-densities as the marginal proposal density for the parameter vector, and a Gaussian density as the conditional proposal density for the signal given the parameter vector. We argue that a highly efficient procedure emerges when these proposal densities are used in an independent Metropolis-Hastings algorithm. A particular feature of our approach is that smoothed estimates of the states and an estimate of the marginal likelihood are obtained directly as an output of the algorithm. Our methods are computationally efficient and produce more accurate estimates when compared to recently proposed alternatives. We present extensive simulation evidence for stochastic volatility and stochastic intensity models. For our empirical study, we analyse the performance of our method for stock return data and corporate default panel data. |
Keywords: | nonlinear non-Gaussian state space model; Bayesian inference; Monte Carlo estimation; Metropolis-Hastings algorithm; mixture of Student's t-distributions |
JEL: | C11 C15 C22 C32 C58 |
Date: | 2012–03–26 |
URL: | http://d.repec.org/n?u=RePEc:dgr:uvatin:20130050&r=ecm |
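The sampler described above can be condensed into a hedged sketch: an independent Metropolis-Hastings chain whose proposal factorizes as a mixture of Student's t-densities for the parameter and a conditional Gaussian for the signal. Everything concrete below (the toy conjugate target, the two-component mixture and its locations and scales) is an assumption for illustration, not the authors' implementation.

```python
# Minimal sketch of independent MH with q(theta, s) = q_t(theta) * q_N(s | theta):
# a Student's t mixture for the parameter, a Gaussian for the signal. The toy
# target (a conjugate Gaussian model) stands in for the nonlinear non-Gaussian
# posteriors treated in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = 1.5  # single observation in the toy model: y ~ N(s, 1), s ~ N(theta, 1)

def log_target(theta, s):
    # log p(theta, s | y) up to a constant: N(y|s,1) N(s|theta,1) N(theta|0,10)
    return (stats.norm.logpdf(y, s, 1.0) + stats.norm.logpdf(s, theta, 1.0)
            + stats.norm.logpdf(theta, 0.0, np.sqrt(10.0)))

# Marginal proposal for theta: a two-component Student's t mixture.
mix_w, mix_loc, mix_scale, mix_df = [0.7, 0.3], [0.7, 0.9], [0.8, 1.6], [5, 5]

def sample_theta():
    k = rng.choice(2, p=mix_w)
    return mix_loc[k] + mix_scale[k] * rng.standard_t(mix_df[k])

def log_q_theta(theta):
    comps = [w * stats.t.pdf((theta - m) / sc, df) / sc
             for w, m, sc, df in zip(mix_w, mix_loc, mix_scale, mix_df)]
    return np.log(sum(comps))

# Conditional Gaussian proposal for the signal given theta (here the exact
# conditional of the toy model; in the paper it approximates p(s | theta, y)).
def sample_s(theta):
    return rng.normal((y + theta) / 2.0, np.sqrt(0.5))

def log_q_s(s, theta):
    return stats.norm.logpdf(s, (y + theta) / 2.0, np.sqrt(0.5))

theta, s = 0.0, 0.0
draws = []
for _ in range(5000):
    theta_p = sample_theta()
    s_p = sample_s(theta_p)
    # Independent MH acceptance ratio with the factorized proposal density.
    log_alpha = (log_target(theta_p, s_p) - log_target(theta, s)
                 + log_q_theta(theta) + log_q_s(s, theta)
                 - log_q_theta(theta_p) - log_q_s(s_p, theta_p))
    if np.log(rng.uniform()) < log_alpha:
        theta, s = theta_p, s_p
    draws.append((theta, s))

print("posterior means (theta, s):", np.mean(draws, axis=0))
```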
By: | Fosgerau, Mogens; Mabit, Stefan |
Abstract: | We propose a method to generate flexible mixture distributions that are useful for estimating models such as the mixed logit model using simulation. The method is easy to implement, yet it can approximate essentially any mixture distribution. We test it with good results in a simulation study and on real data. |
Keywords: | Mixture distributions; mixed logit; simulation; maximum simulated likelihood |
JEL: | C14 C15 C25 |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:46078&r=ecm |
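A hedged sketch of the core idea, under the assumption that the flexible mixing distribution is generated by pushing a common uniform draw through a low-order polynomial, one simple transform in the spirit of the abstract. The model, data and basis below are illustrative, not the paper's specification.

```python
# Flexible mixture via a polynomial transform of a uniform draw, estimated by
# maximum simulated likelihood in a one-attribute mixed logit.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, J, R = 500, 3, 200                      # individuals, alternatives, draws
X = rng.normal(size=(N, J))                # one attribute per alternative
u = rng.uniform(size=R)                    # common uniform draws (kept fixed)

# True mixing distribution for the taste coefficient: bimodal.
beta_true = np.where(rng.uniform(size=N) < 0.5, -1.0, 2.0)
V = X * beta_true[:, None]
probs = np.exp(V) / np.exp(V).sum(1, keepdims=True)
y = np.array([rng.choice(J, p=p) for p in probs])  # observed choices

def simulated_loglik(a):
    # beta(u) = a0 + a1*u + a2*u^2: a flexible transform of a uniform draw.
    beta_r = a[0] + a[1] * u + a[2] * u ** 2          # (R,)
    Vr = X[:, :, None] * beta_r[None, None, :]        # (N, J, R) utilities
    Vr -= Vr.max(axis=1, keepdims=True)               # guard against overflow
    Pr = np.exp(Vr) / np.exp(Vr).sum(1, keepdims=True)
    p_chosen = Pr[np.arange(N), y, :].mean(axis=1)    # average over draws
    return -np.log(np.maximum(p_chosen, 1e-300)).sum()

res = minimize(simulated_loglik, x0=np.zeros(3), method="Nelder-Mead")
print("estimated polynomial coefficients:", res.x)
```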
By: | Karavias, Yiannis; Tzavalis, Elias |
Abstract: | The asymptotic local power of least-squares-based fixed-T panel unit root tests allowing for a structural break in their individual effects and/or incidental trends of the AR(1) panel data model is studied. These tests correct the least squares estimator of the autoregressive coefficient of this panel data model for its inconsistency due to the individual effects and/or incidental trends of the panel. The limiting distributions of the tests are analytically derived under a sequence of local alternatives, assuming that the cross-sectional dimension of the tests (N) grows large. It is shown that the considered fixed-T tests have local power that tends to unity fast only if the panel data model includes individual effects. For panel data models with incidental trends, the power of the tests becomes trivial. However, this problem does not always appear if the tests allow for serial correlation of the error term. |
Keywords: | Panel data, unit root tests, structural breaks, local power, serial correlation, incidental trends |
JEL: | C22 C23 |
Date: | 2013–04–09 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:46012&r=ecm |
By: | Wintenberger, Olivier |
Abstract: | We introduce the notion of continuous invertibility on a compact set for volatility models driven by a Stochastic Recurrence Equation (SRE). We prove the strong consistency of the Quasi Maximum Likelihood Estimator (QMLE) when the optimization is performed over a continuously invertible domain. This approach gives for the first time the strong consistency of the QMLE used by Nelson (1991) for the EGARCH(1,1) model under explicit but unobservable conditions. In practice, we propose to stabilize the QMLE by constraining the optimization procedure to an empirical continuously invertible domain. The new method, called Stable QMLE (SQMLE), is strongly consistent when the observations follow an invertible EGARCH(1,1) model. We also give the asymptotic normality of the SQMLE under additional minimal assumptions. |
Keywords: | Invertible models, volatility models, quasi maximum likelihood, strong consistency, asymptotic normality, exponential GARCH, stochastic recurrence equation. |
JEL: | C13 C22 |
Date: | 2013–01–07 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:46027&r=ecm |
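As a minimal sketch, Gaussian QML for an EGARCH(1,1) with the optimizer confined to a compact box stands in for the stabilized QMLE described above. The paper's empirical continuously invertible domain is more refined than the simple bounds assumed here.

```python
# Gaussian QML for EGARCH(1,1) with optimization restricted to a bounded
# parameter region, loosely echoing the idea of a constrained (invertible)
# search domain. Box bounds are an illustrative assumption.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def simulate_egarch(T, omega, alpha, gamma, beta):
    y, ls2 = np.empty(T), omega / (1 - beta)   # start log-variance at fixed point
    for t in range(T):
        z = rng.standard_normal()
        y[t] = np.exp(0.5 * ls2) * z
        ls2 = (omega + beta * ls2
               + alpha * (abs(z) - np.sqrt(2 / np.pi)) + gamma * z)
    return y

y = simulate_egarch(2000, -0.1, 0.1, -0.05, 0.95)

def neg_quasi_loglik(params):
    omega, alpha, gamma, beta = params
    ls2, nll = omega / (1 - beta), 0.0
    for t in range(len(y)):
        nll += 0.5 * (ls2 + y[t] ** 2 * np.exp(-ls2))   # Gaussian QL kernel
        z = y[t] * np.exp(-0.5 * ls2)                   # filtered innovation
        ls2 = (omega + beta * ls2
               + alpha * (abs(z) - np.sqrt(2 / np.pi)) + gamma * z)
    return nll

# Constrained ("stabilized") optimization over a compact box with |beta| < 1.
bounds = [(-1.0, 1.0), (0.0, 0.5), (-0.5, 0.5), (-0.99, 0.99)]
res = minimize(neg_quasi_loglik, x0=[-0.05, 0.05, 0.0, 0.9],
               method="L-BFGS-B", bounds=bounds)
print("QMLE (omega, alpha, gamma, beta):", res.x)
```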
By: | Canova, Fabio; Ferroni, Filippo; Matthes, Christian |
Abstract: | We propose two methods to choose the variables to be used in the estimation of the structural parameters of a singular DSGE model. The first selects the vector of observables that optimizes parameter identification; the second the vector that minimizes the informational discrepancy between the singular and non-singular model. An application to a standard model is discussed and the estimation properties of different setups compared. Practical suggestions for applied researchers are provided. |
Keywords: | ABCD representation; Density ratio; DSGE models; Identification |
JEL: | C10 E27 E32 |
Date: | 2013–03 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:9381&r=ecm |
By: | Ana-Maria Dumitru (University of Surrey) |
Abstract: | We propose new bootstrap methods for the Barndorff-Nielsen and Shephard (2006) and Andersen et al. (2012) tests for jumps, as well as for the realized bipower variation and the median realized variation. Both the i.i.d. and the wild bootstrap are considered. We prove CLT-type results for the pairs (realized volatility, realized bipower variation) and (realized volatility, median realized variation). Based on these results, we build bootstrapped tests for jumps. We introduce a new jump-testing procedure that uses Fisher (1932)'s method to average p-values from one test or from different tests applied at different sampling frequencies. The procedure is proven to be more efficient than applying the asymptotic tests, as we discard less data and extract information from multiple frequencies and/or procedures. We use a double bootstrap procedure to control the overall size of the test. |
Date: | 2013–01 |
URL: | http://d.repec.org/n?u=RePEc:sur:surrec:0113&r=ecm |
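Fisher (1932)'s method, which the abstract uses to pool p-values across sampling frequencies, fits in a few lines. The sketch below assumes independent tests; the paper's double bootstrap handles size control when they are not. The p-values are invented for illustration.

```python
# Fisher's method: -2 * sum(log p_i) is chi-squared with 2m degrees of freedom
# under the joint null (m independent tests).
import numpy as np
from scipy import stats

p_values = np.array([0.08, 0.12, 0.03])      # e.g., jump tests at 3 frequencies
fisher_stat = -2.0 * np.log(p_values).sum()
combined_p = stats.chi2.sf(fisher_stat, df=2 * len(p_values))
print(f"Fisher statistic = {fisher_stat:.2f}, combined p-value = {combined_p:.4f}")
```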
By: | Le, Vo Phuong Mai; Minford, Patrick; Wickens, Michael R. |
Abstract: | We propose a numerical method, based on indirect inference, for checking the identification of a DSGE model. Monte Carlo samples are generated from the model's true structural parameters, and a VAR approximation to the reduced form is estimated for each sample. We then search for a different set of structural parameters that could potentially also generate these VAR parameters. If we can find such a set, the model is not identified. |
Keywords: | DSGE Model; Indirect Inference; Monte Carlo |
JEL: | C13 C51 C52 E32 |
Date: | 2013–03 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:9411&r=ecm |
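The logic of the check can be reproduced on a deliberately unidentified toy model. Everything below (the AR(1) stand-in, the fixed-parameter search) is an assumption for illustration; the authors work with full DSGE models and VAR approximations.

```python
# Toy identification check: simulate at "true" structural parameters, fit the
# reduced form, then ask whether a *different* structural vector reproduces it.
# The model y_t = (a*b) y_{t-1} + e_t is unidentified: only a*b is pinned down.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
a0, b0 = 0.5, 1.2                      # "true" structural parameters

def reduced_form_coef(a, b, T=5000):
    y, phi = np.zeros(T), a * b        # AR(1) reduced form with slope a*b
    for t in range(1, T):
        y[t] = phi * y[t - 1] + rng.standard_normal()
    return np.polyfit(y[:-1], y[1:], 1)[0]   # OLS slope of the fitted AR(1)

phi_hat = reduced_form_coef(a0, b0)

def distance(params):
    # Gap between the reduced form implied by (a, b) and the fitted one.
    # (In practice the mapping would itself be evaluated by re-simulation.)
    a, b = params
    return (a * b - phi_hat) ** 2

# Fix a at a value different from the truth and ask whether some b still
# reproduces the fitted reduced form; near-zero distance flags non-identification.
a_alt = 1.0
res = minimize(lambda b: distance([a_alt, b[0]]), x0=[0.5], method="Nelder-Mead")
print(f"fixed a={a_alt}: best b={res.x[0]:.3f}, distance={res.fun:.2e}")
```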
By: | Francisco Blasques (VU University Amsterdam) |
Keywords: | nonparametric; phase-dependence; time-varying correlation |
JEL: | C01 C14 C32 |
Date: | 2013–04–04 |
URL: | http://d.repec.org/n?u=RePEc:dgr:uvatin:20130054&r=ecm |
By: | Chen, Songxi |
Abstract: | We propose two tests for the equality of covariance matrices between two high-dimensional populations. One test is on the whole variance-covariance matrices, and the other is on off-diagonal sub-matrices which define the covariance between two non-overlapping segments of the high-dimensional random vectors. The tests are applicable (i) when the data dimension is much larger than the sample sizes, namely the “large p, small n” situations, and (ii) without assuming parametric distributions for the two populations. These two aspects surpass the capability of the conventional likelihood ratio test. The proposed tests can be used to test covariances associated with gene ontology terms. |
Keywords: | High dimensional covariance; Large p small n; Likelihood ratio test; Testing for Gene-sets. |
JEL: | C0 C1 C2 C3 C4 C5 C6 C7 C8 C9 G0 G1 G2 G3 |
Date: | 2012–05–04 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:46026&r=ecm |
By: | Torben G. Andersen; Dobrislav Dobrev; Ernst Schaumburg |
Abstract: | We provide a first in-depth look at robust estimation of integrated quarticity (IQ) based on high frequency data. IQ is the key ingredient enabling inference about volatility and the presence of jumps in financial time series and is thus of considerable interest in applications. We document the significant empirical challenges for IQ estimation posed by commonly encountered data imperfections and set forth three complementary approaches for improving IQ-based inference. First, we show that many common deviations from the jump diffusive null can be dealt with by a novel filtering scheme that generalizes truncation of individual returns to truncation of arbitrary functionals on return blocks. Second, we propose a new family of efficient robust neighborhood truncation (RNT) estimators for integrated power variation based on order statistics of a set of unbiased local power variation estimators on a block of returns. Third, we find that ratio-based inference, originally proposed in this context by Barndorff-Nielsen and Shephard (2002), has desirable robustness properties in the face of regularly occurring data imperfections and thus is well suited for empirical applications. We confirm that the proposed filtering scheme and the RNT estimators perform well in our extensive simulation designs and in an application to the individual Dow Jones 30 stocks. |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedgif:1078&r=ecm |
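For orientation, here is a sketch of the realized measures this literature builds on: realized variance, bipower variation, and the median realized variance/quarticity of Andersen, Dobrev and Schaumburg (2012), whose order-statistics-on-blocks logic the RNT estimators generalize. The scaling constants follow the published formulas, but treat this as an illustration, not the paper's estimators.

```python
# RV, bipower variation (BV), and MedRV/MedRQ on one day of simulated returns
# with a single injected jump.
import numpy as np

rng = np.random.default_rng(4)
n = 390                                   # e.g., 1-minute returns in one day
r = 0.01 / np.sqrt(n) * rng.standard_normal(n)
r[200] += 0.01                            # inject one jump

rv = np.sum(r ** 2)
bv = (np.pi / 2) * (n / (n - 1)) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))

med = np.median(np.abs(np.column_stack([r[:-2], r[1:-1], r[2:]])), axis=1)
medrv = (np.pi / (6 - 4 * np.sqrt(3) + np.pi)) * (n / (n - 2)) * np.sum(med ** 2)
medrq = (3 * np.pi * n / (9 * np.pi + 72 - 52 * np.sqrt(3))) \
        * (n / (n - 2)) * np.sum(med ** 4)

print(f"RV={rv:.6f}  BV={bv:.6f}  MedRV={medrv:.6f}  MedRQ={medrq:.8f}")
# RV absorbs the jump; BV and MedRV are (nearly) immune to it, and MedRQ
# provides the jump-robust quarticity needed to studentize jump tests.
```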
By: | Marc Hallin; Davy Paindaveine; Thomas Verdebout |
Keywords: | common principal components; elliptical densities; uniform local asymptotic normality; principal components; ranks; R-estimation; robustness |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/142830&r=ecm |
By: | Xuguang Sheng (American University); Jingyun Yang (Pennsylvania State University) |
Abstract: | This paper proposes three new panel unit root tests based on the truncated product method of Zaykin et al. (2002). The first assumes constant correlation between p-values, and the latter two use a sieve bootstrap that allows for general forms of cross-section dependence in the panel units. Monte Carlo simulation shows that these tests have reasonably good size, are robust to varying degrees of cross-section dependence, and are powerful in cases where there are some very large p-values. The proposed tests are applied to a panel of real GDP and inflation density forecasts and provide evidence that professional forecasters may not update their forecast precision in an optimal Bayesian way. |
Keywords: | Density Forecast, Panel Unit Root, P-value, Sieve Bootstrap, Truncated Product Method |
JEL: | C12 C33 |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:gwc:wpaper:2013-004&r=ecm |
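The truncated product method at the heart of these tests is short enough to sketch. Independence across units is assumed in the Monte Carlo calibration below; the paper's tests instead handle cross-section dependence via a constant-correlation assumption or a sieve bootstrap. The p-values are invented for illustration.

```python
# Zaykin et al. (2002): multiply only the p-values below a truncation point tau
# and calibrate the product statistic by Monte Carlo under the joint null.
import numpy as np

rng = np.random.default_rng(5)
tau = 0.10

def truncated_product(p):
    keep = p[p <= tau]
    return np.prod(keep) if keep.size else 1.0

p_obs = np.array([0.02, 0.30, 0.07, 0.55, 0.01])   # unit-by-unit p-values
w_obs = truncated_product(p_obs)

# Null: p-values are iid Uniform(0,1); small W is evidence against the null.
sims = np.array([truncated_product(rng.uniform(size=p_obs.size))
                 for _ in range(100_000)])
p_combined = np.mean(sims <= w_obs)
print(f"W = {w_obs:.2e}, combined p-value = {p_combined:.4f}")
```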
By: | Aguirregabiria, Victor; Magesan, Arvind |
Abstract: | We derive marginal conditions of optimality (i.e., Euler equations) for a general class of Dynamic Discrete Choice (DDC) structural models. These conditions can be used to estimate structural parameters in these models without having to solve for or approximate value functions. This result extends to discrete choice models the GMM-Euler equation approach proposed by Hansen and Singleton (1982) for the estimation of dynamic continuous decision models. We first show that DDC models can be represented as models of continuous choice where the decision variable is a vector of choice probabilities. We then prove that the marginal conditions of optimality and the envelope conditions required to construct Euler equations are also satisfied in DDC models. The GMM estimation of these Euler equations avoids the curse of dimensionality associated with the computation of value functions and the explicit integration over the space of state variables. We present an empirical application and compare estimates using the GMM-Euler equations method with those from maximum likelihood and two-step methods. |
Keywords: | Dynamic discrete choice structural models; Euler equations; Choice probabilities. |
JEL: | C13 C35 C51 C61 |
Date: | 2013–04–10 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:46056&r=ecm |
By: | Marc Hallin; Yves-Caoimhin Swan; Thomas Verdebout |
Keywords: | asymptotic relative efficiency; rank-based tests; Wilcoxon test; van der Waerden test; Spearman autocorrelations; Kendall autocorrelations; linear serial rank statistics |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/142825&r=ecm |
By: | Sylvia Kaufmann (Study Center Gerzensee); Christian Schumacher (Deutsche Bundesbank) |
Abstract: | The analysis of large panel data sets (with N variables) involves methods of dimension reduction and optimal information extraction. Dimension reduction is usually achieved by extracting the common variation in the data into a few factors (k, where k << N). In the present project, factors are estimated within a state space framework. To obtain a parsimonious representation, the N × k factor loading matrix is estimated under a sparse prior, which assumes that either many zeros may be present in each column of the matrix, or many rows may contain zeros. The significant factor loadings in columns define the variables driven by specific factors and offer an explicit interpretation of the factors. Zeros in rows indicate irrelevant variables which do not add much information to the inference. The contribution includes a new identification scheme which is independent of variable ordering and is based on semi-orthogonal loadings. |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:szg:worpap:1304&r=ecm |
By: | Matthew Smith (Federal Reserve Board) |
Abstract: | We propose a novel combination of algorithms for jointly estimating parameters and unobservable states in a nonlinear state space system. We exploit an approximation to the marginal likelihood to guide a Particle Marginal Metropolis-Hastings algorithm. While this algorithm seemingly targets reduced-dimension marginal distributions, it draws from a joint distribution of much higher dimension. The algorithm is demonstrated on a stochastic volatility model and a Real Business Cycle model with robust preferences. |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:red:sed012:494&r=ecm |
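A compact sketch of plain Particle Marginal Metropolis-Hastings for a basic stochastic volatility model, in the spirit of the abstract: a bootstrap particle filter supplies an unbiased likelihood estimate that drives a random-walk MH chain over the parameter. The marginal-likelihood-guided proposal of the paper is not reproduced; the model, priors and tuning below are illustrative assumptions.

```python
# PMMH for x_t = phi x_{t-1} + sigma*eta_t, y_t = exp(x_t/2) eps_t, sampling phi.
import numpy as np

rng = np.random.default_rng(6)
T, sigma = 200, 0.3

phi_true, x = 0.95, 0.0
y = np.empty(T)
for t in range(T):
    x = phi_true * x + sigma * rng.standard_normal()
    y[t] = np.exp(0.5 * x) * rng.standard_normal()

def pf_loglik(phi, N=300):
    # Bootstrap particle filter estimate of log p(y_{1:T} | phi).
    xp = np.zeros(N)
    ll = 0.0
    for t in range(T):
        xp = phi * xp + sigma * rng.standard_normal(N)      # propagate
        logw = -0.5 * (xp + y[t] ** 2 * np.exp(-xp))        # N(0, e^{x}) kernel
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                          # likelihood increment
        xp = xp[rng.choice(N, size=N, p=w / w.sum())]       # resample
    return ll

phi, ll = 0.8, pf_loglik(0.8)
chain = []
for _ in range(1000):
    phi_p = phi + 0.05 * rng.standard_normal()              # random-walk proposal
    if abs(phi_p) < 1:
        ll_p = pf_loglik(phi_p)
        if np.log(rng.uniform()) < ll_p - ll:               # flat prior on (-1,1)
            phi, ll = phi_p, ll_p
    chain.append(phi)

print("posterior mean of phi:", np.mean(chain[200:]))
```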
By: | Kagerer, Kathrin |
Abstract: | Splines are an attractive way of flexibly modeling a regression curve, since their basis functions can be included like ordinary covariates in regression settings. An overview of least squares regression using splines is presented, including many graphical illustrations and comprehensive examples. Starting from two bases that are widely used for constructing splines, three different variants of splines are discussed: simple regression splines, penalized splines and smoothing splines. Further, restrictions such as monotonicity constraints are considered. The presented spline variants are illustrated and compared in a bivariate and a multivariate example with well-known data sets. A brief computational guide for practitioners using the open-source software R is given. |
Keywords: | B-spline; truncated power basis; derivative; monotonicity; penalty; smoothing spline; R |
JEL: | C14 C51 |
Date: | 2013–03–28 |
URL: | http://d.repec.org/n?u=RePEc:bay:rdwiwi:27968&r=ecm |
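The paper's computational guide uses R; for consistency with the other sketches here, a Python analogue of least squares regression on a spline basis is shown below, using the truncated power basis that the paper discusses. The knot placement and penalty are illustrative assumptions.

```python
# Least squares on a truncated power basis, plus a ridge-type penalized variant
# (a simple cousin of the penalized splines the paper compares).
import numpy as np

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(200)

def truncated_power_basis(x, knots, degree=3):
    cols = [x ** d for d in range(degree + 1)]                 # polynomial part
    cols += [np.maximum(x - k, 0.0) ** degree for k in knots]  # truncated part
    return np.column_stack(cols)

knots = np.quantile(x, [0.2, 0.4, 0.6, 0.8])                 # interior knots
B = truncated_power_basis(x, knots)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)                 # plain LS fit
fitted = B @ coef

# Penalize only the truncated-power coefficients to smooth the fit.
lam = 1.0
D = np.diag([0.0] * 4 + [1.0] * len(knots))
coef_pen = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
print("max |difference| between LS and penalized fit:",
      np.abs(fitted - B @ coef_pen).max())
```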
By: | Shinya Sugawara (Japan Society for the Promotion of Science and Graduate School of Economics, University of Tokyo); Yasuhiro Omori (Faculty of Economics, University of Tokyo) |
Abstract: | This paper proposes a simple microeconometric framework that can separately identify moral hazard and selection problems in insurance markets. Our econometric model is equivalent to the approach used for entry game analyses. We employ a Bayesian estimation approach that avoids a partial identification problem. Given this standard identification, we propose a statistical model selection method to detect the information structure that consumers face. Our method is applied to the dental insurance market in the United States. In this market, we find not only standard moral hazard but also advantageous selection, which has an intuitive interpretation in the context of dental insurance. |
Date: | 2013–03 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2013cf882&r=ecm |
By: | Kajal Lahiri; Liu Yang |
Abstract: | We propose serial correlation robust asymptotic confidence bands for the receiver operating characteristic (ROC) curves estimated by quasi-maximum likelihood in the binormal model. Our simulation experiments confirm that this new method performs fairly well in finite samples. The conventional procedure is found to be markedly undersized, yielding empirical coverage probabilities below the nominal level, especially when the serial correlation is strong. We evaluate the three-quarter-ahead probability forecasts for real GDP declines from the Survey of Professional Forecasters, and find that one would draw a misleading conclusion about forecasting skill if serial correlation is ignored. |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:nya:albaec:13-07&r=ecm |
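A sketch of the binormal ROC model behind the abstract: treating forecast scores for events and non-events as Gaussian, the ROC curve is ROC(t) = Phi(a + b * Phi^{-1}(t)). Here a and b come from simple sample moments on invented data; the paper's contribution, serial-correlation-robust confidence bands around this curve, is not reproduced.

```python
# Binormal ROC: a = (mu1 - mu0)/sigma1, b = sigma0/sigma1, with closed-form AUC.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
scores_event = rng.normal(0.6, 0.25, 80)      # forecast probs, realized declines
scores_none = rng.normal(0.3, 0.20, 220)      # forecast probs, no decline

a = (scores_event.mean() - scores_none.mean()) / scores_event.std(ddof=1)
b = scores_none.std(ddof=1) / scores_event.std(ddof=1)

t = np.array([0.05, 0.10, 0.25])               # false positive rates
roc = norm.cdf(a + b * norm.ppf(t))
auc = norm.cdf(a / np.sqrt(1 + b ** 2))        # binormal AUC
print(f"a={a:.2f}, b={b:.2f}, hit rates at FPR 5/10/25%: {np.round(roc, 3)},"
      f" AUC={auc:.3f}")
```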
By: | Ben Edwards (Australian Institute of Family Studies); Mario Fiorini (Economics Discipline Group, University of Technology, Sydney); Katrien Stevens (School of Economics, University of Sydney); Matthew Taylor (Australian Institute of Family Studies) |
Abstract: | Whenever treatment effects are heterogeneous and there is sorting into treatment based on the gain, monotonicity is a condition that both Instrumental Variable and fuzzy Regression Discontinuity designs have to satisfy for their estimand to be interpretable as a LATE. Angrist and Imbens (1995) argue that the monotonicity assumption is testable whenever the treatment is multivalued. We show that their test is informative if counterfactuals are observed. Yet applying the test without observing counterfactuals, as is generally done, is not. Nevertheless, we argue that monotonicity can and should be investigated using a mix of economic intuition and data patterns, just like other untestable assumptions in an IV or RD design. We provide examples in a variety of settings as a guide to practice. |
Date: | 2013–04–01 |
URL: | http://d.repec.org/n?u=RePEc:uts:ecowps:7&r=ecm |
By: | Géraldine Henningsen (DTU Management Engineering, Technical University of Denmark); Arne Henningsen (Department of Food and Resource Economics, University of Copenhagen); Uwe Jensen (Institute for Statistics and Econometrics, University of Kiel) |
Abstract: | In the estimation of multiple output technologies in a primal approach, the main question is how to handle the multiple outputs. Often an output distance function is used, where the classical approach is to exploit its homogeneity property by selecting one output quantity as the dependent variable, dividing all other output quantities by the selected output quantity, and using these ratios as regressors (OD). Another approach is the stochastic ray production frontier (SR), which transforms the output quantities into their Euclidean distance as the dependent variable and their polar coordinates as directional components as regressors. A number of studies have compared these specifications using real world data and have found significant differences in the inefficiency estimates. To get to the bottom of these differences, we conduct a Monte Carlo simulation. We test the robustness of both specifications for the case of a Translog output distance function with respect to different common statistical problems as well as problems arising as a consequence of zero values in the output quantities. Although our results partly show clear reactions to statistical misspecification, on average neither approach is superior. However, considerable differences are found between the estimates at single replications. In the case of zero values in the output quantities, the SR clearly outperforms the OD, although this advantage nearly vanishes when zeros are replaced by a small number. |
Keywords: | Multiple Outputs, SFA, Monte Carlo Simulation, Stochastic Ray Production Frontier, Output Distance Function |
JEL: | C21 C40 D24 |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:foi:wpaper:2013_7&r=ecm |
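The stochastic ray transformation described in the abstract is easy to make concrete: the output vector is recast as its Euclidean norm (the dependent variable) plus polar-coordinate angles (regressors). A two-output sketch follows; the frontier estimation itself is omitted.

```python
# SR transformation for two outputs: norm + angle, and the round trip back.
import numpy as np

rng = np.random.default_rng(9)
y1, y2 = rng.uniform(1, 5, 100), rng.uniform(1, 5, 100)   # two output quantities

norm_y = np.sqrt(y1 ** 2 + y2 ** 2)       # dependent variable: Euclidean distance
theta = np.arctan2(y2, y1)                # directional (polar) coordinate

# The transformation is invertible and singles out no output as a denominator,
# which is why SR copes with zero output quantities where the ratio-based OD
# specification cannot.
y1_back, y2_back = norm_y * np.cos(theta), norm_y * np.sin(theta)
print("round-trip max error:", max(np.abs(y1 - y1_back).max(),
                                   np.abs(y2 - y2_back).max()))
```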
By: | Pettenuzzo, Davide; Timmermann, Allan G; Valkanov, Rossen |
Abstract: | We propose a new approach to imposing economic constraints on time-series forecasts of the equity premium. Economic constraints are used to modify the posterior distribution of the parameters of the predictive return regression in a way that better allows the model to learn from the data. We consider two types of constraints: Non-negative equity premia and bounds on the conditional Sharpe ratio, the latter of which incorporates time-varying volatility in the predictive regression framework. Empirically, we find that economic constraints systematically reduce uncertainty about model parameters, reduce the risk of selecting a poor forecasting model, and improve both statistical and economic measures of out-of-sample forecast performance. The Sharpe ratio constraint, in particular, results in considerable economic gains. |
Keywords: | Bayesian analysis; Economic constraints; Sharpe Ratio; Stock return predictability |
JEL: | C11 C22 G11 G12 |
Date: | 2013–03 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:9377&r=ecm |
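A stylized sketch of the non-negative equity premium constraint: draw from an unconstrained posterior of the predictive regression and keep only parameter draws whose implied premium is non-negative over the observed predictor range. The paper modifies the posterior more carefully; this rejection step, and all the data below, are simplifying assumptions meant only to illustrate the truncation idea.

```python
# Truncating posterior draws of r_{t+1} = alpha + beta * x_t + eps to the
# region where the fitted premium is non-negative on [min(x), max(x)].
import numpy as np

rng = np.random.default_rng(10)
T = 240
x = rng.normal(3.0, 1.0, T)                        # e.g., dividend yield
r = 0.002 * x + rng.normal(0, 0.04, T)             # excess returns

X = np.column_stack([np.ones(T), x])
XtX_inv = np.linalg.inv(X.T @ X)
bhat = XtX_inv @ X.T @ r
s2 = np.sum((r - X @ bhat) ** 2) / (T - 2)

# Unconstrained (approximate) posterior: N(bhat, s2 * (X'X)^{-1}).
draws = rng.multivariate_normal(bhat, s2 * XtX_inv, size=20_000)
# The premium is linear in x, so checking the range endpoints suffices.
premium = draws @ np.array([[1.0, 1.0], [x.min(), x.max()]])
keep = (premium >= 0).all(axis=1)
print(f"retained {keep.mean():.1%} of draws;",
      "constrained beta mean:", draws[keep, 1].mean())
```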
By: | Frédéric Karamé (EPEE-TEPP (Université d’Evry-Val-d’Essonne and FR n°3126, CNRS), DYNARE Team (CEPREMAP), Centre d’Etudes de l’Emploi) |
Abstract: | We transpose the Generalized Impulse-Response Function (GIRF) developed by Koop et al. (1996) to Markov-Switching structural VARs. Because the algorithm's complexity increases exponentially with the prediction horizon, we use the collapsing technique to obtain simulated trajectories (shocked or not) easily, even for the most general representations. Our approach encompasses the existing IRFs proposed in the literature and is illustrated with an applied example on gross job flows. |
Keywords: | structural VAR, Markov-switching regime, generalized impulse-response function |
JEL: | C32 C52 C53 |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:eve:wpaper:12-04&r=ecm |
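A minimal simulation-based GIRF in the Koop, Pesaran and Potter (1996) sense, computed for a two-regime Markov-switching AR(1) as a scalar stand-in for the Markov-switching structural VARs of the paper. The collapsing technique used there for multi-step state histories is not implemented; all parameter values are illustrative.

```python
# GIRF(h) = E[y_{t+h} | shock, history] - E[y_{t+h} | history], by Monte Carlo.
import numpy as np

rng = np.random.default_rng(11)
phi = np.array([0.9, 0.2])                 # AR coefficient in each regime
P = np.array([[0.95, 0.05], [0.10, 0.90]]) # regime transition matrix
H, R = 20, 20_000                          # horizon, Monte Carlo replications

def mean_path(y0, s0, shock):
    y = np.full(R, y0) + shock             # add the impulse at time 0 (or not)
    s = np.full(R, s0)
    paths = np.zeros((R, H + 1))
    paths[:, 0] = y
    for h in range(1, H + 1):
        s = (rng.uniform(size=R) < P[s, 1]).astype(int)   # draw next regime
        y = phi[s] * y + rng.standard_normal(R)
        paths[:, h] = y
    return paths.mean(axis=0)

girf = mean_path(y0=1.0, s0=0, shock=1.0) - mean_path(y0=1.0, s0=0, shock=0.0)
print(np.round(girf[:6], 3))   # response decays at a regime-mixture rate
```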
By: | Canova, Fabio; Ciccarelli, Matteo |
Abstract: | This paper provides an overview of the panel VAR models used in macroeconomics and finance. It discusses their distinctive features, what they are used for, and how they can be derived from economic theory. It also describes how they are estimated and how shock identification is performed, and compares panel VARs to other approaches used in the literature to deal with dynamic models involving heterogeneous units. Finally, it shows how structural time variation can be dealt with and illustrates the challenges that panel VARs present to researchers interested in studying cross-unit dynamic interdependences in heterogeneous setups. |
Keywords: | Bayesian methods; dynamic models; Panel vector autoregression |
JEL: | C5 E3 |
Date: | 2013–03 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:9380&r=ecm |
By: | Barhoumi, K.; Darné, O.; Ferrara, L. |
Abstract: | In recent years, the increasing size of available economic and financial databases has led econometricians to develop and adapt new methods to efficiently summarize the information contained in these large datasets. Among these methods, dynamic factor models have developed rapidly and enjoyed wide success among macroeconomists. In this paper, we carry out a review of the recent literature on dynamic factor models. First we present the models used, then the parameter estimation methods, and finally the statistical tests available to choose the number of factors. In the last section, we focus on recent empirical applications, especially those dealing with the building of economic outlook indicators, macroeconomic forecasting, and macroeconomic and monetary policy analyses. |
Keywords: | Dynamic factor models, estimation, tests for the number of factors, macroeconomic applications. |
JEL: | C13 C51 C32 E66 F44 |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:bfr:banfra:430&r=ecm |
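Principal components is the workhorse estimator in the literature this survey reviews; a minimal version of static PCA factor extraction on a large standardized panel is sketched below, with simulated data as an assumption for illustration.

```python
# Extract k factors from an (T x N) panel by principal components and check
# recovery of the true factors up to rotation.
import numpy as np

rng = np.random.default_rng(13)
T, N, k = 200, 80, 2
F = rng.standard_normal((T, k))                        # latent factors
L = rng.standard_normal((N, k))                        # loadings
X = F @ L.T + rng.standard_normal((T, N))              # observed panel

Z = (X - X.mean(0)) / X.std(0)                         # standardize variables
eigval, eigvec = np.linalg.eigh(Z.T @ Z / T)           # N x N covariance
F_hat = Z @ eigvec[:, -k:]                             # top-k principal components

# Factors are identified only up to rotation: regress true on estimated.
beta, *_ = np.linalg.lstsq(F_hat, F, rcond=None)
resid = F - F_hat @ beta
print("R^2 per factor:", 1 - resid.var(0) / F.var(0))
```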
By: | Roberto Casarin (University Ca' Foscari of Venice and GRETA); Stefano Grassi (CREATES, Aarhus University); Francesco Ravazzolo (Norges Bank, and BI Norwegian Business School); Herman K. van Dijk (Erasmus University Rotterdam, and VU University Amsterdam) |
Abstract: | This paper presents the Matlab package DeCo (Density Combination), which is based on the paper by Billio et al. (2013), where a constructive Bayesian approach is presented for combining predictive densities originating from different models or other sources of information. The combination weights are time-varying and may depend on past predictive forecasting performance and other learning mechanisms. The core algorithm is the function DeCo, which applies banks of parallel Sequential Monte Carlo algorithms to filter the time-varying combination weights. The DeCo procedure has been implemented both for standard CPU computing and for Graphics Processing Unit (GPU) parallel computing. For the GPU implementation we use the Matlab parallel computing toolbox and show how to use General Purpose GPU computing almost effortlessly. The GPU implementation speeds up execution by up to seventy times compared to a standard CPU Matlab implementation on a multicore CPU. We demonstrate the use of the package and the computational gain of the GPU version through simulation experiments and empirical applications. |
Keywords: | Density Forecast Combination; Sequential Monte Carlo; Parallel Computing; GPU; Matlab |
JEL: | C11 C15 C53 E37 |
Date: | 2013–04–09 |
URL: | http://d.repec.org/n?u=RePEc:dgr:uvatin:20130055&r=ecm |
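A deliberately simplified density combination with performance-driven, time-varying weights: here the weights follow exponentially discounted past log scores. This only gestures at DeCo, whose weights evolve under a full Bayesian nonlinear filter implemented with parallel SMC; it is not a reimplementation, and the two fixed predictive densities are assumptions for illustration.

```python
# Time-varying combination weights from discounted log scores, on data with a
# mean break so the weights visibly migrate between the two models.
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
T = 300
y = np.where(np.arange(T) < 150,
             rng.normal(0.0, 1.0, T), rng.normal(1.0, 1.0, T))

pred = [stats.norm(0.0, 1.0), stats.norm(1.0, 1.0)]   # two predictive densities
logscore_sum = np.zeros(2)
delta = 0.95                                   # discount factor on past scores
weights = np.zeros((T, 2))

for t in range(T):
    w = np.exp(logscore_sum - logscore_sum.max())
    weights[t] = w / w.sum()                   # combination weights at time t
    logscore = np.array([d.logpdf(y[t]) for d in pred])
    logscore_sum = delta * logscore_sum + logscore   # learning mechanism

print("weights at t=149:", np.round(weights[149], 2),
      " at t=299:", np.round(weights[299], 2))
```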
By: | Marek Jarocinski; Albert Marcet |
Abstract: | Standard practice in Bayesian VARs is to formulate priors on the autoregressive parameters, but economists and policy makers actually have priors about the behavior of observable variables. We show how this kind of prior can be used in a VAR under strict probability theory principles. We state the inverse problem to be solved and we propose a numerical algorithm that works well in practical situations with a very large number of parameters. We prove various convergence theorems for the algorithm. As an application, we first show that the results in Christiano et al. (1999) are very sensitive to the introduction of various priors that are widely used. These priors turn out to be associated with undesirable priors on observables. But an empirical prior on observables helps clarify the relevance of these estimates: we find much higher persistence of output responses to monetary policy shocks than the one reported in Christiano et al. (1999) and a significantly larger total effect. |
Keywords: | Vector Autoregression, Bayesian Estimation, Prior about Observables, Inverse Problem, Monetary Policy Shocks |
JEL: | C11 C22 C32 |
Date: | 2013–03–18 |
URL: | http://d.repec.org/n?u=RePEc:aub:autbar:929.13&r=ecm |