New Economics Papers on Econometrics
By: | Marmer, Vadim; Otsu, Taisuke |
Abstract: | This paper considers optimal testing of model comparison hypotheses for misspecified unconditional moment restriction models. We adopt the generalized Neyman-Pearson optimality criterion, which focuses on the convergence rates of the type I and II error probabilities under fixed global alternatives, and derive an optimal but practically infeasible test. We then propose feasible test statistics that approximate the optimal one. For linear instrumental variable regression models, the conventional empirical likelihood ratio test statistic emerges. For general nonlinear moment restrictions, we propose a new test statistic based on an iterative algorithm. We derive the asymptotic properties of these test statistics. |
Keywords: | Moment restriction; Model comparison; Misspecification; Generalized Neyman-Pearson optimality; Empirical likelihood; GMM |
JEL: | C12 C14 C52 |
Date: | 2008–10–01 |
URL: | http://d.repec.org/n?u=RePEc:ubc:pmicro:vadim_marmer-2008-13&r=ecm |
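The empirical likelihood machinery behind the entry above can be illustrated with a minimal sketch. The code below computes Owen's empirical log-likelihood ratio for a simple mean restriction g(x, θ) = x − θ via the inner (dual) optimization over the Lagrange multiplier. It is a generic EL computation, not the authors' optimal model-comparison statistic, and the `log_star` smoothing is a standard numerical device assumed here for robustness.

```python
import numpy as np
from scipy.optimize import minimize

def log_star(z, eps=1e-4):
    """Owen's pseudo-logarithm: log(z) for z >= eps, quadratic below.

    Keeps the dual objective finite when 1 + lambda * g_i <= 0."""
    return np.where(z > eps,
                    np.log(np.maximum(z, eps)),
                    np.log(eps) - 1.5 + 2.0 * z / eps - z**2 / (2.0 * eps**2))

def el_log_ratio(g):
    """Empirical log-likelihood ratio for the restriction E[g] = 0.

    g : (n,) array of moment-function values g(x_i, theta).
    Returns sum_i log(n * w_i) <= 0, where w_i are the EL weights."""
    def neg_dual(lam):
        # The multiplier maximizes the concave dual sum_i log(1 + lam * g_i).
        return -np.sum(log_star(1.0 + lam[0] * g))
    res = minimize(neg_dual, x0=[0.0], method="BFGS")
    return res.fun  # = -max_lambda sum log(1 + lambda g) = log EL ratio

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=200)

# Test H0: E[x] = 0; under H0, -2 * logELR is asymptotically chi-squared(1).
lr = -2.0 * el_log_ratio(x - 0.0)
print(f"EL ratio statistic: {lr:.2f} (chi2(1) 5% critical value: 3.84)")
```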
By: | Roberto Casarin; Domenico Sartore |
Abstract: | This work deals with multivariate stochastic volatility models, which account for a time-varying variance-covariance structure of the observable variables. We focus on a special class of models recently proposed in the literature and assume that the covariance matrix is a latent variable which follows an autoregressive Wishart process. We review two alternative stochastic representations of the Wishart process and propose Markov-Switching Wishart processes to capture different regimes in the volatility level. We apply a full Bayesian inference approach, which relies upon Sequential Monte Carlo (SMC) for matrix-valued distributions and allows us to sequentially estimate both the parameters and the latent variables. |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:ubs:wpaper:0816&r=ecm |
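As a rough illustration of the kind of process the entry above studies, the sketch below simulates a covariance matrix following a Wishart autoregression with a two-state Markov-switching volatility level. The mean-reverting scale recursion and all parameter values are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(1)

k, T, nu = 2, 500, 10                # dimension, sample size, degrees of freedom
S_bar = np.array([[1.0, 0.3],        # long-run covariance level (assumed)
                  [0.3, 1.0]])
rho = 0.8                            # persistence of the scale recursion
levels = np.array([1.0, 3.0])        # regime multipliers: calm vs. volatile
P = np.array([[0.98, 0.02],          # Markov transition matrix for the regime
              [0.05, 0.95]])

Sigma = S_bar.copy()
state = 0
draws = []
for t in range(T):
    state = rng.choice(2, p=P[state])                        # regime switch
    S_t = (1 - rho) * levels[state] * S_bar + rho * Sigma    # AR scale
    # Conditional mean of Wishart(nu, S_t / nu) is S_t.
    Sigma = wishart.rvs(df=nu, scale=S_t / nu, random_state=rng)
    draws.append(Sigma)

vols = np.sqrt([S[0, 0] for S in draws])
print("mean/max simulated volatility of series 1:", vols.mean(), vols.max())
```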
By: | Dong Jin Lee (University of Connecticut) |
Abstract: | This paper proposes asymptotically optimal tests for unstable parameter processes under the realistic circumstance that the researcher has little information about the unstable parameter process and the error distribution, and suggests conditions under which knowledge of those processes provides no asymptotic power gains. I first derive a test under a known error distribution, which is asymptotically equivalent to LR tests for correctly identified unstable parameter processes under suitable conditions. The conditions are weak enough to cover a wide range of unstable processes, such as various types of structural breaks and time-varying parameter processes. The test is then extended to semiparametric models in which the underlying distribution is unknown and treated as an infinite-dimensional nuisance parameter. The semiparametric test is adaptive in the sense that its asymptotic power function is equivalent to the power envelope under a known error distribution. |
Keywords: | Adaptation, optimal test, parameter instability, semiparametric model, semiparametric power envelope, structural break, time-varying parameter |
JEL: | C12 C14 C22 |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:uct:uconnp:2008-40&r=ecm |
By: | Monica Billio; Roberto Casarin |
Abstract: | We apply sequential Monte Carlo (SMC) to the detection of turning points in the business cycle and to the evaluation of useful statistics employed in business cycle analysis. The proposed nonlinear filtering method is very useful for sequentially estimating the latent variables and the parameters of nonlinear and non-Gaussian time-series models, such as the Markov-switching (MS) models studied in this work. We show how to combine SMC with Markov chain Monte Carlo (MCMC) for estimating time series models with MS latent factors. We illustrate the effectiveness of the methodology and measure, in a full Bayesian and real-time context, the ability of a pool of MS models to identify turning points in European economic activity. We also compare our results with the business cycle dating available in the literature and provide a sequential evaluation of the forecast accuracy of the competing MS models. |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:ubs:wpaper:0815&r=ecm |
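A bootstrap particle filter for a toy two-state Markov-switching mean model conveys the flavour of the SMC machinery described above. The model, parameter values, and the simple multinomial resampling step are assumptions of this sketch; the paper treats richer MS models and estimates the parameters jointly with the states.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: y_t = mu[s_t] + sigma * eps_t, with s_t a 2-state Markov chain.
mu = np.array([-0.5, 1.5])
sigma = 1.0
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])

# Simulate data.
T = 300
s = np.zeros(T, dtype=int)
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])
y = mu[s] + sigma * rng.normal(size=T)

# Bootstrap particle filter over the latent regime.
N = 2000
particles = rng.choice(2, size=N)          # initial regime draws
filt_prob = np.empty(T)                    # P(s_t = 1 | y_1..y_t)
for t in range(T):
    # Propagate each particle through the Markov transition.
    u = rng.random(N)
    particles = np.where(u < P[particles, 1], 1, 0)
    # Weight by the Gaussian measurement density (constants cancel).
    w = np.exp(-0.5 * ((y[t] - mu[particles]) / sigma) ** 2)
    w /= w.sum()
    # Multinomial resampling.
    particles = particles[rng.choice(N, size=N, p=w)]
    filt_prob[t] = particles.mean()

hit = np.mean((filt_prob > 0.5) == (s == 1))
print(f"fraction of periods with correctly filtered regime: {hit:.2f}")
```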
By: | Gianni Amisano; Roberto Casarin |
Abstract: | This work deals with multivariate stochastic volatility models that account for time-varying stochastic correlation between the observable variables. We focus on bivariate models. One contribution of the work is to introduce Beta and Gamma autoregressive processes for modelling the correlation dynamics. Another contribution is to allow the parameters of the correlation process to be governed by a Markov-switching process. Finally, we propose a simulation-based Bayesian approach, called regularised sequential Monte Carlo, which is suitable for on-line estimation and model selection. |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:ubs:wpaper:0814&r=ecm |
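One plausible construction of a Beta autoregressive correlation process — an illustrative assumption, not necessarily the authors' parameterisation — maps the correlation to the unit interval and lets the conditional Beta mean follow an AR recursion:

```python
import numpy as np

rng = np.random.default_rng(3)

T, kappa, phi = 1000, 50.0, 0.9     # precision and persistence (assumed values)
eta_bar = 0.75                      # long-run mean on the unit interval

eta = np.empty(T)
eta[0] = eta_bar
for t in range(1, T):
    m = (1 - phi) * eta_bar + phi * eta[t - 1]   # conditional mean in (0, 1)
    # Beta(kappa*m, kappa*(1-m)) has mean m, with variance shrinking in kappa.
    eta[t] = rng.beta(kappa * m, kappa * (1 - m))

rho = 2.0 * eta - 1.0               # map (0,1) -> (-1,1): a correlation path
print(f"correlation path: mean {rho.mean():.2f}, min {rho.min():.2f}, "
      f"max {rho.max():.2f}")
```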
By: | Yasuhiro Omori (Faculty of Economics, University of Tokyo); Koji Miyawaki (Graduate School of Economics, University of Tokyo) |
Abstract: | Tobit models are extended to allow threshold values that depend on individuals' characteristics. In such models, the parameters are subject to as many inequality constraints as there are observations, and maximum likelihood estimation, which requires numerical maximisation of the likelihood, is often difficult to implement. Using a Bayesian approach, a Gibbs sampler algorithm is proposed and, further, convergence to the posterior distribution is accelerated by introducing an additional scale transformation step. The procedure is illustrated using simulated data, wage data and prime rate change data. |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2008cf594&r=ecm |
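For orientation, here is a minimal data-augmentation Gibbs sampler for the standard Tobit model with a known threshold at zero and flat priors. The paper's extension to individual-specific thresholds adds the inequality-constrained threshold parameters and the scale-transformation acceleration step, both of which this sketch omits.

```python
import numpy as np
from scipy.stats import truncnorm, invgamma

rng = np.random.default_rng(4)

# Simulate Tobit data: y = max(y_star, 0), y_star = X beta + eps.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, sigma_true = np.array([-0.5, 1.0]), 1.0
y_star_true = X @ beta_true + sigma_true * rng.normal(size=n)
y = np.maximum(y_star_true, 0.0)
cens = y <= 0.0

XtX_inv = np.linalg.inv(X.T @ X)
beta, sig2 = np.zeros(2), 1.0
keep = []
for it in range(2000):
    # 1. Impute latent y_star for censored observations:
    #    N(x'beta, sig2) truncated to (-inf, 0].
    mu_c = X[cens] @ beta
    b = (0.0 - mu_c) / np.sqrt(sig2)
    y_aug = y.copy()
    y_aug[cens] = truncnorm.rvs(-np.inf, b, loc=mu_c, scale=np.sqrt(sig2),
                                random_state=rng)
    # 2. beta | y_star, sig2 ~ Normal (flat prior).
    beta_hat = XtX_inv @ X.T @ y_aug
    beta = rng.multivariate_normal(beta_hat, sig2 * XtX_inv)
    # 3. sig2 | y_star, beta ~ Inverse-Gamma (prior proportional to 1/sig2).
    resid = y_aug - X @ beta
    sig2 = invgamma.rvs(a=n / 2.0, scale=resid @ resid / 2.0, random_state=rng)
    if it >= 500:
        keep.append(beta)

print("posterior mean of beta:", np.mean(keep, axis=0), "truth:", beta_true)
```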
By: | T M Christensen (QUT); A S Hurn (QUT); K A Lindsay (University of Glasgow) |
Abstract: | Count data in economics have traditionally been modeled by means of integer-valued autoregressive models. Consequently, the estimation of the parameters of these models and their asymptotic properties have been well documented in the literature. The models comprise a description of the survival of counts generally in terms of a binomial thinning process and an independent arrivals process usually specified in terms of a Poisson distribution. This paper extends the existing class of models to encompass situations in which counts are latent and all that is observed is the presence or absence of counts. This is a potentially important modification as many interesting economic phenomena may have a natural interpretation as a series of ‘events’ that are driven by an underlying count process which is unobserved. Arrivals of the latent counts are modeled either in terms of the Poisson distribution, where multiple counts may arrive in the sampling interval, or in terms of the Bernoulli distribution, where only one new arrival is allowed in each sampling interval. The models with latent counts are then applied in two practical illustrations: modeling volatility in financial markets as a function of unobservable ‘news’, and modeling abnormal price spikes in electricity markets as driven by latent ‘stress’. |
Keywords: | Integer-valued autoregression, Poisson distribution, Bernoulli distribution, latent factors, maximum likelihood estimation |
JEL: | C13 C25 C32 |
Date: | 2008–09–15 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2008-24&r=ecm |
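The core data-generating process is easy to state in code. The sketch below simulates a Poisson-arrival INAR(1) with binomial thinning and then discards the counts, keeping only the presence/absence indicator that the paper treats as observable (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

T, alpha, lam = 1000, 0.6, 0.4   # survival probability and Poisson arrival rate

y = np.zeros(T, dtype=int)
for t in range(1, T):
    survivors = rng.binomial(y[t - 1], alpha)   # binomial thinning of old counts
    arrivals = rng.poisson(lam)                 # new arrivals this interval
    y[t] = survivors + arrivals

observed = (y > 0).astype(int)   # only presence/absence of counts is observed
print(f"mean latent count: {y.mean():.2f}, "
      f"fraction of periods with an 'event': {observed.mean():.2f}")
```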
By: | Debby Lanser; Henk Kranendonk |
Abstract: | Uncertainty is an inherent attribute of any forecast. In this paper, we investigate four sources of uncertainty with CPB’s macroeconomic model SAFFIER: provisional data, exogenous variables, model parameters and residuals of behavioural equations. We apply a Monte Carlo simulation technique to calculate standard errors for the short-term and medium-term horizon for GDP and eight other macroeconomic variables. The results demonstrate that the main contribution to the total variance of a medium-term forecast emanates from the uncertainty in the exogenous variables. For the short-term forecast, both exogenous variables and provisional data are most relevant. |
Keywords: | Monte Carlo simulation; Macroeconomic forecasting; Model uncertainty |
JEL: | C15 C53 E20 E27 |
Date: | 2008–09 |
URL: | http://d.repec.org/n?u=RePEc:cpb:discus:112&r=ecm |
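The variance-decomposition logic can be mimicked on a toy forecasting equation: switch each uncertainty source on in isolation, simulate, and compare forecast variances. The one-equation model and all numbers below are stand-ins for the SAFFIER exercise, not taken from the paper (and the paper's fourth source, provisional data, is folded away here).

```python
import numpy as np

rng = np.random.default_rng(6)
R, H = 10000, 8                      # Monte Carlo replications, forecast horizon

def forecast(exog_unc, param_unc, resid_unc):
    """Final value of the H-step path y_t = rho*y_{t-1} + beta*x_t + eps_t."""
    # Parameter uncertainty: one draw per replication, fixed over the horizon.
    rho = 0.7 + (0.05 * rng.normal(size=R) if param_unc else 0.0)
    beta = 0.5 + (0.05 * rng.normal(size=R) if param_unc else 0.0)
    y = np.zeros(R)
    for _ in range(H):
        x = 1.0 + (0.3 * rng.normal(size=R) if exog_unc else 0.0)
        eps = 0.2 * rng.normal(size=R) if resid_unc else 0.0
        y = rho * y + beta * x + eps
    return y

total = forecast(True, True, True).var()
for name, flags in [("exogenous variables", (True, False, False)),
                    ("model parameters", (False, True, False)),
                    ("residuals", (False, False, True))]:
    share = forecast(*flags).var() / total
    print(f"{name:20s} variance share: {share:.2f}")
```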
By: | Denis Conniffe (Economics, National University of Ireland, Maynooth); Donal O’Neill (Economics, National University of Ireland, Maynooth) |
Abstract: | A common approach to dealing with missing data in econometrics is to estimate the model on the common subset of data, by necessity throwing away potentially useful data. In this paper we consider a particular pattern of missing data on explanatory variables that often occurs in practice and develop a new efficient estimator for models where the dependent variable is binary. We derive exact formulae for the estimator and its asymptotic variance. Simulation results show that our estimator performs well when compared to popular alternatives, such as complete case analysis and multiple imputation. We then use our estimator to examine the portfolio allocation decision of Italian households using the Survey of Household Income and Wealth carried out by the Bank of Italy. |
Keywords: | Missing Data, Probit Model, Portfolio Allocation, Risk Aversion |
JEL: | C25 G11 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:may:mayecw:n1960908.pdf&r=ecm |
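To see the cost of the complete-case analysis that the entry above improves upon, the following sketch fits a probit on a full simulated sample and on the subsample with a covariate observed; the authors' estimator (not reproduced here) aims to recover the lost precision. The missingness pattern and all parameters are assumptions for illustration.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(7)

n = 2000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
p = norm.cdf(0.5 + 1.0 * x1 - 0.8 * x2)
y = rng.binomial(1, p)

# x2 is missing completely at random for 40% of observations.
observed = rng.random(n) > 0.4

X_full = sm.add_constant(np.column_stack([x1, x2]))
full = sm.Probit(y, X_full).fit(disp=0)                    # infeasible benchmark
cc = sm.Probit(y[observed], X_full[observed]).fit(disp=0)  # complete cases only

print("benchmark std. errors:    ", full.bse.round(3))
print("complete-case std. errors:", cc.bse.round(3))
```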
By: | Sergio Chavez (School of Finance and Economics, University of Technology, Sydney); Eckhard Platen (School of Finance and Economics, University of Technology, Sydney) |
Abstract: | This paper points out that the pseudo-random number generators in widely used standard software can generate severe deviations from targeted distributions when used in parallel implementations. In Monte Carlo simulation of random walks for financial applications, this can lead to substantial errors that are not reduced by increasing the sample size. Instead of standard routines, the paper suggests combined feedback shift register methods, based on particular polynomials of degree twelve, for generating random bits in parallel. The use of natural random numbers as seeds is suggested. The resulting hybrid random bit generators are suitable for parallel implementation in random walk type applications. They show better distributional properties than those typically available and can produce massive streams of random numbers in parallel, suitable for Monte Carlo simulation in finance. |
Keywords: | Pseudo-random number generators; parallel random bit generators; Monte Carlo simulation; feedback shift register method |
JEL: | G10 G13 |
Date: | 2008–07–01 |
URL: | http://d.repec.org/n?u=RePEc:uts:rpaper:228&r=ecm |
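A Fibonacci linear feedback shift register conveys the mechanics. The degree-12 tap set below corresponds to the polynomial x^12 + x^6 + x^4 + x + 1, which appears in published tables of primitive polynomials over GF(2); the specific polynomials, combination scheme, and seeding used in the paper may differ, and the XOR of two registers shown here is only the generic combined-generator construction (in practice the registers would use distinct primitive polynomials and hardware-random seeds).

```python
def lfsr_bits(taps, seed, nbits):
    """Fibonacci LFSR over GF(2).

    taps : exponents of the feedback polynomial (constant term implied)
    seed : nonzero initial register contents in [1, 2**degree - 1]
    Yields nbits pseudo-random bits."""
    degree = max(taps)
    state = seed & ((1 << degree) - 1)
    assert state != 0, "the all-zero state is absorbing"
    for _ in range(nbits):
        yield state & 1
        # Feedback bit: XOR of the tapped stages.
        fb = 0
        for t in taps:
            fb ^= (state >> (degree - t)) & 1
        state = (state >> 1) | (fb << (degree - 1))

# Degree-12 taps for x^12 + x^6 + x^4 + x + 1 (primitive per published tables).
TAPS = (12, 6, 4, 1)

# Combined generator: XOR the bit streams of two registers.
stream = [a ^ b for a, b in zip(lfsr_bits(TAPS, seed=0x0ACE, nbits=4095),
                                lfsr_bits(TAPS, seed=0x0123, nbits=4095))]
print("fraction of ones over one period:", sum(stream) / len(stream))
```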
By: | Cobus Burger (Department of Economics, University of Stellenbosch) |
Abstract: | Conventional wage analysis suffers from a debilitating ailment: since there are no observable market wages for individuals who do not work, findings are limited to the sample of the population that is employed. Because of sample selection bias, using this subsample of working individuals to draw conclusions about the entire population leads to inconsistent estimates. Remedial procedures have been developed to address this issue. Unfortunately, these models rely strongly on the assumed parametric distribution of the unobservable residuals as well as on the existence of an exclusion restriction, delivering biased estimates if either of these assumptions is violated. This has given rise to a recent interest in semi-parametric estimation methods that do not make any distributional assumptions and are thus less sensitive to deviations from normality. This paper investigates several proposed solutions to the sample selection problem in an attempt to identify the best model of earnings for South African data. |
Keywords: | Semiparametric and nonparametric methods; Simulation methods; Truncated and censored models; Labour force and employment, size, and structure |
JEL: | C14 C15 C34 J21 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:sza:wpaper:wpapers66&r=ecm |
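The parametric benchmark that the entry above departs from is Heckman's two-step estimator. A compact sketch on simulated data follows; the selection design, the exclusion restriction, and all parameter values are assumptions of the sketch.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(8)

# Selection model: wages observed only if d = 1(z'gamma + u > 0),
# with corr(u, eps) = 0.6 generating selection bias in naive OLS.
n = 5000
x = rng.normal(size=n)
z_extra = rng.normal(size=n)                 # exclusion restriction
u, e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n).T
d = (0.5 + 0.8 * x + 1.0 * z_extra + u > 0).astype(int)
wage = 1.0 + 0.5 * x + e                     # latent wage equation

# Step 1: probit for selection, then the inverse Mills ratio.
Z = sm.add_constant(np.column_stack([x, z_extra]))
probit = sm.Probit(d, Z).fit(disp=0)
idx = Z @ probit.params
imr = norm.pdf(idx) / norm.cdf(idx)

# Step 2: OLS on the selected sample, augmented with the Mills ratio.
sel = d == 1
W = sm.add_constant(np.column_stack([x[sel], imr[sel]]))
step2 = sm.OLS(wage[sel], W).fit()
naive = sm.OLS(wage[sel], sm.add_constant(x[sel])).fit()

print("naive OLS slope:", round(naive.params[1], 3))
print("Heckman slope:  ", round(step2.params[1], 3), "(truth: 0.5)")
```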
By: | Richard Blundell (University College London and Institute for Fiscal Studies); Mónica Costa Dias (CETE, Faculdade de Economia - Universidade do Porto and Institute for Fiscal Studies) |
Abstract: | This paper reviews a range of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching methods, instrumental variables, discontinuity design and control functions. It discusses the identification of both the traditionally used average parameters and more complex distributional parameters. In each case, the necessary assumptions and the data requirements are considered. The adequacy of each approach is discussed drawing on the empirical evidence from the education and labor market policy evaluation literature. We also develop an education evaluation model which we use to carry through the discussion of each alternative approach. A full set of STATA datasets is provided free online, containing Monte Carlo replications of the various specifications of the education evaluation model. There is also a full set of STATA .do files for each of the estimation approaches described in the paper. The .do files can be used together with the datasets to reproduce all the results in the paper. |
Keywords: | Evaluation methods, policy evaluation, matching methods, instrumental variables, social experiments, natural experiments, difference-in-differences, discontinuity design, control function. |
JEL: | J21 J64 C33 |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:por:cetedp:0805&r=ecm |
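As a reminder of the mechanics of one of the reviewed estimators, here is a minimal difference-in-differences regression on simulated data; the design (two groups, two periods, parallel trends imposed) and the effect size are assumptions of the sketch, not the paper's education model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)

n = 4000
treated = rng.binomial(1, 0.5, size=n)       # treatment group indicator
post = rng.binomial(1, 0.5, size=n)          # post-policy period indicator
effect = 2.0                                 # true treatment effect

# Parallel trends: group gap and common time trend, effect only for treated-post.
y = (1.0 + 0.5 * treated + 1.5 * post
     + effect * treated * post + rng.normal(size=n))

X = sm.add_constant(np.column_stack([treated, post, treated * post]))
res = sm.OLS(y, X).fit()
print(f"DiD estimate (interaction coefficient): {res.params[3]:.2f} "
      f"(truth: {effect})")
```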
By: | Raj Chetty |
Abstract: | The debate between "structural" and "reduced-form" approaches has generated substantial controversy in applied economics. This article reviews a recent literature in public economics that combines the advantages of reduced-form strategies -- transparent and credible identification -- with an important advantage of structural models -- the ability to make predictions about counterfactual outcomes and welfare. This recent work has developed formulas for the welfare consequences of various policies that are functions of high-level elasticities rather than deep primitives. These formulas provide theoretical guidance for the measurement of treatment effects using program evaluation methods. I present a general framework that shows how many policy questions can be answered by identifying a small set of sufficient statistics. I use this framework to synthesize the modern literature on taxation, social insurance, and behavioral welfare economics. Finally, I discuss topics in labor economics, industrial organization, and macroeconomics that can be tackled using the sufficient statistic approach. |
JEL: | C1 H0 J0 L0 |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:14399&r=ecm |
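The flavour of the sufficient-statistic approach can be seen in the textbook Harberger approximation, where the deadweight loss of a small tax depends on behaviour only through one elasticity. The numbers below are arbitrary, and the formula is the standard triangle approximation rather than anything specific to the article.

```python
# Harberger triangle: the DWL of a small ad valorem tax t in a market with
# (compensated) demand elasticity eps and pre-tax expenditure p*q is
# approximately 0.5 * eps * t**2 * p * q. No deep structural parameters
# (preferences, technology) are needed -- the elasticity is "sufficient".
def deadweight_loss(elasticity, tax_rate, expenditure):
    return 0.5 * elasticity * tax_rate**2 * expenditure

print(deadweight_loss(elasticity=0.5, tax_rate=0.20, expenditure=1_000_000))
# -> 10000.0: a 20% tax with elasticity 0.5 burns ~1% of expenditure.
```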
By: | Michael Greenacre |
Abstract: | Correspondence analysis has found extensive use in the social and environmental sciences as a method for visualizing the patterns of association in a table of frequencies. Inherent to the method is the expression of the frequencies in each row or each column relative to their respective totals, and it is these sets of relative frequencies (called profiles) that are visualized. This “relativization” of the frequencies makes perfect sense in social science applications where sample sizes vary across different demographic groups, and so the frequencies need to be expressed relative to these different bases in order to make these groups comparable. But in ecological applications sampling is usually performed on equal areas or equal volumes so that the absolute abundances of the different species are of relevance, in which case relativization is optional. In this paper we define the correspondence analysis of raw abundance data and discuss its properties, comparing these with the regular correspondence analysis based on relative abundances. |
Keywords: | Abundance data, biplot, profiles, visualization |
JEL: | C19 C88 |
Date: | 2008–09 |
URL: | http://d.repec.org/n?u=RePEc:upf:upfgen:1112&r=ecm |
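Standard correspondence analysis — the relativized baseline the paper compares against — reduces to an SVD of standardized residuals. A compact sketch, assuming a small toy abundance table:

```python
import numpy as np

# Toy species-by-site abundance table (rows: species, columns: sites).
N = np.array([[10.0, 4.0, 1.0],
              [3.0, 8.0, 2.0],
              [1.0, 2.0, 9.0]])

P = N / N.sum()                     # correspondence matrix
r = P.sum(axis=1)                   # row masses
c = P.sum(axis=0)                   # column masses

# Standardized residuals from the independence model r c'.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, D, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates: profiles scaled so distances are chi-squared distances.
row_coords = (U / np.sqrt(r)[:, None]) * D
col_coords = (Vt.T / np.sqrt(c)[:, None]) * D

print("total inertia:", np.sum(D**2).round(4))
print("row principal coordinates:\n", row_coords[:, :2].round(3))
```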
By: | Xavier De Scheemaekere (Centre Emile Bernheim, Solvay Business School, Université Libre de Bruxelles, Brussels.); Ariane Szafarz (Centre Emile Bernheim, Solvay Business School, Université Libre de Bruxelles, Brussels and DULBEA, Université Libre de Bruxelles) |
Abstract: | This paper sheds new light on the gap between a priori and a posteriori probabilities by concentrating on the evolution of the mathematical concept. It identifies the illegitimate use of Bernoulli’s law of large numbers as the probabilists’ original sin. The resulting confusion about the mathematical foundation of statistical inference was detrimental to Laplace’s definition of probability in terms of equi-possible outcomes as well as to von Mises’ frequentist approach. By contrast, Kolmogorov’s analytical axiomatization of probability theory enables a priori and a posteriori probabilities to relate to each other without contradiction, allowing a consistent mathematical specification of the dual nature of probability. Only in Kolmogorov’s formalism, therefore, is statistical inference rigorously framed. |
Keywords: | Probability, Bernoulli’s Theorem, Mathematics, Statistics. |
JEL: | N01 B31 C65 |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:sol:wpaper:08-029&r=ecm |
By: | Clements, Michael P. (Department of Economics,University of Warwick) |
Abstract: | A comparison of the point forecasts and the central tendencies of the probability distributions of inflation and output growth in the Survey of Professional Forecasters (SPF) indicates that the point forecasts are sometimes optimistic relative to the probability distributions. We consider and evaluate a number of possible explanations for this finding, including the degree of uncertainty concerning the future, computational costs, delayed updating, and asymmetric loss. We also consider the relative accuracy of the two sets of forecasts. |
Keywords: | Rationality ; point forecasts ; probability distributions |
JEL: | C53 E32 E37 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:wrk:warwec:870&r=ecm |
By: | Clements, Michael P. (Department of Economics,University of Warwick) |
Abstract: | We consider the possibility that respondents to the Survey of Professional Forecasters round their probability forecasts of the event that real output will decline in the future. We make various assumptions about how forecasters round their forecasts, including that individuals have constant patterns of responses across forecasts. Our primary interest is in the impact of rounding on assessments of the internal consistency of the probability forecasts of a decline in real output and of the histograms for annual real output growth, and on the relationship between the probability forecasts and the point forecasts of quarterly output growth. |
Keywords: | Rounding ; probability forecasts ; probability distributions |
JEL: | C53 E32 E37 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:wrk:warwec:869&r=ecm |