New Economics Papers on Econometrics
By: | Donald W.K. Andrews (Cowles Foundation, Yale University); Xiaoxia Shi (Dept. of Economics, University of Wisconsin, Madison) |
Abstract: | This paper develops methods of inference for nonparametric and semiparametric parameters defined by conditional moment inequalities and/or equalities. The parameters need not be identified. Confidence sets (CSs) and tests are introduced. The correct uniform asymptotic size of these procedures is established. The false coverage probabilities and power of the CSs and tests are established for fixed alternatives and some local alternatives. Finite-sample simulation results are given for a nonparametric conditional quantile model with censoring and a nonparametric conditional treatment effect model. The recommended CS/test uses a Cramér-von Mises-type test statistic and employs a generalized moment selection critical value. |
Keywords: | Asymptotic size, Kernel, Local power, Moment inequalities, Nonparametric inference, Partial identification |
JEL: | C12 C15 |
Date: | 2011–12 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1840&r=ecm |
By: | Norets, Andriy (Department of Economics, Princeton University, Princeton, USA); Pelenis, Justinas (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria) |
Abstract: | This paper considers Bayesian nonparametric estimation of conditional densities by countable mixtures of location-scale densities with covariate dependent mixing probabilities. The mixing probabilities are modeled in two ways. First, we consider finite covariate dependent mixture models, in which the mixing probabilities are proportional to a product of a constant and a kernel and a prior on the number of mixture components is specified. Second, we consider kernel stick-breaking processes for modeling the mixing probabilities. We show that the posterior in these two models is weakly and strongly consistent for a large class of data generating processes. |
Keywords: | Bayesian nonparametrics, posterior consistency, conditional density estimation, mixtures of normal distributions, location-scale mixtures, smoothly mixing regressions, mixtures of experts, dependent Dirichlet process, kernel stick-breaking process |
JEL: | C11 C14 |
Date: | 2011–12 |
URL: | http://d.repec.org/n?u=RePEc:ihs:ihsesp:282&r=ecm |
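The kernel stick-breaking construction mentioned in this abstract can be made concrete in a few lines. Below is a minimal sketch (not the authors' implementation) of covariate-dependent mixing probabilities built from Beta stick fractions and a Gaussian kernel bounded by one; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ksb_weights(x, locs, v, bandwidth=1.0):
    """Kernel stick-breaking mixing probabilities at covariate value x:
    w_j(x) = v_j K(x, l_j) * prod_{l<j} (1 - v_l K(x, l_l)),
    with a Gaussian kernel K taking values in [0, 1]."""
    K = np.exp(-0.5 * ((x - locs) / bandwidth) ** 2)
    breaks = v * K
    # stick-breaking: weight j gets what sticks 0..j-1 left over
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - breaks[:-1])))
    return breaks * remaining

# truncated process with 20 sticks; locations and fractions are toy draws
locs = rng.uniform(0, 10, size=20)   # kernel locations
v = rng.beta(1.0, 1.0, size=20)      # stick-breaking fractions
w = ksb_weights(5.0, locs, v)
print(w.sum())  # < 1; leftover mass goes to a final catch-all component
```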
By: | Cavaliere Giuseppe; Phillips Peter C.B.; Smeekes Stephan; Taylor A.M. Robert (METEOR) |
Abstract: | A number of recently published papers have focused on the problem of testing for a unit root in the case where the driving shocks may be unconditionally heteroskedastic. These papers have, however, assumed that the lag length in the unit root test regression is a deterministic function of the sample size, rather than data-determined, the latter being standard empirical practice. In this paper we investigate the finite sample impact of unconditional heteroskedasticity on conventional data-dependent methods of lag selection in augmented Dickey-Fuller type unit root test regressions and propose new lag selection criteria which allow for the presence of heteroskedasticity in the shocks. We show that standard lag selection methods show a tendency to over-fit the lag order under heteroskedasticity, which results in significant power losses in the (wild bootstrap implementation of the) augmented Dickey-Fuller tests under the alternative. The new lag selection criteria we propose are shown to avoid this problem yet deliver unit root tests with almost identical finite sample size and power properties to those of the corresponding tests based on conventional lag selection methods when the shocks are homoskedastic. |
Keywords: | econometrics; |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:dgr:umamet:2011056&r=ecm |
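For reference, here is a minimal sketch of the conventional data-dependent lag selection the paper takes as its starting point: an AIC search over augmented Dickey-Fuller regressions fitted on a common effective sample. The paper's point is that this kind of criterion tends to over-fit under unconditional heteroskedasticity; the authors' modified criteria are not reproduced here.

```python
import numpy as np

def adf_lag_by_aic(y, pmax=8):
    """Fit ADF regressions dy_t = rho*y_{t-1} + sum_j b_j*dy_{t-j} + e_t
    for p = 0..pmax on a common sample and return the AIC-minimizing p."""
    dy = np.diff(y)
    n = len(dy) - pmax                       # common effective sample size
    best_p, best_aic = 0, np.inf
    for p in range(pmax + 1):
        cols = [y[pmax:-1]]                  # lagged level y_{t-1}
        cols += [dy[pmax - j:-j] for j in range(1, p + 1)]  # lagged differences
        X = np.column_stack(cols)
        yy = dy[pmax:]
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        resid = yy - X @ beta
        aic = n * np.log(resid @ resid / n) + 2 * (p + 1)
        if aic < best_aic:
            best_p, best_aic = p, aic
    return best_p

y = np.cumsum(np.random.default_rng(0).normal(size=200))  # a random walk
print(adf_lag_by_aic(y))
```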
By: | Kleppe, Tore Selland; Liesenfeld, Roman |
Abstract: | This paper provides high-dimensional and flexible importance sampling procedures for the likelihood evaluation of dynamic latent variable models involving finite or infinite mixtures leading to possibly heavy-tailed and/or multi-modal target densities. Our approach is based upon the efficient importance sampling (EIS) approach of Richard and Zhang (2007) and exploits the mixture structure of the model when constructing importance sampling distributions as mixtures of distributions. The proposed mixture EIS procedures are illustrated with ML estimation of a Student-t state space model for realized volatilities and a stochastic volatility model with leverage effects and jumps for asset returns. |
Keywords: | dynamic latent variable model, importance sampling, marginalized likelihood, mixture, Monte Carlo, realized volatility, stochastic volatility |
JEL: | C15 |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:zbw:cauewp:201111&r=ecm |
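The core idea (importance sampling with a mixture proposal so that heavy-tailed or multi-modal targets are covered) can be illustrated generically. This is a plain self-normalized importance sampler, not the EIS algorithm itself; the target and proposal below are toy choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# target: Student-t with 3 dof (heavy tails); we estimate E[x^2] = 3
target = stats.t(df=3)

# mixture proposal: narrow + wide normal, covering both body and tails
w = np.array([0.7, 0.3])
comps = [stats.norm(0, 1), stats.norm(0, 5)]

def sample_proposal(n):
    # draws both components and selects; wasteful but simple
    idx = rng.choice(2, size=n, p=w)
    return np.where(idx == 0,
                    comps[0].rvs(n, random_state=rng),
                    comps[1].rvs(n, random_state=rng))

def proposal_pdf(x):
    return w[0] * comps[0].pdf(x) + w[1] * comps[1].pdf(x)

x = sample_proposal(100_000)
iw = target.pdf(x) / proposal_pdf(x)       # importance weights
est = np.sum(iw * x**2) / np.sum(iw)       # self-normalized IS estimate
print(est)                                 # close to Var of t(3) = 3
```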
By: | Huber, Martin; Mellace, Giovanni |
Abstract: | This paper proposes tests for instrument validity in sample selection models with non-randomly censored outcomes. Such models commonly invoke an exclusion restriction (i.e., the availability of an instrument affecting selection, but not the outcome) and additive separability of the errors in the selection process. These assumptions allow us to both point identify and bound the outcome distribution of the subpopulation of the always selected, whose outcomes are observed irrespective of the instrument value. As the point must lie within its bounds, this yields two testable inequality constraints. We apply our tests to two instruments conventionally exploited for the estimation of female wage equations: non-wife/husband's income and the number of (young) children. Considering eight empirical applications, our results suggest that the former is not a valid instrument, while the validity of the latter is not refuted on statistical grounds. |
Keywords: | Sample selection, instrument, test |
JEL: | C12 C15 C24 C26 |
Date: | 2011–12 |
URL: | http://d.repec.org/n?u=RePEc:usg:econwp:2011:45&r=ecm |
By: | Jerry A. Hausman; Christopher J. Palmer |
Abstract: | Since the advent of heteroskedasticity-robust standard errors, several papers have proposed adjustments to the original White formulation. We replicate earlier findings that each of these adjusted estimators performs quite poorly in finite samples. We propose a class of alternative heteroskedasticity-robust tests of linear hypotheses based on an Edgeworth expansion of the test statistic's distribution. Our preferred test outperforms existing methods in both size and power for low, moderate, and severe levels of heteroskedasticity. |
JEL: | C01 C12 |
Date: | 2011–12 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:17698&r=ecm |
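For context, the estimators being compared are White's original (HC0) covariance and the standard finite-sample adjustments HC1-HC3; a compact sketch follows. The paper's Edgeworth-expansion-based test is not reproduced here.

```python
import numpy as np

def hc_cov(X, resid, kind="HC3"):
    """White-type heteroskedasticity-robust covariance of OLS coefficients.
    HC0 is White's original estimator; HC1-HC3 are the usual
    finite-sample adjustments (MacKinnon-White)."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverage values h_i
    u2 = resid ** 2
    if kind == "HC1":
        u2 = u2 * n / (n - k)
    elif kind == "HC2":
        u2 = u2 / (1 - h)
    elif kind == "HC3":
        u2 = u2 / (1 - h) ** 2
    meat = (X * u2[:, None]).T @ X
    return XtX_inv @ meat @ XtX_inv
```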
By: | Qian, Hang |
Abstract: | Departure from normality poses implementation barriers to Markowitz mean-variance portfolio selection. When assets are affected by common and idiosyncratic shocks, the distribution of asset returns may exhibit Markov switching regimes and have a Gaussian mixture distribution conditional on each regime. The model is estimated in a Bayesian framework using the Gibbs sampler. An application to global portfolio diversification is also discussed. |
Keywords: | Portfolio; Bayesian; Hidden Markov Model; Gaussian Mixture |
JEL: | G11 C11 |
Date: | 2011–12–24 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:35561&r=ecm |
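A minimal simulation of the return process described in the abstract (a hidden Markov chain over regimes, with a Gaussian mixture conditional on each regime). The transition matrix and mixture parameters are invented for illustration; the Gibbs sampler the paper uses for estimation is not sketched.

```python
import numpy as np

rng = np.random.default_rng(2)

# two regimes (calm / turbulent); within each regime returns follow a
# two-component Gaussian mixture (common + idiosyncratic shocks)
P = np.array([[0.95, 0.05],      # regime transition matrix
              [0.10, 0.90]])
mix_w  = [np.array([0.8, 0.2]),  np.array([0.6, 0.4])]     # mixture weights
mix_mu = [np.array([0.001, -0.002]), np.array([-0.001, -0.01])]
mix_sd = [np.array([0.005, 0.01]),   np.array([0.02, 0.05])]

def simulate(T, s0=0):
    s, r = s0, np.empty(T)
    for t in range(T):
        s = rng.choice(2, p=P[s])         # regime evolves as a Markov chain
        c = rng.choice(2, p=mix_w[s])     # mixture component within regime
        r[t] = rng.normal(mix_mu[s][c], mix_sd[s][c])
    return r

returns = simulate(1000)
```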
By: | Valentin Zelenyuk (CEPA - School of Economics, The University of Queensland); Leopold Simar (CEPA - School of Economics, The University of Queensland) |
Abstract: | In a seminal paper, Racine and Li (Journal of Econometrics, 2004) introduce a tool which admits discrete and categorical variables as regressors in nonparametric regressions. The method is similar to the smoothing techniques for continuous regressors but uses discrete kernels. In the literature, it is generally accepted that it is always better to smooth the discrete variables. In this paper we investigate a potential problem with bandwidth selection for the continuous variables caused by the presence of the discrete variables. We find that in some cases the performance of the resulting regression estimates may deteriorate when the discrete variables are smoothed in the way proposed so far in the literature, and that fully separate estimation (without any smoothing of the discrete variables) may provide significantly better results; we explain why this may happen. We then suggest how to use the Racine and Li approach to overcome these difficulties and to provide estimates with better performance. We investigate the performance of all the proposed approaches through simulated data sets and more extensive Monte Carlo experiments, and find that, as expected, our suggested approach performs best. We also briefly illustrate the consequences of these issues for the estimation of the derivatives of the regression. Finally, we exemplify the phenomenon with an empirical illustration. Our main objective is to warn practitioners of the potential problems posed by smoothing discrete variables with currently available software and to suggest a safer way to implement the procedure. |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:qld:uqcepa:76&r=ecm |
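The discrete kernel at issue is the Aitchison-Aitken type weight used by Racine and Li; the sketch below shows it inside a Nadaraya-Watson estimator with one continuous and one discrete regressor. Setting lam = 0 reproduces the fully separate estimation the authors compare against; the function names are ours.

```python
import numpy as np

def aitchison_aitken(z, zi, c, lam):
    """Discrete kernel of the Racine-Li type: weight 1 - lam when the
    categories match, lam / (c - 1) otherwise, for c categories."""
    return np.where(z == zi, 1.0 - lam, lam / (c - 1))

def nw_mixed(x0, z0, x, z, y, h, lam, c):
    """Nadaraya-Watson estimate at (x0, z0) with a product kernel:
    Gaussian in the continuous regressor, discrete in the categorical one."""
    kx = np.exp(-0.5 * ((x - x0) / h) ** 2)
    kz = aitchison_aitken(z0, z, c, lam)
    w = kx * kz
    return np.sum(w * y) / np.sum(w)

# lam = 0: no smoothing across categories (separate estimation);
# lam = (c - 1) / c: the discrete variable is smoothed away entirely.
```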
By: | John Geweke (University of Technology Sydney, P.O. Box 123, Broadway, NSW 2007, Australia, Erasmus University, Netherlands and University of Colorado, USA.); Gianni Amisano (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.) |
Abstract: | This paper develops a multi-way analysis of variance for non-Gaussian multivariate distributions and provides a practical simulation algorithm to estimate the corresponding components of variance. It specifically addresses variance in Bayesian predictive distributions, showing that it may be decomposed into the sum of extrinsic variance, arising from posterior uncertainty about parameters, and intrinsic variance, which would exist even if parameters were known. Depending on the application at hand, further decomposition of extrinsic or intrinsic variance (or both) may be useful. The paper shows how to produce simulation-consistent estimates of all of these components, and the method demands little additional effort or computing time beyond that already invested in the posterior simulator. It illustrates the methods using a dynamic stochastic general equilibrium model of the US economy, both before and during the global financial crisis. |
Keywords: | Analysis of variance, Bayesian inference, predictive distributions, posterior simulation |
JEL: | C11 C53 |
Date: | 2011–12 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20111409&r=ecm |
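The basic decomposition is the law of total variance applied to a posterior simulator's output: predictive variance = intrinsic variance (expected conditional variance given parameters) + extrinsic variance (variance of the conditional mean across posterior draws). A toy sketch with a normal predictive distribution; the setup is illustrative, not the paper's DSGE application.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy posterior simulator output: draws of a mean parameter theta;
# predictive distribution is y | theta ~ N(theta, sigma2(theta))
theta = rng.normal(0.5, 0.2, size=100_000)   # posterior draws
sigma2 = 1.0 + 0.1 * theta**2                # conditional predictive variance

# law of total variance: Var(y) = E[Var(y|theta)] + Var(E[y|theta])
intrinsic = sigma2.mean()   # would remain even if theta were known
extrinsic = theta.var()     # arises from posterior uncertainty about theta
print(intrinsic, extrinsic, intrinsic + extrinsic)
```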
By: | Paweł Strawiński (Faculty of Economic Sciences, University of Warsaw) |
Abstract: | Matched sampling is a methodology used to estimate treatment effects. A caliper mechanism is used to achieve better similarity among matched pairs. We investigate the finite sample properties of matching with calipers and propose a slight modification to the existing mechanism. A simulation study compares the performance of both methods and shows that the standard caliper performs well only in the case of constant treatment or a uniform propensity score distribution. In the case of a non-uniform distribution or non-uniform treatment, the dynamic caliper method outperforms standard caliper matching. |
Keywords: | propensity score matching, caliper, efficiency, Monte Carlo study, finite sample properties |
JEL: | C14 C21 C52 |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:war:wpaper:2011-25&r=ecm |
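A minimal version of standard (fixed-width) caliper matching on the propensity score, greedy and without replacement. The paper's dynamic caliper varies the width across treated units; that modification is not reproduced here.

```python
import numpy as np

def caliper_match(ps_treat, ps_ctrl, caliper):
    """Greedy 1-NN propensity-score matching without replacement;
    a pair is kept only if |p_treated - p_control| <= caliper."""
    available = np.ones(len(ps_ctrl), dtype=bool)
    pairs = []
    for i, p in enumerate(ps_treat):
        d = np.abs(np.asarray(ps_ctrl, dtype=float) - p)
        d[~available] = np.inf          # controls already used are excluded
        j = int(np.argmin(d))
        if d[j] <= caliper:
            pairs.append((i, j))
            available[j] = False
    return pairs
```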
By: | Pierre Perron (Department of Economics, Boston University); Tomoyoshi Yabu (Faculty of Business and Commerce, Keio University) |
Abstract: | Roy, Falk and Fuller (2004) presented a procedure aimed at providing a test for the value of the slope of a trend function that has (nearly) controlled size in autoregressive models whether the noise component is stationary or has a unit root. In this note, we document errors in both their theoretical results and the simulations they reported. Once these are corrected for, their procedure delivers a test that has very liberal size in the case with a unit root so that the stated goal is not achieved. Interestingly, the mistakes in the code used to generate the simulated results (which is the basis for the evidence about the reliability of the method) are such that what they report is essentially equivalent to the size and power of the test proposed by Perron and Yabu (2009), which was shown to have the standard Normal distribution whether the noise is stationary or has a unit root. |
Date: | 2011–10 |
URL: | http://d.repec.org/n?u=RePEc:kei:dpaper:2011-024&r=ecm |
By: | Evren Caglar; Jagjit S. Chadha; Katsuyuki Shibayama |
Abstract: | Koop, Pesaran and Smith (2011) suggest a simple diagnostic indicator for the Bayesian estimation of the parameters of a DSGE model. They show that, if a parameter is well identified, the precision of the posterior should improve as the (artificial) data size T increases, and the indicator checks the speed at which precision improves. It does not require any additional programming; a researcher just needs to generate artificial data and estimate the model with different T. Applying this to Smets and Wouters' (2007) medium-sized US model, we find that while the exogenous shock processes are well identified, most of the parameters in the structural equations are not. |
Keywords: | Bayesian Estimation; Dynamic stochastic general equilibrium models; Identification |
JEL: | C51 C52 E32 |
Date: | 2011–11 |
URL: | http://d.repec.org/n?u=RePEc:ukc:ukcedp:1125&r=ecm |
By: | Jones, A.; Lomas, J.; Rice, N. |
Abstract: | This paper extends the literature on modelling healthcare cost data by applying the Generalised Beta of the Second Kind (GB2) distribution to UK data. A quasi-experimental design, estimating models on a subset of the data and evaluating performance on another subset, is used to compare this distribution with its nested and limiting cases. We find that the GB2 may be a useful tool for choosing an appropriate distribution to apply, with the Beta-2 (B2) distribution and Generalised Gamma (GG) distribution performing the best with this dataset. |
Keywords: | Health econometrics; Generalised beta of the second kind; Generalised gamma; Skewed outcomes; Healthcare cost data; |
JEL: | C1 C5 |
Date: | 2011–10 |
URL: | http://d.repec.org/n?u=RePEc:yor:hectdg:11/31&r=ecm |
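The GB2 density referred to here has four parameters (a, b, p, q) and nests the B2 (a = 1), with the generalised gamma arising as a limiting case. Below is a direct transcription of the density plus a maximum-likelihood objective one could pass to a numerical optimiser; this is a generic sketch, not the authors' estimation code.

```python
import numpy as np
from scipy.special import beta as beta_fn

def gb2_pdf(y, a, b, p, q):
    """Generalised Beta of the Second Kind density for y > 0:
    f(y) = |a| y^(ap-1) / (b^(ap) B(p,q) (1 + (y/b)^a)^(p+q))."""
    return (abs(a) * y ** (a * p - 1)) / (
        b ** (a * p) * beta_fn(p, q) * (1 + (y / b) ** a) ** (p + q))

def gb2_nll(params, y):
    """Negative log-likelihood, suitable for scipy.optimize.minimize."""
    a, b, p, q = params
    return -np.sum(np.log(gb2_pdf(y, a, b, p, q)))
```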
By: | Lof, Matthijs |
Abstract: | There is hope for the generalized method of moments (GMM). Lanne and Saikkonen (2011) show that the GMM estimator is inconsistent when the instruments are lags of noncausal variables. This paper argues that this inconsistency depends on distributional assumptions that do not always hold. In particular, under rational expectations the GMM estimator is found to be consistent. This result is derived in a linear context and illustrated by simulation of a nonlinear asset pricing model. |
Keywords: | generalized method of moments; noncausal autoregression; rational expectations |
JEL: | C51 C32 C22 |
Date: | 2011–12–22 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:35536&r=ecm |
By: | Dirk Tasche |
Abstract: | The estimation of probabilities of default (PDs) for low default portfolios by means of upper confidence bounds is a well established procedure in many financial institutions. However, there are often discussions within the institutions or between institutions and supervisors about which confidence level to use for the estimation. The Bayesian estimator for the PD based on the uninformed, uniform prior distribution is an obvious alternative that avoids the choice of a confidence level. In this paper, we demonstrate that in the case of independent default events the upper confidence bounds can be represented as quantiles of a Bayesian posterior distribution based on a prior that is slightly more conservative than the uninformed prior. We then describe how to implement the uninformed and conservative Bayesian estimators in the dependent one- and multi-period default data cases and compare their estimates to the upper confidence bound estimates. The comparison leads us to suggest a constrained version of the uninformed (neutral) Bayesian estimator as an alternative to the upper confidence bound estimators. |
Date: | 2011–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1112.5550&r=ecm |
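The equivalence the paper exploits is easy to see numerically in the independent-default case: the Clopper-Pearson upper confidence bound for a binomial PD is a Beta quantile, and so is the posterior quantile under the uniform prior, with the two differing by one pseudo-observation in the second Beta parameter. The numbers below are illustrative.

```python
from scipy.stats import beta

n, k, gamma = 1000, 1, 0.9   # 1 default among 1000 obligors, 90% level

# classical upper confidence bound (Clopper-Pearson):
# gamma-quantile of Beta(k + 1, n - k)
p_ucb = beta.ppf(gamma, k + 1, n - k)

# Bayesian estimator under the uninformed uniform prior:
# posterior is Beta(k + 1, n - k + 1); compare its gamma-quantile
p_bayes = beta.ppf(gamma, k + 1, n - k + 1)

print(p_ucb, p_bayes)   # the confidence bound is slightly more conservative
```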
By: | Jean-Thomas Bernard; Michael Gavin; Lynda Khalaf; Marcel Voia |
Abstract: | We consider an empirical estimation of the Environmental Kuznets Curve (EKC) for carbon dioxide and sulphur, with a focus on confidence set estimation of the tipping point. Various econometric – parametric and nonparametric – methods are considered, reflecting the implications of persistence, endogeneity, the necessity of breaking down our panel regionally, and the small number of countries within each panel. In particular, we propose an inference method that corrects for potential weak identification of the tipping point. Weak identification may occur if the true EKC is linear while a quadratic income term is nevertheless imposed in the estimated equation. The relevant literature to date confirms that non-linearity of the EKC is indeed not granted, which provides the motivation for our work. Viewed collectively, our results confirm an inverted U-shaped EKC in the OECD countries but generally not elsewhere, although a local-pollutant analysis suggests favorable exceptions beyond the OECD. Our measures of uncertainty confirm that it is difficult to identify economically plausible tipping points. Policy-relevant estimates of the tipping point can nevertheless be recovered from a local-pollutant long-run or non-parametric perspective. |
Keywords: | Environmental Kuznets Curve, Fieller method, Delta method, CO2 and SO2 emissions, Confidence set, Tipping point, Climate policy |
JEL: | C52 Q51 Q52 Q56 |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:lvl:creacr:2011-4&r=ecm |
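For a quadratic EKC, y = a + b*income + c*income^2, the tipping point is x* = -b/(2c). A delta-method interval for it takes a few lines, while the Fieller method used in the paper inverts a test of b + 2*c*x = 0 instead and remains valid when c is weakly identified. A sketch under these conventions (generic, not the paper's code):

```python
import numpy as np

def tipping_point_delta(b, c, V):
    """Delta-method standard error for the EKC turning point x* = -b/(2c),
    given slope estimates (b, c) and their 2x2 covariance block V."""
    x_star = -b / (2 * c)
    g = np.array([-1 / (2 * c), b / (2 * c ** 2)])  # gradient of x* wrt (b, c)
    se = np.sqrt(g @ V @ g)
    return x_star, se

# 95% delta interval: x_star +/- 1.96 * se; the Fieller interval instead
# collects all x at which b + 2*c*x = 0 is not rejected.
```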
By: | Travaglini, Guido |
Abstract: | A comparison between Principal Component Analysis (PCA) and Factor Analysis (FA) is performed both theoretically and empirically for a random matrix X (n x p), where n is the number of observations and both dimensions may be very large. The comparison surveys the asymptotic properties of the factor scores, of the singular values and of all other elements involved, as well as the characteristics of the methods utilized for detecting the true dimension of X. In particular, the norms of the FA scores, whatever their number, and the norms of their covariance matrix are shown to be always smaller and to decay faster as n goes to infinity. This causes the FA scores, when utilized as regressors and/or instruments, to produce more efficient slope estimators in instrumental variable estimation. Moreover, as compared to PCA, the FA scores and factors exhibit a higher degree of consistency because the difference between the estimates and their true counterparts is smaller, and so is the corresponding variance. Finally, FA usually selects a much smaller number of scores than PCA, greatly facilitating the search for and identification of the common components of X. |
Keywords: | Principal Components; Factor Analysis; Matrix Norm |
JEL: | C52 C02 C01 |
Date: | 2011–10–13 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:35486&r=ecm |
By: | Westerlund Joakim; Urbain Jean-Pierre (METEOR) |
Abstract: | In spite of the increased use of factor-augmented regressions in recent years, little is known regarding the relative merits of the two main approaches to estimation and inference, namely, the cross-sectional average and principal components estimators. As a response to this, the current paper offers an in-depth theoretical analysis of the issue. |
Keywords: | econometrics; |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:dgr:umamet:2011053&r=ecm |
By: | Harun Mirza; Lidia Storjohann |
Abstract: | The problem of weak identification has recently attracted attention in the analysis of structural macroeconomic models. Using robust methods can result in large confidence sets, making inference difficult. We overcome this problem in the analysis of a forward-looking Taylor rule by seeking stronger instruments. We suggest exploiting information from a large macroeconomic data set by generating factors and using them as additional instruments. This approach results in a stronger instrument set and hence smaller weak-identification-robust confidence sets. It allows us to conclude that there has been a shift in monetary policy from the pre-Volcker regime to the Volcker-Greenspan tenure. |
Keywords: | Taylor Rule, Weak Instruments, Factor Models |
JEL: | E31 E52 C22 |
Date: | 2011–12 |
URL: | http://d.repec.org/n?u=RePEc:bon:bonedp:bgse13_2012&r=ecm |
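The instrument-augmentation step can be sketched generically: extract principal-component factors from a large macro panel and append them to the usual lagged instruments in a two-stage least squares (or GMM) estimation of the Taylor rule. All names below are ours, and inference (standard errors, weak-identification-robust sets) is omitted.

```python
import numpy as np

def factors(W, r):
    """First r principal-component factors of a standardized panel W (T x N)."""
    W = (W - W.mean(axis=0)) / W.std(axis=0)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :r] * s[:r]                 # T x r estimated factors

def tsls(y, X, Z):
    """Two-stage least squares with instrument matrix Z."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]  # first-stage fitted values
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]   # second-stage coefficients

# usage idea: Z_aug = np.column_stack([Z_lags, factors(macro_panel, r=3)])
```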
By: | Cruces, Guillermo; Lanjouw, Peter; Lucchetti, Leonardo; Perova, Elizaveta; Vakis, Renos; Viollaz, Mariana |
Abstract: | This paper validates a recently proposed method to estimate intra-generational mobility through repeated cross-sectional surveys. The technique allows the creation of a "synthetic panel" (done by predicting future or past household income using a set of simple modeling and error structure assumptions) and thus permits the estimation of lower and upper bounds on directional mobility measures. The authors validate the approach in three different settings where good panel data also exist (Chile, Nicaragua, and Peru). In doing so, they also carry out a number of refinements to the validation procedure. The results are broadly encouraging: the methodology performs well in all three settings, especially in cases where richer model specifications can be estimated. The technique does equally well in predicting short- and long-term mobility patterns and is robust to a broad set of additional "stress" and sensitivity tests. Overall, the paper lends support to the application of this approach to settings where panel data are absent. |
Keywords: | Services & Transfers to Poor, Poverty Reduction Strategies, Scientific Research & Science Parks, Science Education, Housing & Human Habitats |
Date: | 2011–12–01 |
URL: | http://d.repec.org/n?u=RePEc:wbk:wbrwps:5916&r=ecm |
By: | Luca Regis |
Abstract: | We present a full Bayesian model for assessing the reserve requirement of multiline non-life insurance companies. Bayesian models for claims reserving make it possible to account for expert knowledge in the evaluation of outstanding loss liabilities, allowing the use of additional information at a low cost. This paper combines a standard Bayesian approach for the estimation of the marginal distributions of the single lines of business (LoBs) of a non-life insurance company with a Bayesian copula procedure for the estimation of aggregate reserves. The model we present allows "mixing" a company's own assessments of dependence between LoBs with market-wide estimates provided by regulators. We illustrate results for the single lines of business and compare standard copula aggregation, for different copula choices, with the Bayesian copula approach. |
Keywords: | stochastic claims reserving; Bayesian copulas; solvency capital requirement; loss reserving; Bayesian methods |
JEL: | C11 G22 |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:cca:wpaper:227&r=ecm |
By: | Cooke, Roger M. (Resources for the Future); Nieboer, Daan |
Date: | 2011–03–15 |
URL: | http://d.repec.org/n?u=RePEc:rff:dpaper:dp-11-19&r=ecm |
By: | Miguel Artiach (Dpto. Fundamentos del Análisis Económico) |
Abstract: | Second-order moments, as even functions in time, are conventionally regarded as containing no information about the time irreversible nature of a sequence and therefore about its frequency asymmetry. However, this paper shows that the frequency asymmetry produces a clearly distinct behaviour in second-order moments that can be observed in both the time domain and the frequency domain. In addition, a frequency domain method of estimation of the differing lengths of the recessionary and expansionary stages of a cycle is proposed and its finite sample performance evaluated. Finally, the asymmetric patterns in the waves of the US unemployment rate and in the sunspot index are analysed. |
Keywords: | frequency asymmetry, time irreversibility, periodogram, correlogram, business cycle |
JEL: | C13 C22 E27 |
Date: | 2011–12 |
URL: | http://d.repec.org/n?u=RePEc:ivi:wpasad:2011-27&r=ecm |
By: | Cristin Buescu; Michael Taksar; Fatoumata J. Koné |
Abstract: | We use the expectation of the range of an arithmetic Brownian motion and the method of moments on the daily high, low, opening and closing prices to estimate the volatility of the stock price. The daily price jump at the opening is considered to be the result of the unobserved evolution of an after-hours virtual trading day. The annualized volatility is used to calculate Black-Scholes prices for European options, and a trading strategy is devised to profit when these prices differ flagrantly from the market prices. |
Date: | 2011–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1112.4534&r=ecm |
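The method-of-moments step can be sketched from the fact that the expected range of an arithmetic Brownian motion with volatility sigma over an interval dt is sigma * sqrt(8 * dt / pi). The paper's handling of opening jumps via a virtual after-hours trading day is omitted; this sketch uses daily highs and lows only.

```python
import numpy as np

def range_vol(high, low, dt=1 / 252):
    """Method-of-moments volatility from daily highs/lows, using
    E[range of arithmetic BM over dt] = sigma * sqrt(8 * dt / pi)."""
    mean_range = np.mean(np.asarray(high) - np.asarray(low))
    return mean_range / np.sqrt(8 * dt / np.pi)   # annualized sigma
```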