
New Economics Papers on Econometrics
By:  D.S. Poskitt; Gael M. Martin; Simone D. Grose 
Abstract:  This paper investigates the use of bootstrap-based bias correction of semiparametric estimators of the long memory parameter in fractionally integrated processes. The resampling method involves the application of the sieve bootstrap to data pre-filtered by a preliminary semiparametric estimate of the long memory parameter. Theoretical justification for using the bootstrap techniques to bias-adjust log periodogram and semiparametric local Whittle estimators of the memory parameter is provided. Simulation evidence comparing the performance of the bootstrap bias correction with analytical bias correction techniques is also presented. The bootstrap method is shown to produce notable bias reductions, in particular when applied to an estimator for which analytical adjustments have already been used. The empirical coverage of confidence intervals based on the bias-adjusted estimators is very close to the nominal level for a reasonably large sample size, more so than for the comparable analytically adjusted estimators. The precision of inference (as measured by interval length) is also greater when the bootstrap, rather than an analytical adjustment, is used for bias correction.
Keywords:  Analytical bias correction, bootstrap bias correction, confidence interval, coverage, precision, log periodogram estimator, local Whittle estimator. 
JEL:  C18 C22 C52 
Date:  2012–04 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20128&r=ecm 
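The log periodogram estimator that the paper bias-adjusts can be sketched in a few lines: regress the log of the periodogram at the first m Fourier frequencies on -log(4 sin^2(lambda/2)) and read the memory parameter d off the slope. A minimal pure-Python illustration; the white-noise input, seed and bandwidth m = 22 are arbitrary demo choices, not taken from the paper:

```python
import cmath
import math
import random

def periodogram_ordinate(x, j):
    """Periodogram I(lambda_j) at the Fourier frequency lambda_j = 2*pi*j/n."""
    n = len(x)
    s = sum(x[t] * cmath.exp(complex(0.0, -2.0 * math.pi * j * t / n))
            for t in range(n))
    return abs(s) ** 2 / (2 * math.pi * n)

def gph_estimate(x, m):
    """Log periodogram regression over the first m Fourier frequencies."""
    n = len(x)
    regs, logs = [], []
    for j in range(1, m + 1):
        lam = 2 * math.pi * j / n
        regs.append(-math.log(4 * math.sin(lam / 2) ** 2))
        logs.append(math.log(periodogram_ordinate(x, j)))
    rbar, lbar = sum(regs) / m, sum(logs) / m
    num = sum((a - rbar) * (b - lbar) for a, b in zip(regs, logs))
    den = sum((a - rbar) ** 2 for a in regs)
    return num / den        # the slope estimates the memory parameter d

random.seed(0)
white_noise = [random.gauss(0, 1) for _ in range(512)]
d_hat = gph_estimate(white_noise, m=22)   # m near sqrt(n) is a common bandwidth rule
```

For white noise the true d is 0, so the estimate should be close to zero; it is the finite-sample bias of slope estimates of exactly this kind that the paper's bootstrap procedure corrects.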
By:  Federico Belotti (Faculty of Economics, University of Rome "Tor Vergata"); Giuseppe Ilardi (Economic and Financial Statistics Department, Bank of Italy)
Abstract:  The classic stochastic frontier panel data models provide no mechanism to disentangle individual time-invariant unobserved heterogeneity from inefficiency. Greene (2005a,b) proposed a fixed-effects model specification that distinguishes these two latent components and allows a time-varying inefficiency distribution. However, the maximum likelihood estimator proposed by Greene leads to biased inefficiency estimates due to the incidental parameters problem. In this paper, we propose two alternative estimation procedures that, by relying on a first-difference data transformation, achieve consistency as n goes to infinity with T fixed. Evidence from Monte Carlo simulations shows good finite-sample performance of both approaches, even in small samples.
Keywords:  Stochastic frontiers, Fixed effects, Panel data, Marginal simulated likelihood, Pairwise differencing
JEL:  C13 C16 C23 
Date:  2012–04–18 
URL:  http://d.repec.org/n?u=RePEc:rtv:ceisrp:231&r=ecm 
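The key idea behind the first-difference transformation is easy to demonstrate: differencing within each unit removes any time-invariant individual effect, so a pooled regression on the differenced data recovers the slope without ever estimating the alpha_i. A toy noise-free sketch (the numbers are made up, and this is not the authors' marginal simulated likelihood estimator):

```python
# Toy panel: y_it = alpha_i + beta * x_it, with no noise for clarity.
beta = 2.0
alphas = {1: 10.0, 2: -3.0, 3: 0.5}          # time-invariant individual effects
x = {i: [float(t) for t in range(5)] for i in alphas}
y = {i: [alphas[i] + beta * v for v in x[i]] for i in alphas}

# First differencing within each unit eliminates alpha_i entirely.
dx = [x[i][t] - x[i][t - 1] for i in alphas for t in range(1, 5)]
dy = [y[i][t] - y[i][t - 1] for i in alphas for t in range(1, 5)]

# Pooled OLS through the origin on the differenced data recovers beta.
beta_hat = sum(a * b for a, b in zip(dx, dy)) / sum(a * a for a in dx)
```

Because the individual effects cancel in the differences, beta_hat equals 2.0 whatever the alpha_i are; with noise and a one-sided inefficiency term, the authors' estimators build on this same cancellation.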
By:  Mark Podolskij (Heidelberg University and CREATES); Katrin Wasmuth (Heidelberg University) 
Abstract:  This paper presents a goodness-of-fit test for the volatility function of an SDE driven by a Gaussian process with stationary and centered increments. Under rather weak assumptions on the Gaussian process, we provide a procedure for testing whether the unknown volatility function lies in a given linear functional space or not. This testing problem is highly non-trivial, because the volatility function is not identifiable in our model. The underlying fractional diffusion is assumed to be observed at high frequency on a fixed time interval, and the test statistic is based on weighted power variations. Our test statistic is consistent against any fixed alternative.
Keywords:  central limit theorem, goodness-of-fit tests, high frequency observations, fractional diffusions, stable convergence.
JEL:  C10 C13 C14 
Date:  2012–04–16 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201213&r=ecm 
By:  Lennart Hoogerheide (VU University Amsterdam); Anne Opschoor (Erasmus University Amsterdam); Herman K. van Dijk (Erasmus University Rotterdam, and VU University Amsterdam) 
Abstract:  A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance-weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel) in the sense that the Kullback-Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling and Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis-Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner. Second, we introduce a permutation-augmented MitISEM approach. Third, we propose a partial MitISEM approach, which aims at approximating the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models such as DCC or mixture GARCH models and a mixture instrumental variables model.
Keywords:  mixture of Student-t distributions; importance sampling; Kullback-Leibler divergence; Expectation Maximization; Metropolis-Hastings algorithm; predictive likelihood; DCC GARCH; mixture GARCH; instrumental variables
JEL:  C11 C22 C26 
Date:  2012–03–23 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20120026&r=ecm 
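The way a fitted candidate density is used in the IS step can be illustrated with a single Student-t candidate: draw from the candidate, weight each draw by the ratio of the target kernel to the candidate density, and form weighted averages. MitISEM fits a full mixture by EM; in this sketch the target kernel, candidate parameters and sample size are all arbitrary demo choices:

```python
import math
import random

random.seed(1)

def target_kernel(theta):
    """Unnormalised target kernel: here a N(1, 1) density kernel for illustration."""
    return math.exp(-0.5 * (theta - 1.0) ** 2)

def t_pdf(theta, mu, scale, nu):
    """Density of a location-scale Student-t candidate."""
    z = (theta - mu) / scale
    logc = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi) - math.log(scale))
    return math.exp(logc - (nu + 1) / 2 * math.log(1 + z * z / nu))

def t_draw(mu, scale, nu):
    """Draw from the Student-t candidate: normal / sqrt(chi2_nu / nu)."""
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(nu))
    return mu + scale * z / math.sqrt(chi2 / nu)

mu, scale, nu = 0.0, 2.0, 5
draws = [t_draw(mu, scale, nu) for _ in range(20000)]
weights = [target_kernel(th) / t_pdf(th, mu, scale, nu) for th in draws]
post_mean = sum(w * th for w, th in zip(weights, draws)) / sum(weights)
```

Since the target is a N(1, 1) kernel, the self-normalised importance sampling estimate post_mean should land near 1; a well-fitted MitISEM mixture candidate keeps the weights stable in the same way for non-elliptical targets.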
By:  D.S. Poskitt; Simone D. Grose; Gael M. Martin 
Abstract:  This paper investigates the accuracy of bootstrap-based inference in the case of long memory fractionally integrated processes. The resampling method is based on the semiparametric sieve approach, whereby the dynamics in the process used to produce the bootstrap draws are captured by an autoregressive approximation. Application of the sieve method to data pre-filtered by a semiparametric estimate of the long memory parameter is also explored. Higher-order improvements yielded by both forms of resampling are demonstrated using Edgeworth expansions for a broad class of linear statistics. The methods are then applied to the problem of estimating the sampling distribution of the sample mean under long memory, in an experimental setting. The pre-filtered version of the bootstrap is shown to avoid the distinct underestimation of the sampling variance of the mean which the raw sieve method exhibits in finite samples, the higher-order accuracy of the latter notwithstanding.
Keywords:  Bias, bootstrap-based inference, Edgeworth expansion, pre-filtered sieve bootstrap, sampling distribution.
JEL:  C18 C22 C52 
Date:  2012–04 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20129&r=ecm 
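The raw sieve bootstrap described above can be sketched directly: fit an autoregressive approximation, centre and resample its residuals, and rebuild bootstrap series recursively. A minimal AR(1)-sieve illustration (the lag order of 1, the series length and the replication count are arbitrary demo choices; in practice the sieve order grows with the sample size):

```python
import random

random.seed(1)

# Generate an AR(1) series to play the role of the observed data.
x = [0.0]
for _ in range(199):
    x.append(0.5 * x[-1] + random.gauss(0, 1))

# Step 1: fit the autoregressive approximation (order 1 here) by OLS.
phi_hat = (sum(x[t] * x[t - 1] for t in range(1, len(x)))
           / sum(x[t - 1] ** 2 for t in range(1, len(x))))

# Step 2: collect and centre the residuals.
res = [x[t] - phi_hat * x[t - 1] for t in range(1, len(x))]
rbar = sum(res) / len(res)
res = [e - rbar for e in res]

# Step 3: rebuild bootstrap series from resampled residuals and
# record the statistic of interest (here the sample mean).
boot_means = []
for _ in range(500):
    xb = [x[0]]
    for _ in range(len(x) - 1):
        xb.append(phi_hat * xb[-1] + random.choice(res))
    boot_means.append(sum(xb) / len(xb))
```

The spread of boot_means estimates the sampling variability of the mean; the pre-filtering step studied in the paper would be applied to the data before Step 1.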
By:  Niels Haldrup (Aarhus University and CREATES); Robinson Kruse (Leibniz Universität Hannover and CREATES); Timo Teräsvirta (Aarhus University and CREATES); Rasmus T. Varneskov (Aarhus University and CREATES) 
Abstract:  One of the most influential research fields in econometrics over the past decades concerns unit root testing in economic time series. In macroeconomics, much of the interest in the area originates from the fact that when unit roots are present, shocks to the time series processes have a persistent effect, with resulting policy implications. From a statistical perspective, on the other hand, the presence of unit roots has dramatic implications for econometric model building, estimation, and inference if the so-called spurious regression problem is to be avoided. The present paper provides a selective review of contributions to the field of unit root testing over the past three decades. We discuss the nature of stochastic and deterministic trend processes, including break processes, that are likely to affect unit root inference. A range of the most popular unit root tests are presented and their modifications to situations with breaks are discussed. We also review some results on unit root testing within the framework of nonlinear processes.
Keywords:  Unit roots, nonlinearity, structural breaks. 
JEL:  C2 C22 
Date:  2012–04–18 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201214&r=ecm 
By:  Torben G. Andersen (Northwestern University, NBER, and CREATES); Nicola Fusari (Northwestern University); Viktor Todorov (Northwestern University) 
Abstract:  We develop a new parametric estimation procedure for option panels observed with error, which relies on asymptotic approximations assuming an ever-increasing set of observed option prices in the moneyness-maturity (cross-sectional) dimension, but with a fixed time span. We develop consistent estimators of the parameter vector and the dynamic realization of the state vector that governs the option price dynamics. The estimators converge stably to a mixed-Gaussian law and we develop feasible estimators for the limiting variance. We provide semiparametric tests for the option price dynamics based on the distance between the spot volatility extracted from the options and the one obtained nonparametrically from high-frequency data on the underlying asset. We further construct new formal tests of the model fit for specific regions of the volatility surface and for the stability of the risk-neutral dynamics over a given period of time. A large-scale Monte Carlo study indicates that the inference procedures work well for empirically realistic model specifications and sample sizes. In an empirical application to S&P 500 index options we extend the popular double-jump stochastic volatility model to allow for time-varying risk premia of extreme events, i.e., jumps, as well as a more flexible relation between the risk premia and the level of risk. We show that both extensions provide a significantly improved characterization, both statistically and economically, of observed option prices.
Keywords:  Option Pricing, Inference, Risk Premia, Jumps, Latent State Vector, Stochastic Volatility, Specification Testing, Stable Convergence. 
JEL:  C51 C52 G12 
Date:  2011–05–29 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201211&r=ecm 
By:  Mark J Jensen; John M Maheu 
Abstract:  In this paper we extend the parametric asymmetric stochastic volatility model (ASV), in which returns are correlated with volatility, by flexibly modeling the bivariate distribution of the return and volatility innovations nonparametrically. Its novelty lies in modeling the joint conditional return-volatility distribution with an infinite mixture of bivariate Normal distributions with zero mean vectors, but with unknown mixture weights and covariance matrices. This semiparametric ASV model nests stochastic volatility models whose innovations are distributed as either Normal or Student-t distributions, and the response of volatility to unexpected return shocks is more general than the fixed asymmetric response of the ASV model. The unknown mixture parameters are modeled with a Dirichlet Process prior. This prior ensures a parsimonious, finite posterior mixture that best represents the distribution of the innovations, and a straightforward sampler of the conditional posteriors. We develop a Bayesian Markov chain Monte Carlo sampler to fully characterize the parametric and distributional uncertainty. Nested model comparisons and out-of-sample predictions with cumulative marginal likelihoods and one-day-ahead predictive log Bayes factors between the semiparametric and parametric versions of the ASV model show that the semiparametric model forecasts empirical market returns more accurately. A major reason is how volatility responds to an unexpected market movement. When the market is tranquil, expected volatility reacts to a negative (positive) price shock by rising (initially declining, but then rising when the positive shock is large). However, when the market is volatile, the degree of asymmetry and the size of the response in expected volatility are muted. In other words, when times are good, no news is good news, but when times are bad, neither good nor bad news matters with regard to volatility.
Keywords:  Bayesian nonparametrics, cumulative Bayes factor, Dirichlet process mixture, infinite mixture model, leverage effect, marginal likelihood, MCMC, non-normal, stochastic volatility, volatility-return relationship
JEL:  C11 C14 C53 C58 
Date:  2012–04–20 
URL:  http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa453&r=ecm 
By:  Neil Shephard; Dacheng Xiu 
Abstract:  Estimating the covariance and correlation between assets using high frequency data is challenging due to market microstructure effects and Epps effects. In this paper we extend Xiu's univariate QML approach to the multivariate case, carrying out inference as if the observations arise from an asynchronously observed vector scaled Brownian model observed with error. Under stochastic volatility the resulting QML estimator is positive semi-definite, uses all available data, and is consistent and asymptotically mixed normal. The quasi-likelihood is computed using a Kalman filter and optimised using a relatively simple EM algorithm which scales well with the number of assets. We derive the theoretical properties of the estimator and prove that it achieves the efficient rate of convergence. We show how to make it achieve the nonparametric efficiency bound for this problem. The estimator is also analysed using Monte Carlo methods and applied to equity data that are distinct in their levels of liquidity.
Keywords:  EM algorithm, Kalman filter, Market microstructure noise, Non-synchronous data, Portfolio optimisation, Quadratic variation, Quasi-likelihood, Semimartingale, Volatility
JEL:  C14 C58 D53 D81 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:oxf:wpaper:604&r=ecm 
By:  Peter Exterkate (Department of Economics and CREATES, Aarhus University)
Abstract:  Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated with all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely applicable, and we recommend their use instead of the popular polynomial kernels in general settings in which no information on the data-generating process is available.
Keywords:  Nonlinear forecasting, shrinkage estimation, kernel methods, high dimensionality 
JEL:  C51 C53 C63 
Date:  2012–02–28 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201210&r=ecm 
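Kernel ridge regression itself is compact enough to sketch: build the kernel Gram matrix, solve the ridge system (K + lambda*I) alpha = y, and predict with weighted kernel evaluations. A small Gaussian-kernel example; the sine target, bandwidth and ridge penalty are illustrative choices, not the paper's tuning rules:

```python
import math

def gauss_kernel(u, v, sigma):
    """Gaussian (RBF) kernel with bandwidth sigma."""
    return math.exp(-(u - v) ** 2 / (2 * sigma ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, sigma, lam):
    """Return the dual weights alpha = (K + lam*I)^{-1} y."""
    n = len(xs)
    K = [[gauss_kernel(xs[i], xs[j], sigma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, ys)

def krr_predict(alpha, xs, sigma, x_new):
    return sum(a * gauss_kernel(x, x_new, sigma) for a, x in zip(alpha, xs))

xs = [i / 10 for i in range(30)]
ys = [math.sin(x) for x in xs]
alpha = krr_fit(xs, ys, sigma=0.5, lam=1e-3)
pred = krr_predict(alpha, xs, 0.5, 1.5)
```

The bandwidth sigma and penalty lam are exactly the tuning parameters whose cross-validated selection the paper studies; here they are simply fixed by hand.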
By:  Michael Scholz (Karl-Franzens University Graz); Jens Perch Nielsen (Cass Business School); Stefan Sperlich (Université de Genève)
Abstract:  One of the most studied questions in economics and finance is whether equity returns or premiums can be predicted by empirical models. While many authors favor the historical mean or other simple parametric methods, this article focuses on nonlinear relationships. A straightforward bootstrap test confirms that non- and semiparametric techniques help to obtain better forecasts. It is demonstrated how economic theory directly guides a model in an innovative way. The inclusion of prior knowledge enables, for American data, a further notable improvement of 35% in the prediction of excess stock returns compared to the fully nonparametric model, as measured by the more complex validated R2 as well as by classical out-of-sample validation. Statistically, a bias and dimension reduction method is proposed to bring more structure into the estimation process as an adequate way to circumvent the curse of dimensionality.
Keywords:  Prediction of Stock Returns, CrossValidation, Prior Knowledge, Bias Reduction, Dimension Reduction 
JEL:  C14 C53 G17 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:grz:wpaper:201202&r=ecm 
By:  D.S. Poskitt; Wenying Yao 
Abstract:  In this article we investigate the theoretical behaviour of finite lag VAR(n) models fitted to time series that in truth come from an infinite order VAR(∞) data generating mechanism. We show that the overall error can be broken down into two basic components: an estimation error that stems from the difference between the parameter estimates and their population ensemble VAR(n) counterparts, and an approximation error that stems from the difference between the VAR(n) and the true VAR(∞). The two sources of error are shown to be present in other performance indicators previously employed in the literature to characterize so-called truncation effects. Our theoretical analysis indicates that the magnitude of the estimation error exceeds that of the approximation error, but experimental results based upon a prototypical real business cycle model indicate that in practice the approximation error approaches its asymptotic position far more slowly than does the estimation error, their relative orders of magnitude notwithstanding. The experimental results suggest that with sample sizes and lag lengths like those commonly employed in practice, VAR(n) models are likely to exhibit serious errors of both types when attempting to replicate the dynamics of the true underlying process, and that inferences based on VAR(n) models can be very untrustworthy.
Keywords:  VAR, estimation error, approximation error, RBC model 
JEL:  C18 C32 C52 C54 E37 
Date:  2012–04–19 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201211&r=ecm 
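The approximation-error component can be made concrete in the univariate analogue of the paper's setting: an MA(1) process is an AR(∞) in autoregressive form, and fitting AR(n) by the Levinson-Durbin recursion shows the one-step prediction error variance falling toward the innovation variance as n grows. A sketch under arbitrary demo settings (this is not the authors' RBC experiment):

```python
import random

random.seed(5)

# "True" process is MA(1): x_t = e_t + 0.8 * e_{t-1}, an AR(infinity) in AR form.
eps = [random.gauss(0, 1) for _ in range(3001)]
x = [eps[t] + 0.8 * eps[t - 1] for t in range(1, 3001)]
n = len(x)
xbar = sum(x) / n

def acov(k):
    """Biased sample autocovariance at lag k (keeps the sequence positive definite)."""
    return sum((x[t] - xbar) * (x[t + k] - xbar) for t in range(n - k)) / n

gamma = [acov(k) for k in range(11)]

def ar_fit(p):
    """Levinson-Durbin: AR(p) coefficients and one-step prediction error variance."""
    phi, v = [], gamma[0]
    for k in range(1, p + 1):
        acc = gamma[k] - sum(phi[j] * gamma[k - 1 - j] for j in range(k - 1))
        ref = acc / v
        phi = [phi[j] - ref * phi[k - 2 - j] for j in range(k - 1)] + [ref]
        v *= 1 - ref ** 2
    return phi, v

_, v1 = ar_fit(1)    # crude AR(1) approximation: larger prediction error variance
_, v8 = ar_fit(8)    # longer lag: approximation error shrinks toward sigma^2 = 1
```

The gap between v1 (or v8) and the true innovation variance of 1 is the approximation error at that lag order; the paper's point is that in estimated models this gap shrinks much more slowly in practice than the asymptotics suggest, while estimation error is layered on top.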
By:  Peter Martey Addo (Centre d'Economie de la Sorbonne); Monica Billio (Ca' Foscari University - Department of Economics); Dominique Guegan (Centre d'Economie de la Sorbonne - Paris School of Economics)
Abstract:  We provide a signal modality analysis to characterize and detect nonlinearity schemes in the US Industrial Production Index time series. The analysis is achieved by using the recently proposed "delay vector variance" (DVV) method, which examines the local predictability of a signal in the phase space to detect the presence of determinism and nonlinearity in a time series. Optimal embedding parameters used in the DVV analysis are obtained via a differential-entropy-based method using wavelet-based surrogates. A complex Morlet wavelet is employed to detect and characterize the US business cycle. A comprehensive analysis of the feasibility of this approach is provided. Our results coincide with the business cycle peak and trough dates published by the National Bureau of Economic Research (NBER).
Keywords:  Nonlinearity analysis, surrogates, Delay Vector Variance (DVV) method, wavelets, business cycle, embedding parameters. 
JEL:  C14 C22 C40 E32 
Date:  2012–04 
URL:  http://d.repec.org/n?u=RePEc:mse:cesdoc:12023&r=ecm 
By:  Donald Boyd; Hamilton Lankford; Susanna Loeb; James Wyckoff 
Abstract:  Test-based accountability, including value-added assessments, and experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet we know little regarding important properties of these tests, an important example being the extent of test measurement error and its implications for educational policy and practice. While test vendors provide estimates of split-test reliability, these measures do not account for potentially important day-to-day differences in student performance. We show there is a credible, low-cost approach for estimating the total test measurement error that can be applied when one or more cohorts of students take three or more tests in the subject of interest (e.g., state assessments in three consecutive grades). Our method generalizes the test-retest framework, allowing for either growth or decay in knowledge and skills between tests as well as variation in the degree of measurement error across tests. The approach maintains relatively unrestrictive, testable assumptions regarding the structure of student achievement growth. Estimation only requires descriptive statistics (e.g., correlations) for the tests. When student-level test-score data are available, the extent and pattern of measurement error heteroskedasticity can also be estimated. Utilizing math and ELA test data from New York City, we estimate that the overall extent of test measurement error is more than twice as large as that reported by the test vendor, and demonstrate how using estimates of the total measurement error and the degree of heteroskedasticity along with observed scores can yield meaningful improvements in the precision of student achievement and achievement-gain estimates.
JEL:  I21 
Date:  2012–04 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:18010&r=ecm 
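The flavour of identifying measurement error from three tests can be conveyed by the classical stable-trait special case, in which the reliability of the middle test equals r12*r23/r13; the paper's method generalizes this to allow growth or decay between tests and test-specific error variances. A simulated sketch with made-up factor loadings and noise levels:

```python
import math
import random

random.seed(6)
n = 20000

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    sa = math.sqrt(sum((u - ma) ** 2 for u in a) / n)
    sb = math.sqrt(sum((v - mb) ** 2 for v in b) / n)
    return cov / (sa * sb)

# Latent achievement plus independent measurement error on each of three tests.
trait = [random.gauss(0, 1) for _ in range(n)]
test1 = [t + random.gauss(0, 0.5) for t in trait]
test2 = [t + random.gauss(0, 0.8) for t in trait]
test3 = [t + random.gauss(0, 0.6) for t in trait]

r12, r23, r13 = corr(test1, test2), corr(test2, test3), corr(test1, test3)
rel2 = r12 * r23 / r13          # reliability of the middle test
noise_share2 = 1.0 - rel2       # measurement-error share of test-2 variance
```

With these settings the true reliability of test 2 is 1/1.64, roughly 0.61, so well over a third of its variance is measurement error, recovered from correlations alone, which is the paper's central message about how little data the approach needs.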
By:  Jennifer L. Castle; David F. Hendry; Michael P. Clements
Abstract:  We consider forecasting with factors, variables and both, modeling in-sample using Autometrics so that all principal components and variables can be included jointly, while tackling multiple breaks by impulse-indicator saturation. A forecast-error taxonomy for factor models highlights the impacts of location shifts on forecast-error biases. Forecasting US GDP over 1-, 4- and 8-step horizons using the dataset from Stock and Watson (2009), updated to 2011:2, shows that factor models are more useful for nowcasting or short-term forecasting, but their relative performance declines as the forecast horizon increases. Forecasts for GDP levels highlight the need for robust strategies, such as intercept corrections or differencing, when location shifts occur, as in the recent financial crisis.
Keywords:  Model selection, Factor models, Forecasting, Impulse-indicator saturation, Autometrics
JEL:  C51 C22 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:oxf:wpaper:600&r=ecm 
By:  Deniz Erdemlioglu; Sébastien Laurent; Christopher J. Neely 
Abstract:  This chapter reviews the rapid advances in foreign exchange volatility modeling made in the last three decades. Academic researchers have sought to fit the three major characteristics of foreign exchange volatility: intraday periodicity, autocorrelation and discontinuities in prices. Early research modeled the autocorrelation in daily and weekly squared foreign exchange returns with ARCH/GARCH models. Increased computing power and the availability of high-frequency data allowed later researchers to improve volatility and jump estimates. Researchers also found it useful to incorporate information about periodic volatility patterns and macroeconomic announcements in their calculations. This article details these volatility and jump estimation methods, compares those methods empirically and provides some suggestions for further research.
Keywords:  Foreign exchange; Time-series analysis
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2012008&r=ecm 
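The ARCH/GARCH line of research mentioned above rests on a one-line recursion: tomorrow's conditional variance is a weighted combination of a constant, today's squared return and today's variance. A GARCH(1,1) sketch with made-up parameters (no estimation here; the chapter surveys far richer variants with periodicity and jumps):

```python
import random

random.seed(8)
omega, alpha, beta = 0.05, 0.1, 0.85   # assumed GARCH(1,1) parameters, not estimated

# Simulate returns from the model itself, tracking the conditional variance.
sig2 = omega / (1 - alpha - beta)      # start at the unconditional variance (= 1.0)
returns, variances = [], []
for _ in range(1000):
    r = random.gauss(0, sig2 ** 0.5)
    returns.append(r)
    variances.append(sig2)
    sig2 = omega + alpha * r ** 2 + beta * sig2   # the GARCH(1,1) recursion

mean_var = sum(variances) / len(variances)   # near omega / (1 - alpha - beta)
```

Large squared returns feed back into sig2, producing the volatility clustering (autocorrelation in squared returns) that the chapter identifies as a core stylized fact.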
By:  Hanming Fang (Department of Economics, University of Pennsylvania); Xun Tang (Department of Economics, University of Pennsylvania) 
Abstract:  Bidders' risk attitudes have important implications for sellers seeking to maximize expected revenues. In ascending auctions, auction theory predicts that bid distributions in Bayesian Nash equilibrium do not convey any information about bidders' risk preferences. We propose a new approach for inference about bidders' risk attitudes when they make endogenous participation decisions. Our approach is based on the idea that bidders' risk premium (the difference between the ex ante expected profits from entry and the certainty equivalent) required for entry into the auction is strictly positive if and only if bidders are risk averse. We show that bidders' expected profits from entry into auctions are nonparametrically recoverable if a researcher observes the distribution of transaction prices, bidders' entry decisions and some noisy measures of entry costs. We propose a nonparametric test which attains the correct level asymptotically under the null of risk neutrality, and is consistent under fixed alternatives. We provide Monte Carlo evidence of the finite sample performance of the test. We also establish identification of risk attitudes in more general auction models, where in the entry stage bidders receive signals that are correlated with the private values to be drawn in the bidding stage.
Keywords:  Ascending auctions, Risk attitudes, Endogenous entry, Nonparametric Test, Bootstrap 
JEL:  D44 C12 C14 
Date:  2011–05–28 
URL:  http://d.repec.org/n?u=RePEc:pen:papers:12016&r=ecm 
By:  Gregor Wergen; Satya N. Majumdar; Gregory Schehr 
Abstract:  We study the statistics of the number of records R_{n,N} for N identical and independent symmetric discrete-time random walks of n steps in one dimension, all starting at the origin at step 0. At each time step, each walker jumps by a random length drawn independently from a symmetric and continuous distribution. We consider two cases: (I) when the variance \sigma^2 of the jump distribution is finite and (II) when \sigma^2 is divergent as in the case of L\'evy flights with index 0 < \mu < 2. In both cases we find that the mean record number <R_{n,N}> grows universally as \sim \alpha_N \sqrt{n} for large n, but with a very different behavior of the amplitude \alpha_N for N > 1 in the two cases. We find that for large N, \alpha_N \approx 2 \sqrt{\log N} independently of \sigma^2 in case I. In contrast, in case II, the amplitude approaches an N-independent constant for large N, \alpha_N \approx 4/\sqrt{\pi}, independently of 0 < \mu < 2. For finite \sigma^2 we argue, and this is confirmed by our numerical simulations, that the full distribution of (R_{n,N}/\sqrt{n} - 2 \sqrt{\log N}) \sqrt{\log N} converges to a Gumbel law as n \to \infty and N \to \infty. In case II, our numerical simulations indicate that the distribution of R_{n,N}/\sqrt{n} converges, for n \to \infty and N \to \infty, to a universal nontrivial distribution, independently of \mu. We discuss the applications of our results to the study of the record statistics of 366 daily stock prices from the Standard & Poor's 500 index.
Date:  2012–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1204.5039&r=ecm 
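The scaling <R_{n,N}> ~ \alpha_N \sqrt{n} is easy to probe by simulation: track the running maximum over the N walkers and count how often it is broken. A rough sketch for the finite-variance case (N, n and the trial count are arbitrary demo choices, and the convention of counting the common starting point as the first record is one common choice):

```python
import random

random.seed(7)

def ensemble_records(n_steps, n_walkers):
    """Records of the running maximum over n_walkers i.i.d. Gaussian walks."""
    pos = [0.0] * n_walkers
    best, records = 0.0, 1        # the common starting point is the first record
    for _ in range(n_steps):
        pos = [p + random.gauss(0, 1) for p in pos]
        m = max(pos)
        if m > best:              # a new record: some walker beats every past position
            best, records = m, records + 1
    return records

trials = 200
mean_100 = sum(ensemble_records(100, 10) for _ in range(trials)) / trials
mean_400 = sum(ensemble_records(400, 10) for _ in range(trials)) / trials
growth = mean_400 / mean_100      # sqrt(n) scaling predicts a ratio near 2
```

Quadrupling the number of steps should roughly double the mean record count, consistent with the \sqrt{n} law; the dependence of the amplitude on N (and on whether the jump variance is finite) is the paper's main result.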
By:  Almut E. D. Veraart (Imperial College London and CREATES); Luitgard A. M. Veraart (London School of Economics) 
Abstract:  This paper presents a new modelling framework for day-ahead electricity prices based on multivariate Lévy semistationary (MLSS) processes. Day-ahead prices specify the prices for electricity delivered over certain time windows on the next day and are determined in a daily auction. Since there are several delivery periods per day, we use a multivariate model to describe the different day-ahead prices for the different delivery periods on the next day. We extend the work by Barndorff-Nielsen et al. (2010) on univariate Lévy semistationary processes to a multivariate setting and discuss the probabilistic properties of the new class of stochastic processes. Furthermore, we provide a detailed empirical study using data from the European Energy Exchange (EEX) and give new insights into the intra-daily correlation structure of electricity day-ahead prices in the EEX market. The flexible structure of MLSS processes is able to reproduce the stylized facts of such data rather well. Furthermore, these processes can be used to model negative prices in electricity markets, which have started to occur recently and cannot be described by many classical models.
Keywords:  Electricity market, day-ahead prices, multivariate Lévy semistationary process, stochastic volatility, correlation, panel structure.
JEL:  C0 C1 C5 G1 
Date:  2012–03–30 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201212&r=ecm 
By:  Arne Risa Hole (Department of Economics, The University of Sheffield); Julie Riise Kolstad (UNI Rokkan Centre, University of Bergen); Dorte GyrdHansen (Health Economics Research Unit, University of Southern Denmark) 
Abstract:  It is increasingly recognised that respondents to choice experiments employ heuristics such as attribute non-attendance (ANA) to simplify the choice tasks. This paper develops an econometric model which incorporates preference heterogeneity among respondents with different attribute-processing strategies and allows the ANA probabilities to depend on the respondents' stated non-attendance. We find evidence that stated ANA is a useful indicator of the prevalence of non-attendance in the data. Contrary to previous papers in the literature, we find that willingness-to-pay estimates derived from models which account for ANA are similar to the standard logit estimates.
Keywords:  choice experiment; attribute non-attendance
JEL:  C25 I10 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:shf:wpaper:2012010&r=ecm 
By:  Thomas Mayer (Department of Economics, University of California Davis) 
Abstract:  D. N. McCloskey and Stephen Ziliak have criticized economists and others for confounding statistical and substantive significance, and for committing the logical error of the transposed conditional. In doing so they sometimes misinterpret the function of significance tests. Nonetheless, economists sometimes make both of these errors – but not nearly as often as Ziliak and McCloskey claim. They also argue – incorrectly – that the existence of an effect, which is what significance tests are about, is not a scientific question. Their complaint that in testing significance economists often do not take the loss function into account is unfounded. But they are right in arguing that confidence intervals should be presented more frequently.
Keywords:  Significance tests, ts, confidence intervals, Ziliak, McCloskey, oomph
JEL:  C12 B4 
Date:  2012–04–20 
URL:  http://d.repec.org/n?u=RePEc:cda:wpaper:126&r=ecm 
By:  Yulei Luo; Jun Nie; Eric R. Young 
Abstract:  This technical paper considers ways to capture uncertainty in the context of so-called "state-space" models. State-space models are powerful tools commonly used in macroeconomics, international economics, and finance. State-space models can generate estimates of an underlying, ultimately unobserved variable, such as the natural rate of unemployment, based on the movements of other variables that are observed and have some relationship to the unobserved variable. The paper shows how several macroeconomic models can be mapped to the state-space framework, thus helping quantify uncertainty about the true model (model uncertainty) or about the amount of information available when decisions are made (state uncertainty).
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp1202&r=ecm 
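The canonical state-space example is the local level model, in which a Kalman filter recovers an unobserved level (the "natural rate" in the paper's example) from noisy observations. A minimal sketch with made-up noise variances:

```python
import random

random.seed(4)
q, r = 0.1, 1.0                  # state (level) and observation noise variances
level, truth, ys = 0.0, [], []
for _ in range(300):
    level += random.gauss(0, q ** 0.5)        # unobserved state: a random walk
    truth.append(level)
    ys.append(level + random.gauss(0, r ** 0.5))  # what we actually observe

a, p = 0.0, 1.0                  # filtered mean and variance of the state
est = []
for y in ys:
    p += q                       # predict: state uncertainty grows by q
    k = p / (p + r)              # Kalman gain
    a += k * (y - a)             # update: shrink toward the new observation
    p *= 1 - k
    est.append(a)

mse_filter = sum((e - t) ** 2 for e, t in zip(est, truth)) / len(ys)
mse_raw = sum((y - t) ** 2 for y, t in zip(ys, truth)) / len(ys)
```

The filtered estimates track the hidden level much more closely than the raw observations do; the filter variance p is exactly the "state uncertainty" the paper discusses, and comparing filters from competing specifications speaks to its "model uncertainty".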