
on Econometrics 
By:  Kim Christensen (Aarhus University and CREATES); Ulrich Hounyo (Aarhus University and CREATES); Mark Podolskij (Aarhus University and CREATES) 
Abstract:  In this paper, we propose a new way to measure and test the presence of time-varying volatility in a discretely sampled jump-diffusion process that is contaminated by microstructure noise. We use the concept of pre-averaged truncated bipower variation to construct our t-statistic, which diverges in the presence of a heteroscedastic volatility term (and has a standard normal distribution otherwise). The test is inspected in a general Monte Carlo simulation setting, where we note that in finite samples the asymptotic theory is severely distorted by infinite-activity price jumps. To improve inference, we suggest a bootstrap approach to test the null of homoscedasticity. We prove the first-order validity of this procedure, while in simulations the bootstrap leads to almost correctly sized tests. As an illustration, we apply the bootstrapped version of our t-statistic to a large cross-section of equity high-frequency data. We document the importance of jump-robustness when measuring heteroscedasticity in practice. We also find that a large fraction of variation in intraday volatility is accounted for by seasonality. This suggests that, once we control for jumps and deflate asset returns by a nonparametric estimate of the conventional U-shaped diurnality profile, the variance of the rescaled return series is often close to constant within the day. 
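The jump-robustness that bipower variation provides can be illustrated with the plain textbook estimator (not the authors' pre-averaged, truncated, noise-robust statistic). In this stdlib Python sketch on an invented simulated return series, realized variance absorbs a single price jump while bipower variation barely moves:

```python
import math
import random

def realized_variance(returns):
    # Sum of squared returns: picks up both diffusion variance and jumps.
    return sum(r * r for r in returns)

def bipower_variation(returns):
    # (pi/2) * sum |r_i||r_{i-1}|: a single large jump enters each product
    # only next to a small diffusive return, so its impact is dampened.
    return (math.pi / 2) * sum(abs(a) * abs(b)
                               for a, b in zip(returns[:-1], returns[1:]))

random.seed(7)
n, sigma = 10_000, 0.01
diffusive = [random.gauss(0.0, sigma) for _ in range(n)]
jumpy = diffusive.copy()
jumpy[n // 2] += 0.2            # one large price jump, mid-sample

rv_d, bv_d = realized_variance(diffusive), bipower_variation(diffusive)
rv_j, bv_j = realized_variance(jumpy), bipower_variation(jumpy)
print(rv_d, bv_d)               # both near n * sigma^2 = 1.0 without jumps
print(rv_j, bv_j)               # RV absorbs the squared jump; BV barely moves
```

Both statistics estimate integrated variance in the jump-free sample, but only the realized variance shifts materially once the jump is added.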
Keywords:  Bipower variation, bootstrapping, heteroscedasticity, high-frequency data, microstructure noise, pre-averaging, time-varying volatility 
JEL:  C10 C80 
Date:  2016–08–30 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201627&r=ecm 
By:  Herbst, Edward; Schorfheide, Frank 
Abstract:  The accuracy of particle filters for nonlinear state-space models crucially depends on the proposal distribution that mutates time t−1 particle values into time t values. In the widely used bootstrap particle filter this distribution is generated by the state-transition equation. While straightforward to implement, the practical performance is often poor. We develop a self-tuning particle filter in which the proposal distribution is constructed adaptively through a sequence of Monte Carlo steps. Intuitively, we start from a measurement error distribution with an inflated variance, and then gradually reduce the variance to its nominal level in a sequence of steps that we call tempering. We show that the filter generates an unbiased and consistent approximation of the likelihood function. Holding the run time fixed, our filter is substantially more accurate in two DSGE model applications than the bootstrap particle filter. 
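The tempering idea can be sketched in stdlib Python on a hypothetical scalar linear-Gaussian model (the parameters PHI, SIG_X, SIG_Y and the inflation schedule STAGES are invented for illustration), with the exact Kalman filter likelihood as a benchmark. The sketch omits the mutation (Markov) steps between tempering stages that the full algorithm uses:

```python
import math
import random

random.seed(3)
PHI, SIG_X, SIG_Y = 0.9, 1.0, 0.5   # hypothetical state-space parameters
T, N = 50, 1000
STAGES = [16.0, 4.0, 1.0]           # measurement-variance inflation schedule

# Simulate y_t = x_t + v_t with x_t = PHI * x_{t-1} + w_t.
x, ys = 0.0, []
for _ in range(T):
    x = PHI * x + random.gauss(0.0, SIG_X)
    ys.append(x + random.gauss(0.0, SIG_Y))

def log_density(y, mean, var):
    return -0.5 * (math.log(2.0 * math.pi * var) + (y - mean) ** 2 / var)

def loglik_tempered(ys):
    parts, ll = [0.0] * N, 0.0
    for y in ys:
        # Bootstrap proposal: mutate particles through the state transition.
        parts = [PHI * p + random.gauss(0.0, SIG_X) for p in parts]
        prev_var = None
        for infl in STAGES:
            var = infl * SIG_Y ** 2
            if prev_var is None:
                lw = [log_density(y, p, var) for p in parts]
            else:
                # Incremental weight between consecutive tempering stages.
                lw = [log_density(y, p, var) - log_density(y, p, prev_var)
                      for p in parts]
            m = max(lw)
            w = [math.exp(v - m) for v in lw]
            ll += m + math.log(sum(w) / N)   # stage normalizers telescope
            parts = random.choices(parts, weights=w, k=N)  # resample
            prev_var = var
    return ll

def loglik_kalman(ys):
    mean, var, ll = 0.0, 0.0, 0.0
    for y in ys:
        mean, var = PHI * mean, PHI ** 2 * var + SIG_X ** 2   # predict
        s = var + SIG_Y ** 2
        ll += log_density(y, mean, s)
        gain = var / s                                        # update
        mean += gain * (y - mean)
        var *= (1.0 - gain)
    return ll

ll_pf = loglik_tempered(ys)
ll_kf = loglik_kalman(ys)
print(ll_pf, ll_kf)
```

The product of the stage-wise normalizing constants telescopes to the nominal likelihood increment, which is what makes the estimator unbiased; in this linear-Gaussian toy case the particle estimate should land close to the exact Kalman value.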
Keywords:  Bayesian Analysis ; DSGE Models ; Monte Carlo Methods ; Nonlinear Filtering 
JEL:  C11 C15 E10 
Date:  2016–08–25 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgfe:201672&r=ecm 
By:  Bartalotti, Otávio C.; Calhoun, Gray; He, Yang 
Abstract:  This paper develops a novel bootstrap procedure to obtain robust bias-corrected confidence intervals in regression discontinuity (RD) designs using the uniform kernel. The procedure uses a residual bootstrap from a second-order local polynomial to estimate the bias of the local linear RD estimator; the bias is then subtracted from the original estimator. The bias-corrected estimator is then bootstrapped itself to generate valid confidence intervals. The confidence intervals generated by this procedure are valid under conditions similar to Calonico, Cattaneo and Titiunik's (2014, Econometrica) analytical correction, i.e. when the bias of the naive regression discontinuity estimator would otherwise prevent valid inference. This paper also provides simulation evidence that our method is as accurate as the analytical corrections, and we demonstrate its use through a reanalysis of Ludwig and Miller's (2008) Head Start dataset. 
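A heavily simplified stdlib Python sketch of the idea, with global rather than local polynomials, data on one side of an invented cutoff, and the first-level bias estimate reused in the interval rather than the paper's full iterated bootstrap: a quadratic pilot fit supplies residuals, the residual bootstrap estimates the linear estimator's curvature bias, and the corrected estimator gets a percentile interval.

```python
import random

def polyfit(xs, ys, deg):
    # Least squares via normal equations, solved by Gaussian elimination.
    k = deg + 1
    a = [[sum(x ** (i + j) for x in xs) for j in range(k)]
         + [sum((x ** i) * y for x, y in zip(xs, ys))] for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, k):
            f = a[r][col] / a[col][col]
            for c in range(col, k + 1):
                a[r][c] -= f * a[col][c]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (a[r][k] - sum(a[r][c] * b[c] for c in range(r + 1, k))) / a[r][r]
    return b

random.seed(11)
n, B = 400, 500
truth = 1.0                                 # value of f at the cutoff x = 0
xs = [random.random() for _ in range(n)]    # data on one side of the cutoff
ys = [truth + 0.5 * x + 2.0 * x * x + random.gauss(0.0, 0.3) for x in xs]

naive = polyfit(xs, ys, 1)[0]               # linear fit: biased by curvature
quad = polyfit(xs, ys, 2)                   # second-order pilot fit
fitted = [quad[0] + quad[1] * x + quad[2] * x * x for x in xs]
resid = [y - yh for y, yh in zip(ys, fitted)]

# Residual bootstrap from the quadratic fit estimates the linear fit's bias.
boot = []
for _ in range(B):
    ystar = [yh + random.choice(resid) for yh in fitted]
    boot.append(polyfit(xs, ystar, 1)[0])
bias = sum(boot) / B - quad[0]
corrected = naive - bias

# Percentile interval for the bias-corrected estimator.
draws = sorted(b - bias for b in boot)
ci = (draws[int(0.025 * B)], draws[int(0.975 * B) - 1])
print(naive, corrected, ci)
```

In this setup the naive linear intercept is pulled well away from the truth by the quadratic term, while the bootstrap-corrected estimate recovers it.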
Date:  2016–05–01 
URL:  http://d.repec.org/n?u=RePEc:isu:genstf:3394&r=ecm 
By:  W. Robert Reed (University of Canterbury); Aaron Smith 
Abstract:  We show that cointegration among time series paradoxically makes it more likely that a unit root test will reject the unit root null hypothesis on the individual series. If one time series is cointegrated with another, then it can be written as the sum of two processes, one with a unit root and one stationary. It follows that the series cannot be represented as a finite-order autoregressive process. Unit root tests use an autoregressive model to account for autocorrelation, so they perform poorly in this setting, even if standard methods are used to choose the number of lags. This finding implies that univariate unit root tests are of questionable use in cointegration analysis. 
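The mechanism can be illustrated with a stdlib Python simulation: a series built as a random walk plus iid noise (the decomposition described above) is tested with a plain Dickey-Fuller regression. This uses no augmentation lags, a cruder setting than the lag-selection methods studied in the paper, but it shows rejections of the true unit root null piling up far beyond the nominal 5% level:

```python
import math
import random

random.seed(2)
T, REPS, CRIT = 200, 200, -2.86  # 5% Dickey-Fuller critical value (constant case)

def df_tstat(y):
    # Dickey-Fuller regression: dy_t on a constant and y_{t-1}; t-stat on y_{t-1}.
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    lag = y[:-1]
    n = len(dy)
    mx, my = sum(lag) / n, sum(dy) / n
    sxx = sum((x - mx) ** 2 for x in lag)
    sxy = sum((x - mx) * (d - my) for x, d in zip(lag, dy))
    b = sxy / sxx
    a = my - b * mx
    ssr = sum((d - a - b * x) ** 2 for d, x in zip(dy, lag))
    se = math.sqrt(ssr / (n - 2) / sxx)
    return b / se

def simulate(sig_u):
    # y_t = w_t + u_t: random walk plus an iid stationary component, the
    # decomposition a cointegrated series admits.
    w, y = 0.0, []
    for _ in range(T):
        w += random.gauss(0.0, 1.0)
        y.append(w + random.gauss(0.0, sig_u))
    return y

rej_pure = sum(df_tstat(simulate(0.0)) < CRIT for _ in range(REPS)) / REPS
rej_coint = sum(df_tstat(simulate(3.0)) < CRIT for _ in range(REPS)) / REPS
print(rej_pure, rej_coint)  # near-nominal rejections vs. heavy over-rejection
```

The stationary component induces a negative moving-average term in the differenced series that a finite-order autoregression cannot absorb, which is exactly the source of the size distortion.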
Keywords:  Unit root testing, cointegration, Augmented Dickey-Fuller test, Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), Modified Akaike Information Criterion (MAIC) 
JEL:  C32 C22 C18 
Date:  2016–09–06 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:16/19&r=ecm 
By:  Grant Hillier (CeMMAP and University of Southampton); Federico Martellosio (University of Surrey) 
Abstract:  The paper studies spatial autoregressive models with group interaction structure, focussing on estimation and inference for the spatial autoregressive parameter λ. The quasi-maximum likelihood estimator for λ usually cannot be written in closed form, but using an exact result obtained earlier by the authors for its distribution function, we are able to provide a complete analysis of the properties of the estimator, and of the exact inference that can be based on it, in models that are balanced. This is presented first for the so-called pure model, with no regression component, but is also extended to some special cases of the more general model. We then study the much more difficult case of unbalanced models, giving analogues of some, but by no means all, of the results obtained earlier for the balanced case. In both balanced and unbalanced models, results obtained for the pure model generalize immediately to the model with group-specific regression components. 
Date:  2016–05 
URL:  http://d.repec.org/n?u=RePEc:sur:surrec:0816&r=ecm 
By:  Grant Hillier (CeMMAP and University of Southampton); Federico Martellosio (University of Surrey) 
Abstract:  The (quasi) maximum likelihood estimator (QMLE) for the autoregressive parameter in a spatial autoregressive model cannot in general be written explicitly in terms of the data. The only known properties of the estimator have hitherto been its first-order asymptotic properties (Lee, 2004, Econometrica), derived under specific assumptions on the evolution of the spatial weights matrix involved. In this paper we show that the exact cumulative distribution function of the estimator can, under mild assumptions, be written in terms of that of a particular quadratic form. A number of immediate consequences of this result are discussed, and some examples are analyzed in detail. The examples are of interest in their own right, but also serve to illustrate some unexpected features of the distribution of the MLE. In particular, we show that the distribution of the MLE may not be supported on the entire parameter space, and may be nonanalytic at some points in its support. 
JEL:  C12 C21 
Date:  2016–05 
URL:  http://d.repec.org/n?u=RePEc:sur:surrec:0716&r=ecm 
By:  Francis DiTraglia; Camilo García-Jimeno 
Abstract:  To estimate causal effects from observational data, an applied researcher must impose beliefs. The instrumental variables exclusion restriction, for example, represents the belief that the instrument has no direct effect on the outcome of interest. Yet beliefs about instrument validity do not exist in isolation. Applied researchers often discuss the likely direction of selection and the potential for measurement error in their papers, but at present lack formal tools for incorporating this information into their analyses. As such, they not only leave money on the table by failing to use all relevant information, but, more importantly, run the risk of reasoning to a contradiction by expressing mutually incompatible beliefs. In this paper we characterize the sharp identified set relating instrument invalidity, treatment endogeneity, and measurement error in a workhorse linear model, showing how beliefs over these three dimensions are mutually constrained. We consider two cases: in the first, the treatment is continuous and subject to classical measurement error; in the second, it is binary and subject to non-differential measurement error. In each, we propose a formal Bayesian framework to help researchers elicit their beliefs, incorporate them into estimation, and ensure their mutual coherence. We conclude by illustrating the usefulness of our proposed methods on a variety of examples from the empirical microeconomics literature. 
JEL:  B23 B4 C10 C11 C16 C26 C8 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:22621&r=ecm 
By:  Manabu Asai (Soka University, Japan); Chia-Lin Chang (National Chung Hsing University, Taiwan); Michael McAleer (National Tsing Hua University, Taiwan; Erasmus School of Economics, Erasmus University Rotterdam; Complutense University of Madrid, Spain; Yokohama National University, Japan) 
Abstract:  The paper develops a novel realized matrix-exponential stochastic volatility model of multivariate returns and realized covariances that incorporates asymmetry and long memory (hereafter the RMESV-ALM model). The matrix exponential transformation guarantees the positive-definiteness of the dynamic covariance matrix. The contribution of the paper ties in with Robert Basmann’s seminal work in terms of the estimation of highly nonlinear model specifications (“Causality tests and observationally equivalent representations of econometric models”, Journal of Econometrics, 1988, 39(1–2), 69–104), especially for developing tests for leverage and spillover effects in the covariance dynamics. Efficient importance sampling is used to maximize the likelihood function of RMESV-ALM, and the finite sample properties of the quasi-maximum likelihood estimator of the parameters are analysed. Using high frequency data for three US financial assets, the new model is estimated and evaluated. The forecasting performance of the new model is compared with a novel dynamic realized matrix-exponential conditional covariance model. The volatility and co-volatility spillovers are examined via the news impact curves and the impulse response functions from returns to volatility and co-volatility. 
Keywords:  Matrix-exponential transformation; Realized stochastic covariances; Realized conditional covariances; Asymmetry; Long memory; Spillovers; Dynamic covariance matrix; Finite sample properties; Forecasting performance 
JEL:  C22 C32 C58 G32 
Date:  2016–09–12 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20160076&r=ecm 
By:  Oepping, Hardy 
Abstract:  This paper presents an approach to mapping a process model onto a Bayesian network resulting in a Bayesian Process Network, which will be applied to process risk analysis. Exemplified by the model of Event-driven Process Chains, it is demonstrated how a process model can be mapped onto an isomorphic Bayesian network, thus creating a Bayesian Process Network. Process events, functions, objects, and operators are mapped onto random variables, and the causal mechanisms between these are represented by appropriate conditional probabilities. Since process risks can be regarded as deviations of the process from its reference state, all process risks can be mapped onto risk states of the random variables. By example, we show how process risks can be specified, evaluated, and analysed by means of a Bayesian Process Network. The results reveal that the approach presented herein is a simple technique for enabling systemic process risk analysis because the Bayesian Process Network can be designed solely on the basis of an existing process model. 
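A minimal stdlib Python sketch of the mapping, with an invented three-node chain (root event → function → final event) and made-up conditional probabilities: each process element becomes a random variable with an 'ok' and a 'risk' state, and risk queries reduce to enumeration over the joint distribution.

```python
from itertools import product

# Hypothetical three-node process chain; probabilities are invented.
p_a = {'ok': 0.9, 'risk': 0.1}                                 # root event
p_b_given_a = {'ok': {'ok': 0.95, 'risk': 0.05},               # function
               'risk': {'ok': 0.2, 'risk': 0.8}}
p_c_given_b = {'ok': {'ok': 0.98, 'risk': 0.02},               # final event
               'risk': {'ok': 0.1, 'risk': 0.9}}

def joint(a, b, c):
    # Factorized joint distribution implied by the chain A -> B -> C.
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# Marginal probability that the final process event ends in its risk state.
p_c_risk = sum(joint(a, b, 'risk') for a, b in product(p_a, p_b_given_a))
# Diagnosis: posterior that the root event was at risk, given failure at C.
p_a_risk_given_c = sum(joint('risk', b, 'risk') for b in p_b_given_a) / p_c_risk
print(p_c_risk, p_a_risk_given_c)   # 0.13 and roughly 0.557
```

Exact enumeration is feasible here because the toy network has three binary nodes; real process chains would use standard Bayesian network inference over the same factorization.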
Keywords:  process models; process modelling; process chains; risk management; risk analysis; risk assessment; risk models; Bayesian networks; isomorphic mapping 
JEL:  C11 L23 M10 M11 
Date:  2016–09–07 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:73611&r=ecm 
By:  Yukitoshi Matsushita 
Abstract:  Hahn and Ridder (2013) formulated influence functions of semiparametric three-step estimators where generated regressors are computed in the first step. This class of estimators covers several important examples for empirical analysis, such as the production function estimators of Olley and Pakes (1996) and the propensity score matching estimators for treatment effects of Heckman, Ichimura and Todd (1998). This paper develops a nonparametric likelihood-based inference method for the parameters in such three-step estimation problems. By modifying the moment functions to account for influences from the first- and second-step estimation, the resulting likelihood ratio statistic becomes asymptotically pivotal, not only without estimating the asymptotic variance but also without undersmoothing. 
Keywords:  generated regressor, empirical likelihood 
JEL:  C12 C14 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:587&r=ecm 
By:  Xu, Ning; Hong, Jian; Fisher, Timothy 
Abstract:  In this paper, we study the generalization ability (GA), i.e. the ability of a model to predict outcomes in new samples from the same population, of extremum estimators. By adapting the classical concentration inequalities, we propose upper bounds on the empirical out-of-sample prediction error for extremum estimators, as a function of the in-sample error, the severity of heavy tails, the sample size of the in-sample data and model complexity. The error bounds not only serve to measure GA, but also illustrate the trade-off between in-sample and out-of-sample fit, which is connected to the traditional bias-variance tradeoff. Moreover, the bounds reveal that the hyperparameter K, the number of folds in K-fold cross-validation, causes a bias-variance tradeoff for the cross-validation error, which offers a route to hyperparameter optimization in terms of GA. As a direct application of GA analysis, we implement the new upper bounds in penalized regression estimates for both the n > p and the n < p settings. 
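The role of K can be illustrated with a stdlib Python experiment using the simplest possible predictor, the training-sample mean (a far cruder setting than the extremum estimators treated in the paper): with K = 2 each fold trains on half the data, so the cross-validated error estimate is biased further upward relative to K = 10.

```python
import random
import statistics

random.seed(5)

def kfold_cv_mse(data, k):
    # K-fold CV for the simplest predictor: the training-sample mean.
    folds = [data[i::k] for i in range(k)]
    sse, cnt = 0.0, 0
    for i in range(k):
        train = [v for j, f in enumerate(folds) if j != i for v in f]
        mu = sum(train) / len(train)
        sse += sum((v - mu) ** 2 for v in folds[i])
        cnt += len(folds[i])
    return sse / cnt

R, n = 2000, 10
cv2, cv10 = [], []
for _ in range(R):
    data = [random.gauss(0.0, 1.0) for _ in range(n)]
    cv2.append(kfold_cv_mse(data, 2))
    cv10.append(kfold_cv_mse(data, 10))

mean2, mean10 = statistics.mean(cv2), statistics.mean(cv10)
print(mean2, mean10)   # both above the irreducible error 1.0; K = 2 more so
```

Smaller training folds inflate the expected squared error by roughly 1/n_train (here about 1/5 versus 1/9), which is the bias side of the tradeoff; the variance side, driven by the overlap of training sets across folds, is not asserted here.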
Keywords:  generalization ability, upper bound of generalization error, penalized regression, bias-variance tradeoff, lasso, high-dimensional data, cross-validation, L2 difference between penalized and unpenalized regression 
JEL:  C13 C52 C55 
Date:  2016–09–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:73622&r=ecm 
By:  Aiste Ruseckaite (Erasmus University Rotterdam, the Netherlands); Dennis Fok (Erasmus University Rotterdam, the Netherlands); Peter Goos (KU Leuven, Belgium) 
Abstract:  Many products and services can be described as mixtures of ingredients whose proportions sum to one. Specialized models have been developed for linking the mixture proportions to outcome variables, such as preference, quality and liking. In many scenarios, only the mixture proportions matter for the outcome variable. In such cases, mixture models suffice. In other scenarios, the total amount of the mixture matters as well. In these cases, one needs mixture-amount models. As an example, consider advertisers who have to decide on the advertising media mix (e.g. 30% of the expenditures on TV advertising, 10% on radio and 60% on online advertising) as well as on the total budget of the entire campaign. To model mixture-amount data, the current strategy is to express the response in terms of the mixture proportions and to specify the mixture parameters as parametric functions of the amount. However, specifying the functional form for these parameters may not be straightforward, and using a flexible functional form usually comes at the cost of a large number of parameters. In this paper, we present a new modeling approach that is flexible but parsimonious in the number of parameters. The model is based on so-called Gaussian processes and avoids the need to specify the shape of the dependence of the mixture parameters on the amount a priori. We show that our model encompasses two commonly used model specifications as extreme cases. Finally, we demonstrate the model’s added value when compared to standard models for mixture-amount data. We consider two applications. The first deals with the reaction of mice to mixtures of hormones applied in different amounts. The second concerns the recognition of advertising campaigns. The mixture here is the particular media mix (TV and magazine advertising) used for a campaign. As the total amount variable, we consider the total advertising campaign exposure. 
Keywords:  Gaussian process prior; Nonparametric Bayes; Advertising mix; Ingredient proportions; Mixtures of ingredients 
JEL:  C01 C02 C11 C14 C51 C52 
Date:  2016–09–12 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20160075&r=ecm 
By:  Adrian Pagan (Sydney Uni) 
Abstract:  This note shows that the common practice of adding measurement errors, or "errors in variables", when estimating DSGE models can imply a lack of cointegration between model and data variables, and also between the data variables themselves. An analysis is provided of what the nature of the measurement error would have to be if one wished to ensure cointegration. It is very unlikely that it would be the white noise shocks that are commonly used. 
Keywords:  DSGE models, shocks 
Date:  2016–09–12 
URL:  http://d.repec.org/n?u=RePEc:qut:auncer:2016_05&r=ecm 