
on Econometrics 
By:  Manabu Asai; Michael McAleer (University of Canterbury); Marcelo C. Medeiros 
Abstract:  Several methods have recently been proposed in the ultra-high-frequency financial literature to remove the effects of microstructure noise and to obtain consistent estimates of the integrated volatility (IV) as a measure of ex-post daily volatility. Even bias-corrected and consistent (modified) realized volatility (RV) estimates of the integrated volatility can contain residual microstructure noise and other measurement errors. Such noise is called “realized volatility error”. Because such measurement errors are typically ignored, they need to be taken into account in estimating and forecasting IV. This paper investigates through Monte Carlo simulations the effects of RV errors on estimating and forecasting IV with RV data. It is found that: (i) neglecting RV errors can lead to serious bias in estimators due to model misspecification; (ii) the effects of RV errors on one-step-ahead forecasts are minor when consistent estimators are used and when the number of intraday observations is large; and (iii) even the partially corrected R² recently proposed in the literature should be fully corrected for evaluating forecasts. This paper proposes a full correction of R², which can be applied to linear and nonlinear, short and long memory models. An empirical example for S&P 500 data is used to demonstrate that neglecting RV errors can lead to serious bias in estimating the model of integrated volatility, and that the new method proposed here can eliminate the effects of the RV noise. The empirical results also show that the full correction of R² is necessary for an accurate description of goodness-of-fit. 
Keywords:  Realized volatility; diffusion; financial econometrics; measurement errors; forecasting; model evaluation; goodness-of-fit 
Date:  2010–05–01 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:10/21&r=ecm 
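To make the measurement-error problem above concrete, here is a minimal, self-contained Python sketch (an illustration only, not the authors' estimators; all names and parameter values are hypothetical): plain realized variance computed from noisy high-frequency prices is biased upward by microstructure noise, while sampling on a sparser grid reduces that bias.

```python
import math
import random

def realized_variance(prices):
    """Plain realized variance: sum of squared log-returns over the day."""
    logp = [math.log(p) for p in prices]
    return sum((logp[i + 1] - logp[i]) ** 2 for i in range(len(logp) - 1))

def sparse_realized_variance(prices, step):
    """Realized variance on a sparser grid (every step-th price), a crude way
    to reduce the microstructure-noise bias of the dense estimator."""
    return realized_variance(prices[::step])

# Hypothetical one-day simulation: efficient log-price with constant volatility,
# observed with additive i.i.d. microstructure noise (one price per second).
random.seed(0)
n, sigma, noise_sd = 23400, 0.01, 0.0005
true_iv = sigma ** 2                      # integrated variance of the efficient price
logp, prices = 0.0, []
for _ in range(n + 1):
    prices.append(math.exp(logp + random.gauss(0.0, noise_sd)))  # observed price
    logp += random.gauss(0.0, sigma / math.sqrt(n))              # efficient increment

rv_dense = realized_variance(prices)               # dominated by noise: about 2*n*noise_sd**2
rv_sparse = sparse_realized_variance(prices, 300)  # 5-minute grid: much closer to true_iv
```

Even the sparse estimate retains some residual noise, which is the "realized volatility error" the abstract insists must be modelled rather than ignored.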
By:  Krajina, A. (Tilburg University) 
Abstract:  An M-estimator of tail dependence. Extreme value theory is the part of probability and statistics that provides the theoretical background for modeling events that almost never happen. The estimation of the dependence between two or more such unlikely events (tail dependence) is the topic of this thesis. The tail dependence structure is modeled by the stable tail dependence function. In Chapter 2 a semiparametric model is considered in which the stable tail dependence function is parametrically modeled. A method of moments estimator of the unknown parameter is proposed, where an integral of a nonparametric, rank-based estimator of the stable tail dependence function is matched with the corresponding parametric version. This estimator is applied in Chapter 3 to estimate the tail dependence structure of the family of meta-elliptical distributions. The estimator introduced in Chapter 2 is extended in two respects in Chapter 4: (i) the number of variables is arbitrary; (ii) the number of moment equations can exceed the dimension of the parameter space. This estimator is defined as the value of the parameter vector that minimizes the distance between a vector of weighted integrals of the tail dependence function on the one hand and empirical counterparts of these integrals on the other hand. The method, not being likelihood based, applies to discrete and continuous models alike. Under minimal conditions all estimators introduced are consistent and asymptotically normal. The performance and applicability of the estimators are demonstrated by examples. 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:ner:tilbur:urn:nbn:nl:ui:123969610&r=ecm 
By:  Chernozhukov, V.; Lee, S.; Rosen, A.M. 
Abstract:  We develop a practical and novel method for inference on intersection bounds, namely bounds defined by either the infimum or supremum of a parametric or nonparametric function, or equivalently, the value of a linear programming problem with a potentially infinite constraint set. Our approach is especially convenient in models comprised of a continuum of inequalities that are separable in parameters, and also applies to models with inequalities that are nonseparable in parameters. Since analog estimators for intersection bounds can be severely biased in finite samples, routinely underestimating the length of the identified set, we also offer a (downward/upward) median unbiased estimator of these (upper/lower) bounds as a natural byproduct of our inferential procedure. Furthermore, our method appears to be the first and currently only method for inference in nonparametric models with a continuum of inequalities. We develop asymptotic theory for our method based on the strong approximation of a sequence of studentized empirical processes by a sequence of Gaussian or other pivotal processes. We provide conditions for the use of nonparametric kernel and series estimators, including a novel result that establishes strong approximation for general series estimators, which may be of independent interest. We illustrate the usefulness of our method with Monte Carlo experiments and an empirical example. 
Date:  2009–07 
URL:  http://d.repec.org/n?u=RePEc:ner:ucllon:http://eprints.ucl.ac.uk/17152/&r=ecm 
By:  Horowitz, J.; Lee, S. 
Abstract:  This paper is concerned with developing uniform confidence bands for functions estimated nonparametrically with instrumental variables. We show that a sieve nonparametric instrumental variables estimator is pointwise asymptotically normally distributed. The asymptotic normality result holds in both mildly and severely ill-posed cases. We present an interpolation method to obtain a uniform confidence band and show that the bootstrap can be used to obtain the required critical values. Monte Carlo experiments illustrate the finite-sample performance of the uniform confidence band. 
Date:  2009–07 
URL:  http://d.repec.org/n?u=RePEc:ner:ucllon:http://eprints.ucl.ac.uk/18216/&r=ecm 
By:  Parker, Thomas 
Abstract:  The Wald, likelihood ratio and Lagrange multiplier test statistics are commonly used to test linear restrictions in regression models. It is shown that for testing these restrictions in the classical regression model, the exact densities of these test statistics are special cases of the generalized beta distribution introduced by McDonald (1984); McDonald and Xu (1995a). This unified derivation provides a method by which one can derive small sample critical values for each test. These results may be indicative of the behavior of such test statistics in more general settings, and are useful in visualizing how each statistic changes with different parameter values in the simple regression model. For example, the results suggest that Wald tests may severely underreject the null hypothesis when the sample size is small or a large number of restrictions are tested. 
Keywords:  Test of linear restrictions; Generalized beta distribution; Small-sample probability distribution; Regression model 
JEL:  C12 
Date:  2010–05–20 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:22841&r=ecm 
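The small-sample point can be illustrated by simulation (a hedged sketch, not Parker's generalized-beta derivation; variable names are illustrative): in a two-parameter regression with normal errors, the exact 5% critical value of the Wald statistic for one restriction differs from the asymptotic chi-square(1) value of 3.841, so asymptotic critical values are unreliable when n is small.

```python
import random
import statistics

def wald_stat(x, y):
    """Wald statistic for H0: slope = 0 in a simple OLS regression (equals t**2)."""
    n = len(x)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    rss = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    s2 = rss / (n - 2)          # unbiased error-variance estimate
    return b * b / (s2 / sxx)

random.seed(1)
n, reps = 10, 20000
x = [random.gauss(0, 1) for _ in range(n)]     # regressors held fixed across replications
draws = []
for _ in range(reps):
    y = [1.0 + random.gauss(0, 1) for _ in x]  # slope is truly zero under H0
    draws.append(wald_stat(x, y))
draws.sort()
cv_exact = draws[int(0.95 * reps)]   # simulated finite-sample 5% critical value
cv_asymptotic = 3.841                # chi-square(1) 95th percentile
```

With this t-squared form of the statistic the exact critical value sits well above 3.841 at n = 10, which is exactly the kind of finite-sample discrepancy the paper characterizes analytically.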
By:  Michael McAleer (University of Canterbury); Marcelo C. Medeiros 
Abstract:  In this paper we consider a nonlinear model based on neural networks as well as linear models to forecast the daily volatility of the S&P 500 and FTSE 100 futures. As a proxy for daily volatility, we consider a consistent and unbiased estimator of the integrated volatility that is computed from high frequency intraday returns. We also consider a simple algorithm based on bagging (bootstrap aggregation) in order to specify the models analyzed. 
Keywords:  Financial econometrics; volatility forecasting; neural networks; nonlinear models; realized volatility; bagging 
Date:  2010–05–01 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:10/28&r=ecm 
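Bagging (bootstrap aggregation), which the abstract uses for model specification, can be sketched in a few lines (an illustrative toy, not the authors' algorithm; the AR(1) forecaster and all names are assumptions): fit a simple model on bootstrap resamples of the (lagged value, value) pairs and average the one-step forecasts.

```python
import random
import statistics

def ar1_fit(series):
    """OLS estimate of phi in y_t = phi * y_{t-1} + e_t (no intercept, for brevity)."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def bagged_forecast(series, n_boot=200, seed=42):
    """One-step forecast averaged over AR(1) fits on bootstrap resamples
    of the (lagged value, value) pairs."""
    rng = random.Random(seed)
    pairs = list(zip(series[:-1], series[1:]))
    forecasts = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]   # resample with replacement
        num = sum(y * x for x, y in sample)
        den = sum(x * x for x, _ in sample)
        forecasts.append((num / den) * series[-1])
    return statistics.fmean(forecasts)

# Hypothetical AR(1) data with phi = 0.6.
random.seed(7)
y = [0.0]
for _ in range(300):
    y.append(0.6 * y[-1] + random.gauss(0, 1))
plain = ar1_fit(y) * y[-1]     # forecast from a single fit
bagged = bagged_forecast(y)    # bagged forecast
```

Averaging over resamples stabilizes the specification step; with a richer forecaster (such as the neural networks in the paper) the variance-reduction effect of the averaging is typically larger.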
By:  Rousseau, Judith 
Abstract:  In this paper, we investigate the asymptotic properties of nonparametric Bayesian mixtures of Betas for estimating a smooth density on [0, 1]. We consider a parametrization of Beta distributions in terms of mean and scale parameters and construct a mixture of these Betas in the mean parameter, while putting a prior on this scaling parameter. We prove that such Bayesian nonparametric models have good frequentist asymptotic properties. We determine the posterior rate of concentration around the true density and prove that it is the minimax rate of concentration when the true density belongs to a Hölder class with regularity β, for all positive β, leading to a minimax adaptive estimating procedure of the density. We also believe that the approximation results obtained on these mixtures of Beta densities can be of interest in a frequentist framework. 
Keywords:  kernel; adaptive estimation; mixtures of Betas; rates of convergence; Bayesian nonparametrics 
JEL:  C11 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:ner:dauphi:urn:hdl:123456789/3984&r=ecm 
By:  Matias D. Cattaneo (University of Michigan); Richard K. Crump (Federal Reserve Bank of New York); Michael Jansson (UC Berkeley and CREATES) 
Abstract:  Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust" variance estimator derived from the "small bandwidth" asymptotic framework. The results of a small-scale Monte Carlo experiment are found to be consistent with the theory and indicate in particular that sensitivity with respect to the bandwidth choice can be ameliorated by using the "robust" variance estimator. JEL Classification: C12, C14, C21, C24 
Keywords:  Averaged derivatives, Bootstrap, Small bandwidth asymptotics 
Date:  2010–05–17 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201023&r=ecm 
By:  Guorui Bian; Michael McAleer (University of Canterbury); Wing-Keung Wong 
Abstract:  This paper develops a new test, the trinomial test, for pairwise ordinal data samples to improve the power of the sign test by modifying its treatment of zero differences between observations, thereby increasing the use of sample information. Simulations demonstrate the power superiority of the proposed trinomial test statistic over the sign test in small samples in the presence of tied observations. We also show that the proposed trinomial test has substantially higher power than the sign test in large samples when tied observations are present, as the sign test ignores the information in observations resulting in ties. 
Keywords:  Sign test; trinomial test; nonparametric test; ties; test statistics; hypothesis testing 
JEL:  C12 C14 C15 
Date:  2010–05–06 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:10/20&r=ecm 
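A simplified version of the idea (an illustration of the trinomial principle, not the authors' exact statistic; the function and example counts are hypothetical) is to compute an exact one-sided p-value for the difference of positive and negative counts under a trinomial null, with the tie probability estimated from the data, instead of discarding ties as the sign test does.

```python
from math import comb

def trinomial_pvalue(n_pos, n_neg, n_tie):
    """One-sided P(N+ - N- >= observed difference) when (N+, N-, N0) is
    trinomial under H0: p+ = p-, with the tie probability estimated from
    the data. A simplified illustration of using tie counts rather than
    discarding them."""
    n = n_pos + n_neg + n_tie
    p0 = n_tie / n              # estimated tie probability
    p = (1 - p0) / 2            # common probability of + and - under H0
    d_obs = n_pos - n_neg
    total = 0.0
    for k_tie in range(n + 1):
        for k_pos in range(n - k_tie + 1):
            k_neg = n - k_tie - k_pos
            if k_pos - k_neg >= d_obs:
                total += (comb(n, k_tie) * comb(n - k_tie, k_pos)
                          * p0 ** k_tie * p ** (k_pos + k_neg))
    return total

p_balanced = trinomial_pvalue(5, 5, 2)   # no asymmetry: large p-value
p_skewed = trinomial_pvalue(8, 2, 2)     # strong asymmetry: small p-value
```

Because the tie count enters the null distribution directly, samples with many ties are treated differently from samples with none, which is the source of the power gain the abstract reports.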
By:  Kim P. Huynh (Department of Economics, Indiana University); Luke Ignaczak (Department of Economics, Carleton University); Marcel-Cristian Voia (Department of Economics, Carleton University) 
Abstract:  This note investigates the behavior of stochastic dominance tests of censored distributions that depend on nuisance parameters. In particular, we consider finite mixture distributions that are subject to exogenous censoring. To deal with this potential problem, critical values of the proposed test statistics are calculated using a parametric bootstrap. The tests are then applied to compare differences between distributions of incomplete employment spells with different levels of censoring obtained from Canadian General Social Survey data. The size of the proposed test statistics is computed using fitted GSS data. 
Keywords:  Stochastic Dominance Tests, Parametric Bootstrap, Censored Distributions, Finite Mixtures 
JEL:  C14 C12 C16 C41 
Date:  2010–01–21 
URL:  http://d.repec.org/n?u=RePEc:car:carecp:1002&r=ecm 
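The parametric bootstrap used above for critical values can be sketched generically (a hypothetical setup: a Kolmogorov-Smirnov statistic with estimated normal parameters, not the censored-mixture dominance tests of the paper): simulate samples from the fitted null model, re-estimate the nuisance parameters on each, and take the 95th percentile of the simulated statistics as the critical value.

```python
import random
import statistics
from math import erf, sqrt

def normal_cdf(x, mu, sd):
    """CDF of N(mu, sd**2)."""
    return 0.5 * (1 + erf((x - mu) / (sd * sqrt(2))))

def ks_stat(sample, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF and cdf."""
    xs = sorted(sample)
    n = len(xs)
    return max(max((i + 1) / n - cdf(x), cdf(x) - i / n)
               for i, x in enumerate(xs))

def parametric_bootstrap_cv(sample, level=0.05, n_boot=500, seed=0):
    """Critical value of the KS statistic with estimated normal parameters,
    obtained by simulating from the fitted null model (parametric bootstrap)."""
    rng = random.Random(seed)
    mu, sd = statistics.fmean(sample), statistics.stdev(sample)
    sims = []
    for _ in range(n_boot):
        boot = [rng.gauss(mu, sd) for _ in sample]
        bm, bs = statistics.fmean(boot), statistics.stdev(boot)   # re-estimate
        sims.append(ks_stat(boot, lambda t: normal_cdf(t, bm, bs)))
    sims.sort()
    return sims[int((1 - level) * n_boot)]

random.seed(3)
data = [random.gauss(2.0, 1.5) for _ in range(50)]
cv = parametric_bootstrap_cv(data)   # close to the tabulated Lilliefors value
```

Re-estimating the parameters inside each bootstrap replication is what accounts for the nuisance-parameter dependence; the same recipe applies with the paper's censored finite-mixture null in place of the normal model.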
By:  Rosen, A. 
Abstract:  This paper studies the identifying power of conditional quantile restrictions in short panels with fixed effects. In contrast to classical fixed effects models with conditional mean restrictions, conditional quantile restrictions are not preserved by taking differences in the regression equation over time. This paper shows however that a conditional quantile restriction, in conjunction with a weak conditional independence restriction, provides bounds on quantiles of differences in timevarying unobservables across periods. These bounds carry observable implications for model parameters which generally result in set identification. The analysis of these bounds includes conditions for point identification of the parameter vector, as well as weaker conditions that result in identification of individual parameter components. 
Date:  2009–09 
URL:  http://d.repec.org/n?u=RePEc:ner:ucllon:http://eprints.ucl.ac.uk/18230/&r=ecm 
By:  Søren Johansen (University of Copenhagen and CREATES); Morten Ørregaard Nielsen (Queen's University and CREATES) 
Abstract:  We consider model-based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X_{t} to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β′X_{t} is fractional of order d-b. The parameters d and b satisfy either d≥b≥1/2, d=b≥1/2, or d=d_{0}≥b≥1/2. Our main technical contribution is the proof of consistency of the maximum likelihood estimators on the set 1/2≤b≤d≤d_{1} for any d_{1}≥d_{0}. To this end, we consider the conditional likelihood as a stochastic process in the parameters, and prove that it converges in distribution when errors are i.i.d. with suitable moment conditions and initial values are bounded. We then prove that the estimator of β is asymptotically mixed Gaussian and estimators of the remaining parameters are asymptotically Gaussian. We also find the asymptotic distribution of the likelihood ratio test for cointegration rank, which is a functional of fractional Brownian motion of type II. 
Keywords:  Cofractional processes, cointegration rank, fractional cointegration, likelihood inference, vector autoregressive model 
JEL:  C32 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1237&r=ecm 
By:  Jin Zhang; Wing Long Ng 
Abstract:  In recent years, copulas have become very popular in financial research and actuarial science as they are more flexible in modelling the comovements and relationships of risk factors as compared to the conventional linear correlation coefficient by Pearson. However, a precise estimation of the copula parameters is vital in order to correctly capture the (possibly nonlinear) dependence structure and joint tail events. In this study, we employ two optimization heuristics, namely Differential Evolution and Threshold Accepting, to tackle the parameter estimation of multivariate t distribution models in the exact maximum likelihood (EML) approach. Since the evolutionary optimizer does not rely on gradient search, the EML approach can be applied to the estimation of more complicated copula models such as high-dimensional copulas. Our experimental study shows that the proposed method provides more robust and more accurate estimates as compared to the IFM approach. 
Keywords:  Copula Models, Parameter Inference, Exact Maximum Likelihood, Differential Evolution, Threshold Accepting 
Date:  2010–05–17 
URL:  http://d.repec.org/n?u=RePEc:com:wpaper:038&r=ecm 
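The role of a gradient-free evolutionary optimizer in likelihood estimation can be sketched with a minimal differential evolution loop (a stand-in example: a bivariate normal correlation likelihood rather than a copula likelihood, since the optimization structure is the same; all names and settings are illustrative).

```python
import math
import random

def neg_loglik(rho, pairs):
    """Negative log-likelihood of a standard bivariate normal with correlation
    rho (up to constants). A stand-in for a copula likelihood: the optimizer
    only needs function values, never gradients."""
    if not -0.999 < rho < 0.999:
        return float("inf")
    c = 1.0 - rho * rho
    ll = 0.0
    for u, v in pairs:
        ll += -0.5 * math.log(c) - (u * u - 2 * rho * u * v + v * v) / (2 * c)
    return -ll

def differential_evolution(obj, lo, hi, pop_size=20, gens=60, f_scale=0.8,
                           cr=0.9, seed=0):
    """Minimal one-dimensional differential evolution: mutate each member with
    a scaled difference of two others, crossover, then greedy selection."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    fit = [obj(p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = a + f_scale * (b - c) if rng.random() < cr else pop[i]
            val = obj(trial)
            if val <= fit[i]:            # keep the trial only if it improves
                pop[i], fit[i] = trial, val
    return pop[fit.index(min(fit))]

# Hypothetical data with true correlation 0.7.
random.seed(11)
pairs = []
for _ in range(400):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    pairs.append((z1, 0.7 * z1 + math.sqrt(1 - 0.7 ** 2) * z2))
rho_hat = differential_evolution(lambda r: neg_loglik(r, pairs), -0.99, 0.99)
```

Because only objective evaluations are needed, the same loop extends directly to multi-parameter, high-dimensional copula likelihoods where gradients are awkward or unavailable, which is the advantage the abstract emphasizes.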
By:  Chen, L.Y.; Szroeter, J. 
Abstract:  Econometric inequality hypotheses arise in diverse ways. Examples include concavity restrictions on technological and behavioural functions, monotonicity and dominance relations, one-sided constraints on conditional moments in GMM estimation, bounds on parameters which are only partially identified, and orderings of predictive performance measures for competing models. In this paper we set forth four key properties which tests of multiple inequality constraints should ideally satisfy. These are (1) (asymptotic) exactness, (2) (asymptotic) similarity on the boundary, (3) absence of nuisance parameters from the asymptotic null distribution of the test statistic, (4) low computational complexity and bootstrapping cost. We observe that the predominant tests currently used in econometrics do not appear to enjoy all these properties simultaneously. We therefore ask the question: does there exist any nontrivial test which, as a mathematical fact, satisfies the first three properties and, by any reasonable measure, satisfies the fourth? Remarkably the answer is affirmative. The paper demonstrates this constructively. We introduce a method of test construction called chaining which begins by writing multiple inequalities as a single equality using zero-one indicator functions. We then smooth the indicator functions. The approximate equality thus obtained is the basis of a well-behaved test. This test may be considered as the baseline of a wider class of tests. A full asymptotic theory is provided for the baseline. Simulation results show that the finite-sample performance of the test matches the theory quite well. 
Date:  2009–06 
URL:  http://d.repec.org/n?u=RePEc:ner:ucllon:http://eprints.ucl.ac.uk/18215/&r=ecm 
By:  Wegmann , Bertil (Department of Statistics); Villani, Mattias (Research Department, Central Bank of Sweden) 
Abstract:  Structural econometric auction models with explicit game-theoretic modeling of bidding strategies have been quite a challenge from a methodological perspective, especially within the common value framework. We develop a Bayesian analysis of the hierarchical Gaussian common value model with stochastic entry introduced by Bajari and Hortaçsu (2003). A key component of our approach is an accurate and easily interpretable analytical approximation of the equilibrium bid function, resulting in a fast and numerically stable evaluation of the likelihood function. The analysis is also extended to situations with positive valuations using a hierarchical Gamma model. We use a Bayesian variable selection algorithm that simultaneously samples the posterior distribution of the model parameters and does inference on the choice of covariates. The methodology is applied to simulated data and to a carefully collected dataset from eBay with bids and covariates from 1000 coin auctions. It is demonstrated that the Bayesian algorithm is very efficient and that the approximation error in the bid function has virtually no effect on the model inference. Both models fit the data well, but the Gaussian model outperforms the Gamma model in an out-of-sample forecasting evaluation of auction prices. 
Keywords:  Bid function approximation; eBay; Internet auctions; Likelihood inference; Markov Chain Monte Carlo; Normal valuations; Variable selection 
JEL:  C11 C52 C53 
Date:  2010–05–01 
URL:  http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0242&r=ecm 
By:  Joshua C.C. Chan; Garry Koop; Roberto Leon Gonzales; Rodney W. Strachan 
Abstract:  Time-varying parameter (TVP) models have enjoyed increasing popularity in empirical macroeconomics. However, TVP models are parameter-rich and risk overfitting unless the dimension of the model is small. Motivated by this worry, this paper proposes several time-varying dimension (TVD) models in which the dimension of the model can change over time, allowing the model to automatically choose a more parsimonious TVP representation, or to switch between different parsimonious representations. Our TVD models all fall in the category of dynamic mixture models. We discuss the properties of these models and present methods for Bayesian inference. An application involving US inflation forecasting illustrates and compares the different TVD models. We find our TVD approaches exhibit better forecasting performance than several standard benchmarks and shrink towards parsimonious specifications. 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:acb:cbeeco:2010523&r=ecm 
By:  Lee, S.; Whang, Y.J. 
Abstract:  We develop a general class of nonparametric tests for treatment effects conditional on covariates. We consider a wide spectrum of null and alternative hypotheses regarding conditional treatment effects, including (i) the null hypothesis of the conditional stochastic dominance between treatment and control groups; (ii) the null hypothesis that the conditional average treatment effect is positive for each value of covariates; and (iii) the null hypothesis of no distributional (or average) treatment effect conditional on covariates against a one-sided (or two-sided) alternative hypothesis. The test statistics are based on L1-type functionals of uniformly consistent nonparametric kernel estimators of conditional expectations that characterize the null hypotheses. Using the Poissonization technique of Giné et al. (2003), we show that suitably studentized versions of our test statistics are asymptotically standard normal under the null hypotheses and also show that the proposed nonparametric tests are consistent against general fixed alternatives. Furthermore, it turns out that our tests have non-negligible power against some local alternatives that differ from the null hypotheses at the rate n^(−1/2), where n is the sample size. We provide a more powerful test for the case when the null hypothesis may be binding only on a strict subset of the support and also consider an extension to testing for quantile treatment effects. We illustrate the usefulness of our tests by applying them to data from a randomized job training program (LaLonde, 1986) and by carrying out Monte Carlo experiments based on this dataset. 
Date:  2009–12–06 
URL:  http://d.repec.org/n?u=RePEc:ner:ucllon:http://eprints.ucl.ac.uk/18904/&r=ecm 
By:  Marta Bańbura (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Michele Modugno (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.) 
Abstract:  In this paper we propose a methodology to estimate a dynamic factor model on data sets with an arbitrary pattern of missing data. We modify the Expectation Maximisation (EM) algorithm as proposed for a dynamic factor model by Watson and Engle (1983) to the case with a general pattern of missing data. We also extend the model to the case with a serially correlated idiosyncratic component. The framework makes it possible to handle efficiently, and in an automatic manner, sets of indicators characterized by different publication delays, frequencies and sample lengths. This can be relevant, e.g., for young economies for which many indicators have been compiled only recently. We also show how to extract model-based news from a statistical data release within our framework, and we derive the relationship between the news and the resulting forecast revision. This can be used for interpretation, e.g., in nowcasting applications, as it makes it possible to determine the sign and size of the news as well as its contribution to the revision, in particular in the case of simultaneous data releases. We evaluate the methodology in a Monte Carlo experiment and we apply it to nowcasting and backdating of euro area GDP. JEL Classification: C53, E37. 
Keywords:  Factor Models, Forecasting, Large Cross-Sections, Missing data, EM algorithm. 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20101189&r=ecm 
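The E-step / M-step idea behind handling arbitrary missing-data patterns can be illustrated in a toy setting (a bivariate normal with the second coordinate sometimes missing; the paper's algorithm handles dynamic factor structure and serial correlation, which this sketch does not, and all names are hypothetical).

```python
import random
import statistics

def em_bivariate(obs, iters=50):
    """EM for a bivariate normal in which the second coordinate is sometimes
    missing (None). Returns (mu1, mu2, s11, s12, s22). A toy version of the
    E-step / M-step pattern used for factor models with missing data."""
    y1 = [a for a, _ in obs]
    seen = [b for _, b in obs if b is not None]
    mu1 = statistics.fmean(y1)
    s11 = statistics.fmean([(a - mu1) ** 2 for a in y1])
    mu2 = statistics.fmean(seen)
    s22 = statistics.fmean([(b - mu2) ** 2 for b in seen])
    s12 = 0.0
    for _ in range(iters):
        beta = s12 / s11                  # regression of y2 on y1
        cvar = s22 - beta * s12           # conditional variance of y2 given y1
        e2, e22, e12 = [], [], []
        for a, b in obs:
            if b is None:                 # E-step: conditional moments
                m = mu2 + beta * (a - mu1)
                e2.append(m); e22.append(m * m + cvar); e12.append(a * m)
            else:
                e2.append(b); e22.append(b * b); e12.append(a * b)
        mu2 = statistics.fmean(e2)        # M-step: re-estimate from completed data
        s22 = statistics.fmean(e22) - mu2 * mu2
        s12 = statistics.fmean(e12) - mu1 * mu2
    return mu1, mu2, s11, s12, s22

# Hypothetical data: covariance 0.8, 30% of y2 missing completely at random.
random.seed(2)
obs = []
for _ in range(500):
    a = random.gauss(0, 1)
    b = 0.8 * a + 0.6 * random.gauss(0, 1)
    obs.append((a, b if random.random() > 0.3 else None))
mu1, mu2, s11, s12, s22 = em_bivariate(obs)
```

The key point carried over to the factor-model setting is that the E-step replaces missing values by conditional moments under the current parameters (not just conditional means), so publication delays and ragged sample lengths are absorbed automatically.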
By:  Chesher, A. 
Abstract:  This paper studies single equation models for binary outcomes incorporating instrumental variable restrictions. The models are incomplete in the sense that they place no restriction on the way in which values of endogenous variables are generated. The models are set, not point, identifying. The paper explores the nature of set identification in single equation IV models in which the binary outcome is determined by a threshold crossing condition. There is special attention to models which require the threshold crossing function to be a monotone function of a linear index involving observable endogenous and exogenous explanatory variables. Identified sets can be large unless instrumental variables have substantial predictive power. A generic feature of the identified sets is that they are not connected when instruments are weak. The results suggest that the strong point identifying power of triangular "control function" models (restricted versions of the IV models considered here) is fragile, with the wide expanses of the IV model's identified set awaiting in the event of failure of the triangular model's restrictions. 
Date:  2009–08 
URL:  http://d.repec.org/n?u=RePEc:ner:ucllon:http://eprints.ucl.ac.uk/18226/&r=ecm 
By:  Manabu Asai; Massimiliano Caporin; Michael McAleer (University of Canterbury) 
Abstract:  Most multivariate variance models suffer from a common problem, the “curse of dimensionality”. For this reason, most are fitted under strong parametric restrictions that reduce the interpretation and flexibility of the models. Recently, the literature has focused on multivariate models with milder restrictions, which aim to reconcile the interpretability and efficiency required by model users with the computational problems that may emerge when the number of assets is quite large. We contribute to this strand of the literature by proposing a block-type parameterization for multivariate stochastic volatility models. 
Keywords:  Block structures; multivariate stochastic volatility; curse of dimensionality 
JEL:  C32 C51 C10 
Date:  2010–05–01 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:10/24&r=ecm 
By:  Dayton M. Lambert (Dept. of Agricultural Economics, University of Tennessee); Jason P. Brown (USDA, Economic Research Service, Washington, D.C.); Raymond J.G.M. Florax (Dept. of Agricultural Economics, Purdue University) 
Abstract:  Several spatial econometric approaches are available to model spatially correlated disturbances in count models, but there are at present no structurally consistent count models incorporating spatial lag autocorrelation. A two-step, limited information maximum likelihood estimator is proposed to fill this gap. The estimator is developed assuming a Poisson distribution, but can be extended to other count distributions. The small sample properties of the estimator are evaluated with Monte Carlo experiments. Simulation results suggest that the spatial lag count estimator achieves gains in terms of bias over the aspatial version as spatial lag autocorrelation and sample size increase. An empirical example deals with the location choice of single-unit start-up firms in the manufacturing industry in the US between 2000 and 2004. The empirical results suggest that in the dynamic process of firm formation, counties dominated by firms exhibiting (internal) increasing returns to scale are at a relative disadvantage even if localization economies are present. 
Keywords:  count model, location choice, manufacturing, Poisson, spatial econometrics 
JEL:  C21 C25 D21 R12 R30 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:pae:wpaper:105&r=ecm 
By:  Heather M Anderson; Farshid Vahid 
Abstract:  This paper argues that VAR models with cointegration and common cycles can be usefully viewed as observable factor models. The factors are linear combinations of lagged levels and lagged differences, and as such, these observable factors have potential for forecasting. We illustrate this forecast potential in both a Monte Carlo and empirical setting, and demonstrate the difficulties in developing forecasting "rules of thumb" for forecasting in multivariate systems. 
Keywords:  Common factors, Cross equation restrictions, Multivariate forecasting, Reduced rank models. 
JEL:  C32 C53 E37 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201014&r=ecm 
By:  Kitagawa, T. 
Abstract:  This paper examines the identification power of the instrument exogeneity assumption in the treatment effect model. We derive the identification region: the set of potential outcome distributions that are compatible with the data and the model restriction. The model restrictions whose identifying power is investigated are (i) instrument independence of each of the potential outcomes (marginal independence), (ii) joint independence of the instrument from the potential outcomes and the selection heterogeneity, and (iii) instrument monotonicity in addition to (ii) (the LATE restriction of Imbens and Angrist (1994)), where these restrictions become stronger in the order of listing. By comparing the size of the identification region under each restriction, we show that the joint independence restriction can provide further identifying information for the potential outcome distributions than marginal independence, but the LATE restriction never does since it solely constrains the distribution of data. We also derive the tightest possible bounds for the average treatment effects under each restriction. Our analysis covers both the discrete and continuous outcome case, and extends the treatment effect bounds of Balke and Pearl (1997) that are available only for the binary outcome case to a wider range of settings including the continuous outcome case. 
Date:  2009–10 
URL:  http://d.repec.org/n?u=RePEc:ner:ucllon:http://eprints.ucl.ac.uk/18261/&r=ecm 
By:  M. Hashem Pesaran (Cambridge University, Faculty of Economics, Austin Robinson Building, Sidgwick Avenue, Cambridge, CB3 9DD, United Kingdom.); Alexander Chudik (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.) 
Abstract:  This paper extends the analysis of infinite-dimensional vector autoregressive models (IVAR) proposed in Chudik and Pesaran (2010) to the case where one of the variables or the cross section units in the IVAR model is dominant or pervasive. This extension is not straightforward and involves several technical difficulties. The dominant unit influences the rest of the variables in the IVAR model both directly and indirectly, and its effects do not vanish even as the dimension of the model (N) tends to infinity. The dominant unit acts as a dynamic factor in the regressions of the non-dominant units and yields an infinite order distributed lag relationship between the two types of units. Despite this, it is shown that the effects of the dominant unit as well as those of the neighborhood units can be consistently estimated by running augmented least squares regressions that include distributed lag functions of the dominant unit. The asymptotic distribution of the estimators is derived and their small sample properties investigated by means of Monte Carlo experiments. JEL Classification: C10, C33, C51. 
Keywords:  IVAR Models, Dominant Units, Large Panels, Weak and Strong Cross Section Dependence, Factor Models. 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20101194&r=ecm 
By:  Sébastien Laurent; Jeroen V.K. Rombouts; Francesco Violante 
Abstract:  This paper addresses the question of the selection of multivariate GARCH models in terms of variance matrix forecasting accuracy, with a particular focus on relatively large-scale problems. We consider 10 assets from NYSE and NASDAQ and compare 125 model-based one-step-ahead conditional variance forecasts over a period of 10 years using the model confidence set (MCS) and the Superior Predictive Ability (SPA) tests. Model performances are evaluated using four statistical loss functions which account for different types and degrees of asymmetry with respect to over/under predictions. When considering the full sample, MCS results are strongly driven by short periods of high market instability during which multivariate GARCH models appear to be inaccurate. Over relatively unstable periods, i.e. the dot-com bubble, the set of superior models is composed of more sophisticated specifications such as orthogonal and dynamic conditional correlation (DCC), both with leverage effect in the conditional variances. However, unlike the DCC models, our results show that the orthogonal specifications tend to underestimate the conditional variance. Over calm periods, a simple assumption like constant conditional correlation and symmetry in the conditional variances cannot be rejected. Finally, during the 2007–2008 financial crisis, accounting for nonstationarity in the conditional variance process generates superior forecasts. The SPA test suggests that, independently from the period, the best models do not provide significantly better forecasts than the DCC model of Engle (2002) with leverage in the conditional variances of the returns. 
Keywords:  Variance matrix, forecasting, multivariate GARCH, loss function, model confidence set, superior predictive ability 
JEL:  C10 C32 C51 C52 C53 G10 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:lvl:lacicr:1021&r=ecm 
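Loss functions of the kind the abstract above describes — comparing a forecast variance matrix against a realized-covariance proxy, with differing sensitivity to over- and under-prediction — can be sketched as follows. The symmetric Frobenius loss and the asymmetric Stein loss shown here are common choices in this literature, but they are stand-ins, not necessarily the four losses used in the paper.

```python
import numpy as np

def frobenius_loss(H_forecast, RC):
    """Symmetric loss: squared Frobenius norm of the forecast error matrix."""
    D = H_forecast - RC
    return np.sum(D * D)

def stein_loss(H_forecast, RC):
    """Asymmetric (entropy-type) loss; zero iff H_forecast == RC, and it
    penalises under-prediction of the variance matrix more heavily."""
    A = np.linalg.solve(H_forecast, RC)   # H^{-1} RC
    return np.trace(A) - np.log(np.linalg.det(A)) - A.shape[0]
```

Both losses are zero when the forecast equals the proxy; ranking models by their average loss over the evaluation period is the input to MCS/SPA-type comparisons.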
By:  Ivan Nourdin (Laboratoire de Probabilités et Modèles Aléatoires, Université Pierre et Marie Curie); Giovanni Peccati (University of Luxembourg); Mark Podolskij (ETH Zürich and CREATES) 
Abstract:  We consider sequences of random variables of the type $S_n = n^{-1/2} \sum_{k=1}^n f(X_k)$, $n\geq 1$, where $X=(X_k)_{k\in \Z}$ is a $d$-dimensional Gaussian process and $f: \R^d \rightarrow \R$ is a measurable function. It is known that, under certain conditions on $f$ and the covariance function $r$ of $X$, $S_n$ converges in distribution to a normal variable $S$. In the present paper we derive several explicit upper bounds for quantities of the type $|\E[h(S_n)] - \E[h(S)]|$, where $h$ is a sufficiently smooth test function. Our methods are based on Malliavin calculus, on interpolation techniques and on Stein's method for normal approximation. The bounds deduced in our paper depend only on $\E[f^2(X_1)]$ and on simple infinite series involving the components of $r$. In particular, our results generalize and refine some classic CLTs by Breuer–Major, Giraitis–Surgailis and Arcones concerning the normal approximation of partial sums associated with Gaussian-subordinated time series. 
Keywords:  Berry–Esseen bounds, Breuer–Major central limit theorems, Gaussian processes, Interpolation, Malliavin calculus, Stein's method 
JEL:  C60 C10 
Date:  2010–05–12 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201022&r=ecm 
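In the simplest special case of the abstract above — $d = 1$ with i.i.d. $X_k$, so the covariance $r(k)$ vanishes for $k \neq 0$ — the normal approximation of $S_n$ can be checked by direct simulation. A minimal sketch, assuming $f(x) = x^2 - 1$ (the second Hermite polynomial, with $\E[f(X_1)] = 0$ and $\mathrm{Var}[f(X_1)] = 2$), so the limit is $N(0, 2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 2000

# f(x) = x^2 - 1: mean zero, variance 2 under the standard normal law
X = rng.standard_normal((reps, n))
S = (X**2 - 1).sum(axis=1) / np.sqrt(n)   # one draw of S_n per row

print(S.mean(), S.var())   # close to 0 and 2, respectively
```

The quality of this approximation at finite n is exactly what the paper's Berry–Esseen-type bounds quantify; for dependent $X_k$ the limiting variance would involve the full series of covariances $r(k)$.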
By:  Charles J. Romeo (Economic Analysis Group, Antitrust Division, U.S. Department of Justice) 
Abstract:  The random parameters logit model for aggregate data introduced by Berry, Levinsohn, and Pakes (1995) has been a driving force in empirical industrial organization for more than a decade. While these models are identified in theory, identification problems often occur in practice. In this paper we introduce the means of included demographics as a new set of readily available instruments that have the potential to substantially improve numerical performance in a variety of contexts. We use a set of endogenous price simulations to demonstrate that they are valid, and we use a real data illustration to demonstrate that they improve the numerical properties of the GMM objective function. In addition, we develop a metric that decomposes the explanatory power of the model into the proportion of market share variation that is explained by mean utility and that which is explained by the heterogeneity specification. 
Keywords:  random coefficients, instrumental variables, identification, GMM, Beer 
JEL:  C33 C35 L66 
Date:  2010–04 
URL:  http://d.repec.org/n?u=RePEc:doj:eagpap:201003&r=ecm 
By:  Massimiliano Bratti (Department of Economics, Business and Statistics, Università degli Studi di Milano, Via Conservatorio 7, I-20122 Milan, Italy.); Alfonso Miranda (Department of Quantitative Social Science, Institute of Education, University of London. 20 Bedford Way, London WC1H 0AL, UK.) 
Abstract:  In this paper we propose a method to estimate models in which an endogenous dichotomous treatment affects a count outcome in the presence of either sample selection or endogenous participation, using maximum simulated likelihood. We allow the treatment to have an effect on both the sample selection or participation rule and the main outcome. Applications of this model are frequent in many fields of economics, such as health, labor, and population economics. We show the performance of the model using data from Kenkel and Terza (2001), who investigate the effect of physician advice on the amount of alcohol consumption. Our estimates suggest that in these data (i) neglecting treatment endogeneity leads to a perversely signed effect of physician advice on drinking intensity, and (ii) neglecting endogenous participation leads to an upward-biased estimate of the treatment effect of physician advice on drinking intensity. 
Keywords:  count data, drinking, endogenous participation, maximum simulated likelihood, sample selection, treatment effects 
JEL:  C35 I12 I21 
Date:  2010–05–21 
URL:  http://d.repec.org/n?u=RePEc:qss:dqsswp:1005&r=ecm 
By:  Chesher, A.; Smolinski, K. 
Abstract:  This paper studies single equation instrumental variable models of ordered choice in which explanatory variables may be endogenous. The models are weakly restrictive, leaving unspecified the mechanism that generates endogenous variables. These incomplete models are set, not point, identifying for parametrically (e.g. ordered probit) or nonparametrically specified structural functions. The paper gives results on the properties of the identified set for the case in which potentially endogenous explanatory variables are discrete. The results are used as the basis for calculations showing the rate of shrinkage of identified sets as the number of classes in which the outcome is categorised increases. 
Date:  2009–12–06 
URL:  http://d.repec.org/n?u=RePEc:ner:ucllon:http://eprints.ucl.ac.uk/18905/&r=ecm 
By:  Razzak, Weshah 
Abstract:  Unanticipated shocks can lead to instability, which is reflected in statistically significant changes in the distributions of independent Gaussian random variables. Changes in the conditional moments of stationary variables are predictable. We provide a framework based on a statistic for the Sample Generalized Variance, which is useful for interrogating real-time data and for predicting statistically significant, sudden and large shifts in the conditional variance of a vector of correlated macroeconomic variables. Central banks can incorporate the framework into the policy-making process. 
Keywords:  Sample Generalized Variance; Conditional Variance; Sudden and Large Shifts in the Moments 
JEL:  E66 C3 C1 
Date:  2010–05–19 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:22804&r=ecm 
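The Sample Generalized Variance in the abstract above is the determinant of the sample covariance matrix of a vector of variables; monitoring it over a rolling window is one simple way to flag sudden, large shifts in the joint conditional variance. A minimal sketch — the function names and the rolling-window monitoring scheme are illustrative assumptions, not the paper's exact framework:

```python
import numpy as np

def sample_generalized_variance(X):
    """Determinant of the sample covariance matrix of a (T x k) data matrix."""
    return np.linalg.det(np.cov(X, rowvar=False))

def rolling_sgv(X, window):
    """SGV over a rolling window; large jumps in this series would flag
    shifts in the conditional variance of the vector of variables."""
    return np.array([sample_generalized_variance(X[t - window:t])
                     for t in range(window, len(X) + 1)])
```

A monitoring rule would then compare each new rolling SGV value against its recent distribution and signal when the change is statistically significant.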
By:  Schluter, C. 
Abstract:  The received wisdom about inference problems for inequality measures is that these are caused by the presence of extremes in samples drawn from heavy-tailed distributions. We show that this is incorrect, since the density of the studentised inequality measure is heavily skewed to the left, and the excessive coverage failures of the usual confidence intervals are associated with low estimates of both the point measure and the variance. For further diagnostics, the coefficients of bias, skewness and kurtosis are derived for both studentised and standardised inequality measures, and the explicit cumulant expansions also make available Edgeworth expansions and saddlepoint approximations. In view of the key role played by the estimated variance of the measure, variance stabilising transforms are considered and shown to improve inference. 
Keywords:  Inequality measures; inference; statistical performance; asymptotic expansions; variance stabilisation 
JEL:  C10 D31 D63 
Date:  2010–05–01 
URL:  http://d.repec.org/n?u=RePEc:stn:sotoec:1009&r=ecm 
By:  Franses, Ph.H.B.F. 
Abstract:  Forecasts in the airline industry are often based partly on statistical models but mostly on expert judgment. It is frequently documented in the forecasting literature that expert forecasts are biased but that their accuracy is higher than that of model forecasts. If an expert forecast can be approximated by the weighted sum of a part that can be replicated by an analyst and a non-replicable part containing managerial intuition, the question arises which of the two causes the bias. This paper advocates a simple regression-based strategy to decompose bias in expert forecasts. An illustration of the method on a unique database of airline revenues shows how it can be used to improve the experts' forecasts. 
Keywords:  expert forecasts; forecast bias; airline revenues 
Date:  2010–04–29 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureir:1765019359&r=ecm 
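A regression-based decomposition in the spirit of the abstract above can be sketched as follows: project the expert forecast on the model forecast to split it into a replicable part and an "intuition" residual, then regress the forecast error on both components to see which one carries the bias. This is a hypothetical sketch under those assumptions; the exact specification in the paper may differ.

```python
import numpy as np

def decompose_expert_bias(actual, expert, model):
    """Split the expert forecast into a model-replicable part and a
    non-replicable ('intuition') residual, then regress the forecast
    error on an intercept and the two components."""
    Z = np.column_stack([np.ones_like(model), model])
    coef, *_ = np.linalg.lstsq(Z, expert, rcond=None)
    replicable = Z @ coef            # part of the expert forecast explained by the model
    intuition = expert - replicable  # non-replicable managerial adjustment
    err = actual - expert            # expert forecast error
    W = np.column_stack([np.ones_like(err), replicable, intuition])
    b, *_ = np.linalg.lstsq(W, err, rcond=None)
    return b                         # coefficients indicate which component drives the bias
```

A significant coefficient on the intuition component would attribute the bias to the managerial adjustment rather than to the replicable, model-based part.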
By:  CostaGomes, M.A.; Huck, S.; Weizsäcker, G. 
Abstract:  In many economic contexts, an elusive variable of interest is the agent's expectation about relevant events, e.g. about other agents' behavior. Recent experimental studies as well as surveys have asked participants to state their beliefs explicitly, but little is known about the causal relation between beliefs and other behavioral variables. This paper discusses the possibility of creating exogenous instrumental variables for belief statements, by shifting the probabilities of the relevant events. We conduct trust game experiments where the amount sent back by the second player (trustee) is exogenously varied by a random process, in a way that informs only the first player (trustor) about the realized variation. The procedure allows detecting causal links from beliefs to actions under plausible assumptions. The IV estimates indicate a significant causal effect, comparable to the connection between beliefs and actions that is suggested by OLS analyses. 
Date:  2010–02 
URL:  http://d.repec.org/n?u=RePEc:ner:ucllon:http://eprints.ucl.ac.uk/19473/&r=ecm 
By:  McArthur, David Philip (Stord/Haugesund University College); Kleppe, Gisle (Stord/Haugesund University College); Thorsen, Inge (Stord/Haugesund University College); Ubøe, Jan (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration) 
Abstract:  This paper studies whether gravity model parameters estimated in one geographic area can give reasonable predictions of commuting flows in another. To do this, three sets of parameters are estimated for geographically proximate yet separate regions in southwest Norway. All possible combinations of data and parameters are considered, giving a total of nine cases. Of particular importance is the distinction between statistical equality of parameters and `practical' equality, i.e. whether the differences in predictions are big enough to matter. A new type of test based on the Standardised Root Mean Square Error (SRMSE) and Monte Carlo simulation is proposed and utilised. 
Keywords:  Gravity model; commuting flows; regional science 
JEL:  R10 R12 
Date:  2010–05–19 
URL:  http://d.repec.org/n?u=RePEc:hhs:nhhfms:2010_003&r=ecm 
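The SRMSE named in the abstract above is a standard fit measure for predicted versus observed flow matrices: the root mean square prediction error scaled by the mean observed flow. A minimal sketch — the scaling convention (dividing by the mean observed flow) is a common one, assumed here:

```python
import numpy as np

def srmse(observed, predicted):
    """Standardised Root Mean Square Error between observed and predicted
    commuting flows: RMSE divided by the mean observed flow."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean((observed - predicted) ** 2)) / np.mean(observed)
```

The Monte Carlo test described in the abstract would then compare the SRMSE obtained with transferred parameters against the simulated sampling distribution of the SRMSE under the locally estimated model.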
By:  Jin Zhang; Dietmar Maringer 
Abstract:  Copulae provide investors with tools to model the dependency structure among financial products. The choice of copulae plays an important role in successful copula applications. However, selecting copulae usually relies on general goodness-of-fit (GoF) tests which are independent of the particular financial problem. This paper first proposes a pair-copula-GARCH model to construct the dependency structure and simulate the joint returns of five U.S. equities. It then discusses the copula selection problem from the perspective of downside risk management with the so-called D-vine structure, which considers the Joe–Clayton copula and the Student t copula as building blocks for the vine pair-copula decomposition. Value at risk, expected shortfall, and the Omega function are considered as downside risk measures in this study. As an alternative to the traditional bootstrap approaches, the proposed pair-copula-GARCH model provides simulated asset returns for generating future scenarios of portfolio value. It is found that, although the Student t pair-copula system performs better than the Joe–Clayton system in a GoF test, the latter is able to provide loss distributions which are more consistent with the empirically examined loss distributions while optimizing the Omega ratio. Furthermore, the economic benefit of using the pair-copula-GARCH model is revealed by comparing the loss distributions from the proposed model and the conventional exponentially weighted moving average model of RiskMetrics in this case. 
Keywords:  Downside Risk, ARTGARCH, Pair-Copula, Vine Structure, Differential Evolution 
Date:  2010–05–17 
URL:  http://d.repec.org/n?u=RePEc:com:wpaper:037&r=ecm 
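The three downside risk measures named in the abstract above can all be computed from a vector of simulated portfolio returns. A minimal sketch — the sign conventions (losses reported as positive numbers) and the Omega threshold default are assumptions for the example:

```python
import numpy as np

def var_es_omega(returns, alpha=0.05, threshold=0.0):
    """Value at Risk, Expected Shortfall, and the Omega ratio from a sample
    of (simulated) portfolio returns."""
    q = np.quantile(returns, alpha)
    var = -q                               # alpha-quantile loss, reported as positive
    es = -returns[returns <= q].mean()     # mean loss beyond the VaR level
    gains = np.clip(returns - threshold, 0, None).mean()
    losses = np.clip(threshold - returns, 0, None).mean()
    omega = gains / losses                 # expected gains over expected losses
    return var, es, omega
```

In the paper's setting, the returns fed into such functions would come from the pair-copula-GARCH simulation, and the Omega ratio would be the objective in the portfolio optimization.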
By:  Manner, Hans (Maastricht University) 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:ner:maastr:urn:nbn:nl:ui:2722415&r=ecm 
By:  Rodney W. Strachan; Herman K. van Dijk 
Abstract:  The empirical support for a real business cycle model with two technology shocks is evaluated using a Bayesian model averaging procedure. This procedure makes use of a finite mixture of many models within the class of vector autoregressive (VAR) processes. The linear VAR model is extended to permit cointegration, a range of deterministic processes, equilibrium restrictions and restrictions on longrun responses to technology shocks. We find support for a number of the features implied by the real business cycle model. For example, restricting long run responses to identify technology shocks has reasonable support and important implications for the short run responses to these shocks. Further, there is evidence that savings and investment ratios form stable relationships, but technology shocks do not account for all stochastic trends in our system. There is uncertainty as to the most appropriate model for our data, with thirteen models receiving similar support, and the model or model set used has significant implications for the results obtained. 
JEL:  C11 C32 C52 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:acb:cbeeco:2010522&r=ecm 