nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒06‒18
seventeen papers chosen by
Sune Karlsson
Orebro University

  1. A New Procedure For Multiple Testing Of Econometric Models By Maxwell L. King; Xibin Zhang; Muhammad Akram
  2. Conditional quantile processes based on series or many regressors By Alexandre Belloni; Victor Chernozhukov; Ivan Fernandez-Val
  3. Inference for VARs identified with sign restrictions By Hyungsik Roger Moon; Frank Schorfheide; Eleonora Granziera; Mihye Lee
  4. Bounding quantile demand functions using revealed preference inequalities By Richard Blundell; Dennis Kristensen; Rosa Matzkin
  5. An estimator for the quadratic covariation of asynchronously observed Itô processes with noise: Asymptotic distribution theory By Markus Bibinger
  6. Covariate Unit Root Tests with Good Size and Power By Fossati, Sebastian
  7. Asymptotics of Asynchronicity By Markus Bibinger
  8. Bayesian Estimation of Discrete Games of Complete Information By Narayanan, Sridhar
  9. Detecting big structural breaks in large factor models By Chen, Liang; Dolado, Juan Jose; Gonzalo, Jesus
  10. A survey of functional principal component analysis By Han Lin Shang
  11. Nonparametric Rank Tests for Non-stationary Panels By Pedroni, Peter; Vogelsang, Timothy J.; Wagner, Martin; Westerlund, Joakim
  12. Long Memory Process in Asset Returns with Multivariate GARCH innovations By Imene Mootamri
  13. Dating U.S. Business Cycles with Macro Factors By Fossati, Sebastian
  14. Unobserved Heterogeneity in Multiple-Spell Multiple-States Duration Models By Bijwaard, Govert
  15. APPLICATION OF NEURAL NETWORKS IN PREDICTIVE DATA MINING By Saratha Sathasivam
  16. A simple decomposition of the variance of output growth across countries By Christopher Reicher
  17. Causal misspecifications in econometric models By Itkonen, Juha

  1. By: Maxwell L. King; Xibin Zhang; Muhammad Akram
    Abstract: A significant role for hypothesis testing in econometrics involves diagnostic checking. When checking the adequacy of a chosen model, researchers typically employ a range of diagnostic tests, each of which is designed to detect a particular form of model inadequacy. A major problem is how best to control the overall probability of rejecting the model when it is true and multiple test statistics are used. This paper presents a new multiple testing procedure, which involves checking whether the calculated values of the diagnostic statistics are consistent with the postulated model being true. This is done through a combination of bootstrapping to obtain a multivariate kernel density estimator of the joint density of the test statistics under the null hypothesis and Monte Carlo simulations to obtain a p value using this kernel density. We prove that under some regularity conditions, the estimated p value of our test procedure is a consistent estimate of the true p value. The proposed testing procedure is applied to tests for autocorrelation in an observed time series, for normality, and for model misspecification through the information matrix. We find that our testing procedure has correct or nearly correct sizes and good powers, particularly for more complicated testing problems. We believe it is the first good method for calculating the overall p value for a vector of test statistics based on simulation.
    Keywords: Bootstrapping, consistency, information matrix test, Markov chain Monte Carlo simulation, multivariate kernel density, normality, serial correlation, test vector
    Date: 2011–05–25
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2011-7&r=ecm
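    A minimal sketch of the general idea described in the abstract above, under illustrative assumptions: a parametric simulation under the null stands in for the paper's bootstrap step, two toy diagnostics stand in for the test vector, and the joint p value is read off a multivariate kernel density as a highest-density-region style tail probability. This is not the authors' exact procedure.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def test_statistics(y):
    """Two toy diagnostics: a scaled lag-1 autocorrelation and a Jarque-Bera statistic."""
    n = len(y)
    r1 = np.corrcoef(y[:-1], y[1:])[0, 1] * np.sqrt(n)
    jb = stats.jarque_bera(y)[0]
    return np.array([r1, jb])

def joint_p_value(y_obs, n_sim=2000):
    n = len(y_obs)
    # Step 1: simulate the test-statistic vector under the null (i.i.d. normal data here).
    null_stats = np.array([test_statistics(rng.standard_normal(n)) for _ in range(n_sim)])
    # Step 2: multivariate kernel density estimate of the joint null density.
    kde = stats.gaussian_kde(null_stats.T)
    # Step 3: Monte Carlo p value = share of null draws whose estimated density is no
    # larger than the density at the observed statistic vector.
    dens_null = kde(null_stats.T)
    dens_obs = kde(test_statistics(y_obs))
    return np.mean(dens_null <= dens_obs)

# Data generated under the null should give a p value that is roughly uniform.
print(joint_p_value(rng.standard_normal(200)))
```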
  2. By: Alexandre Belloni; Victor Chernozhukov (Institute for Fiscal Studies and Massachusetts Institute of Technology); Ivan Fernandez-Val
    Abstract: Quantile regression (QR) is a principal regression method for analyzing the impact of covariates on outcomes. The impact is described by the conditional quantile function and its functionals. In this paper we develop the nonparametric QR series framework, covering many regressors as a special case, for performing inference on the entire conditional quantile function and its linear functionals. In this framework, we approximate the entire conditional quantile function by a linear combination of series terms with quantile-specific coefficients and estimate the function-valued coefficients from the data. We develop large sample theory for the empirical QR coefficient process, namely we obtain uniform strong approximations to the empirical QR coefficient process by conditionally pivotal and Gaussian processes, as well as by gradient and weighted bootstrap processes. We apply these results to obtain estimation and inference methods for linear functionals of the conditional quantile function, such as the conditional quantile function itself, its partial derivatives, average partial derivatives, and conditional average partial derivatives. Specifically, we obtain uniform rates of convergence, large sample distributions, and inference methods based on strong pivotal and Gaussian approximations and on gradient and weighted bootstraps. All of the above results are for function-valued parameters, holding uniformly in both the quantile index and in the covariate value, and covering the pointwise case as a by-product. If the function of interest is monotone, we show how to use monotonization procedures to improve estimation and inference. We demonstrate the practical utility of these results with an empirical example, where we estimate the price elasticity function of the individual demand for gasoline, as indexed by the individual unobserved propensity for gasoline consumption.
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:19/11&r=ecm
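    A minimal sketch of the series quantile-regression approximation summarized above: the conditional quantile function is approximated by Z(x)'beta(u), where Z(x) is a vector of series terms (a cubic polynomial basis here, an illustrative choice) and beta(u) is estimated separately for each quantile index u on a grid. It uses statsmodels' QuantReg and does not reproduce the paper's uniform inference theory.
```python
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(1)
n = 1000
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + (0.5 + x) * rng.standard_normal(n)  # heteroskedastic toy DGP

def series_terms(x):
    """Polynomial series terms Z(x) = (1, x, x^2, x^3)."""
    x = np.atleast_1d(x)
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

Z = series_terms(x)
quantile_grid = [0.1, 0.25, 0.5, 0.75, 0.9]
beta = {u: QuantReg(y, Z).fit(q=u).params for u in quantile_grid}  # quantile-specific coefficients

# Estimated conditional quantile function at a covariate value, across the quantile grid.
x0 = 0.3
for u in quantile_grid:
    print(u, series_terms(x0) @ beta[u])
```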
  3. By: Hyungsik Roger Moon; Frank Schorfheide; Eleonora Granziera; Mihye Lee
    Abstract: There is a fast growing literature that partially identifies structural vector autoregressions (SVARs) by imposing sign restrictions on the responses of a subset of the endogenous variables to a particular structural shock (sign-restricted SVARs). To date, the methods that have been used are only justified from a Bayesian perspective. This paper develops methods of constructing error bands for impulse response functions of sign-restricted SVARs that are valid from a frequentist perspective. The authors also provide a comparison of frequentist and Bayesian error bands in the context of an empirical application — the former can be twice as wide as the latter.
    Keywords: Vector autoregression ; Econometric models
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:fip:fedpwp:11-20&r=ecm
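    A minimal sketch of the accept/reject sign-restriction step that this literature builds on: draw a random orthogonal rotation of a Cholesky factor of the reduced-form covariance and keep it if the implied impulse responses satisfy the sign restrictions. It only illustrates the partial-identification problem discussed above; the paper's frequentist error bands are not reproduced, and the data, restrictions, and horizons are illustrative assumptions.
```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)

# Simulate a bivariate VAR(1) and estimate its reduced form.
A = np.array([[0.5, 0.1], [0.2, 0.4]])
y = np.zeros((500, 2))
for t in range(1, 500):
    y[t] = A @ y[t - 1] + rng.standard_normal(2)
res = VAR(y).fit(1)
A_hat, P = res.coefs[0], np.linalg.cholesky(res.sigma_u)

def irf(A_hat, impact, horizons=8):
    """Structural impulse responses for a VAR(1): Psi_h @ impact with Psi_h = A^h."""
    out, psi = [], np.eye(2)
    for _ in range(horizons + 1):
        out.append(psi @ impact)
        psi = A_hat @ psi
    return np.array(out)

accepted = []
for _ in range(1000):
    q, _ = np.linalg.qr(rng.standard_normal((2, 2)))  # random orthogonal rotation
    candidate = irf(A_hat, P @ q)
    # Illustrative sign restriction: shock 1 moves both variables up on impact and at horizon 1.
    if np.all(candidate[:2, :, 0] > 0):
        accepted.append(candidate)

print(f"{len(accepted)} of 1000 rotations satisfy the sign restrictions")
```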
  4. By: Richard Blundell (Institute for Fiscal Studies and University College London); Dennis Kristensen; Rosa Matzkin (Institute for Fiscal Studies and UCLA)
    Abstract: This paper develops a new technique for the estimation of consumer demand models with unobserved heterogeneity subject to revealed preference inequality restrictions. Particular attention is given to nonseparable heterogeneity. The inequality restrictions are used to identify bounds on quantile demand functions. A nonparametric estimator for these bounds is developed and asymptotic properties are derived. An empirical application using data from the U.K. Family Expenditure Survey illustrates the usefulness of the methods by deriving bounds and confidence sets for estimated quantile demand functions.
    Date: 2011–06
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:21/11&r=ecm
  5. By: Markus Bibinger
    Abstract: The article is devoted to the nonparametric estimation of the quadratic covariation of non-synchronously observed Itô processes in an additive microstructure noise model. In a high-frequency setting, we aim at establishing an asymptotic distribution theory for a generalized multiscale estimator, including a feasible central limit theorem with optimal convergence rate under convenient regularity assumptions. The inevitably remaining impact of asynchronous deterministic sampling schemes and noise corruption on the asymptotic distribution is precisely elucidated. A case study for various important examples, several generalizations of the model and an algorithm for the implementation demonstrate the utility of the estimation method in applications.
    Keywords: non-synchronous observations, microstructure noise, integrated covolatility, multiscale estimator, stable limit theorem
    JEL: C14 C32 G10
    Date: 2011–06
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2011-034&r=ecm
  6. By: Fossati, Sebastian (University of Alberta, Department of Economics)
    Abstract: The selection of the truncation lag for covariate unit root tests is analyzed using Monte Carlo simulation. It is shown that standard information criteria such as the BIC or the AIC can result in tests with large size distortions. Modified information criteria can be used to construct tests with good size and power. An empirical illustration is provided.
    Keywords: unit root tests; truncation lag; information criteria; vector autoregressions
    JEL: C12 C32 C52
    Date: 2011–05–01
    URL: http://d.repec.org/n?u=RePEc:ris:albaec:2011_004&r=ecm
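    A minimal sketch of modified-information-criterion lag selection in the spirit of the Ng-Perron MAIC that this literature draws on, written for a plain univariate ADF-type regression as an illustration. The paper's covariate unit root tests are not reproduced, and demeaning/detrending choices are simplified assumptions.
```python
import numpy as np

def maic_lag(y, k_max=8):
    """Select the lag k minimizing MAIC(k) = ln(s2) + 2*(tau + k)/(T - k_max)."""
    dy = np.diff(y)
    best = None
    for k in range(k_max + 1):
        # ADF regression on a common sample: dy_t on y_{t-1} and k lagged differences.
        X = np.column_stack(
            [y[k_max:-1]] + [dy[k_max - j:len(dy) - j] for j in range(1, k + 1)]
        )
        resp = dy[k_max:]
        coef, *_ = np.linalg.lstsq(X, resp, rcond=None)
        resid = resp - X @ coef
        s2 = resid @ resid / len(resp)
        # Penalty term tau depends on the estimated coefficient on y_{t-1}.
        tau = coef[0] ** 2 * np.sum(y[k_max:-1] ** 2) / s2
        maic = np.log(s2) + 2.0 * (tau + k) / len(resp)
        if best is None or maic < best[0]:
            best = (maic, k)
    return best[1]

rng = np.random.default_rng(3)
y = np.cumsum(rng.standard_normal(300))  # a random walk (unit root) series
print("MAIC-selected truncation lag:", maic_lag(y))
```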
  7. By: Markus Bibinger
    Abstract: In this article we focus on estimating the quadratic covariation of continuous semimartingales from discrete observations that take place at asynchronous observation times. The Hayashi-Yoshida estimator serves as the synchronized realized covolatility, for which we give our own distinct illustration based on an iterative synchronization algorithm. We consider high-frequency asymptotics and prove a feasible stable central limit theorem. The characteristics of non-synchronous observation schemes affecting the asymptotic variance are captured by a notion of asymptotic covariations of times. These are precisely illuminated and explicitly deduced for the important case of independent time-homogeneous Poisson sampling.
    Keywords: non-synchronous observations, quadratic covariation, Hayashi-Yoshida estimator, stable limit theorem, asymptotic distribution
    JEL: C14 C32 G10
    Date: 2011–06
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2011-033&r=ecm
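    A minimal sketch of the Hayashi-Yoshida estimator discussed above: the quadratic covariation of two asynchronously observed processes is estimated by summing products of increments over all pairs of observation intervals that overlap. The simulated sampling times are an illustrative assumption and there is no microstructure noise here.
```python
import numpy as np

def hayashi_yoshida(t_x, x, t_y, y):
    """Sum dX_i * dY_j over all pairs (i, j) whose observation intervals overlap."""
    dx, dy = np.diff(x), np.diff(y)
    total = 0.0
    for i in range(len(dx)):
        for j in range(len(dy)):
            if t_x[i] < t_y[j + 1] and t_y[j] < t_x[i + 1]:  # interval overlap
                total += dx[i] * dy[j]
    return total

# Correlated Brownian motions sampled at independent irregular times.
rng = np.random.default_rng(4)
grid = np.linspace(0, 1, 10001)
dw = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=10000) * np.sqrt(1 / 10000)
w = np.vstack([np.zeros(2), np.cumsum(dw, axis=0)])
t_x = np.unique(np.concatenate([[0, 1], rng.uniform(0, 1, 300)]))
t_y = np.unique(np.concatenate([[0, 1], rng.uniform(0, 1, 200)]))
x = np.interp(t_x, grid, w[:, 0])
y = np.interp(t_y, grid, w[:, 1])

print("Hayashi-Yoshida estimate:", hayashi_yoshida(t_x, x, t_y, y), "(true covariation 0.5)")
```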
  8. By: Narayanan, Sridhar (Stanford University)
    Abstract: Discrete games of complete information have been used to analyze a variety of contexts such as market entry, technology adoption and peer effects. They are extensions of discrete choice models, where payoffs of each player are dependent on actions of other players, and each outcome is modeled as a Nash equilibrium of a game, where the players share common knowledge about the payoffs of all the players in the game. An issue with such games is that they typically have multiple equilibria, leading to the absence of a one-to-one mapping between parameters and outcomes. Theory typically has little to say about equilibrium selection in these games. Researchers have therefore had to make simplifying assumptions, either analyzing outcomes that do not have multiplicity, or making ad-hoc assumptions about equilibrium selection. Another approach has been to use a bounds approach to set identify rather than point identify the parameters. A third approach has been to empirically estimate the equilibrium selection rule. In this paper, we take a Bayesian MCMC approach to estimate the parameters of the payoff functions in such games. Instead of making ad-hoc assumptions on equilibrium selection, we specify a prior over the possible equilibria, reflecting the analyst's uncertainty about equilibrium selection, and find posterior estimates for the parameters that account for this uncertainty. We develop a sampler using the reversible jump algorithm to navigate the parameter space corresponding to multiple equilibria and obtain posterior draws whose marginal distributions are potentially multi-modal. When the equilibria are not identified, it goes beyond the bounds approach by providing posterior distributions of parameters, which may be important given that there are likely regions of low density for the parameters within the bounds. When data allow us to identify the equilibrium, our approach generates posterior estimates of the probability of specific equilibria, jointly with the estimates for the parameters. Our approach can also be cast in a hierarchical framework, allowing not just for heterogeneity in parameters, but also in equilibrium selection. Thus, it complements and extends the existing literature on dealing with multiplicity in discrete games. We first demonstrate the methodology using simulated data, exploring it in depth. We then present two empirical applications, one in the context of joint consumption, using a dataset of casino visit decisions by married couples, and the second in the context of market entry by competing chains in the retail stationery market. We show the importance of accounting for multiple equilibria in these applications, and demonstrate how inferences can be distorted by making the typically used equilibrium selection assumptions. Our applications show that it is important for empirical researchers to take the issue of multiplicity of equilibria seriously, and that taking an empirical approach to the issue, such as the one we have demonstrated, can be very useful.
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:ecl:stabus:2079&r=ecm
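    A minimal sketch of the multiplicity problem the abstract above turns on, for a two-player entry game with complete information: given payoffs pi_i = a_i * (alpha_i + delta_i * a_j + eps_i), enumerate the pure-strategy Nash equilibria. With negative competitive effects there are regions of the parameter/shock space with two equilibria, which is why equilibrium selection must be modeled or integrated over. The parameter values are illustrative; this is not the paper's reversible-jump sampler.
```python
from itertools import product

def pure_nash(alpha, delta, eps):
    """All pure-strategy Nash equilibria (a1, a2) of the 2x2 entry game."""
    def payoff(i, a_i, a_j):
        return a_i * (alpha[i] + delta[i] * a_j + eps[i])
    equilibria = []
    for a1, a2 in product([0, 1], repeat=2):
        # (a1, a2) is an equilibrium if neither player gains by switching actions.
        if (payoff(0, a1, a2) >= payoff(0, 1 - a1, a2)
                and payoff(1, a2, a1) >= payoff(1, 1 - a2, a1)):
            equilibria.append((a1, a2))
    return equilibria

# Both (1, 0) and (0, 1) are equilibria here: entry is profitable only as a monopolist.
print(pure_nash(alpha=[0.5, 0.5], delta=[-1.0, -1.0], eps=[0.0, 0.0]))
```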
  9. By: Chen, Liang; Dolado, Juan Jose; Gonzalo, Jesus
    Abstract: Constant factor loadings is a standard assumption in the analysis of large dimensional factor models. Yet, this assumption may be restrictive unless parameter shifts are mild. In this paper we develop a new testing procedure to detect big breaks in factor loadings at either known or unknown dates. It is based upon testing for structural breaks in a regression of the first of the r factors estimated by PC for the whole sample on the remaining r−1 factors, where r is chosen using Bai and Ng's (2002) information criteria. We argue that this test is more powerful than other tests available in the literature on this issue.
    Keywords: structural break; large factor model
    JEL: C12 C33
    Date: 2011–06–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:31344&r=ecm
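    A minimal sketch of the regression step described in the abstract above: estimate r factors by principal components over the whole sample, regress the first estimated factor on the remaining r−1 factors, and test that regression for a structural break (a simple Chow test at a known candidate date here; the paper also handles unknown dates). The data-generating process and the fixed r are illustrative assumptions.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
T, N, r = 200, 100, 2
f = rng.standard_normal((T, r))
lam = rng.standard_normal((N, r))
lam2 = lam + np.array([1.0, 0.0])                 # big shift in the loadings on factor 1
X = np.vstack([f[:100] @ lam.T, f[100:] @ lam2.T]) + rng.standard_normal((T, N))

# Principal-component factors (leading eigenvectors of XX'/(TN), scaled by sqrt(T)).
eigval, eigvec = np.linalg.eigh(X @ X.T / (T * N))
F = np.sqrt(T) * eigvec[:, ::-1][:, :r]

def chow_stat(y, Z, split):
    """Chow F-statistic for a break in y = Z b + e at observation `split`."""
    def ssr(y, Z):
        b, *_ = np.linalg.lstsq(Z, y, rcond=None)
        e = y - Z @ b
        return e @ e
    k = Z.shape[1]
    ssr_pooled = ssr(y, Z)
    ssr_split = ssr(y[:split], Z[:split]) + ssr(y[split:], Z[split:])
    return (ssr_pooled - ssr_split) / k / (ssr_split / (len(y) - 2 * k))

Z = np.column_stack([np.ones(T), F[:, 1:]])       # constant plus the remaining r - 1 factors
stat = chow_stat(F[:, 0], Z, split=100)
print("Chow statistic:", stat, "5% critical value:", stats.f.ppf(0.95, 2, T - 4))
```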
  10. By: Han Lin Shang
    Abstract: Advances in data collection and storage have tremendously increased the presence of functional data, whose graphical representations are curves, images or shapes. As a new area of statistics, functional data analysis extends existing methodologies and theories from the fields of functional analysis, generalized linear models, multivariate data analysis, nonparametric statistics and many others. This paper provides a review of functional data analysis, with the main emphasis on functional principal component analysis, functional principal component regression, and the bootstrap in functional principal component regression. Recent trends as well as open problems in the area are discussed.
    Keywords: Bootstrap, functional principal component regression, functional time series, Stiefel manifold, von Mises-Fisher distribution.
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2011-6&r=ecm
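    A minimal sketch of functional principal component analysis as surveyed above: curves observed on a common grid are centred, the empirical covariance operator is eigendecomposed, and each curve is summarized by its scores on the leading eigenfunctions. Smoothing, irregular grids, and the regression and bootstrap extensions discussed in the paper are omitted.
```python
import numpy as np

rng = np.random.default_rng(6)
grid = np.linspace(0, 1, 101)
n = 50
# Toy functional data: random combinations of two smooth basis functions plus noise.
scores_true = rng.standard_normal((n, 2)) * np.array([2.0, 0.7])
curves = (scores_true[:, [0]] * np.sin(2 * np.pi * grid)
          + scores_true[:, [1]] * np.cos(2 * np.pi * grid)
          + 0.1 * rng.standard_normal((n, len(grid))))

mean_curve = curves.mean(axis=0)
centred = curves - mean_curve
# Empirical covariance operator on the grid and its eigendecomposition.
cov = centred.T @ centred / n
eigval, eigvec = np.linalg.eigh(cov)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]

k = 2                                              # number of retained components
h = grid[1] - grid[0]
eigenfunctions = eigvec[:, :k] / np.sqrt(h)        # normalise so the integral of phi^2 is 1
fpc_scores = centred @ eigenfunctions * h          # scores = integral of (X - mean) * phi

explained = eigval[:k].sum() / eigval.sum()
print(f"first {k} components explain {explained:.1%} of the variance")
```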
  11. By: Pedroni, Peter (Williams College, Williamstown, USA); Vogelsang, Timothy J. (Department of Economics, Michigan State University, East Lansing, USA); Wagner, Martin (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria, and Frisch Centre for Economic Research, Oslo); Westerlund, Joakim (Department of Economics, University of Gothenburg, Sweden)
    Abstract: This study develops new rank tests for panels that include panel unit root tests as a special case. The tests are unusual in that they can accommodate very general forms of both serial and cross-sectional dependence, including cross-unit cointegration, without the need to specify the form of dependence or estimate nuisance parameters associated with the dependence. The tests retain high power in small samples, and in contrast to other tests that accommodate cross-sectional dependence, the limiting distributions are valid for panels with finite cross-sectional dimensions.
    Keywords: Nonparametric rank tests, unit roots, cointegration, cross-sectional dependence
    JEL: C12 C22 C23
    Date: 2011–06
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:270&r=ecm
  12. By: Imene Mootamri (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579)
    Abstract: The main purpose of this paper is to consider the multivariate GARCH (MGARCH) framework to model the volatility of a multivariate process exhibiting long term dependence in stock returns. More precisely, the long term dependence is examined in the first conditional moment of US stock returns through a multivariate ARFIMA process, and the time-varying feature of volatility is explained by MGARCH models. An empirical application to the returns series is carried out to illustrate the usefulness of our approach. The main results confirm the presence of the long memory property in the conditional mean of all stock returns.
    Keywords: Forecasting; Long memory; Multivariate GARCH; Stock Returns
    Date: 2011–06–09
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-00599250&r=ecm
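    A minimal sketch of the long-memory ingredient mentioned above: an ARFIMA(0, d, 0) series can be generated with the fractional-differencing operator (1 - L)^d, whose weights follow the recursion pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k. The MGARCH part of the paper is not sketched, and the parameter values are illustrative.
```python
import numpy as np

def frac_diff_weights(d, n):
    """Weights of (1 - L)^d up to lag n - 1."""
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

rng = np.random.default_rng(7)
n, d = 2000, 0.3
eps = rng.standard_normal(n)
# x_t = (1 - L)^{-d} eps_t, obtained by convolving eps with the weights of (1 - L)^{-d}.
w_inv = frac_diff_weights(-d, n)
x = np.array([w_inv[:t + 1][::-1] @ eps[:t + 1] for t in range(n)])

# Slowly decaying autocorrelations are the signature of long memory.
acf = [np.corrcoef(x[k:], x[:-k])[0, 1] for k in (1, 10, 50)]
print("ACF at lags 1, 10, 50:", np.round(acf, 3))
```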
  13. By: Fossati, Sebastian (University of Alberta, Department of Economics)
    Abstract: A probit model is used to show that latent common factors estimated by principal components from a large number of macroeconomic time series have important predictive power for NBER recession dates. A pseudo out-of-sample forecasting exercise shows that predicted recession probabilities consistently rise during subsequently declared NBER recession dates. The latent variable in the factor-augmented probit model is interpreted as an index of real business conditions which can be used to assess the strength of an expansion or the depth of a recession.
    Keywords: business cycle; forecasting; factors; probit model; Bayesian methods
    JEL: C01 C22 C25 E32 E37
    Date: 2011–05–01
    URL: http://d.repec.org/n?u=RePEc:ris:albaec:2011_005&r=ecm
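    A minimal sketch of the factor-augmented probit idea described above: estimate a few principal-component factors from a large panel of indicators and use them as regressors in a probit for a binary recession indicator. The simulated panel and recession rule are illustrative stand-ins for the macro dataset and the NBER dates.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
T, N = 300, 80
f = rng.standard_normal(T).cumsum() * 0.1                 # a persistent latent state
X = np.outer(f, rng.standard_normal(N)) + rng.standard_normal((T, N))
state = f + rng.standard_normal(T)
recession = (state < np.quantile(state, 0.2)).astype(int)  # the weakest 20% of periods

# Principal-component factors from the standardized panel.
Xs = (X - X.mean(0)) / X.std(0)
_, _, vt = np.linalg.svd(Xs, full_matrices=False)
factors = Xs @ vt[:3].T                                    # first three estimated factors

probit = sm.Probit(recession, sm.add_constant(factors)).fit(disp=0)
print("pseudo R2:", round(probit.prsquared, 3))
print("in-sample recession probabilities (first 5):", np.round(probit.predict()[:5], 2))
```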
  14. By: Bijwaard, Govert (NIDI - Netherlands Interdisciplinary Demographic Institute)
    Abstract: In survival analysis a large literature using frailty models, or models with unobserved heterogeneity, exists. In the growing literature on multiple-spell multiple-state duration models, or multistate models, modeling this issue is still in its infancy. Ignoring unobserved heterogeneity can, however, produce incorrect results. This paper presents how unobserved heterogeneity can be incorporated into multistate models, with an emphasis on semi-Markov multistate models with a mixed proportional hazard structure. First, the aspects of frailty modeling in univariate (proportional hazard, Cox) duration models are addressed and some important models with unobserved heterogeneity are discussed. Second, the domain is extended to the modeling of parallel/clustered multivariate duration data with unobserved heterogeneity. The implications of choosing shared or correlated unobserved heterogeneity are highlighted. The relevant differences with recurrent-event data are covered next. They include the choice of the time scale and risk set, which both have important implications for the way unobserved heterogeneity influences the model. Multistate duration models can have both parallel and recurrent events. Incorporating unobserved heterogeneity in multistate models therefore brings all the previously addressed issues together. Although some estimation procedures are covered, the emphasis is on conceptual issues. The importance of including unobserved heterogeneity in multistate duration models is illustrated with data on labour market and migration dynamics of recent immigrants to The Netherlands.
    Keywords: multiple spell multiple state duration, mixed proportional hazard, multistate model, unobserved heterogeneity, frailty
    JEL: C41 J61
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp5748&r=ecm
  15. By: Saratha Sathasivam (School of Mathematical Sciences, Universiti Sains Malaysia)
    Abstract: Neural Networks represent a meaningfully different approach to using computers in the workplace. A neural network is used to learn patterns and relationships in data. The data may be the results of a market research effort, or the results of a production process given varying operational conditions. Regardless of the specifics involved, applying a neural network is a substantial departure from traditional approaches. In this paper we will look into how neural networks are used in data mining. The ultimate goal of data mining is prediction - and predictive data mining is the most common type of data mining and one that has the most direct business applications. Therefore, we will consider how this technique can be used to classify the performance status of a departmental store in monitoring its products.
    Keywords: Neural networks, data mining, prediction
    JEL: M0
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:cms:2icb11:2011-137&r=ecm
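    A minimal sketch of predictive data mining with a neural network in the spirit of the abstract above: a small multilayer perceptron is trained to classify the performance status of stores from a few numeric features. The features, labels, and network size are illustrative placeholders, not the paper's application.
```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(10)
n = 500
features = rng.normal(size=(n, 3))   # e.g. sales growth, stock turnover, margin (placeholders)
status = (features @ np.array([1.0, 0.5, -0.8]) + 0.5 * rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, status, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("hold-out accuracy:", round(model.score(X_test, y_test), 3))
```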
  16. By: Christopher Reicher
    Abstract: This paper outlines a simple regression-based method to decompose the variance of an aggregate time series into the variance of its components, which is then applied to measure the relative contributions of productivity, hours per worker, and employment to cyclical output growth across a panel of countries. Measured productivity contributes more to the cycle in Europe and Japan than in the United States. Employment contributes the largest proportion of the cycle in Europe and the United States (but not Japan), which is inconsistent with the idea that higher levels of employment protection in Europe dampen cyclical employment fluctuations.
    Keywords: Intensive margin, extensive margin, productivity, business cycles, variance decomposition
    JEL: C32 E24 E32
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:kie:kieliw:1703&r=ecm
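    A minimal sketch of one common regression-based reading of the decomposition described above (an assumption, not necessarily the paper's exact procedure): if output growth is the sum of productivity, hours-per-worker, and employment growth, regressing each component on aggregate growth gives slopes cov(component, growth) / var(growth) that sum to one and can be read as variance shares. The simulated series stand in for the paper's country panel.
```python
import numpy as np

rng = np.random.default_rng(9)
T = 120
productivity = 0.6 * rng.standard_normal(T)
hours = 0.2 * rng.standard_normal(T) + 0.1 * productivity
employment = 0.4 * rng.standard_normal(T) + 0.2 * productivity
output = productivity + hours + employment     # identity: the growth components add up

shares = {
    name: np.cov(component, output)[0, 1] / np.var(output, ddof=1)
    for name, component in [("productivity", productivity),
                            ("hours per worker", hours),
                            ("employment", employment)]
}
print(shares, "sum =", round(sum(shares.values()), 6))
```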
  17. By: Itkonen, Juha
    Abstract: We bridge the graph-theoretic and the econometric approaches for defining causality in statistical models to consider model misspecification problems. By presenting a solution to disagreements between the existing frameworks, we build a causal framework that allows us to express the causal implications of econometric model specifications. This allows us to reveal possible inconsistencies in models used for policy analysis. In particular, we show how a common practice of doing policy analysis with vector error-correction models fails. As an example, we apply these concepts to discover fundamental flaws in a recent strand of literature estimating the carbon Kuznets curve, which postulates that carbon dioxide emissions initially increase with economic growth but that the relationship is eventually reversed. Due to a causal misspecification, the compatibility between climate and development policy goals is overstated.
    Keywords: Causality; Policy evaluations; Energy consumption; Carbon dioxide emissions; Economic growth; Environmental Kuznets curve.
    JEL: C50 Q54 Q43
    Date: 2011–05–27
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:31397&r=ecm

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.