nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒10‒18
eighteen papers chosen by
Sune Karlsson
Örebro University

  1. Asymptotic Inference for Dynamic Panel Estimators of Infinite Order Autoregressive Processes By Yoon-Jin Lee; Ryo Okui; Mototsugu Shintani
  2. Entropy testing for nonlinearity in time series By SIMONE GIANNERINI; ESFANDIAR MAASOUMI; ESTELA BEE DAGUM
  3. Testing for marginal asymmetry of weakly dependent processes By Marian Vavra
  4. Sign-based portmanteau test for ARCH-type models with heavy-tailed innovations By Chen, Min; Zhu, Ke
  5. Nonlife Ratemaking and Risk Management with Bayesian Additive Models for Location, Scale and Shape By Nadja Klein; Michel Denuit; Stefan Lang; Thomas Kneib
  6. Nowcasting causality in mixed frequency vector autoregressive models By Götz T.B.; Hecq A.W.
  7. Detecting and Forecasting Large Deviations and Bubbles in a Near-Explosive Random Coefficient Model By Banerjee, Anurag N.; Chevillon, Guillaume; Kratz, Marie
  8. A History of Polyvalent Structural Parameters: the Case of Instrument Variable Estimators By Duo Qin; Yanqun Zhang
  9. On the distribution of information in the moment structure of DSGE models By Nikolay Iskrev
  10. A SOLUTION TO AGGREGATION AND AN APPLICATION TO MULTIDIMENSIONAL “WELL-BEING” FRONTIERS By ESFANDIAR MAASOUMI; JEFFREY S. RACINE
  11. Is it a power law distribution? The case of economic contractions By Salvador Pueyo
  12. Revealed Preference Tests of Network Formation Models By Khai Xiang Chiong
  13. Forecasting aggregate demand: analytical comparison of top-down and bottom-up approaches in a multivariate exponential smoothing framework By Giacomo Sbrana; Andrea Silvestrini
  14. Econometric Issues when Modelling with a Mixture of I(1) and I(0) Variables By Lance A Fisher; Hyeon-seung Huh; Adrian Pagan
  15. Fast methods for jackknifing inequality indices By Karoly, Lynn; Schröder, Carsten
  16. In the quest for economic significance: Assessing variable importance through mean value decomposition By Holgersson, Thomas; Norman, Therese; Tavassoli, Mohammad Hossein
  17. Order Estimates for the Exact Lugannani-Rice Expansion By Takashi Kato; Jun Sekine; Kenichi Yoshikawa
  18. Macro-Econometric System Modelling @75 By Tony Hall; Jan Jacobs; Adrian Pagan

  1. By: Yoon-Jin Lee (Department of Economics, Indiana University); Ryo Okui (Institute of Economic Research, Kyoto University); Mototsugu Shintani (Department of Economics, Vanderbilt University)
    Abstract: In this paper we consider the estimation of a dynamic panel autoregressive (AR) process of possibly infinite order in the presence of individual effects. We utilize the sieve AR approximation with its lag order increasing with the sample size. We establish the consistency and asymptotic normality of the standard dynamic panel data estimators, including the fixed effects estimator, the generalized method of moments estimator and Hayakawa's instrumental variables estimator, using double asymptotics under which both the cross-sectional sample size and the length of the time series tend to infinity. We also propose a bias-corrected fixed effects estimator based on the asymptotic result. Monte Carlo simulations demonstrate that the estimators perform well and the asymptotic approximation is useful. As an illustration, the proposed methods are applied to dynamic panel estimation of law-of-one-price deviations among US cities.
    Keywords: Autoregressive Sieve Estimation, Bias Correction, Double Asymptotics, Fixed Effects Estimator, GMM, Instrumental Variables Estimator.
    JEL: C13 C23 C26
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:879&r=ecm
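    Sketch (illustrative, not from the paper): a minimal Python version of the fixed effects estimator for a sieve panel AR, with the lag order growing with T; the AR(1) data-generating process and the rule p = floor(T^(1/3)) are assumptions made for the demo.

      import numpy as np

      rng = np.random.default_rng(0)
      N, T = 200, 100                       # cross-section size and time length
      p = int(np.floor(T ** (1 / 3)))       # sieve AR order increasing with T (assumed rule)

      # Simulate an AR(1) panel with individual effects as a stand-in for AR(infinity)
      alpha = rng.normal(size=N)
      y = np.zeros((N, T))
      for t in range(1, T):
          y[:, t] = alpha + 0.5 * y[:, t - 1] + rng.normal(size=N)

      # Pool the within-transformed regressions of y_it on its first p lags
      Y, X = [], []
      for i in range(N):
          yi = y[i]
          Xi = np.column_stack([yi[p - j - 1:T - j - 1] for j in range(p)])
          Y.append(yi[p:] - yi[p:].mean())          # demeaning sweeps out the effect
          X.append(Xi - Xi.mean(axis=0))
      Y, X = np.concatenate(Y), np.vstack(X)

      phi_fe = np.linalg.lstsq(X, Y, rcond=None)[0]  # fixed effects sieve AR estimates
      print("estimated lag coefficients:", phi_fe.round(3))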
  2. By: SIMONE GIANNERINI; ESFANDIAR MAASOUMI; ESTELA BEE DAGUM
    Abstract: We propose a test for identification of nonlinear serial dependence in time series against the general “null” of linearity, in contrast to the more widely examined null of “independence”. The approach is based on a combination of an entropy dependence metric, possessing many desirable properties and used as a test statistic, together with (i) a suitable extension of surrogate data methods, a class of Monte Carlo distribution-free tests for nonlinearity, and (ii) the use of a smoothed sieve bootstrap scheme. We show how the tests can be employed to detect the lags at which a significant nonlinear relationship is expected, in the same fashion as the autocorrelation function is used for linear models. We prove the asymptotic validity of the proposed procedures and of the corresponding inferences. The small-sample performance of the tests is assessed through a simulation study. Applications to real data sets of different kinds are also presented.
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:emo:wp2003:1307&r=ecm
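    Sketch (illustrative, not the authors' procedure): the shape of a surrogate-data nonlinearity test at lag k in Python; a histogram mutual information stands in for the paper's entropy metric, and linear surrogates come from a plain sieve bootstrap (fit a long AR, resample its residuals).

      import numpy as np

      rng = np.random.default_rng(1)

      def mutual_info(x, z, bins=8):
          """Histogram plug-in mutual information (stand-in dependence metric)."""
          pxz, _, _ = np.histogram2d(x, z, bins=bins)
          pxz = pxz / pxz.sum()
          px, pz = pxz.sum(axis=1), pxz.sum(axis=0)
          nz = pxz > 0
          return float((pxz[nz] * np.log(pxz[nz] / np.outer(px, pz)[nz])).sum())

      # series with genuine nonlinear dependence at lag 1
      n = 500
      y = np.zeros(n)
      for t in range(1, n):
          y[t] = 0.3 * y[t - 1] + 0.4 * y[t - 1] ** 2 + rng.normal(scale=0.5)

      k = 1
      stat = mutual_info(y[k:], y[:-k])

      # sieve bootstrap surrogates: long AR fit, residual resampling
      p = 10
      Z = np.column_stack([np.ones(n - p)] + [y[p - j - 1:n - j - 1] for j in range(p)])
      phi, *_ = np.linalg.lstsq(Z, y[p:], rcond=None)
      resid = y[p:] - Z @ phi

      B, null_stats = 199, []
      for _ in range(B):
          e = rng.choice(resid - resid.mean(), size=n)
          ys = np.zeros(n)
          for t in range(p, n):
              ys[t] = phi[0] + ys[t - p:t][::-1] @ phi[1:] + e[t]   # linear surrogate
          null_stats.append(mutual_info(ys[k:], ys[:-k]))

      pval = (1 + sum(s >= stat for s in null_stats)) / (B + 1)
      print(f"MI at lag {k}: {stat:.3f}, surrogate p-value: {pval:.3f}")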
  3. By: Marian Vavra (National Bank of Slovakia)
    Abstract: This article addresses the issue of testing for asymmetry of the marginal law of weakly dependent processes. A modified quantile-based symmetry test is considered. The test has an intuitive interpretation, it is easy and fast to calculate, follows a standard limiting distribution, and much importantly, it is robust against weak dependence of observations and outliers. The finite sample performance of the robust test is examined via Monte Carlo experiments. An empirical application using economic indicators is provided as well.
    Keywords: marginal symmetry, sample quantiles, Monte Carlo experiments
    JEL: C12 C14 C15 C22
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:svk:wpaper:1022&r=ecm
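    Sketch (a generic version, not necessarily the author's exact statistic): under a symmetric marginal law the p-th and (1-p)-th quantiles mirror each other around the median, so the scaled quantity below is near zero. The paper's contribution is making inference on such statistics robust to weak dependence and outliers; a full test would studentize the statistic with a long-run variance estimate.

      import numpy as np

      def symmetry_stat(x, p=0.1):
          """Quantile-based asymmetry measure: ~0 for a symmetric marginal law."""
          q_lo, med, q_hi = np.quantile(x, [p, 0.5, 1 - p])
          return (q_hi + q_lo - 2 * med) / (q_hi - q_lo)   # scale-free

      rng = np.random.default_rng(2)
      print("symmetric  :", round(symmetry_stat(rng.normal(size=5000)), 3))
      print("right-skew :", round(symmetry_stat(rng.exponential(size=5000)), 3))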
  4. By: Chen, Min; Zhu, Ke
    Abstract: This paper proposes a sign-based portmanteau test for diagnostic checking of ARCH-type models estimated by the least absolute deviation approach. Under the strict stationarity condition, the asymptotic distribution of the test statistic is obtained. The new test is applicable to very heavy-tailed innovations with only finite fractional moments. Simulations are undertaken to assess the performance of the sign-based test, including a comparison with two other portmanteau tests. An empirical example on exchange rates illustrates the practical usefulness of the test.
    Keywords: ARCH-type model; heavy-tailed innovation; LAD estimator; model diagnostics; sign-based portmanteau test
    JEL: C1 C12
    Date: 2013–10–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:50487&r=ecm
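    Sketch (generic, with assumptions): a sign-based portmanteau statistic in Python; it presumes residuals from an ARCH-type model fitted by least absolute deviation are already available, and uses the usual chi-squared portmanteau benchmark rather than the paper's exact limit theory.

      import numpy as np
      from scipy.stats import chi2

      def sign_portmanteau(resid, K=10):
          """Q-statistic from autocorrelations of residual signs; signs need no moments."""
          s = np.sign(resid - np.median(resid))
          n = len(s)
          rho = np.array([np.mean(s[k:] * s[:-k]) for k in range(1, K + 1)])
          Q = n * np.sum(rho ** 2)
          return Q, chi2.sf(Q, df=K)

      rng = np.random.default_rng(3)
      resid = rng.standard_t(df=1.5, size=1000)   # infinite variance, fractional moments
      Q, pval = sign_portmanteau(resid)
      print(f"Q = {Q:.2f}, p-value = {pval:.3f}")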
  5. By: Nadja Klein; Michel Denuit; Stefan Lang; Thomas Kneib
    Abstract: Generalized additive models for location, scale and shape define a flexible, semi-parametric class of regression models for analyzing insurance data in which the exponential family assumption for the response is relaxed. This approach allows the actuary to include risk factors not only in the mean but also in other parameters governing the claiming behavior, like the degree of residual heterogeneity or the no-claim probability. In this broader setting, the Negative Binomial regression with cell-specific heterogeneity and the zero-inflated Poisson regression with cell-specific additional probability mass at zero are applied to model claim frequencies. Models for claim severities that can be applied either per claim or aggregated per year are also presented. Bayesian inference is based on efficient Markov chain Monte Carlo simulation techniques and allows for the simultaneous estimation of possible nonlinear effects, spatial variations and interactions between risk factors within the data set. To illustrate the relevance of this approach, a detailed case study is proposed based on the Belgian motor insurance portfolio studied in Denuit and Lang (2004).
    Keywords: overdispersed count data, mixed Poisson regression, zero-inflated Poisson, Negative Binomial, zero-adjusted models, MCMC, probabilistic forecasts
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:inn:wpaper:2013-24&r=ecm
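    Sketch (illustrative): the zero-inflated Poisson likelihood underlying claim-frequency models of this kind, coded in Python with maximum likelihood standing in for the paper's Bayesian MCMC machinery; the covariates and link functions are generic assumptions.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln

      def zip_negloglik(params, y, X):
          """Negative log-likelihood of a zero-inflated Poisson regression."""
          k = X.shape[1]
          lam = np.exp(X @ params[:k])                    # Poisson mean (log link)
          pi = 1 / (1 + np.exp(-(X @ params[k:])))        # extra zero mass (logit link)
          ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))  # zero: inflation or Poisson zero
          ll_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
          return -np.sum(np.where(y == 0, ll_zero, ll_pos))

      rng = np.random.default_rng(4)
      n = 2000
      X = np.column_stack([np.ones(n), rng.normal(size=n)])
      lam_true = np.exp(X @ np.array([0.2, 0.5]))
      zero_true = rng.random(n) < 0.3                     # 30% structural zeros
      y = np.where(zero_true, 0, rng.poisson(lam_true))

      res = minimize(zip_negloglik, x0=np.zeros(4), args=(y, X), method="BFGS")
      print("lambda coefs:", res.x[:2].round(2), " zero-inflation coefs:", res.x[2:].round(2))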
  6. By: Götz T.B.; Hecq A.W. (GSBE)
    Abstract: This paper introduces the notion of nowcasting causality for mixed-frequency VARs as the mixed-frequency version of instantaneous causality. We analyze the relationship between nowcasting and Granger causality in the mixed-frequency VAR setting of Ghysels (2012) and illustrate that nowcasting causality can have a crucial impact on the significance of contemporaneous or lagged high-frequency variables in standard MIDAS regression models.
    Keywords: Single Equation Models; Single Variables: Cross-Sectional Models; Spatial Models; Treatment Effect Models; Quantile Regressions; Single Variables: Time-Series Models; Multiple or Simultaneous Equation Models: Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models
    JEL: C21 C22 C32
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:dgr:umagsb:2013050&r=ecm
  7. By: Banerjee, Anurag N. (Durham University Business School); Chevillon, Guillaume (ESSEC Business School); Kratz, Marie (ESSEC Business School et Mathématiques appliquées Paris 5 (MAP5))
    Abstract: This paper proposes a Near Explosive Random-Coefficient autoregressive model for asset pricing which accommodates both the fundamental asset value and the recurrent presence of autonomous deviations or bubbles. Such a process can be stationary with or without fat tails, unit-root nonstationary, or exhibit temporary exponential growth. We develop the asymptotic theory to analyze ordinary least-squares (OLS) estimation. One important theoretical observation is that the estimator distribution in the random coefficient model is qualitatively different from its distribution in the equivalent fixed coefficient model. We conduct recursive and full-sample inference by inverting the asymptotic distribution of the OLS test statistic, a common procedure in the presence of localizing parameters. This methodology allows us to detect the presence of bubbles and to establish probability statements on their appearance and collapse. We apply our methods to the study of the dynamics of the Case-Shiller index of U.S. house prices. Focusing in particular on changes in the price level, we provide an early detection device for turning points of booms and busts in the housing market.
    Keywords: Bubbles; Random Coefficient Autoregressive Model; Local Asymptotics; Asset Prices
    JEL: C22 C53 C58 G12
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:ebg:essewp:dr-13014&r=ecm
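    Sketch (illustrative): simulating a near-explosive random coefficient AR(1) in Python to show the bubble-like episodes the paper models; the specification y_t = (1 + (c + u_t)/sqrt(n)) y_{t-1} + e_t with localizing constant c is an assumed parametrization for the demo, not necessarily the paper's exact one.

      import numpy as np

      rng = np.random.default_rng(5)
      n, c = 1000, 1.0                           # sample size and localizing constant

      u = rng.normal(scale=2.0, size=n)          # random perturbation of the AR root
      e = rng.normal(size=n)
      y = np.zeros(n)
      for t in range(1, n):
          rho_t = 1 + (c + u[t]) / np.sqrt(n)    # root wanders around unity
          y[t] = rho_t * y[t - 1] + e[t]

      # Periods where the realized root exceeds one behave like temporary bubbles
      print("share of explosive periods:", np.mean(1 + (c + u) / np.sqrt(n) > 1).round(2))
      print("max |y|:", np.abs(y).max().round(1))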
  8. By: Duo Qin (Department of Economics, SOAS, University of London, UK); Yanqun Zhang (IQTE, Chinese Academy of Social Sciences, China)
    Abstract: This paper investigates the rise and fall of the IV method in macro-econometric models and its subsequent revival in micro-econometric models. The key findings are: (i) the IV method implicitly breaks the contemporaneously circular causality postulated in a simultaneous-equation model (SEM) by redefining the conditional variable concerned as a suboptimal conditional expectation of it; (ii) the IV method falls out of favour in macro-econometrics mainly because of a lack of empirical validation for such redefinitions; (iii) the IV method wins its popularity in micro-econometrics by its capacity to produce multiple suboptimal conditional expectations of the latent conditional variables of interest under the disguise of an SEM-consistent estimator; nevertheless, (iv) such suboptimal conditional expectations give rise to the insurmountable difficulty of credibly interpreting the IV-based parameter estimates, especially in the case of prognosticated omitted variable bias. The findings highlight the methodological drawback of the estimator-centric strategy of textbook econometrics.
    Keywords: Instrumental variables, simultaneity, omitted variable bias, collinearity
    JEL: B23 C13 C18 C50
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:soa:wpaper:183&r=ecm
  9. By: Nikolay Iskrev (Bank of Portugal)
    Abstract: There is a long tradition in macroeconomics of using selected moments of the data to determine empirically relevant values of structural parameters. This paper presents a formal approach for evaluating the implications of DSGE models for the distribution of information in the moment structure of their variables. Specifically, it shows how to address the following questions: (1) what are the efficiency gains from using more instead of fewer moments; (2) what is the efficiency loss from assigning suboptimal weights to the used moments; and (3) which particular dimensions of the data - first- and second-order moments in the time domain, and sets of frequencies in the frequency domain - are most informative about individual structural parameters. The analysis is based on the asymptotic properties of maximum likelihood and moment matching estimators and is simple to perform for general linearized models. A standard real business cycle model is used as an illustration.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:red:sed013:339&r=ecm
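    Sketch (standard moment-matching algebra, assumed here to match the paper's framework): with moment Jacobian G, long-run moment variance S and weighting matrix W, the asymptotic variance behind questions (1) and (2) is the usual sandwich, written in LaTeX:

      % asymptotic distribution of the moment-matching estimator
      \sqrt{T}\,(\hat\theta - \theta_0) \xrightarrow{d} N(0, V_W), \qquad
      V_W = (G'WG)^{-1} G'WSWG\,(G'WG)^{-1}.
      % With the optimal weight W = S^{-1}, V = (G'S^{-1}G)^{-1}; the efficiency loss from a
      % suboptimal W is the positive semidefinite difference V_W - (G'S^{-1}G)^{-1}, and
      % enlarging the moment set weakly shrinks (G'S^{-1}G)^{-1}, quantifying the gain
      % from using more moments.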
  10. By: ESFANDIAR MAASOUMI; JEFFREY S. RACINE
    Abstract: We propose a new technique for identification and estimation of aggregation functions in multidimensional evaluations and multiple-indicator settings. These functions may represent “latent” objects. They occur in many different contexts, for instance in propensity scores, multivariate measures of well-being and the related analysis of inequality and poverty, and in equivalence scales. Technical advances allow nonparametric inference on the joint distribution of continuous and discrete indicators of well-being, such as income and health, conditional on joint values of other continuous and discrete attributes, such as education and geographical groupings. In a multiattribute setting, “quantiles” are “frontiers” that define equivalent sets of covariate values. We identify these frontiers nonparametrically at first. Then we suggest “parametrically equivalent” characterizations of these frontiers that reveal likely weights for, and substitutions between, different attributes for different groups and at different quantiles. These estimated parametric functionals are “ideal” aggregators in a certain sense which we make clear. They correspond directly to measures of aggregate well-being popularized in the earliest multidimensional inequality measures in Maasoumi (1986). This new approach resolves a classic problem of assigning weights to multiple indicators such as dimensions of well-being, as well as empirically incorporating the key component in multidimensional analysis, the relationship between the indicators. It introduces a new way for robust estimation of “quantile frontiers”, allowing “complete” assessments, such as multidimensional poverty measurements. In our substantive application, we discover extensive heterogeneity in individual evaluation functions. This leads us to perform robust, weak uniform rankings as afforded by tests for multivariate stochastic dominance. A demonstration is provided based on the Indonesian data analyzed for multidimensional poverty in Maasoumi & Lugo (2008).
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:emo:wp2003:1306&r=ecm
  11. By: Salvador Pueyo
    Abstract: One of the first steps to understand and forecast economic downturns is identifying their frequency distribution, but it remains uncertain. This problem is common in phenomena displaying power-law-like distributions. Power laws play a central role in complex systems theory; therefore, the current limitations in the identification of this distribution in empirical data are a major obstacle to pursue the insights that the complexity approach offers in many fields. This paper addresses this issue by introducing a reliable methodology with a solid theoretical foundation, the Taylor Series-Based Power Law Range Identification Method. When applied to time series from 39 countries, this method reveals a well-defined power law in the relative per capita GDP contractions that span from 5.53% to 50%, comprising 263 events. However, this observation does not suffice to attribute recessions to some specific mechanism, such as self-organized criticality. The paper highlights a set of points requiring more study so as to discriminate among models compatible with the power law, as needed to develop sound tools for the management of recessions.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1310.2567&r=ecm
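    Sketch (a generic tool, not the paper's Taylor Series-Based Power Law Range Identification Method): the standard continuous power-law MLE of the kind popularized by Clauset, Shalizi and Newman, in Python, applied above a given threshold.

      import numpy as np

      def powerlaw_mle(x, xmin):
          """MLE of alpha in p(x) ~ x^(-alpha) for x >= xmin, with asymptotic s.e."""
          tail = x[x >= xmin]
          n = len(tail)
          alpha = 1 + n / np.sum(np.log(tail / xmin))
          return alpha, (alpha - 1) / np.sqrt(n), n

      rng = np.random.default_rng(6)
      x = 0.05 * (1 + rng.pareto(a=1.5, size=5000))   # Pareto tail, density exponent 2.5
      alpha, se, n = powerlaw_mle(x, xmin=0.0553)     # e.g. contractions above 5.53%
      print(f"alpha = {alpha:.2f} (s.e. {se:.2f}) from {n} tail events")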
  12. By: Khai Xiang Chiong (Division of the Humanities and Social Sciences, California Institute of Technology)
    Abstract: This paper proposes a revealed preference test of network formation models. Specifically, I consider network formation models in which (1) agents are strategic, and (2) externalities are confined to an agent's k-neighborhood, where k can be varied. I show that this model can be tested using observation of a single network. I then derive a necessary and sufficient condition under which the observed network is consistent with these strategic models of network formation. This nonparametric test takes the form of an algorithm involving the computation of color-preserving automorphisms of graphs. Building on the theoretical result, the test is implemented to calculate its statistical power and is applied to the social network data of Banerjee et al. (2012).
    Keywords: Revealed preference, Network formation, Social networks, Pairwise stability, Model testing, Testable implications, Graph automorphism
    JEL: C14 C52 C72 D85
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:net:wpaper:1323&r=ecm
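    Sketch (illustrative): enumerating the color-preserving automorphisms of a graph with networkx in Python, the basic computation the paper's test algorithm builds on; the toy graph and colors are invented for the demo.

      import networkx as nx
      from networkx.algorithms.isomorphism import GraphMatcher, categorical_node_match

      # Toy network: a 4-cycle where opposite nodes share a "color" (agent type)
      G = nx.cycle_graph(4)
      nx.set_node_attributes(G, {0: "red", 1: "blue", 2: "red", 3: "blue"}, "color")

      # Color-preserving automorphisms = isomorphisms of G onto itself matching colors
      gm = GraphMatcher(G, G, node_match=categorical_node_match("color", None))
      autos = list(gm.isomorphisms_iter())
      print(f"{len(autos)} color-preserving automorphisms")
      for a in autos:
          print(a)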
  13. By: Giacomo Sbrana (Rouen Business School); Andrea Silvestrini (Bank of Italy)
    Abstract: Forecasting aggregate demand is a crucial matter in all industrial sectors. In this paper, we derive the analytical prediction properties of top-down (TD) and bottom-up (BU) approaches to forecasting aggregate demand, using multivariate exponential smoothing as the demand planning framework. We extend and generalize the results obtained by Widiarta, Viswanathan and Piplani (2009) by employing an unrestricted multivariate framework allowing for interdependency between the variables. Moreover, we establish the necessary and sufficient condition for the equality of mean squared errors (MSEs) of the two approaches. We show that the condition for the equality of MSEs holds even when the moving average parameters of the individual components are not identical. In addition, we show that the relative forecasting accuracy of TD and BU depends on the parametric structure of the underlying framework. Simulation results confirm our theoretical findings: the ranking of TD and BU forecasts is driven by the parametric structure of the underlying data generation process, regardless of possible misspecification issues.
    Keywords: top-down and bottom-up forecasting, multivariate exponential smoothing.
    JEL: C32 C43
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_929_13&r=ecm
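    Sketch (illustrative): a small Python simulation comparing top-down and bottom-up one-step forecasts under simple exponential smoothing; the two-component setup and smoothing parameter are invented for the demo and are far simpler than the paper's multivariate framework.

      import numpy as np

      def ses_forecasts(y, alpha=0.3):
          """One-step-ahead simple exponential smoothing forecasts."""
          f = np.empty_like(y)
          f[0] = y[0]
          for t in range(1, len(y)):
              f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
          return f

      rng = np.random.default_rng(7)
      n = 2000
      # two correlated demand components (local-level processes)
      shocks = rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 1.0]], size=n)
      y1 = np.cumsum(0.1 * shocks[:, 0]) + rng.normal(size=n)
      y2 = np.cumsum(0.1 * shocks[:, 1]) + rng.normal(size=n)
      agg = y1 + y2

      bu = ses_forecasts(y1) + ses_forecasts(y2)   # bottom-up: forecast parts, then add
      td = ses_forecasts(agg)                      # top-down: forecast the aggregate

      mse = lambda f: np.mean((agg[100:] - f[100:]) ** 2)   # skip burn-in
      print(f"BU MSE: {mse(bu):.3f}   TD MSE: {mse(td):.3f}")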
  14. By: Lance A Fisher (Macquarie University); Hyeon-seung Huh (Yonsei University); Adrian Pagan (University of Sydney)
    Abstract: This paper considers structural models in which both I(1) and I(0) variables are present. It is necessary to extend the traditional classification of shocks as permanent and transitory, and we do this by introducing a mixed shock. The extra shocks coming from introducing I(0) variables into a system are then classified as either mixed or transitory. Conditions on the nature of the SVAR are derived for the case in which these extra shocks are transitory. We then analyse what happens when there are mixed shocks, finding that their presence changes a number of ideas that have become established in the cointegration literature. The ideas are illustrated using a well-known SVAR in which there are mixed shocks. This SVAR is re-formulated so that the extra shocks coming from the introduction of I(0) variables do not affect relative prices in the long run, and it is found that this has major implications for whether there is a price puzzle. It is also shown how to handle long-run parametric restrictions when some shocks are identified using sign restrictions.
    Keywords: Mixed models, transitory shocks, mixed shocks, long-run restrictions, sign restrictions, instrumental variables
    JEL: C32 C36 C51
    Date: 2013–10–09
    URL: http://d.repec.org/n?u=RePEc:qut:auncer:2013_9&r=ecm
  15. By: Karoly, Lynn; Schröder, Carsten
    Abstract: The jackknife is a resampling method that uses subsets of the original sample, leaving out one observation at a time. The paper outlines a procedure to obtain jackknife estimates for several inequality indices with only a few passes through the data. The number of passes is independent of the number of observations. Hence, the method provides an efficient way to obtain standard errors of the estimators even when the sample size is large. We apply our method using micro data on individual incomes for Germany and the US.
    Keywords: Jackknife, Resampling, Sampling Variability, Inequality
    JEL: C81 C87 D3
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:201304&r=ecm
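    Sketch (in the spirit of the paper; the exact formulas below are my own derivation for one index): for the Theil index, T = S2/S1 - ln(S1/n) with S1 = sum(x) and S2 = sum(x ln x), so every leave-one-out value follows from the running sums and the whole jackknife needs only a constant number of passes, as in this Python version.

      import numpy as np

      def theil_jackknife(x):
          """Jackknife standard error of the Theil index in O(n) via running sums."""
          n = len(x)
          S1, S2 = x.sum(), (x * np.log(x)).sum()
          theil = S2 / S1 - np.log(S1 / n)

          # leave-one-out values from updated sums -- no re-summation per deletion
          S1_i = S1 - x
          S2_i = S2 - x * np.log(x)
          theil_i = S2_i / S1_i - np.log(S1_i / (n - 1))

          se = np.sqrt((n - 1) / n * np.sum((theil_i - theil_i.mean()) ** 2))
          return theil, se

      rng = np.random.default_rng(8)
      incomes = rng.lognormal(mean=10, sigma=0.8, size=10_000)
      T, se = theil_jackknife(incomes)
      print(f"Theil = {T:.4f}, jackknife s.e. = {se:.4f}")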
  16. By: Holgersson, Thomas (Jönköping International Business School, and Linnaeus University); Norman, Therese (Jönköping International Business School); Tavassoli, Mohammad Hossein (Blekinge Institute of Technology)
    Abstract: Economic significance is frequently assessed through statistical hypothesis testing. This habitual use, however, rarely matches the implicit economic questions being addressed. In this paper we propose using mean value decomposition to assess economic significance. Unlike most previously suggested methods, the proposed one is intuitive and simple to conduct. The technique is demonstrated and contrasted with hypothesis tests in an empirical example involving the income of Mexican children, which shows that the two inference approaches provide different and complementary pieces of information.
    Keywords: Conditioning; Economic significance; Regression analysis; Mean Value Decomposition; Goodness-of-Fit
    JEL: C51 C54 I32
    Date: 2013–10–11
    URL: http://d.repec.org/n?u=RePEc:hhs:cesisp:0326&r=ecm
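    Sketch (one natural reading of a mean value decomposition in Python; this interpretation is an assumption on my part, not necessarily the authors' exact definition): in OLS with an intercept, the mean of y equals the sum of coefficient-times-mean terms, whose shares can be read as each regressor's economic weight.

      import numpy as np

      rng = np.random.default_rng(9)
      n = 1000
      X = np.column_stack([np.ones(n),
                           rng.normal(5, 1, n),     # e.g. years of schooling (invented)
                           rng.normal(2, 1, n)])    # e.g. scaled parental income (invented)
      y = X @ np.array([1.0, 0.8, 0.1]) + rng.normal(size=n)

      beta = np.linalg.lstsq(X, y, rcond=None)[0]
      contrib = beta * X.mean(axis=0)               # ybar = sum_k beta_k * xbar_k exactly
      for name, c in zip(["intercept", "x1", "x2"], contrib):
          print(f"{name}: {c: .3f}  ({c / y.mean(): .1%} of mean y)")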
  17. By: Takashi Kato; Jun Sekine; Kenichi Yoshikawa
    Abstract: The Lugannani-Rice formula is a saddlepoint approximation method for estimating the tail probability distribution function, which was originally studied for the sum of independent identically distributed random variables. Because of its tractability, the formula is now widely used in practical financial engineering as an approximation formula for the distribution of a (single) random variable. In this paper, the Lugannani-Rice approximation formula is derived for a general, parametrized sequence of random variables and the order estimates of the approximation are given.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1310.3347&r=ecm
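    Sketch: the classical Lugannani-Rice tail approximation that the paper extends, written in LaTeX; K is the cumulant generating function and the saddlepoint \hat s solves K'(\hat s) = x.

      % Lugannani-Rice saddlepoint approximation to the upper tail probability:
      P(X \ge x) \approx 1 - \Phi(\hat w) + \phi(\hat w)\left(\frac{1}{\hat u} - \frac{1}{\hat w}\right),
      \qquad
      \hat w = \operatorname{sgn}(\hat s)\sqrt{2\bigl(\hat s x - K(\hat s)\bigr)},\quad
      \hat u = \hat s \sqrt{K''(\hat s)},
      % where \Phi and \phi are the standard normal cdf and pdf.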
  18. By: Tony Hall; Jan Jacobs; Adrian Pagan
    Abstract: We summarize the history of macroeconometric system modelling as having produced four generations of models. Over time, the principles underlying the model designs have been extended to incorporate eight major features. Because models often evolve in response to external events, we are led to ask what has happened to models used in the policy process since the financial crisis of 2008/9. We find that models have become smaller, but that there is still no standard way of capturing the effects of such a crisis.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2013-67&r=ecm

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.