nep-ecm New Economics Papers
on Econometrics
Issue of 2005‒10‒15
eighteen papers chosen by
Sune Karlsson
Orebro University

  1. On Tail Index Estimation Using Dependent, Heterogenous Data By Jonathan B. Hill
  2. "Scanning Multivariate Conditional Densities with Probability Integral Transforms" By Isao Ishida
  3. Bayesian Inference of General Linear Restrictions on the Cointegration Space By Villani, Mattias
  4. Forecasting Performance of an Open Economy Dynamic Stochastic General Equilibrium Model By Adolfson, Malin; Lindé, Jesper; Villani, Mattias
  5. "Methods for Improvement in Estimation of a Normal Mean Matrix" By Hisayuki Tsukuma; Tatsuya Kubokawa
  6. Pooling-based Data Interpolation and Backdating By Massimiliano Marcellino
  7. Testing for Separation in Agricultural Household Models and Unobservable Individual Effects: A Note By Jean-Louis Arcand; Béatrice d'Hombres
  8. Modern Forecasting Models in Action: Improving Macroeconomic Analyses at Central Banks By Adolfson, Malin; Andersson, Michael K.; Lindé, Jesper; Villani, Mattias; Vredin, Anders
  9. Compositional Time Series: Past and Present By Juan M.C. Larrosa
  10. Power indices for revealed preference tests By Andreoni,J.; Harbaugh,W.T.
  11. Demand Forecasting: Evidence-based Methods By J. Scott Armstrong; Kesten C. Green
  12. Identification of binary choice models with social interactions By Brock,W.A.; Durlauf,S.N.
  13. Forecasting with real-time macroeconomic data: the ragged-edge problem and revisions By Bouwman, Kees E.; Jacobs, Jan P.A.M.
  14. Evaluating a Central Bank’s Recent Forecast Failure By Nymoen, Ragnar
  15. Markov Forecasting Methods for Welfare Caseloads By Jeffrey Grogger
  16. "On a limiting quasi-multinomial distribution" By Nobuaki Hoshino
  17. Bayes and Gravity By Ranjan, Priya; Tobias, Justin
  18. Growth econometrics By Durlauf,S.N.; Johnson,P.A.; Temple,J.R.W.

  1. By: Jonathan B. Hill (Department of Economics, Florida International University)
    Abstract: In this paper we analyze the asymptotic properties of the popular distribution tail estimator of B. Hill (1975) for heavy-tailed, heterogeneous, dependent processes. We prove the Hill estimator is weakly consistent for functionals of mixingales and L1-approximable processes with regularly varying tails, covering ARMA, GARCH, and many IGARCH and FIGARCH processes. Moreover, for functionals of processes near epoch-dependent on a mixing process, we prove a Gaussian distribution limit exists. In this case, as opposed to all prior results in the literature, we do not require the limiting variance of the Hill estimator to be bounded, and we develop a Newey-West kernel estimator of the variance. We expedite the theory by defining "extremal mixingale" and "extremal NED" properties to hold exclusively in the extreme distribution tails, dispensing with dependence restrictions in the non-extremal support, and prove that a broad class of linear processes is extremal NED. We demonstrate, both in theory and in practice, that greater degrees of serial dependence require more tail information to ensure asymptotic normality. [A code sketch follows this entry.]
    Keywords: Hill estimator, regular variation, infinite variance, near epoch dependence, mixingales
    JEL: C12 C16 C52
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:fiu:wpaper:0512&r=ecm
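    [Editorial note: a minimal sketch of the classical Hill (1975) estimator discussed above; the paper's extremal-dependence theory and Newey-West variance estimator are not reproduced here. Function names and data are illustrative.]

      import numpy as np

      def hill_estimator(x, k):
          """Hill (1975) estimator of the tail index alpha from the k largest observations."""
          x = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]  # descending order statistics
          log_spacings = np.log(x[:k]) - np.log(x[k])            # log-excesses over the (k+1)-th largest
          return 1.0 / log_spacings.mean()                       # reciprocal of mean estimates alpha

      # Example: standard Pareto data with true tail index 3.
      rng = np.random.default_rng(0)
      sample = rng.pareto(3.0, size=10_000) + 1.0
      print(hill_estimator(sample, k=500))  # should be near 3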
  2. By: Isao Ishida (Faculty of Economics, University of Tokyo)
    Abstract: This paper introduces new ways to construct probability integral transforms of random vectors that complement the approach of Diebold, Hahn, and Tay (1999) for evaluating multivariate conditional density forecasts. Our approach enables us to "scan" multivariate densities in various different ways. A simple bivariate normal example is given that illustrates how "scanning" a multivariate density from particular angles leads to tests with no power or high power. An empirical example is also given that applies several different probability integral transforms to specification testing of Engle's (2002) dynamic conditional correlation model for multivariate financial returns time series with multivariate normal and t errors. [A code sketch follows this entry.]
    Date: 2005–09
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2005cf369&r=ecm
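    [Editorial note: a univariate sketch of the probability integral transform used for density forecast evaluation; the paper's contribution, multivariate "scanning", is not reproduced here. Forecasts and outcomes are hypothetical.]

      import numpy as np
      from scipy import stats

      # Hypothetical one-step-ahead normal density forecasts (mu_t, sigma_t) and outcomes y_t.
      rng = np.random.default_rng(1)
      mu = np.zeros(500)
      sigma = np.ones(500)
      y = rng.normal(mu, sigma)  # here the forecast densities are correct by construction

      z = stats.norm.cdf(y, loc=mu, scale=sigma)  # probability integral transforms
      # Under a correctly specified density forecast, z should be iid Uniform(0,1).
      print(stats.kstest(z, "uniform"))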
  3. By: Villani, Mattias (Research Department, Central Bank of Sweden)
    Abstract: The degree of empirical support of a priori plausible structures on the cointegration vectors has a central role in the analysis of cointegration. Villani (2000) and Strachan and van Dijk (2003) have recently proposed finite sample Bayesian procedures to calculate the posterior probability of restrictions on the cointegration space, using the existence of a uniform prior distribution on the cointegration space as the key ingredient. The current paper extends this approach to the empirically important case with different restrictions on the individual cointegration vectors. Prior distributions are proposed and posterior simulation algorithms are developed. Consumers' expenditure data for the US is used to illustrate the robustness of the results to variations in the prior. A simulation study shows that the Bayesian approach performs remarkably well in comparison to other more established methods for testing restrictions on the cointegration vectors.
    Keywords: Bayesian inference; Cointegration; Posterior probability; Restrictions.
    JEL: C11 C12
    Date: 2005–09–01
    URL: http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0189&r=ecm
  4. By: Adolfson, Malin (Research Department, Central Bank of Sweden); Lindé, Jesper (Research Department, Central Bank of Sweden); Villani, Mattias (Research Department, Central Bank of Sweden)
    Abstract: This paper analyzes the forecasting performance of an open economy DSGE model, estimated with Bayesian methods, for the Euro area during 1994Q1-2002Q4. We compare the DSGE model and a few variants of this model to various reduced form forecasting models such as several vector autoregressions (VARs), estimated both by maximum likelihood and two different Bayesian approaches, and traditional benchmark models, e.g. the random walk. The accuracy of the point forecasts is assessed in a traditional out-of-sample rolling event evaluation using several univariate and multivariate measures. Forecast intervals are evaluated in different ways and the log predictive score is used to summarize the precision in the joint forecast distribution as a whole. We also discuss the role of Bayesian model probabilities and other frequently used low-dimensional summaries, e.g. the log determinant statistic, as measures of overall forecasting performance. [A code sketch follows this entry.]
    Keywords: Bayesian inference; Forecasting; Open economy DSGE model; Vector autoregressive models
    JEL: C11 C32 E37 E47
    Date: 2005–09–01
    URL: http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0190&r=ecm
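    [Editorial note: a minimal sketch of a log predictive score of the kind used to summarize joint forecast precision; the predictive distributions here are hypothetical multivariate normals, not the paper's DSGE or VAR densities.]

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      T, k = 40, 3
      means = rng.normal(size=(T, k))   # predictive means for k variables over T periods
      cov = np.eye(k)                   # predictive covariance (held fixed for simplicity)
      realized = means + rng.multivariate_normal(np.zeros(k), cov, size=T)

      # Log predictive score: average log predictive density assigned to realized outcomes.
      lps = np.mean([stats.multivariate_normal.logpdf(realized[t], means[t], cov)
                     for t in range(T)])
      print(lps)  # higher is better when comparing models' joint density forecasts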
  5. By: Hisayuki Tsukuma (Faculty of Economics, University of Tokyo); Tatsuya Kubokawa (Faculty of Economics, University of Tokyo)
    Abstract: This paper is concerned with the problem of estimating a matrix of means in multivariate normal distributions with an unknown covariance matrix under the quadratic loss function. It is first shown that the modified Efron-Morris estimator is characterized as a certain empirical Bayes estimator. This estimator modifies the crude Efron-Morris estimator by adding a scalar shrinkage term. It is next shown that the idea of this modification provides a general method for improving estimators, which results in the further improvement of several minimax estimators including the Stein, Dey and Haff estimators. As a new method of improvement, a random combination of the modified Stein and the James-Stein estimators is also proposed and shown to be minimax. Through Monte Carlo studies of the risk behaviors, it is numerically shown that the proposed combined estimator inherits the nice risk properties of both individual estimators and thus has a very favorable risk behavior in small samples. [A code sketch follows this entry.]
    Date: 2005–09
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2005cf378&r=ecm
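    [Editorial note: the paper's modified Efron-Morris estimators concern a matrix of means with unknown covariance; as background intuition only, here is the classical positive-part James-Stein shrinkage for a mean vector with known variance.]

      import numpy as np

      def james_stein(x, sigma2=1.0):
          """Positive-part James-Stein estimator shrinking x toward zero (known variance)."""
          p = x.size
          factor = max(0.0, 1.0 - (p - 2) * sigma2 / np.sum(x**2))
          return factor * x

      rng = np.random.default_rng(3)
      theta = np.zeros(10)            # true means
      x = rng.normal(theta, 1.0)      # one noisy observation per mean
      # Shrinkage typically reduces total squared error relative to the raw estimate.
      print(np.sum((james_stein(x) - theta)**2), np.sum((x - theta)**2))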
  6. By: Massimiliano Marcellino
    Abstract: Pooling forecasts obtained from different procedures typically reduces the mean square forecast error and more generally improves the quality of the forecast. In this paper we evaluate whether pooling interpolated or backdated time series obtained from different procedures can also improve the quality of the generated data. Both simulation results and empirical analyses with macroeconomic time series indicate that pooling also plays a positive and important role in this context. [A code sketch follows this entry.]
    URL: http://d.repec.org/n?u=RePEc:igi:igierp:299&r=ecm
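    [Editorial note: a toy illustration of pooling several interpolated versions of the same series, in the spirit of the paper; numbers and weights are made up.]

      import numpy as np

      estimates = np.array([
          [1.0, 1.2, 1.1],   # procedure A's interpolated values
          [0.9, 1.1, 1.0],   # procedure B
          [1.1, 1.3, 1.2],   # procedure C
      ])

      equal_weight_pool = estimates.mean(axis=0)   # simple average across procedures
      weights = np.array([0.5, 0.3, 0.2])          # e.g. inverse-MSE weights, if available
      weighted_pool = weights @ estimates
      print(equal_weight_pool, weighted_pool)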
  7. By: Jean-Louis Arcand; Béatrice d'Hombres
    Abstract: When market structure is complete, factor demands by households will be independent of their characteristics, and households will take their production decisions as if they were profit-maximizing firms. This observation constitutes the basis for one of the most popular empirical tests for complete markets, commonly known as the 'separation' hypothesis. In this paper, we show that all existing tests for separation using panel data are potentially biased towards rejecting the null hypothesis of complete markets, because of the failure to adequately control for unobservable individual effects. Since the variable on which the test for separation is based cannot be identified in most panel datasets following the usual covariance transformations, and is likely to be correlated with the individual effect, neither the within nor the variance-components procedures are able to solve the problem. We show that the Hausman-Taylor (1981) estimator, in which the impact of covariates that are invariant along one dimension of a panel can be identified by using covariance transformations of other included variables, orthogonal to the individual effects, as instruments, provides a simple solution. We furnish an empirical illustration in which separation (and thus the null of complete markets) is strongly rejected using the standard approach, but is not rejected once correlated unobservable individual effects are controlled for using the Hausman-Taylor instrument set. [A code sketch follows this entry.]
    Keywords: Panel data, individual effects, household models, testing for incomplete markets, development microeconomics, Tunisia
    JEL: D1 D2 D3 D4
    Date: 2005–10–11
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpmi:0510007&r=ecm
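    [Editorial note: the Hausman-Taylor idea rests on instrumental variables; a bare-bones just-identified 2SLS in matrix form is sketched below on simulated data. The paper's actual instrument set, built from covariance transformations of exogenous regressors, is not reproduced.]

      import numpy as np

      def two_sls(y, X, Z):
          """Just-identified 2SLS: beta_hat = (Z'X)^(-1) Z'y."""
          return np.linalg.solve(Z.T @ X, Z.T @ y)

      rng = np.random.default_rng(4)
      n = 1000
      u = rng.normal(size=n)                 # unobserved effect correlated with the regressor
      z = rng.normal(size=n)                 # instrument: exogenous, correlated with x
      x = z + u + rng.normal(size=n)         # regressor contaminated by u
      y = 2.0 * x + u + rng.normal(size=n)   # outcome; OLS on x alone is biased upward
      print(two_sls(y, x[:, None], z[:, None]))  # close to the true coefficient 2.0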
  8. By: Adolfson, Malin (Research Department, Central Bank of Sweden); Andersson, Michael K. (Monetary Policy Department, Central Bank of Sweden); Lindé, Jesper (Research Department, Central Bank of Sweden); Villani, Mattias (Research Department, Central Bank of Sweden); Vredin, Anders (Monetary Policy Department, Central Bank of Sweden)
    Abstract: There are many indications that formal methods based on economic research are not used to their full potential by central banks today. For instance, Christopher Sims published a review in 2002 where he argued that central banks use models that “are now fit to data by ad hoc procedures that have no grounding in statistical theory”. There is no organized resistance against formal models at central banks, but the proponents of such models have not always been able to present convincing evidence of the models’ advantages. In this paper we demonstrate how BVAR and DSGE models can be used to shed light on questions that policy makers deal with in practice. We compare the forecast performance of BVAR and DSGE models with the Riksbank’s official, more subjective forecasts. We also use the models to interpret the low inflation rate in Sweden in 2003–2004.
    Keywords: Bayesian inference; DSGE models; Forecasting; Monetary policy; Subjective forecasting; Vector autoregressions
    JEL: E37 E47 E52
    Date: 2005–09–01
    URL: http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0188&r=ecm
  9. By: Juan M.C. Larrosa (CONICET-Universidad Nacional del Sur)
    Abstract: This survey reviews the academic literature on the analysis of compositional time series. Although the time dimension of compositional data has received little attention, this kind of data structure is widely available and used in social science research, so a review of the state of the art is needed for researchers to understand the available options. The review covers several techniques, including autoregressive integrated moving average (ARIMA) analysis, compositional vector autoregression systems (CVAR) and state space techniques, most of which are developed in Bayesian frameworks. In conclusion, this branch of compositional statistical analysis still requires considerable development and, for that reason, is a fertile field for future research. Social scientists should pay attention to future developments, given the extensive availability of this kind of data structure in socioeconomic databases. [A code sketch follows this entry.]
    Keywords: compositional data analysis, time series
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–10–13
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpem:0510002&r=ecm
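    [Editorial note: a standard entry point to compositional time series is to move the data off the simplex via log-ratios, model the transformed series with ordinary time series tools, and map back. A minimal additive log-ratio (alr) sketch, with made-up budget shares, follows.]

      import numpy as np

      def alr(shares):
          """Additive log-ratio transform: D-part composition -> R^(D-1)."""
          shares = np.asarray(shares, dtype=float)
          return np.log(shares[..., :-1] / shares[..., -1:])

      def alr_inverse(y):
          """Map log-ratios back to the simplex."""
          full = np.exp(np.concatenate([y, np.zeros(y.shape[:-1] + (1,))], axis=-1))
          return full / full.sum(axis=-1, keepdims=True)

      comp = np.array([[0.5, 0.3, 0.2],
                       [0.4, 0.4, 0.2]])   # two periods of budget shares
      # An ARIMA or VAR would be fitted to alr(comp); here we just check invertibility.
      print(alr_inverse(alr(comp)))        # recovers comp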
  10. By: Andreoni,J.; Harbaugh,W.T. (University of Wisconsin-Madison, Social Systems Research Institute)
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:att:wimass:200510&r=ecm
  11. By: J. Scott Armstrong; Kesten C. Green
    Abstract: We looked at evidence from comparative empirical studies to identify methods that can be useful for predicting demand in various situations and to warn against methods that should not be used. In general, use structured methods and avoid intuition, unstructured meetings, focus groups, and data mining. In situations where there are sufficient data, use quantitative methods including extrapolation, quantitative analogies, rule-based forecasting, and causal methods. Otherwise, use methods that structure judgement including surveys of intentions and expectations, judgmental bootstrapping, structured analogies, and simulated interaction. Managers' domain knowledge should be incorporated into statistical forecasts. Methods for combining forecasts, including Delphi and prediction markets, improve accuracy. We provide guidelines for the effective use of forecasts, including such procedures as scenarios. Few organizations use many of the methods described in this paper. Thus, there are opportunities to improve efficiency by adopting these forecasting practices.
    Keywords: Accuracy, expertise, forecasting, judgement, marketing.
    JEL: C53 M30 M31
    Date: 2005–09
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2005-24&r=ecm
  12. By: Brock,W.A.; Durlauf,S.N. (University of Wisconsin-Madison, Social Systems Research Institute)
    Date: 2004
    URL: http://d.repec.org/n?u=RePEc:att:wimass:20042&r=ecm
  13. By: Bouwman, Kees E.; Jacobs, Jan P.A.M. (Groningen University)
    Abstract: Real-time macroeconomic data are typically incomplete for today and the immediate past (the ‘ragged edge’) and subject to revision. To enable more timely forecasts, the recent missing data have to be dealt with. In the context of the U.S. leading index we assess four alternatives, paying explicit attention to publication lags and data revisions. [A code sketch follows this entry.]
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:dgr:rugccs:200505&r=ecm
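    [Editorial note: one simple way to deal with a ragged edge is to fill the unpublished periods with univariate model forecasts. The crude AR(1) sketch below is an editorial illustration, not one of the four alternatives the paper assesses.]

      import numpy as np

      def fill_ragged_edge(series, n_missing):
          """Extend a (roughly mean-zero) series whose last n_missing values are
          not yet published, using a crude AR(1) fitted by lag-1 correlation."""
          x = np.asarray(series, dtype=float)
          phi = np.corrcoef(x[:-1], x[1:])[0, 1]
          tail, last = [], x[-1]
          for _ in range(n_missing):
              last = phi * last
              tail.append(last)
          return np.concatenate([x, tail])

      rng = np.random.default_rng(5)
      x = np.zeros(100)
      for t in range(1, 100):
          x[t] = 0.8 * x[t - 1] + rng.normal()
      print(fill_ragged_edge(x, n_missing=2)[-2:])  # naive fills for the missing edge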
  14. By: Nymoen, Ragnar (Dept. of Economics, University of Oslo)
    Abstract: Failures are not rare in economic forecasting, probably due to the high incidence of shocks and regime shifts in the economy. Thus, there is a premium on adaptation in the forecast process, in order to avoid sequences of forecast failure. This paper evaluates a sequence of inflation forecasts in the Norges Bank Inflation Report, and we present automatized forecasts which are unaffected by forecast failure. One conclusion is that the Norges Bank fan-charts are too narrow, giving an illusion of very precise forecasts. The automatized forecasts show more adaptation once shocks have occurred than is the case for the official forecasts. On the basis of the evidence, the recent inflation forecast failure appears to have been largely avoidable. The central bank’s understanding of the nature of the transmission mechanism and of the strength and nature of the disinflationary shock that hit the economy appears to have played a major role in the recent forecast failure. [A code sketch follows this entry.]
    Keywords: Inflation forecasts; Monetary policy; Forecast uncertainty; Fan-charts; Structural change; Econometric models.
    JEL: C32 C53 E37 E44 E47 E52 E58
    Date: 2005–08–10
    URL: http://d.repec.org/n?u=RePEc:hhs:osloec:2005_022&r=ecm
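    [Editorial note: the "too narrow fan-charts" finding is a statement about interval coverage; the check itself is elementary, as this sketch with made-up bands shows.]

      import numpy as np

      def empirical_coverage(outcomes, lower, upper):
          """Share of outcomes falling inside their forecast intervals."""
          outcomes, lower, upper = map(np.asarray, (outcomes, lower, upper))
          return np.mean((outcomes >= lower) & (outcomes <= upper))

      # If a nominal 90% band covers far fewer than 90% of outcomes, it is too narrow.
      rng = np.random.default_rng(6)
      y = rng.normal(size=200)
      print(empirical_coverage(y, lower=-0.5, upper=0.5))  # well below 0.9 here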
  15. By: Jeffrey Grogger
    Abstract: Forecasting welfare caseloads, particularly turning points, has become more important than ever. Since welfare reform, welfare has been funded via a block grant, which means that unforeseen changes in caseloads can have important fiscal implications for states. In this paper I develop forecasts based on the theory of Markov chains. Since today's caseload is a function of the past caseload, the caseload exhibits inertia. The method exploits that inertia, basing forecasts of the future caseload on functions of past entry and exit rates. In an application to California welfare data, the method accurately predicted the late-2003 turning point roughly one year in advance. [A code sketch follows this entry.]
    JEL: I3
    Date: 2005–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:11682&r=ecm
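    [Editorial note: a skeletal version of the caseload dynamics the paper exploits, with a constant entry flow and exit rate; the paper estimates these from past data rather than fixing them. Numbers are hypothetical.]

      import numpy as np

      def forecast_caseload(current, entries, exit_rate, horizon):
          """Project the caseload via C_{t+1} = (1 - exit_rate) * C_t + entries."""
          path, level = [], current
          for _ in range(horizon):
              level = (1.0 - exit_rate) * level + entries
              path.append(level)
          return np.array(path)

      # The caseload converges toward the steady state entries / exit_rate (here 400,000).
      print(forecast_caseload(current=500_000, entries=20_000, exit_rate=0.05, horizon=12))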
  16. By: Nobuaki Hoshino (Faculty of Economics, Kanazawa University)
    Abstract: A random clustering distribution is useful for modeling count data. The present article derives a new distribution of this type from the Lagrangian Poisson distribution, based on the result that any infinitely divisible distribution over nonnegative integers produces a random clustering distribution through conditioning and a limiting argument that is equivalent to the law of small numbers. The resulting distribution is shown to be tractable. Its application is also presented.
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2005cf361&r=ecm
  17. By: Ranjan, Priya; Tobias, Justin
    Abstract: This paper seeks to empirically extend the gravity model, which has been widely used to analyze volumes of trade between pairs of countries. We explicitly model the incidence of numerous zeros in our bilateral trade data by taking up the threshold tobit variant of the gravity equation [Eaton and Tamura (1994)]. We generalize the basic threshold tobit model by allowing for the inclusion of country-specific effects into the analysis, and also show how one can explore the relationship between trade volumes and a given covariate via a nonparametric approach, following the methodology of Koop and Poirier (2004). We use our methodology to investigate the impact of a particular aspect of institutions, the enforcement of contracts, on bilateral trade. We find that contract enforcement matters in predicting trade volumes for all types of goods, that it matters most for the trade of differentiated goods, and that the relationship between contract enforcement and trade in our threshold tobit exhibits some nonlinearities. [A code sketch follows this entry.]
    JEL: C1
    Date: 2005–10–04
    URL: http://d.repec.org/n?u=RePEc:isu:genres:12427&r=ecm
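    [Editorial note: a plain tobit likelihood for censored-at-zero data, simpler than the paper's threshold tobit with country effects, but showing how the mass of zero trade flows enters the likelihood. Data are simulated.]

      import numpy as np
      from scipy import stats, optimize

      def tobit_negloglik(params, y, X):
          """Negative log-likelihood of a tobit: latent y* = X b + e, observed y = max(y*, 0)."""
          beta, log_sigma = params[:-1], params[-1]
          sigma = np.exp(log_sigma)
          xb = X @ beta
          ll = np.where(
              y <= 0,
              stats.norm.logcdf(-xb / sigma),                    # probability of a zero flow
              stats.norm.logpdf((y - xb) / sigma) - log_sigma,   # density of a positive flow
          )
          return -ll.sum()

      rng = np.random.default_rng(7)
      n = 2000
      X = np.column_stack([np.ones(n), rng.normal(size=n)])
      y = np.maximum(X @ np.array([0.5, 1.0]) + rng.normal(size=n), 0.0)
      fit = optimize.minimize(tobit_negloglik, x0=np.zeros(3), args=(y, X))
      print(fit.x[:2])  # roughly [0.5, 1.0]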
  18. By: Durlauf,S.N.; Johnson,P.A.; Temple,J.R.W. (University of Wisconsin-Madison, Social Systems Research Institute)
    Date: 2004
    URL: http://d.repec.org/n?u=RePEc:att:wimass:200418&r=ecm

This nep-ecm issue is ©2005 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.