nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒10‒24
28 papers chosen by
Sune Karlsson
Orebro University

  1. A Data Mining Approach to Indirect Inference By Michael Creel
  2. "Moment-Based Estimation of Smooth Transition Regression Models with Endogenous Variables" By Waldyr Dutra Areosa; Michael McAleer; Marcelo C. Medeiros
  3. The Multivariate k-Nearest Neighbor Model for Dependent Variables: One-Sided Estimation and Forecasting By Dominique Guegan; Patrick Rakotomarolahy
  4. Simple Regression Based Tests for Spatial Dependence By Benjamin Born; Jörg Breitung
  5. A Classical MCMC Approach to the Estimation of Limited Dependent Variable Models of Time Series By George Monokroussos
  6. IDENTIFYING DISTRIBUTIONAL CHARACTERISTICS IN RANDOM COEFFICIENTS PANEL DATA MODELS By Manuel Arellano; Stéphane Bonhomme
  7. Bootstrap-based Bandwidth Selection for Semiparametric Generalized Regression Estimators By Chuan Goh
  8. On Marginal Likelihood Computation in Change-point Models By Luc Bauwens; Jeroen V.K. Rombouts
  9. Information Criteria for Impulse Response Function Matching Estimation of DSGE Models By Alastair R. Hall; Atsushi Inoue; James M Nason; Barbara Rossi
  10. Nuisance parameters, composite likelihoods and a panel of GARCH models By Cavit Pakel; Neil Shephard; Kevin Sheppard
  11. Mean Shift detection under long-range dependencies with ART By Willert, Juliane
  12. Estimating WTP With Uncertainty Choice Contingent Valuation By Kelvin Balcombe; Aurelia Samuel; Iain Fraser
  13. Regression with Imputed Covariates:a Generalized Missing Indicator Approach By Valentino Dardanoni; Salvatore Modica; Franco Peracchi
  14. Structural Time Series Models and the Kalman Filter: a concise review By Jalles, Joao Tovar
  15. How To Pick The Best Regression Equation: A Review And Comparison Of Model Selection Algorithms By Jennifer L. Castle; Xiaochuan Qin; W. Robert Reed
  16. Real-Time Inflation Forecasting in a Changing World By Jan J. J. Groen; Richard Paap; Francesco Ravazzolo
  17. On the Use of Density Forecasts to Identify Asymmetry in Forecasters' Loss Functions By Kajal Lahiri; Fushang Liu
  18. A sequential modelling of the VaR By Alain Monfort
  19. UNDERIDENTIFICATION? By Manuel Arellano; Lars Peter Hansen; Enrique Sentana
  20. Evaluation of Nonlinear time-series models for real-time business cycle analysis of the Euro area By Monica Billio; Laurent Ferrara; Dominique Guegan; Gian Luigi Mazzi
  21. A General Treatment of Non-Response Data From Choice Experiments Using Logit Models By Kelvin Balcombe; Iain Fraser
  22. High Watermarks of Market Risks By Bertrand Maillet; Jean-Philippe Médecin; Thierry Michel
  23. Estimation and Inference in Unstable Nonlinear Least Squares Models By Otilia Boldea; Alastair R. Hall
  24. Detrending and the Distributional Properties of U.S. Output Time Series By Giorgio Fagiolo; Mauro Napoletano; Marco Piazza; Andrea Roventini
  25. A house price index defined in the potential outcomes framework By Nicholas Longford
  26. New panel tests to assess inflation persistence By Roy Cerqueti; Mauro Costantini; Luciano Gutierrez
  27. Measuring Forecast Uncertainty by Disagreement: The Missing Link By Kajal Lahiri; Xuguang Sheng
  28. Economists, Incentives, Judgment, and the European CVAR Approach to Macroeconometrics By David Colander

  1. By: Michael Creel
    Abstract: Consider a model with parameter phi, and an auxiliary model with parameter theta. Let phi be randomly sampled from a given density over the known parameter space. Monte Carlo methods can be used to draw simulated data and compute the corresponding estimate of theta, say theta_tilde. A large set of tuples (phi, theta_tilde) can be generated in this manner. Nonparametric methods may be used to fit the function E(phi|theta_tilde=a), using these tuples. It is proposed to estimate phi using the fitted E(phi|theta_tilde=theta_hat), where theta_hat is the auxiliary estimate computed from the real sample data. This estimator is consistent and asymptotically normally distributed, under certain assumptions. Monte Carlo results for dynamic panel data and vector autoregressions show that this estimator can have very attractive small sample properties. Confidence intervals can be constructed using the quantiles of the phi for which theta_tilde is close to theta_hat. Such confidence intervals are found to have very accurate coverage.
    Keywords: simulation-based estimation; data mining; dynamic panel data; vector autoregression; bias reduction
    JEL: C13 C14 C15 C33
    Date: 2009–10–13
    URL: http://d.repec.org/n?u=RePEc:aub:autbar:788.09&r=ecm
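
    The estimation recipe in this abstract is essentially algorithmic, so a minimal sketch may help fix ideas. The toy model below (y_t ~ N(phi, 1) with the sample median as auxiliary estimator) and all tuning choices (uniform sampling density, k-nearest-neighbor fit with k = 50) are illustrative assumptions, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      n, S = 200, 5000                    # sample size, number of simulation draws

      def auxiliary_stat(y):
          # Auxiliary estimator theta_tilde: here, simply the sample median.
          return np.median(y)

      # Step 1: sample phi from a density over the parameter space and record
      # the tuples (phi, theta_tilde) from simulated samples.
      phi_draws = rng.uniform(-5.0, 5.0, size=S)
      theta_tildes = np.array([auxiliary_stat(rng.normal(p, 1.0, size=n))
                               for p in phi_draws])

      # Step 2: nonparametric fit of E(phi | theta_tilde = a), here by k-NN.
      def knn_fit(a, k=50):
          idx = np.argsort(np.abs(theta_tildes - a))[:k]
          return phi_draws[idx].mean()

      # Step 3: evaluate the fit at theta_hat computed from the real data
      # (here a pretend sample generated with phi = 1.5).
      y_real = rng.normal(1.5, 1.0, size=n)
      theta_hat = auxiliary_stat(y_real)
      phi_estimate = knn_fit(theta_hat)

      # Confidence interval from the quantiles of the phi whose theta_tilde
      # is close to theta_hat, as the abstract suggests.
      close = np.argsort(np.abs(theta_tildes - theta_hat))[:50]
      ci = np.quantile(phi_draws[close], [0.025, 0.975])
      print(phi_estimate, ci)
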
  2. By: Waldyr Dutra Areosa (Department of Economics, Pontifical Catholic University of Rio de Janeiro and Banco Central do Brasil); Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute and Center for International Research on the Japanese Economy (CIRJE), Faculty of Economics, University of Tokyo); Marcelo C. Medeiros (Department of Economics, Pontifical Catholic University of Rio de Janeiro)
    Abstract: Nonlinear regression models have been widely used in practice for a variety of time series and cross-section datasets. For purposes of analyzing univariate and multivariate time series data, in particular, Smooth Transition Regression (STR) models have been shown to be very useful for representing and capturing asymmetric behavior. Most STR models have been applied to univariate processes, and have made a variety of assumptions, including stationary or cointegrated processes, uncorrelated, homoskedastic or conditionally heteroskedastic errors, and weakly exogenous regressors. Under the assumption of exogeneity, the standard method of estimation is nonlinear least squares. The primary purpose of this paper is to relax the assumption of weakly exogenous regressors and to discuss moment based methods for estimating STR models. The paper analyzes the properties of the STR model with endogenous variables by providing a diagnostic test of linearity of the underlying process under endogeneity, developing an estimation procedure and a misspecification test for the STR model, presenting the results of Monte Carlo simulations to show the usefulness of the model and estimation method, and providing an empirical application for inflation rate targeting in Brazil. We show that STR models with endogenous variables can be specified and estimated by a straightforward application of existing results in the literature.
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2009cf671&r=ecm
  3. By: Dominique Guegan (Paris School of Economics - Centre d'Economie de la Sorbonne); Patrick Rakotomarolahy (Centre d'Economie de la Sorbonne)
    Abstract: Forecasting current-quarter GDP is a permanent task within central banks, and many models have been proposed to solve this problem. Thanks to new results on the asymptotic normality of the multivariate k-nearest neighbor regression estimate, we propose a new approach to forecasting economic indicators, including GDP. For dependent, mixing data sets, we prove the asymptotic normality of the multivariate k-nearest neighbor regression estimate under weak conditions, which provides confidence intervals for the point forecasts. We present an application to economic indicators of the euro area and compare our method with classical ARMA-GARCH modelling.
    Keywords: Multivariate k-nearest neighbor, asymptotic normality of the regression, mixing time series, confidence intervals, forecasts, economic indicators, Euro area.
    JEL: C22 C53 E32
    Date: 2009–07
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:09050&r=ecm
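
    Since the forecasting step in this abstract is just k-nearest-neighbor regression on lag vectors, a short sketch can make it concrete. The embedding dimension d, the Euclidean metric and k are illustrative choices, not the authors'.

      import numpy as np

      def knn_forecast(y, k=5, d=3):
          """One-step-ahead forecast of y from its last d observations."""
          y = np.asarray(y, dtype=float)
          T = len(y)
          # Library of historical lag vectors and their successors.
          X = np.array([y[t:t + d] for t in range(T - d)])
          z = y[d:]
          query = y[-d:]                          # current lag vector
          dist = np.linalg.norm(X - query, axis=1)
          nearest = np.argsort(dist)[:k]
          return z[nearest].mean()                # average of the k successors

      rng = np.random.default_rng(1)
      y = np.sin(np.arange(300) / 10.0) + 0.1 * rng.standard_normal(300)
      print(knn_forecast(y, k=7, d=4))
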
  4. By: Benjamin Born; Jörg Breitung
    Abstract: We propose two simple diagnostic tests for spatial error autocorrelation and spatial lag dependence. The idea is to reformulate the testing problem such that the test statistics are asymptotically equivalent to the familiar LM test statistics. Specifically, our version of the test is based on a simple auxiliary regression, and an ordinary regression t-statistic can be used to test for spatial autocorrelation and lag dependence. We also propose a variant of the test that is robust to heteroskedasticity. This approach gives practitioners an easy-to-implement and robust alternative to existing tests. Monte Carlo studies show that our variants of the spatial LM tests possess comparable size and power properties even in small samples.
    Keywords: LM test, Moran I test, spatial correlation
    JEL: C12 C21
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:bon:bonedp:bgse23_2009&r=ecm
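
    A rough sketch of the auxiliary-regression idea: after the first-stage OLS, regress the residuals on their spatial lag We and read off the ordinary regression t-statistic. The row-standardized weight matrix W and the toy geometry below are illustrative; consult the paper for the exact auxiliary regression and the heteroskedasticity-robust variant.

      import numpy as np

      def spatial_t_stat(y, X, W):
          # First-stage OLS residuals.
          beta = np.linalg.lstsq(X, y, rcond=None)[0]
          e = y - X @ beta
          # Auxiliary regression of e on its spatial lag We.
          we = W @ e
          rho = (we @ e) / (we @ we)
          u = e - rho * we
          sigma2 = (u @ u) / (len(e) - 1)
          return rho / np.sqrt(sigma2 / (we @ we))   # approx. N(0,1) under H0

      # Toy data: 100 units on a line, adjacent units are neighbors.
      rng = np.random.default_rng(2)
      m = 100
      W = np.zeros((m, m))
      for i in range(m - 1):
          W[i, i + 1] = W[i + 1, i] = 1.0
      W /= W.sum(axis=1, keepdims=True)
      X = np.column_stack([np.ones(m), rng.standard_normal(m)])
      y = X @ np.array([1.0, 0.5]) + rng.standard_normal(m)
      print(spatial_t_stat(y, X, W))
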
  5. By: George Monokroussos
    Abstract: Estimating limited dependent variable time series models through standard extremum methods can be a daunting computational task because of the need to evaluate high-order multiple integrals and/or to numerically optimize difficult objective functions. This paper proposes a classical Markov Chain Monte Carlo (MCMC) estimation technique with data augmentation that overcomes both of these problems. The asymptotic properties of the proposed estimator are established. Furthermore, a practical and flexible algorithmic framework for this class of models is proposed and illustrated using simulated data, thus also offering some insight into the small-sample biases of such estimators. Finally, the versatility of the proposed framework is illustrated with an application to a dynamic Tobit model of the Open Market Desk's Daily Reaction Function.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:nya:albaec:0907&r=ecm
  6. By: Manuel Arellano (CEMFI, Centro de Estudios Monetarios y Financieros); Stéphane Bonhomme (CEMFI, Centro de Estudios Monetarios y Financieros)
    Abstract: We study the identification of panel models with linear individual-specific coefficients when T is fixed. We show identification of the variance of the effects under conditional uncorrelatedness. Identification requires restricted dependence of the errors, reflecting a trade-off between heterogeneity and error dynamics. We show identification of the density of individual effects when the errors follow an ARMA process under conditional independence. We discuss GMM estimation of the moments of effects and errors, and introduce a simple density estimator of a slope effect in a special case. As an application, we estimate the effect of a mother's smoking during pregnancy on her child's birth weight.
    Keywords: Panel data, random coefficients, multiple effects, nonparametric identification.
    JEL: C23
    Date: 2009–08
    URL: http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2009_0904&r=ecm
  7. By: Chuan Goh
    Abstract: This paper considers the problem of implementing semiparametric extremum estimators of a generalized regression model with an unknown link function. The class of estimators under consideration includes as special cases the semiparametric least-squares estimator of Ichimura (1993) and the semiparametric quasi-likelihood estimator of Klein and Spady (1993). Estimators in this class generally involve computing a nonparametric kernel estimate of the link function, which appears in place of the true, but unknown, link function in the appropriate location in a smooth criterion function. The specific question considered in this paper concerns the practical selection of the degree of smoothing to be used in computing the nonparametric regression estimate. This paper proposes a method for selecting the smoothing parameter via resampling; the particular method suggested here uses a resample of smaller size than the original sample. Specific guidance on selecting the resample size is given, and simulation evidence is presented to illustrate the utility of this method for samples of moderate size.
    Keywords: Bandwidth selection, semiparametric, single-index model, bootstrap, m-out-of-n bootstrap, kernel smoothing
    JEL: C14
    Date: 2009–10–09
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-375&r=ecm
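
    The following stylized sketch conveys the m-out-of-n resampling idea: for each candidate bandwidth, recompute a kernel regression on resamples of size m < n and score it against the full-sample fit. The scoring rule, the bandwidth rescaling and all constants are generic illustrations, not Goh's exact criterion.

      import numpy as np

      def nw_fit(x0, x, y, h):
          # Nadaraya-Watson regression estimate with a Gaussian kernel.
          w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
          return (w @ y) / w.sum(axis=1)

      def select_bandwidth(x, y, m, grid, B=100, seed=0):
          rng = np.random.default_rng(seed)
          grid_x = np.linspace(x.min(), x.max(), 50)
          scores = []
          for h in grid:
              pilot = nw_fit(grid_x, x, y, h)       # full-sample fit
              mse = 0.0
              for _ in range(B):
                  idx = rng.choice(len(x), size=m, replace=True)
                  h_m = h * (len(x) / m) ** 0.2     # rescale for resample size
                  mse += np.mean((nw_fit(grid_x, x[idx], y[idx], h_m) - pilot) ** 2)
              scores.append(mse / B)
          return grid[int(np.argmin(scores))]

      rng = np.random.default_rng(3)
      x = rng.uniform(0.0, 1.0, 400)
      y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(400)
      print(select_bandwidth(x, y, m=100, grid=np.linspace(0.05, 0.3, 11)))
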
  8. By: Luc Bauwens; Jeroen V.K. Rombouts
    Abstract: Change-point models are useful for modelling time series subject to structural breaks. For interpretation and forecasting, it is essential to estimate correctly the number of change points in this class of models. In Bayesian inference, the number of change points is typically chosen by the marginal likelihood criterion, computed by Chib's method. This method requires selecting a value in the parameter space at which the computation is done. We explain in detail how to perform Bayesian inference for a change-point dynamic regression model and how to compute its marginal likelihood. Motivated by our results from three empirical illustrations, a simulation study shows that Chib's method is robust with respect to the choice of the parameter value used in the computations, among the posterior mean, mode and quartiles. Furthermore, the performance of the Bayesian information criterion, which is based on maximum likelihood estimates, in selecting the correct model is comparable to that of the marginal likelihood.
    Keywords: BIC, Change-point model, Chib's method, Marginal likelihood
    JEL: C11 C22 C53
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:lvl:lacicr:0942&r=ecm
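
    For reference, Chib's method rests on the basic marginal likelihood identity, evaluated at a single point theta* of the parameter space (the posterior mean, mode or quartiles in the paper's experiments):

      \log m(y) = \log f(y \mid \theta^{*}) + \log \pi(\theta^{*}) - \log \hat{\pi}(\theta^{*} \mid y),

    where the posterior ordinate \hat{\pi}(\theta^{*} \mid y) is estimated from the MCMC output. The identity holds exactly at any theta*, so the question the paper studies is the numerical robustness of the estimate to this choice.
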
  9. By: Alastair R. Hall; Atsushi Inoue; James M Nason; Barbara Rossi
    Abstract: We propose new information criteria for impulse response function matching estimators (IRFMEs). These estimators yield sampling distributions of the structural parameters of dynamic stochastic general equilibrium (DSGE) models by minimizing the distance between sample and theoretical impulse responses. First, we propose an information criterion to select only the responses that produce consistent estimates of the true but unknown structural parameters: the Valid Impulse Response Selection Criterion (VIRSC). The criterion is especially useful for mis-specified models. Second, we propose a criterion to select the impulse responses that are most informative about DSGE model parameters: the Relevant Impulse Response Selection Criterion (RIRSC). These criteria can be used in combination to select the subset of valid impulse response functions with minimal dimension that yields asymptotically efficient estimators. The criteria are general enough to apply to impulse responses estimated by VARs, local projections, and simulation methods. We show that the use of our criteria significantly affects estimates and inference about key parameters of two well-known new Keynesian DSGE models. Monte Carlo evidence indicates that the criteria yield gains in terms of finite sample bias as well as offering test statistics whose behavior is better approximated by first-order asymptotic theory. Thus, our criteria improve on existing methods used to implement IRFMEs.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:man:cgbcrp:127&r=ecm
  10. By: Cavit Pakel; Neil Shephard; Kevin Sheppard
    Abstract: We investigate the properties of the composite likelihood (CL) method for (T x NT) GARCH panels. The defining feature of a GARCH panel with time series length T is that, while nuisance parameters are allowed to vary across the NT series, other parameters of interest are assumed to be common. CL pools information across the panel instead of using information available in a single series only. Simulations and empirical analysis illustrate that for reasonably large T, CL performs well. However, due to the estimation error introduced through nuisance parameter estimation, CL is subject to the “incidental parameter” problem for small T.
    JEL: C14 C32
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:458&r=ecm
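
    Schematically, the composite likelihood pools one GARCH quasi-likelihood per series while holding the dynamics parameters common. The sketch below concentrates out the series-specific intercepts by variance targeting; this is one simple way to handle the nuisance parameters, not necessarily the paper's.

      import numpy as np
      from scipy.optimize import minimize

      def neg_composite_loglik(params, panel):
          alpha, beta = params
          if alpha < 0 or beta < 0 or alpha + beta >= 1:
              return np.inf
          total = 0.0
          for y in panel:                  # one GARCH(1,1) likelihood per series
              omega = np.var(y) * (1 - alpha - beta)   # variance targeting
              h = np.var(y)
              for yt in y:
                  total += 0.5 * (np.log(h) + yt ** 2 / h)
                  h = omega + alpha * yt ** 2 + beta * h
          return total

      rng = np.random.default_rng(4)
      def sim_garch(T, omega, alpha=0.1, beta=0.8):
          y, h = np.empty(T), omega / (1 - alpha - beta)
          for t in range(T):
              y[t] = np.sqrt(h) * rng.standard_normal()
              h = omega + alpha * y[t] ** 2 + beta * h
          return y

      panel = [sim_garch(500, omega) for omega in (0.05, 0.1, 0.2)]
      res = minimize(neg_composite_loglik, x0=[0.05, 0.85], args=(panel,),
                     method="Nelder-Mead")
      print(res.x)                         # pooled estimates of (alpha, beta)
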
  11. By: Willert, Juliane
    Abstract: Atheoretical regression trees (ART) are applied to detect changes in the mean of a stationary long memory time series when their location and number are unknown. It is shown that the BIC, which is almost always used as a pruning method, does not operate well in the long memory framework. A new method is developed to determine the number of mean shifts. A Monte Carlo study and an application are given to show the performance of the method.
    Keywords: long memory; mean shift; regression tree; ART; BIC
    JEL: C14 C22
    Date: 2009–07–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:17874&r=ecm
  12. By: Kelvin Balcombe; Aurelia Samuel; Iain Fraser
    Abstract: A method for treating Contingent Valuation data obtained from a polychotomous response format designed to accommodate respondent uncertainty is developed. The parameters that determine the probability of indefinite responses are estimated and used to truncate utility distributions within a structural model. The likelihood function for this model is derived, along with the posterior distributions that can be used for estimation within a Bayesian Markov Chain Monte Carlo framework. We use this model to examine two data sets and test a number of model-related hypotheses. Our results are consistent with those from the psychology literature on uncertain response: a 'probable no' is more likely to suggest a definite no than a 'probable yes' is to suggest a definite yes. We also find that 'don't know' responses are context dependent. Comparing the methods developed in this paper with the ordered probit that has previously been used in the literature for this type of data, we find that our methods outperform the ordered probit for one of the data sets used.
    Keywords: Respondent uncertainty; multiple bound contingent valuation; Bayesian MCMC
    JEL: C35 I18 Q5
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:ukc:ukcedp:0921&r=ecm
  13. By: Valentino Dardanoni (University of Palermo); Salvatore Modica (University of Palermo); Franco Peracchi (Faculty of Economics, University of Rome "Tor Vergata")
    Abstract: A common problem in applied regression analysis is that covariate values may be missing for some observations but imputed values may be available. This situation generates a trade-off between bias and precision: the complete cases are often disarmingly few, but replacing the missing observations with the imputed values to gain precision may lead to bias. In this paper we formalize this trade-off by showing that one can augment the regression model with a set of auxiliary variables so as to obtain, under weak assumptions about the imputations, the same unbiased estimator of the parameters of interest as complete-case analysis. Given this augmented model, the bias-precision trade-off may then be tackled by either model reduction procedures or model averaging methods. We illustrate our approach by considering the problem of estimating the relation between income and the body mass index (BMI) using survey data affected by item non-response, where the missing values on the main covariates are filled in by imputations.
    Keywords: Missing covariates; Imputations; Bias-precision trade-off; Model reduction;Model averaging; BMI and income.
    JEL: C12 C13 C19
    Date: 2009–10–08
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:150&r=ecm
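
    The augmentation step has a simple flavor that a toy example can convey: fill the missing covariate with imputations, then add the missingness dummy and its interaction as auxiliary regressors, which reproduces the complete-case slope. The data-generating process and variable names below are illustrative.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 1000
      x = rng.standard_normal(n)
      y = 1.0 + 2.0 * x + rng.standard_normal(n)

      miss = rng.random(n) < 0.3                 # 30% of x unobserved
      x_imp = np.where(miss, x + rng.standard_normal(n), x)   # noisy imputations

      # Augmented regression: intercept, imputed covariate, missingness dummy,
      # and dummy-covariate interaction.
      m = miss.astype(float)
      X_aug = np.column_stack([np.ones(n), x_imp, m, m * x_imp])
      beta_aug = np.linalg.lstsq(X_aug, y, rcond=None)[0]

      # Complete-case benchmark: the slope estimates coincide.
      X_cc = np.column_stack([np.ones(n - miss.sum()), x[~miss]])
      beta_cc = np.linalg.lstsq(X_cc, y[~miss], rcond=None)[0]
      print(beta_aug[1], beta_cc[1])
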
  14. By: Jalles, Joao Tovar
    Abstract: The continued increase in the availability of economic data in recent years and, more importantly, the possibility of constructing higher-frequency time series have fostered the use (and development) of statistical and econometric techniques to treat them more accurately. This paper presents an exposition of structural time series models, by which a time series can be decomposed as the sum of trend, seasonal and irregular components. In addition to a detailed analysis of univariate specifications, we also address the SUTSE multivariate case and the issue of cointegration. Finally, recursive estimation and smoothing by means of the Kalman filter algorithm are described, taking into account the different stages, from initialisation to parameter estimation.
    JEL: C10 C22 C32
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:unl:unlfep:wp541&r=ecm
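
    As a concrete instance of the review's subject, here is the local level ("random walk plus noise") model y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t, filtered with the standard Kalman recursions. The variances are taken as known for brevity; in practice they are estimated by maximizing the prediction-error likelihood the filter delivers.

      import numpy as np

      def kalman_local_level(y, var_eps, var_eta, a0=0.0, p0=1e7):
          a, p = a0, p0                   # diffuse-ish initial state and variance
          filtered = np.empty(len(y))
          for t, yt in enumerate(y):
              p = p + var_eta             # prediction step (random-walk state)
              f = p + var_eps             # prediction-error variance
              k = p / f                   # Kalman gain
              a = a + k * (yt - a)        # update step
              p = (1.0 - k) * p
              filtered[t] = a
          return filtered

      rng = np.random.default_rng(6)
      mu = np.cumsum(0.1 * rng.standard_normal(200))     # latent level
      y = mu + 0.5 * rng.standard_normal(200)
      print(kalman_local_level(y, var_eps=0.25, var_eta=0.01)[-5:])
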
  15. By: Jennifer L. Castle; Xiaochuan Qin; W. Robert Reed (University of Canterbury)
    Abstract: This paper reviews and compares twenty-one different model selection algorithms (MSAs) representing a diversity of approaches, including (i) information criteria such as AIC and SIC; (ii) selection of a “portfolio” or best subset of models; (iii) general-to-specific algorithms; (iv) forward-stepwise regression approaches; (v) Bayesian Model Averaging; and (vi) inclusion of all variables. We use coefficient unconditional mean-squared error (UMSE) as the basis for our measure of MSA performance. Our main goal is to identify the factors that determine MSA performance. Towards this end, we conduct Monte Carlo experiments across a variety of data environments. Our experiments show that MSAs differ substantially with respect to their performance on relevant and irrelevant variables. We relate this to their associated penalty functions and to a bias-variance tradeoff in coefficient estimates. It follows that no MSA will dominate under all conditions. However, when we restrict our analysis to conditions where automatic variable selection is likely to be of greatest value, we find that two general-to-specific MSAs based on Autometrics do as well as or better than all others in over 90% of the experiments.
    Keywords: Model selection algorithms; Information Criteria; General-to-Specific modeling; Bayesian Model Averaging; Portfolio Models; AIC; SIC; AICc; SICc; Monte Carlo Analysis; Autometrics
    JEL: C52 C15
    Date: 2009–10–01
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:09/13&r=ecm
  16. By: Jan J. J. Groen (Federal Reserve Bank of New York); Richard Paap; Francesco Ravazzolo (Norges Bank (Central Bank of Norway))
    Abstract: This paper revisits inflation forecasting using reduced-form Phillips curve forecasts, i.e., inflation forecasts that use activity and expectations variables. We propose a Phillips curve-type model that results from averaging across different regression specifications selected from a set of potential predictors. The set of predictors includes lagged values of inflation, a host of real activity data, term structure data, nominal data and surveys. In each of the individual specifications we allow for stochastic breaks in regression parameters, where the breaks are described as occasional shocks of random magnitude. As such, our framework simultaneously addresses structural change and model uncertainty, both of which unavoidably affect Phillips curve forecasts. We use this framework to describe PCE deflator and GDP deflator inflation rates for the United States across the post-WWII period. Over the full 1960-2008 sample the framework indicates several structural breaks across different combinations of activity measures. These breaks often coincide with, amongst others, policy regime changes and oil price shocks. In contrast to many previous studies, we find less evidence for autonomous variance breaks and inflation gap persistence. Through a real-time out-of-sample forecasting exercise we show that our model specification generally provides superior one-quarter and one-year ahead forecasts for quarterly inflation relative to a whole range of forecasting models that are typically used in the literature.
    Keywords: Inflation forecasting, Phillips correlations, real-time data, structural breaks, model uncertainty, Bayesian model averaging.
    JEL: C11 C22 C53 E31
    Date: 2009–08–01
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2009_16&r=ecm
  17. By: Kajal Lahiri; Fushang Liu
    Abstract: We consider how to use information from reported density forecasts from surveys to identify asymmetry in forecasters' loss functions. We show that, for the three common loss functions - Lin-Lin, Linex, and Quad-Quad - we can infer the direction of loss asymmetry by simply comparing point forecasts and the central tendency (mean or median) of the underlying density forecasts. If we know the entire distribution of the density forecast, we can calculate the loss function parameters based on the first-order condition of forecast optimality. This method is applied to forecasts for annual real output growth and inflation obtained from the Survey of Professional Forecasters (SPF). We find that forecasters treat underprediction of real output growth more dearly than overprediction; the reverse is true for inflation.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:nya:albaec:0903&r=ecm
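
    To make the first comparison concrete, take the Lin-Lin case under one standard parameterization (our notation, not the paper's):

      L(e) = [\tau - \mathbf{1}(e < 0)]\, e, \qquad e = y - f,

    whose first-order condition gives the optimal point forecast f^{*} = F^{-1}(\tau), the \tau-quantile of the forecaster's subjective distribution F. A point forecast above the reported median therefore reveals \tau > 1/2, i.e., that underprediction is the costlier error, which is the direction the authors find for real output growth.
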
  18. By: Alain Monfort
    Abstract: We consider the VaR associated with the global loss generated by a set of risk sources. We propose a sequence of simple models incorporating progressively the notions of contagion due to instantaneous correlations, of serial correlation, of evolution of the instantaneous correlations, of volatility clustering, of conditional heteroskedasticity and of persistence of shocks. The tools used are the standard and extended Kalman filters.
    Keywords: VaR, factor models, correlation, volatility clustering, Kalman filter.
    JEL: C10 G11
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:bfr:banfra:250&r=ecm
  19. By: Manuel Arellano (CEMFI, Centro de Estudios Monetarios y Financieros); Lars Peter Hansen (Chicago University); Enrique Sentana (CEMFI, Centro de Estudios Monetarios y Financieros)
    Abstract: We develop methods for testing the hypothesis that an econometric model is underidentified and for inferring the nature of the failed identification. By adopting a generalized method of moments perspective, we feature directly the structural relations and we allow for nonlinearity in the econometric specification. We establish the link between a test for overidentification and our proposed test for underidentification. If, after attempting to replicate the structural relation, we find substantial evidence against the overidentifying restrictions of an augmented model, this is evidence against underidentification of the original model.
    Date: 2009–08
    URL: http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2009_0905&r=ecm
  20. By: Monica Billio (University Ca' Foscari of Venice); Laurent Ferrara (Banque de France); Dominique Guegan (Paris School of Economics - Centre d'Economie de la Sorbonne); Gian Luigi Mazzi (Eurostat)
    Abstract: In this paper, we assess Markov-switching and threshold models in their ability to identify turning points of economic cycles. Using vintage data that are updated on a monthly basis, we compare the models' ability to detect ex post the occurrence of turning points of the classical business cycle, evaluate the stability over time of the signals they emit, and assess their ability to detect recession signals in real time. To this end, we have built a historical vintage database for the Euro area going back to 1970 for two monthly macroeconomic variables of major importance for the short-term economic outlook, namely the Industrial Production Index and the Unemployment Rate.
    Keywords: Business cycle, Euro zone, Markov switching model, SETAR model, unemployment, industrial production.
    JEL: C22 C52
    Date: 2009–08
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:09053&r=ecm
  21. By: Kelvin Balcombe; Iain Fraser
    Abstract: A new approach is developed for the treatment of 'Don't Know' (DK) responses, within Choice Experiments. A DK option is motivated by the need to allow respondents the opportunity to express uncertainty. Our model explains a DK using an entropy measure of the similarity between options given to respondents within the Choice Experiment. We illustrate our model by applying it to a Choice Experiment examining consumer preferences for nutrient contents in food. We find that similarity between options in a given choice set does explain the tendency for respondents to report DK.
    Keywords: Choice Experiment; Respondent Uncertainty; Bayesian Methods
    JEL: C35 I18 Q18
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:ukc:ukcedp:0916&r=ecm
  22. By: Bertrand Maillet (Centre d'Economie de la Sorbonne, EIF, A.A.Advisors-QCG (ABN AMRO) and Variances); Jean-Philippe Médecin (Centre d'Economie de la Sorbonne and Variances); Thierry Michel (LODH)
    Abstract: We present several estimates of measures of risk amongst the most well known, using both high and low frequency data. The aim of the article is to show which lower-frequency measures can be an acceptable substitute for the high-precision measures when transaction data is unavailable for a long history. We also study the distribution of the volatility, focusing more precisely on the slope of the tail of the various risk measure distributions, in order to define the high watermarks of market risks. Based on estimates of the tail index of a Generalized Extreme Value density backed out from the high-frequency CAC 40 series over the period 1997-2006, using both Maximum Likelihood and L-moment methods, we finally find no evidence of the need for a specification with heavier tails than in the case of the traditional log-normal hypothesis.
    Keywords: Financial crisis, volatility estimator distributions, range-based volatility, extreme value, high frequency data.
    JEL: G10 G14
    Date: 2009–08
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:09054&r=ecm
  23. By: Otilia Boldea; Alastair R. Hall
    Abstract: We are grateful to Mehmet Caner, Manfred Deistler, John Einmahl, Atsushi Inoue and Denise Osborn for their comments, as well as for the comments of participants at the presentation of this paper at the Conference on Breaks and Persistence in Econometrics, London, UK, December 2006, Inference and Tests in Econometrics, Marseille, France, April 2008, European Meetings of the Econometric Society, Milan, Italy, August 2008, NBER-NSF Time Series Conference, Aarhus, Denmark, September 2008, Fed St. Louis Applied Econometrics and Forecasting in Macroeconomics Workshop, St. Louis, October 2008, and at the seminars in Tilburg University, University of Manchester, University of Exeter, University of Cambridge, University of Southampton, Tinbergen Institute, UC Davis and Institute for Advanced Studies, Vienna. The second author acknowledges the support of ESRC grant RES-062-23-1351.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:man:cgbcrp:126&r=ecm
  24. By: Giorgio Fagiolo; Mauro Napoletano; Marco Piazza; Andrea Roventini
    Abstract: We study the impact of alternative detrending techniques on the distributional properties of U.S. output time series. We detrend GDP and industrial production time series employing first-differencing, Hodrick-Prescott and bandpass filters. We show that the resulting distributions can be approximated by symmetric Exponential-Power densities, with tails fatter than those of a Gaussian. Employing frequency-band decomposition procedures, we also find that fat tails are more likely to occur at high and medium business-cycle frequencies. These results confirm the robustness of the fat-tail property of detrended output time-series distributions and suggest that business-cycle models should take this empirical regularity into account.
    Keywords: Statistical Distributions, Detrending, HP Filter, Bandpass Filter, Normality, Fat Tails, Time Series, Exponential-Power Density, Business Cycles Dynamics
    JEL: C1 E3
    Date: 2009–10–14
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2009/14&r=ecm
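
    Of the filters used, the Hodrick-Prescott trend has a compact closed form that is easy to reproduce: it solves (I + lambda D'D) tau = y, with D the second-difference operator, and the cycle is y - tau. The sketch below, with the conventional quarterly lambda = 1600, is only one of the detrending schemes the paper employs.

      import numpy as np

      def hp_filter(y, lam=1600.0):
          T = len(y)
          D = np.zeros((T - 2, T))        # second-difference matrix
          for t in range(T - 2):
              D[t, t:t + 3] = [1.0, -2.0, 1.0]
          trend = np.linalg.solve(np.eye(T) + lam * D.T @ D, y)
          return trend, y - trend         # trend and cycle

      rng = np.random.default_rng(7)
      y = np.cumsum(0.5 + rng.standard_normal(120))   # trending toy series
      trend, cycle = hp_filter(y)
      print(np.std(cycle))
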
  25. By: Nicholas Longford
    Abstract: Current methods for constructing house price indices are based on comparisons of sale prices of residential properties sold two or more times and on regression of the sale prices on the attributes of the properties and of their locations. The two methods have well recognised deficiencies, selection bias and model assumptions, respectively. We introduce a new method based on propensity score matching. The average house prices for two periods are compared by selecting pairs of properties, one sold in each period, that are as similar on a set of available attributes (covariates) as is feasible to arrange. The uncertainty associated with such matching is addressed by multiple imputation, framing the problem as one involving missing values. The method is applied to a register of transactions of residential properties in New Zealand and compared with the established alternatives.
    Keywords: Hedonic regression, house prices, matching, potential outcomes, propensity scoring, repeat-sales method
    JEL: C1 C13 C15 C3 C31 E3 E31
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1175&r=ecm
  26. By: Roy Cerqueti (University of Macerata); Mauro Costantini (University of Vienna); Luciano Gutierrez (University of Sassari)
    Abstract: In this paper we propose new panel tests to detect changes in persistence. The test statistics are used to test the null hypothesis of stationarity against the alternative of a change in persistence from I(0) to I(1), from I(1) to I(0), and in an unknown direction. The limiting distributions of the tests under the hypothesis of cross-sectional independence are derived. Cross-sectional dependence is also considered. The tests are applied to the inflation rates of 19 OECD countries over the period 1972-2008. Evidence of a change in persistence from I(1) to I(0) is found for a set of these countries.
    Keywords: Panel data, Persistence, Stationarity
    JEL: C12 C23
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:mcr:wpdief:wpaper00054&r=ecm
  27. By: Kajal Lahiri; Xuguang Sheng
    Abstract: Using a standard decomposition of forecast errors into common and idiosyncratic shocks, we show that aggregate forecast uncertainty can be expressed as the disagreement among the forecasters plus the perceived variability of future aggregate shocks. Thus, the reliability of disagreement as a proxy for uncertainty will be determined by the stability of the forecasting environment and the length of the forecast horizon. Using density forecasts from the Survey of Professional Forecasters, we find direct evidence in support of our hypothesis. Our results support the use of GARCH-type models, rather than the ex post squared errors in consensus forecasts, to estimate the ex ante variability of aggregate shocks as a component of aggregate uncertainty.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:nya:albaec:0906&r=ecm
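
    Schematically, with point forecasts f_1, ..., f_N from N forecasters, the decomposition in the abstract reads (our notation, not the authors'):

      U = \underbrace{\tfrac{1}{N}\sum_{i=1}^{N} (f_i - \bar{f})^{2}}_{\text{disagreement}} + \underbrace{\sigma_{\lambda}^{2}}_{\text{perceived variance of aggregate shocks}},

    so disagreement alone proxies aggregate uncertainty U well exactly when the second term is small and stable, i.e., in tranquil periods and at short horizons.
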
  28. By: David Colander
    Abstract: This paper argues that the DSGE approach to macroeconometrics is dominant because it meets the institutional needs of the replicator dynamics of the profession, not because it is necessarily the best way to do macroeconometrics. It further argues that this “DSGE-theory first” approach is inconsistent with the historical approach that economists have advocated in the past, and that the alternative European CVAR approach is much more consistent with economists' historically used methodology, correctly understood. However, because the European CVAR approach requires explicit researcher judgment, it does not do well in the replicator dynamics of the profession. The paper concludes with the suggestion that there should be more dialog between the two approaches.
    Keywords: methodology, macroeconometrics, general to specific, DSGE, VAR, judgment, incentives
    JEL: C10 A1
    Date: 2009–12
    URL: http://d.repec.org/n?u=RePEc:mdl:mdlpap:0912&r=ecm

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.