nep-ecm New Economics Papers
on Econometrics
Issue of 2005‒09‒29
25 papers chosen by
Sune Karlsson
Orebro University

  1. Testing for Stochastic Dominance Efficiency By Olivier Scaillet; Nikolas Topaloglou
  2. Estimation and Testing of Dynamic Models with Generalized Hyperbolic Innovations By Mencía, Javier; Sentana, Enrique
  3. Bayesian Analysis of DSGE Models By An, Sungbae; Schorfheide, Frank
  4. A Test for Mean-Variance Efficiency of a given Portfolio under Restrictions By Post, G.T.
  5. Some Properties of Tests for Possibly Unidentified Parameters By G. Forchini
  6. Asymptotic convergence of weighted random matrices: nonparametric cointegration analysis for I(2) processes. By Roy Cerqueti and Mauro Costantini
  7. Exponential Tilting with Weak Instruments: Estimation and Testing By MEHMET CANER
  8. Testing for Parameter Stability in Dynamic Models across Frequencies By Candelon,Bertrand; Cubadda,Gianluca
  9. Multiple imputation of time series: an application to the construction of historical price indexes By Fernando TUSELL PALMER
  10. Moment Based Inference with Stratified Data By Gautam Tripathi
  11. Testing for Stochastic Dominance Efficiency By Post, G.T.; Linton, O.; Whang, Y-J
  12. Omitted Variables and Misspecified Disturbances in the Logit Model By J.S. Cramer
  13. Identification of the Effects of Dynamic Treatments by Sequential Conditional Independence Assumptions By Michael Lechner; Ruth Miquel
  14. Near Exogeneity and Weak Identification in Generalized Empirical Likelihood Estimators: Fixed and Many Moment Asymptotics By MEHMET CANER
  15. Testing for mean-coherent regular risk spanning By Melenberg,Bertrand; Polbennikov,Simon
  16. Forecasts of U.S. short-term interest rates: a flexible forecast combination approach By Massimo Guidolin; Allan Timmerman
  17. Forecasting Canadian Time Series with the New-Keynesian Model By Ali Dib; Mohamed Gammoudi; Kevin Moran
  18. General-to-specific modeling: an overview and selected bibliography By Julia Campos; Neil R. Ericsson; David F. Hendry
  19. Boundedly Pivotal Structural Change Tests in Continuous Updating GMM with Strong, Weak Identification and Completely Unidentified Cases By MEHMET CANER
  20. From Correspondence Analysis to Multiple and Joint Correspondence Analysis By Michael Greenacre
  21. NEARLY SINGULAR DESIGN IN GMM AND GENERALIZED EMPIRICAL LIKELIHOOD ESTIMATORS By MEHMET CANER
  22. Instrumental Variables Methods in Experimental Criminological Research: What, Why, and How? By Joshua Angrist
  23. The Consequences of Non-Classical Measurement Error for Distributional Analysis By D. O'Neill; Sweetman, O.; Van de gaer, D.
  24. Nowcasting GDP and Inflation: The Real Time Informational Content of Macroeconomic Data Releases By Giannone, Domenico; Reichlin, Lucrezia; Small, David
  25. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes By Fok, D.; Paap, R.; Horváth, C.; Franses, Ph.H.B.F.

  1. By: Olivier Scaillet (HEC, University of Geneva and FAME); Nikolas Topaloglou (HEC, University of Geneva)
    Abstract: We consider consistent tests for stochastic dominance efficiency at any order of a given portfolio with respect to all possible portfolios constructed from a set of assets. We propose and justify approaches based on simulation and the block bootstrap to achieve valid inference in a time series setting. The test statistics and the estimators are computed using linear and mixed integer programming methods. The empirical application shows that the Fama and French market portfolio is first-order (FSD) and second-order (SSD) stochastic dominance efficient, although it is mean-variance inefficient.
    Keywords: Nonparametric; Stochastic Ordering; Dominance Efficiency; Linear Programming; Mixed Integer Programming; Simulation; Bootstrap
    JEL: C12 C13 C15 C44 D81 G11
    Date: 2005–07
    URL: http://d.repec.org/n?u=RePEc:fam:rpseri:rp154&r=ecm
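A rough sketch of the circular block bootstrap the authors rely on for time-series inference (the series, block length, and statistic below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def circular_block_resample(x, block_len, rng):
    """Draw one circular block-bootstrap resample of a time series."""
    n = len(x)
    n_blocks = -(-n // block_len)  # ceil(n / block_len)
    starts = rng.integers(0, n, size=n_blocks)
    idx = np.concatenate([(s + np.arange(block_len)) % n for s in starts])
    return x[idx[:n]]

# Toy AR(1) series standing in for serially dependent portfolio returns.
n = 500
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()

# Bootstrap distribution of the sample mean; resampling whole blocks
# preserves the short-run dependence an i.i.d. bootstrap would destroy.
boot_means = np.array(
    [circular_block_resample(x, 25, rng).mean() for _ in range(999)]
)
boot_se = boot_means.std(ddof=1)
```

In the paper the resampled statistic is the LP/MIP-based efficiency test statistic rather than a sample mean, with critical values taken from quantiles of the bootstrap distribution.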
  2. By: Mencía, Javier; Sentana, Enrique
    Abstract: We analyse the adequacy of the Generalised Hyperbolic distribution for modelling kurtosis and asymmetries in multivariate conditionally heteroskedastic dynamic regression models. We standardise this distribution, obtain analytical expressions for the log-likelihood score, and explain how to evaluate the information matrix. We also derive tests for the null hypotheses of multivariate normal and Student t innovations, and decompose them into skewness and kurtosis components, from which we obtain more powerful one-sided versions. Finally, we present an empirical application to five NASDAQ sectorial stock returns that indicates that their conditional distribution is asymmetric and leptokurtic, which can be successfully exploited for risk management purposes.
    Keywords: Inequality Constraints; Kurtosis; Multivariate Normality Test; Skewness; Student t; Supremum Test; Tail Dependence
    JEL: C32 C52 G11
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:5177&r=ecm
  3. By: An, Sungbae; Schorfheide, Frank
    Abstract: This paper reviews Bayesian methods that have been developed in recent years to estimate and evaluate dynamic stochastic general equilibrium (DSGE) models. We consider the estimation of linearized DSGE models, the evaluation of models based on Bayesian model checking, posterior odds comparisons, and comparisons to a reference model, as well as the estimation of second-order accurate solutions of DSGE models. These methods are applied to data generated from a linearized DSGE model, a vector autoregression that violates the cross-coefficient restrictions implied by the linearized DSGE model, and a DSGE model that was solved with a second-order perturbation method.
    Keywords: Bayesian analysis; DSGE models; model evaluation; vector autoregressions
    JEL: C11 C32 C51 C52
    Date: 2005–09
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:5207&r=ecm
  4. By: Post, G.T. (Erasmus Research Institute of Management (ERIM), RSM Erasmus University)
    Abstract: This study proposes a test for mean-variance efficiency of a given portfolio under general linear investment restrictions. We introduce a new definition of pricing error or “alpha”, and as an efficiency measure we propose to use the largest positive alpha over the vertices of the portfolio possibilities set. To allow for statistical inference, we derive the asymptotic least favorable sampling distribution of this test statistic. Using the new test, we cannot reject market portfolio efficiency relative to beta decile stock portfolios if short-selling is not allowed.
    Keywords: Mean-variance Efficiency; Portfolio Constraints; Asset Pricing; Portfolio Analysis
    Date: 2005–06–28
    URL: http://d.repec.org/n?u=RePEc:dgr:eureri:30007066&r=ecm
  5. By: G. Forchini
    Abstract: It is well known that confidence intervals for weakly identified parameters are unbounded with positive probability (e.g. Dufour, Econometrica 65, pp. 1365-1387 and Staiger and Stock, Econometrica 65, pp. 557-586), and that the asymptotic risk of their estimators is unbounded (Pötscher, Econometrica 70, pp. 1035-1065). In this note we extend these "impossibility results" and show that uniformly consistent tests for weakly identified parameters do not exist. We also show that all similar tests of size α < 1/2 concerning possibly unidentified parameters have type II error probability that can be as large as 1 − α.
    Keywords: Similar tests, consistent tests, weak instruments, identification
    JEL: C12 C30
    Date: 2005–09
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2005-21&r=ecm
  6. By: Roy Cerqueti and Mauro Costantini
    Abstract: The aim of this paper is to provide a new perspective on nonparametric cointegration analysis for integrated processes of the second order. Our analysis focuses on a pair of random matrices related to such integrated processes. These matrices are constructed by introducing weight functions. Under asymptotic conditions on these weights, convergence results in distribution are obtained, and a generalized eigenvalue problem is then solved. Differential equations and stochastic calculus theory are used throughout.
    Keywords: Co-integration, Nonparametric, Differential equations, Asymptotic properties.
    JEL: C14 C32 C65
    Date: 2005–09–09
    URL: http://d.repec.org/n?u=RePEc:mol:ecsdps:esdp05027&r=ecm
  7. By: MEHMET CANER (UNIVERSITY OF PITTSBURGH)
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–09–12
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpem:0509017&r=ecm
  8. By: Candelon,Bertrand; Cubadda,Gianluca (METEOR)
    Abstract: This paper contributes to the econometric literature on structural breaks by proposing a test for parameter stability in VAR models at a particular frequency ω, where ω ∈ [0, π]. When a dynamic model is affected by a structural break, the new tests allow for detecting which frequencies of the data are responsible for parameter instability. If the model is locally stable at the frequencies of interest, the whole sample can then be exploited despite the presence of a break. Two empirical examples illustrate that local instability can concern only the lower frequencies (decrease in the postwar U.S. productivity) or the higher frequencies (change in the U.S. monetary policy in the early 80's).
    Keywords: econometrics;
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:dgr:umamet:2005022&r=ecm
  9. By: Fernando TUSELL PALMER (Facultad de CC.EE. y Empresariales, Universidad del País Vasco.)
    Abstract: Time series in many areas of application, and notably in the social sciences, are frequently incomplete. This is particularly annoying when we need to have complete data, for instance to compute indexes as a weighted average of values from a number of time series; whenever a single datum is absent, the index cannot be computed. This paper proposes to deal with such situations by creating multiple completed trajectories, drawing on state space modelling of time series, the simulation smoother and multiple imputation ideas.
    Keywords: multiple imputation; time series analysis; Kalman smoother
    JEL: C22 C43
    Date: 2005–09–23
    URL: http://d.repec.org/n?u=RePEc:ehu:biltok:200503&r=ecm
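The impute-then-combine logic can be illustrated with a deliberately simple stand-in for the paper's state-space machinery: an AR(1) model with known parameters, one missing observation, and a three-period average playing the role of the price index (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) model y_t = phi * y_{t-1} + e_t, e_t ~ N(0, sigma^2), parameters known.
phi, sigma = 0.8, 1.0
y_prev, y_next = 1.2, 0.9          # observed neighbours of the missing point

# Conditional distribution of the missing value given both neighbours:
cond_mean = phi * (y_prev + y_next) / (1.0 + phi**2)
cond_var = sigma**2 / (1.0 + phi**2)

# Draw M completed trajectories (the multiple-imputation step) ...
M = 500
draws = rng.normal(cond_mean, np.sqrt(cond_var), size=M)

# ... compute the index on each completed series ...
estimates = (y_prev + draws + y_next) / 3.0

# ... and pool across imputations: point estimate plus between-imputation
# variance, which quantifies the uncertainty added by the missing datum.
point = estimates.mean()
between = estimates.var(ddof=1)
```

The paper works with general state-space models and the simulation smoother, so the imputation draws come from the smoothed distribution of the full latent trajectory rather than this two-neighbour conditional.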
  10. By: Gautam Tripathi (University of Connecticut)
    Abstract: Many datasets used by economists and other social scientists are collected by stratified sampling. The sampling scheme used to collect the data induces a probability distribution on the realized observations that differs from the target or underlying distribution for which inference is to be made. If the distinction between target and realized distributions is not taken into account, statistical inference can be severely biased. This paper shows how to do efficient empirical likelihood based semiparametric inference in moment restriction models when data from the target population is collected by three widely used sampling schemes: variable probability sampling, multinomial sampling, and standard stratified sampling.
    Keywords: Empirical likelihood, Moment conditions, Stratified sampling.
    JEL: C14
    Date: 2005–09
    URL: http://d.repec.org/n?u=RePEc:uct:uconnp:2005-38&r=ecm
  11. By: Post, G.T.; Linton, O.; Whang, Y-J (Erasmus Research Institute of Management (ERIM), RSM Erasmus University)
    Abstract: We propose a new test of the stochastic dominance efficiency of a given portfolio over a class of portfolios. We establish its null and alternative asymptotic properties, and define a method for consistently estimating critical values. We present some numerical evidence that our tests work well in moderate sized samples.
    Keywords: Stochastic Dominance; Portfolio Diversification; Asset Pricing; Portfolio Analysis
    Date: 2005–06–28
    URL: http://d.repec.org/n?u=RePEc:dgr:eureri:30007067&r=ecm
  12. By: J.S. Cramer (University of Amsterdam)
    Abstract: In binary discrete regression models like logit or probit the omission of a relevant regressor (even if it is orthogonal) depresses the remaining β coefficients towards zero. For the probit model, Wooldridge (2002) has shown that this bias does not carry over to the effect of the regressor on the outcome. We find by simulations that this also holds for logit models, even when the omitted variable leads to severe misspecification of the disturbance. More simulations show that estimates of these effects by logit analysis are also impervious to pure misspecification of the disturbance.
    Keywords: logit model; omitted variables; misspecification
    JEL: C25
    Date: 2005–09–15
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20050084&r=ecm
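A small simulation in the spirit of the abstract, with a hypothetical data-generating process and a hand-rolled Newton-Raphson logit (no claim that this matches the paper's design): omitting an orthogonal regressor attenuates the remaining coefficient, while the average partial effect is nearly unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

def logit_mle(X, y, iters=30):
    """Logit coefficients by Newton-Raphson."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        grad = X.T @ (y - p)
        H = (X * (p * (1.0 - p))[:, None]).T @ X
        b = b + np.linalg.solve(H, grad)
    return b

def ape(X, b, j):
    """Average partial effect of regressor j on P(y = 1)."""
    p = 1.0 / (1.0 + np.exp(-X @ b))
    return (p * (1.0 - p)).mean() * b[j]

n = 50_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                      # orthogonal to x1 by construction
y = (x1 + x2 + rng.logistic(size=n) > 0).astype(float)

X_full = np.column_stack([x1, x2])
b_full = logit_mle(X_full, y)                # both coefficients near 1
b_omit = logit_mle(x1[:, None], y)           # x1 coefficient pulled toward 0

ape_full = ape(X_full, b_full, 0)
ape_omit = ape(x1[:, None], b_omit, 0)       # yet the two APEs barely differ
```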
  13. By: Michael Lechner; Ruth Miquel
    Abstract: This paper approaches the causal analysis of sequences of interventions from a potential outcome perspective. The identifying power of several different assumptions concerning the connection between the dynamic selection process and the outcomes of different sequences is discussed. The assumptions invoke different randomisation assumptions which are compatible with different selection regimes. Parametric forms are not involved. When participation in the sequences is decided every period depending on its success so far, the resulting endogeneity problem destroys nonparametric identification for many parameters of interest. However, some interesting dynamic forms of the average treatment effect are identified. As an empirical example for the application of this approach, we reexamine the effects of training programmes for the unemployed in West Germany.
    JEL: C21 C31
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:usg:dp2005:2005-17&r=ecm
  14. By: MEHMET CANER (UNIVERSITY OF PITTSBURGH)
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–09–12
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpem:0509018&r=ecm
  15. By: Melenberg,Bertrand; Polbennikov,Simon (Tilburg University, Center for Economic Research)
    Abstract: Coherent risk measures have received considerable attention in the recent literature. Coherent regular risk measures form an important subclass: they are empirically identifiable, and, when combined with mean return, they are consistent with second order stochastic dominance. As a consequence, these risk measures are natural candidates in a mean-risk trade-off portfolio choice. In this paper we develop a mean-coherent regular risk spanning test and related performance measure. The test and the performance measure can be implemented by means of a simple semi-parametric instrumental variable regression, where instruments have a direct link with the stochastic discount factor. We illustrate applications of the spanning test and the performance measure for several coherent regular risk measures, including the well known expected shortfall.
    Keywords: portfolio choice;coherent risk;spanning test
    JEL: G11 D81
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:200599&r=ecm
  16. By: Massimo Guidolin; Allan Timmerman
    Abstract: We propose a four-state multivariate regime switching model to capture common latent factors driving short-term spot and forward rates in the US. For this class of models we develop a flexible approach to combine forecasts of future spot rates with forecasts from alternative sources such as time-series models or models capturing macroeconomic information. We find strong empirical evidence that accounting for both regimes in interest rate dynamics and combining forecasts from different models helps improve the out-of-sample forecasting performance for short-term interest rates in the US. Theoretical restrictions from the expectations hypothesis when imposed on the forecasting model are found to help only at long forecasting horizons.
    Keywords: Interest rates ; Forecasting
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2005-059&r=ecm
  17. By: Ali Dib; Mohamed Gammoudi; Kevin Moran
    Abstract: This paper documents the out-of-sample forecasting accuracy of the New Keynesian Model for Canada. We repeatedly estimate our variant of the model on a series of rolling subsamples, forecasting out-of-sample one to eight quarters ahead at each step. We then compare these forecasts to those arising from simple VARs, using econometric tests of forecasting accuracy. Our results show that the forecasting accuracy of the New Keynesian model compares favourably to that of the benchmarks, particularly as the forecasting horizon increases. These results suggest that the model can become a useful forecasting tool for Canadian time series. The principle of parsimony is invoked to explain our findings.
    Keywords: New Keynesian Model, Forecasting accuracy
    JEL: C53 E37
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:lvl:lacicr:0527&r=ecm
  18. By: Julia Campos; Neil R. Ericsson; David F. Hendry
    Abstract: This paper discusses the econometric methodology of general-to-specific modeling, in which the modeler simplifies an initially general model that adequately characterizes the empirical evidence within his or her theoretical framework. Central aspects of this approach include the theory of reduction, dynamic specification, model selection procedures, model selection criteria, model comparison, encompassing, computer automation, and empirical implementation. This paper thus reviews the theory of reduction, summarizes the approach of general-to-specific modeling, and discusses the econometrics of model selection, noting that general-to-specific modeling is the practical embodiment of reduction. This paper then summarizes fifty-seven articles key to the development of general-to-specific modeling.
    Keywords: Econometrics ; Econometric models
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:fip:fedgif:838&r=ecm
  19. By: MEHMET CANER (UNIVERSITY OF PITTSBURGH)
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–09–12
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpem:0509016&r=ecm
  20. By: Michael Greenacre
    Abstract: The generalization of simple (two-variable) correspondence analysis to more than two categorical variables, commonly referred to as multiple correspondence analysis, is neither obvious nor well-defined. We present two alternative ways of generalizing correspondence analysis, one based on the quantification of the variables and intercorrelation relationships, and the other based on the geometric ideas of simple correspondence analysis. We propose a version of multiple correspondence analysis, with adjusted principal inertias, as the method of choice for the geometric definition, since it contains simple correspondence analysis as an exact special case, which is not the situation of the standard generalizations. We also clarify the issue of supplementary point representation and the properties of joint correspondence analysis, a method that visualizes all two-way relationships between the variables. The methodology is illustrated using data on attitudes to science from the International Social Survey Program on Environment in 1993.
    Keywords: Correspondence analysis, eigendecomposition, joint correspondence analysis, multivariate categorical data, questionnaire data, singular value decomposition
    JEL: C19 C88
    Date: 2005–09
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:883&r=ecm
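Simple (two-variable) correspondence analysis — the exact special case the proposed adjusted version is meant to preserve — comes down to an SVD of standardized residuals. A minimal sketch on a made-up contingency table:

```python
import numpy as np

# Hypothetical 3x3 contingency table (rows: groups, columns: responses).
N = np.array([[30., 10., 5.],
              [10., 25., 15.],
              [5., 10., 40.]])

P = N / N.sum()                       # correspondence matrix
r = P.sum(axis=1)                     # row masses
c = P.sum(axis=0)                     # column masses

# Standardized residuals from independence; their SVD drives simple CA.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

principal_inertias = sv**2                   # eigenvalues of the CA solution
total_inertia = principal_inertias.sum()     # = Pearson chi-square / n
row_coords = (U * sv) / np.sqrt(r)[:, None]  # row principal coordinates
```

Multiple and joint correspondence analysis generalize this decomposition to indicator or Burt matrices of several variables; the paper's adjusted-inertia version corrects the inflated principal inertias that arise there.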
  21. By: MEHMET CANER (UNIVERSITY OF PITTSBURGH)
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–09–12
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpem:0509019&r=ecm
  22. By: Joshua Angrist
    Abstract: Quantitative criminology focuses on straightforward causal questions that are ideally addressed with randomized experiments. In practice, however, traditional randomized trials are difficult to implement in the untidy world of criminal justice. Even when randomized trials are implemented, not everyone is treated as intended and some control subjects may obtain experimental services. Treatments may also be more complicated than a simple yes/no coding can capture. This paper argues that the instrumental variables methods (IV) used by economists to solve omitted variables bias problems in observational studies also solve the major statistical problems that arise in imperfect criminological experiments. In general, IV methods estimate the causal effect of treatment on subjects that are induced to comply with a treatment by virtue of the random assignment of intended treatment. The use of IV in criminology is illustrated through a re-analysis of the Minneapolis Domestic Violence Experiment.
    JEL: C21 C31
    Date: 2005–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberte:0314&r=ecm
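The paper's core IV logic can be sketched with a tiny simulated experiment (hypothetical numbers; one-sided non-compliance for simplicity): the Wald/IV estimate rescales the intention-to-treat contrast by the compliance rate.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

z = rng.integers(0, 2, size=n)            # randomized intended treatment
complier = rng.random(size=n) < 0.6       # 60% comply with assignment
d = np.where(complier, z, 0)              # non-compliers go untreated
y = 2.0 * d + rng.normal(size=n)          # true effect of treatment = 2

itt = y[z == 1].mean() - y[z == 0].mean()          # intention-to-treat effect
first_stage = d[z == 1].mean() - d[z == 0].mean()  # compliance rate
late = itt / first_stage                           # Wald / IV estimate
```

With random assignment as the instrument, `late` recovers the effect on compliers even though `d` itself is not randomly assigned — the statistical problem the paper argues IV solves in imperfect criminological experiments.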
  23. By: D. O'Neill (Department of Economics, Maynooth, Ireland); Sweetman, O.; Van de gaer, D.
    Abstract: This paper analyzes the consequences of non-classical measurement error for distributional analysis. We show that for a popular set of distributions negative correlation between the measurement error (u) and the true value (y) may reduce the bias in the estimated distribution at every value of y. For other distributions the impact of non-classical measurement error differs throughout the support of the distribution. We illustrate the practical importance of these results using models of unemployment duration and income.
    Keywords: Distribution functions, Non-classical measurement error
    Date: 2005–02
    URL: http://d.repec.org/n?u=RePEc:may:mayecw:n1490205&r=ecm
  24. By: Giannone, Domenico; Reichlin, Lucrezia; Small, David
    Abstract: This paper formalizes the process of updating the nowcast and forecast on output and inflation as new releases of data become available. The marginal contribution of a particular release for the value of the signal and its precision is evaluated by computing 'news' on the basis of an evolving conditioning information set. The marginal contribution is then split into what is due to timeliness of information and what is due to economic content. We find that the Federal Reserve Bank of Philadelphia surveys have a large marginal impact on the nowcast of both inflation variables and real variables and this effect is larger than that of the Employment Report. When we control for timeliness of the releases, the effect of hard data becomes sizeable. Prices and quantities affect the precision of the estimates of GDP while inflation is only affected by nominal variables and asset prices.
    Keywords: factor model; forecasting; large datasets; monetary policy; news; real time data
    JEL: C33 C53 E52
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:5178&r=ecm
  25. By: Fok, D.; Paap, R.; Horváth, C.; Franses, Ph.H.B.F. (Erasmus Research Institute of Management (ERIM), RSM Erasmus University)
    Abstract: The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format, which allows us to disentangle the immediate effects from the dynamic effects. In a second level of the model, the immediate price elasticities, the cumulative promotional price elasticity and the long-run regular price elasticity are correlated with various brand-specific and category-specific characteristics. The model is applied to seven years of data on weekly sales of 100 different brands in 25 product categories. We find many significant moderating effects on the elasticity of price promotions. Brands in categories that are characterized by high price differentiation and that constitute a lower share of budget are less sensitive to price discounts. Deep price discounts turn out to increase the immediate price sensitivity of customers. We also find significant effects for the cumulative elasticity. The immediate effect of a regular price change is often close to zero. The long-run effect of a regular price decrease usually amounts to an increase in sales. This is especially true in categories characterized by large price dispersion, frequent price promotions and hedonic, non-perishable products.
    Keywords: Sales; Vector Autoregression; Marketing Mix; Promotional and Regular Price; Short and Long-term Effects; Hierarchical Bayes
    Date: 2005–09–08
    URL: http://d.repec.org/n?u=RePEc:dgr:eureri:30007510&r=ecm

This nep-ecm issue is ©2005 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.