
on Econometrics 
By:  Olivier Scaillet (HEC, University of Geneva and FAME); Nikolas Topaloglou (HEC, University of Geneva) 
Abstract:  We consider consistent tests for stochastic dominance efficiency at any order of a given portfolio with respect to all possible portfolios constructed from a set of assets. We propose and justify approaches based on simulation and the block bootstrap to achieve valid inference in a time series setting. The test statistics and the estimators are computed using linear and mixed integer programming methods. The empirical application shows that the Fama and French market portfolio is FSD and SSD efficient, although it is mean-variance inefficient. 
Keywords:  Nonparametric; Stochastic Ordering; Dominance Efficiency; Linear Programming; Mixed Integer Programming; Simulation; Bootstrap 
JEL:  C12 C13 C15 C44 D81 G11 
Date:  2005–07 
URL:  http://d.repec.org/n?u=RePEc:fam:rpseri:rp154&r=ecm 
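The block bootstrap cited in the abstract resamples contiguous blocks of the series so that short-run dependence survives inside each pseudo-sample. A minimal sketch of the moving block variant (the function name and block length are illustrative, not taken from the paper):

```python
import random

def moving_block_bootstrap(series, block_length, rng=None):
    """Resample a time series by concatenating randomly chosen
    contiguous blocks, preserving short-run dependence within blocks."""
    rng = rng or random.Random()
    n = len(series)
    if not 1 <= block_length <= n:
        raise ValueError("block_length must be in [1, len(series)]")
    starts = list(range(n - block_length + 1))  # overlapping block starts
    resample = []
    while len(resample) < n:
        s = rng.choice(starts)
        resample.extend(series[s:s + block_length])
    return resample[:n]  # trim to the original sample size

rng = random.Random(0)
boot = moving_block_bootstrap(list(range(10)), 3, rng)
```

Applying the dominance test statistic to many such resamples would give the bootstrap distribution used for critical values; the choice of block length governs how much dependence is preserved.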
By:  Mencía, Javier; Sentana, Enrique 
Abstract:  We analyse the adequacy of the Generalised Hyperbolic distribution for modelling kurtosis and asymmetries in multivariate conditionally heteroskedastic dynamic regression models. We standardise this distribution, obtain analytical expressions for the log-likelihood score, and explain how to evaluate the information matrix. We also derive tests for the null hypotheses of multivariate normal and Student t innovations, and decompose them into skewness and kurtosis components, from which we obtain more powerful one-sided versions. Finally, we present an empirical application to five NASDAQ sectorial stock returns that indicates that their conditional distribution is asymmetric and leptokurtic, which can be successfully exploited for risk management purposes. 
Keywords:  Inequality Constraints; Kurtosis; Multivariate Normality Test; Skewness; Student t; Supremum Test; Tail Dependence 
JEL:  C32 C52 G11 
Date:  2005–08 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5177&r=ecm 
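The idea of decomposing a normality test into skewness and kurtosis components has a classical univariate analogue in the Jarque-Bera statistic. A sketch of that decomposition (illustrative only; the paper's tests are multivariate and based on the Generalised Hyperbolic family, not this statistic):

```python
import random
import statistics

def skew_kurt_components(x):
    """Split a Jarque-Bera-style normality statistic into its skewness
    and excess-kurtosis components (each ~ chi2(1) under normality)."""
    n = len(x)
    m = statistics.fmean(x)
    s2 = sum((v - m) ** 2 for v in x) / n
    skew = sum((v - m) ** 3 for v in x) / n / s2 ** 1.5
    kurt = sum((v - m) ** 4 for v in x) / n / s2 ** 2
    skew_part = n * skew ** 2 / 6
    kurt_part = n * (kurt - 3.0) ** 2 / 24
    return skew_part, kurt_part, skew_part + kurt_part

rng = random.Random(0)
draws = [rng.gauss(0.0, 1.0) for _ in range(2000)]
skew_part, kurt_part, total = skew_kurt_components(draws)
```

Looking at the components separately shows whether a rejection is driven by asymmetry or by fat tails, which is the intuition behind the more powerful one-sided versions mentioned in the abstract.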
By:  An, Sungbae; Schorfheide, Frank 
Abstract:  This paper reviews Bayesian methods that have been developed in recent years to estimate and evaluate dynamic stochastic general equilibrium (DSGE) models. We consider the estimation of linearized DSGE models, the evaluation of models based on Bayesian model checking, posterior odds comparisons, and comparisons to a reference model, as well as the estimation of second-order accurate solutions of DSGE models. These methods are applied to data generated from a linearized DSGE model, a vector autoregression that violates the cross-coefficient restrictions implied by the linearized DSGE model, and a DSGE model that was solved with a second-order perturbation method. 
Keywords:  Bayesian analysis; DSGE models; model evaluation; vector autoregressions 
JEL:  C11 C32 C51 C52 
Date:  2005–09 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5207&r=ecm 
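Posterior simulation for linearized DSGE models is typically done with a random-walk Metropolis sampler. A minimal sketch of that sampler on a toy one-dimensional target (the quadratic "posterior" stands in for an actual DSGE likelihood; all names and tuning values are illustrative):

```python
import math
import random

def rw_metropolis(log_post, theta0, scale, n_draws, rng):
    """Random-walk Metropolis: propose theta + N(0, scale^2) and
    accept with probability min(1, posterior ratio)."""
    draws, theta, lp, accepted = [], theta0, log_post(theta0), 0
    for _ in range(n_draws):
        prop = theta + rng.gauss(0.0, scale)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
            accepted += 1
        draws.append(theta)
    return draws, accepted / n_draws

# Toy target: N(1, 0.5^2) "posterior" for a single structural parameter.
rng = random.Random(0)
log_post = lambda t: -((t - 1.0) ** 2) / (2 * 0.25)
draws, acc_rate = rw_metropolis(log_post, 0.0, 0.5, 50_000, rng)
post_mean = sum(draws[5_000:]) / len(draws[5_000:])
```

After a burn-in, posterior moments, odds, and model checks can all be computed from the retained draws; the proposal scale is usually tuned so the acceptance rate is moderate.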
By:  Post, G.T. (Erasmus Research Institute of Management (ERIM), RSM Erasmus University) 
Abstract:  This study proposes a test for mean-variance efficiency of a given portfolio under general linear investment restrictions. We introduce a new definition of pricing error or "alpha" and as an efficiency measure we propose to use the largest positive alpha for any vertex of the portfolio possibilities set. To allow for statistical inference, we derive the asymptotic least favorable sampling distribution of this test statistic. Using the new test, we cannot reject market portfolio efficiency relative to beta decile stock portfolios if short-selling is not allowed. 
Keywords:  Mean-variance Efficiency; Portfolio Constraints; Asset Pricing; Portfolio Analysis 
Date:  2005–06–28 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureri:30007066&r=ecm 
By:  G. Forchini 
Abstract:  It is well known that confidence intervals for weakly identified parameters are unbounded with positive probability (e.g. Dufour, Econometrica 65, pp. 1365–1387, and Staiger and Stock, Econometrica 65, pp. 557–586), and that the asymptotic risk of their estimators is unbounded (Pötscher, Econometrica 70, pp. 1035–1065). In this note we extend these "impossibility results" and show that uniformly consistent tests for weakly identified parameters do not exist. We also show that all similar tests of size α < 1/2 concerning possibly unidentified parameters have type II error probability that can be as large as 1 − α. 
Keywords:  Similar tests, consistent tests, weak instruments, identification 
JEL:  C12 C30 
Date:  2005–09 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:200521&r=ecm 
By:  Roy Cerqueti and Mauro Costantini 
Abstract:  The aim of this paper is to provide a new perspective on nonparametric cointegration analysis for integrated processes of the second order. Our analysis focuses on a pair of random matrices related to such integrated processes. These matrices are constructed by introducing weight functions; under asymptotic conditions on the weights, convergence results in distribution are obtained, and a generalized eigenvalue problem is then solved. Differential equations and stochastic calculus theory are used. 
Keywords:  Cointegration, Nonparametric, Differential equations, Asymptotic properties. 
JEL:  C14 C32 C65 
Date:  2005–09–09 
URL:  http://d.repec.org/n?u=RePEc:mol:ecsdps:esdp05027&r=ecm 
By:  MEHMET CANER (UNIVERSITY OF PITTSBURGH) 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–09–12 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0509017&r=ecm 
By:  Candelon,Bertrand; Cubadda,Gianluca (METEOR) 
Abstract:  This paper contributes to the econometric literature on structural breaks by proposing a test for parameter stability in VAR models at a particular frequency ω, where ω ∈ [0, π]. When a dynamic model is affected by a structural break, the new tests allow for detecting which frequencies of the data are responsible for parameter instability. If the model is locally stable at the frequencies of interest, the whole sample can then be exploited despite the presence of a break. Two empirical examples illustrate that local instability can concern only the lower frequencies (decrease in the postwar U.S. productivity) or higher frequencies (change in the U.S. monetary policy in the early 1980s). 
Keywords:  econometrics 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:dgr:umamet:2005022&r=ecm 
By:  Fernando TUSELL PALMER (Facultad de CC.EE. y Empresariales, Universidad del País Vasco) 
Abstract:  Time series in many areas of application, and notably in the social sciences, are frequently incomplete. This is particularly troublesome when complete data are needed, for instance to compute an index as a weighted average of values from a number of time series: whenever a single datum is absent, the index cannot be computed. This paper proposes to deal with such situations by creating multiple completed trajectories, drawing on state space modelling of time series, the simulation smoother, and multiple imputation ideas. 
Keywords:  multiple imputation; time series analysis; Kalman smoother 
JEL:  C22 C43 
Date:  2005–09–23 
URL:  http://d.repec.org/n?u=RePEc:ehu:biltok:200503&r=ecm 
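The paper draws multiple completed trajectories with a simulation smoother; a much simpler single-imputation sketch conveys the underlying state-space idea: run a Kalman filter that skips the measurement update at missing observations and fills each gap with the state prediction. The local level model and all parameter values below are illustrative assumptions, not the paper's specification:

```python
def local_level_filter_impute(y, var_eps, var_eta):
    """Kalman filter for the local level model
        y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t,
    skipping the measurement update when y_t is missing (None) and
    filling each gap with the one-step-ahead state prediction."""
    mu, P = 0.0, 1e6  # diffuse-ish initial state
    filled = []
    for obs in y:
        P = P + var_eta                 # predict
        if obs is None:
            filled.append(mu)           # impute with the prediction
        else:
            K = P / (P + var_eps)       # Kalman gain
            mu = mu + K * (obs - mu)    # update
            P = (1.0 - K) * P
            filled.append(obs)
    return filled

series = [1.0, 1.02, None, 0.98]
filled = local_level_filter_impute(series, 0.1, 0.01)
```

Multiple imputation, as in the paper, would instead draw several random completed trajectories from the smoothing distribution, so that downstream index computations reflect the imputation uncertainty.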
By:  Gautam Tripathi (University of Connecticut) 
Abstract:  Many datasets used by economists and other social scientists are collected by stratified sampling. The sampling scheme used to collect the data induces a probability distribution on the realized observations that differs from the target or underlying distribution for which inference is to be made. If the distinction between target and realized distributions is not taken into account, statistical inference can be severely biased. This paper shows how to do efficient empirical likelihood based semiparametric inference in moment restriction models when data from the target population is collected by three widely used sampling schemes: variable probability sampling, multinomial sampling, and standard stratified sampling. 
Keywords:  Empirical likelihood, Moment conditions, Stratified sampling. 
JEL:  C14 
Date:  2005–09 
URL:  http://d.repec.org/n?u=RePEc:uct:uconnp:200538&r=ecm 
By:  Post, G.T.; Linton, O.; Whang, YJ (Erasmus Research Institute of Management (ERIM), RSM Erasmus University) 
Abstract:  We propose a new test of the stochastic dominance efficiency of a given portfolio over a class of portfolios. We establish its null and alternative asymptotic properties, and define a method for consistently estimating critical values. We present some numerical evidence that our tests work well in moderate sized samples. 
Keywords:  Stochastic Dominance; Portfolio Diversification; Asset Pricing; Portfolio Analysis 
Date:  2005–06–28 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureri:30007067&r=ecm 
By:  J.S. Cramer (University of Amsterdam) 
Abstract:  In binary discrete regression models like logit or probit, the omission of a relevant regressor (even if it is orthogonal) depresses the remaining β coefficients towards zero. For the probit model, Wooldridge (2002) has shown that this bias does not carry over to the effect of the regressor on the outcome. We find by simulations that this also holds for logit models, even when the omitted variable leads to severe misspecification of the disturbance. More simulations show that estimates of these effects by logit analysis are also impervious to pure misspecification of the disturbance. 
Keywords:  logit model; omitted variables; misspecification 
JEL:  C25 
Date:  2005–09–15 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20050084&r=ecm 
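The attenuation of the remaining coefficient when an orthogonal regressor is omitted is easy to reproduce in a simulation of the kind the abstract describes. A sketch with an illustrative data-generating process (true coefficients of 1 on both x and the omitted z; the Newton fitter and all names are this sketch's own, not the paper's code):

```python
import math
import random

def fit_logit(x, y, iters=25):
    """Newton-Raphson MLE for P(y=1|x) = 1/(1 + exp(-(a + b*x)))."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            t = max(-30.0, min(30.0, a + b * xi))  # guard exp overflow
            p = 1.0 / (1.0 + math.exp(-t))
            w = p * (1.0 - p)
            g0 += yi - p                # score, intercept
            g1 += (yi - p) * xi         # score, slope
            h00 += w                    # observed information terms
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        a += (h11 * g0 - h01 * g1) / det
        b += (h00 * g1 - h01 * g0) / det
    return a, b

# True model uses x AND an orthogonal z, both with coefficient 1.
rng = random.Random(3)
n = 20_000
x = [rng.gauss(0.0, 1.0) for _ in range(n)]
z = [rng.gauss(0.0, 1.0) for _ in range(n)]
y = [int(rng.random() < 1.0 / (1.0 + math.exp(-(xi + zi))))
     for xi, zi in zip(x, z)]
a_hat, b_hat = fit_logit(x, y)  # b_hat attenuated below the true 1.0
```

The fitted slope lands noticeably below 1 even though z is independent of x; the paper's point is that the implied effect on the outcome probability is nonetheless roughly preserved.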
By:  Michael Lechner; Ruth Miquel 
Abstract:  This paper approaches the causal analysis of sequences of interventions from a potential outcome perspective. The identifying power of several different assumptions concerning the connection between the dynamic selection process and the outcomes of different sequences is discussed. These assumptions invoke different forms of randomisation that are compatible with different selection regimes. Parametric forms are not involved. When participation in a sequence is decided each period depending on its success so far, the resulting endogeneity problem destroys nonparametric identification for many parameters of interest. However, some interesting dynamic forms of the average treatment effect are identified. As an empirical example for the application of this approach, we re-examine the effects of training programmes for the unemployed in West Germany. 
JEL:  C21 C31 
Date:  2005–08 
URL:  http://d.repec.org/n?u=RePEc:usg:dp2005:200517&r=ecm 
By:  MEHMET CANER (UNIVERSITY OF PITTSBURGH) 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–09–12 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0509018&r=ecm 
By:  Melenberg,Bertrand; Polbennikov,Simon (Tilburg University, Center for Economic Research) 
Abstract:  Coherent risk measures have received considerable attention in the recent literature. Coherent regular risk measures form an important subclass: they are empirically identifiable, and, when combined with mean return, they are consistent with second-order stochastic dominance. As a consequence, these risk measures are natural candidates in a mean-risk trade-off portfolio choice. In this paper we develop a mean-coherent regular risk spanning test and related performance measure. The test and the performance measure can be implemented by means of a simple semiparametric instrumental variable regression, where instruments have a direct link with the stochastic discount factor. We illustrate applications of the spanning test and the performance measure for several coherent regular risk measures, including the well-known expected shortfall. 
Keywords:  portfolio choice;coherent risk;spanning test 
JEL:  G11 D81 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200599&r=ecm 
By:  Massimo Guidolin; Allan Timmerman 
Abstract:  We propose a four-state multivariate regime switching model to capture common latent factors driving short-term spot and forward rates in the US. For this class of models we develop a flexible approach to combine forecasts of future spot rates with forecasts from alternative sources such as time-series models or models capturing macroeconomic information. We find strong empirical evidence that accounting for both regimes in interest rate dynamics and combining forecasts from different models helps improve the out-of-sample forecasting performance for short-term interest rates in the US. Theoretical restrictions from the expectations hypothesis, when imposed on the forecasting model, are found to help only at long forecasting horizons. 
Keywords:  Interest rates ; Forecasting 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2005059&r=ecm 
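Combining forecasts from different models, as the abstract describes, is often done by weighting each model by its historical accuracy. A minimal sketch of inverse-MSE weighting (one common scheme; the paper does not necessarily use this rule, and all names here are illustrative):

```python
def inverse_mse_weights(errors_by_model):
    """Weight each model by the inverse of its historical mean squared
    forecast error, normalised to sum to one."""
    inv = [len(e) / sum(v * v for v in e) for e in errors_by_model]
    total = sum(inv)
    return [w / total for w in inv]

def combine(forecasts, weights):
    """Weighted-average combined forecast."""
    return sum(f * w for f, w in zip(forecasts, weights))

# Model 1 had MSE 1, model 2 had MSE 4 -> weights 0.8 and 0.2.
weights = inverse_mse_weights([[1.0, -1.0, 1.0], [2.0, -2.0, 2.0]])
combined = combine([10.0, 20.0], weights)
```

The combined forecast leans towards the historically more accurate model while still borrowing information from the other, which is why combinations often beat any single model out of sample.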
By:  Ali Dib; Mohamed Gammoudi; Kevin Moran 
Abstract:  This paper documents the out-of-sample forecasting accuracy of the New Keynesian Model for Canada. We repeatedly estimate our variant of the model on a series of rolling subsamples, forecasting out-of-sample one to eight quarters ahead at each step. We then compare these forecasts to those arising from simple VARs, using econometric tests of forecasting accuracy. Our results show that the forecasting accuracy of the New Keynesian model compares favourably to that of the benchmarks, particularly as the forecasting horizon increases. These results suggest that the model can become a useful forecasting tool for Canadian time series. The principle of parsimony is invoked to explain our findings. 
Keywords:  New Keynesian Model, Forecasting accuracy 
JEL:  C53 E37 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:lvl:lacicr:0527&r=ecm 
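A standard econometric test of forecasting accuracy of the kind the abstract mentions is the Diebold-Mariano test, which asks whether the mean difference in squared forecast errors is significantly different from zero. A sketch (the abstract does not say which test the authors use; this version omits the autocorrelation correction needed for multi-step forecasts):

```python
import math

def diebold_mariano(e1, e2):
    """Diebold-Mariano statistic for equal squared-error accuracy of
    two forecast-error series; positive values favour model 2.
    No long-run variance correction in this sketch."""
    d = [a * a - b * b for a, b in zip(e1, e2)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Model 2's errors are uniformly about half of model 1's.
e1 = [2.0 + 0.1 * (i % 3) for i in range(30)]
e2 = [1.0 + 0.1 * (i % 3) for i in range(30)]
dm_stat = diebold_mariano(e1, e2)
```

Under the null of equal accuracy the statistic is approximately standard normal, so values far from zero, as here, indicate one model reliably outperforms the other.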
By:  Julia Campos; Neil R. Ericsson; David F. Hendry 
Abstract:  This paper discusses the econometric methodology of general-to-specific modeling, in which the modeler simplifies an initially general model that adequately characterizes the empirical evidence within his or her theoretical framework. Central aspects of this approach include the theory of reduction, dynamic specification, model selection procedures, model selection criteria, model comparison, encompassing, computer automation, and empirical implementation. This paper thus reviews the theory of reduction, summarizes the approach of general-to-specific modeling, and discusses the econometrics of model selection, noting that general-to-specific modeling is the practical embodiment of reduction. This paper then summarizes fifty-seven articles key to the development of general-to-specific modeling. 
Keywords:  Econometrics ; Econometric models 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:838&r=ecm 
By:  MEHMET CANER (UNIVERSITY OF PITTSBURGH) 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–09–12 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0509016&r=ecm 
By:  Michael Greenacre 
Abstract:  The generalization of simple (two-variable) correspondence analysis to more than two categorical variables, commonly referred to as multiple correspondence analysis, is neither obvious nor well-defined. We present two alternative ways of generalizing correspondence analysis, one based on the quantification of the variables and intercorrelation relationships, and the other based on the geometric ideas of simple correspondence analysis. We propose a version of multiple correspondence analysis, with adjusted principal inertias, as the method of choice for the geometric definition, since it contains simple correspondence analysis as an exact special case, which is not the case for the standard generalizations. We also clarify the issue of supplementary point representation and the properties of joint correspondence analysis, a method that visualizes all two-way relationships between the variables. The methodology is illustrated using data on attitudes to science from the International Social Survey Program on Environment in 1993. 
Keywords:  Correspondence analysis, eigendecomposition, joint correspondence analysis, multivariate categorical data, questionnaire data, singular value decomposition 
JEL:  C19 C88 
Date:  2005–09 
URL:  http://d.repec.org/n?u=RePEc:upf:upfgen:883&r=ecm 
By:  MEHMET CANER (UNIVERSITY OF PITTSBURGH) 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–09–12 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0509019&r=ecm 
By:  Joshua Angrist 
Abstract:  Quantitative criminology focuses on straightforward causal questions that are ideally addressed with randomized experiments. In practice, however, traditional randomized trials are difficult to implement in the untidy world of criminal justice. Even when randomized trials are implemented, not everyone is treated as intended and some control subjects may obtain experimental services. Treatments may also be more complicated than a simple yes/no coding can capture. This paper argues that the instrumental variables methods (IV) used by economists to solve omitted variables bias problems in observational studies also solve the major statistical problems that arise in imperfect criminological experiments. In general, IV methods estimate the causal effect of treatment on subjects that are induced to comply with a treatment by virtue of the random assignment of intended treatment. The use of IV in criminology is illustrated through a reanalysis of the Minneapolis Domestic Violence Experiment. 
JEL:  C21 C31 
Date:  2005–09 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberte:0314&r=ecm 
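The IV logic described in the abstract reduces, with a single binary instrument and binary treatment, to the Wald estimator: the effect of random assignment on the outcome divided by its effect on treatment actually received. A sketch on simulated data with imperfect compliance (the design and numbers are illustrative, not the Minneapolis experiment):

```python
import random

def wald_iv(z, d, y):
    """Wald/IV estimator: reduced-form effect of random assignment z
    on outcome y, divided by its effect on treatment received d."""
    mean = lambda vals: sum(vals) / len(vals)
    y1 = mean([yi for zi, yi in zip(z, y) if zi == 1])
    y0 = mean([yi for zi, yi in zip(z, y) if zi == 0])
    d1 = mean([di for zi, di in zip(z, d) if zi == 1])
    d0 = mean([di for zi, di in zip(z, d) if zi == 0])
    return (y1 - y0) / (d1 - d0)

# Imperfect compliance: only 70% of those assigned actually take the
# treatment; the true effect of treatment is 2.0.
rng = random.Random(7)
n = 100_000
z = [int(rng.random() < 0.5) for _ in range(n)]
complier = [int(rng.random() < 0.7) for _ in range(n)]
d = [zi * ci for zi, ci in zip(z, complier)]
y = [2.0 * di + rng.gauss(0.0, 1.0) for di in d]
late = wald_iv(z, d, y)  # estimates the effect on compliers
```

A naive comparison of treated versus untreated, or an intention-to-treat contrast alone, would understate or misattribute the effect; scaling by the compliance rate recovers the causal effect on those induced to comply, which is the paper's central point.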
By:  D. O'Neill (Department of Economics, Maynooth, Ireland); Sweetman, O.; Van de gaer, D. 
Abstract:  This paper analyzes the consequences of non-classical measurement error for distributional analysis. We show that for a popular set of distributions negative correlation between the measurement error (u) and the true value (y) may reduce the bias in the estimated distribution at every value of y. For other distributions the impact of non-classical measurement error differs throughout the support of the distribution. We illustrate the practical importance of these results using models of unemployment duration and income. 
Keywords:  Distribution functions; Non-classical measurement error 
Date:  2005–02 
URL:  http://d.repec.org/n?u=RePEc:may:mayecw:n1490205&r=ecm 
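One stylized way to see the abstract's point: under classical error the observed distribution is strictly more dispersed than the truth, while a negatively correlated ("mean-reverting") error can leave the observed distribution close to the true one. A hypothetical simulation, with parameters chosen purely for illustration:

```python
import random

def ecdf_gap(sample_a, sample_b, grid):
    """Largest gap between the empirical CDFs of two samples on grid."""
    F = lambda xs, t: sum(v <= t for v in xs) / len(xs)
    return max(abs(F(sample_a, t) - F(sample_b, t)) for t in grid)

rng = random.Random(1)
y = [rng.gauss(0.0, 1.0) for _ in range(20_000)]   # true values
# Classical error: u independent of y inflates the spread.
classical = [v + rng.gauss(0.0, 0.8) for v in y]
# Mean-reverting error: observed = 0.6*y + e, i.e. u = -0.4*y + e,
# so cov(u, y) < 0; here the observed variance matches the true one.
reverting = [0.6 * v + rng.gauss(0.0, 0.8) for v in y]

grid = [-3.0 + 0.25 * k for k in range(25)]
gap_classical = ecdf_gap(y, classical, grid)
gap_reverting = ecdf_gap(y, reverting, grid)
```

In this Gaussian case the negatively correlated error leaves the observed distribution essentially unbiased at every value of y, while the classical error distorts it throughout the support, mirroring the contrast drawn in the abstract.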
By:  Giannone, Domenico; Reichlin, Lucrezia; Small, David 
Abstract:  This paper formalizes the process of updating the nowcast and forecast on output and inflation as new releases of data become available. The marginal contribution of a particular release for the value of the signal and its precision is evaluated by computing 'news' on the basis of an evolving conditioning information set. The marginal contribution is then split into what is due to timeliness of information and what is due to economic content. We find that the Federal Reserve Bank of Philadelphia surveys have a large marginal impact on the nowcast of both inflation variables and real variables and this effect is larger than that of the Employment Report. When we control for timeliness of the releases, the effect of hard data becomes sizeable. Prices and quantities affect the precision of the estimates of GDP while inflation is only affected by nominal variables and asset prices. 
Keywords:  factor model; forecasting; large datasets; monetary policy; news; real time data 
JEL:  C33 C53 E52 
Date:  2005–08 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5178&r=ecm 
By:  Fok, D.; Paap, R.; Horváth, C.; Franses, Ph.H.B.F. (Erasmus Research Institute of Management (ERIM), RSM Erasmus University) 
Abstract:  The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format, which allows us to disentangle the immediate effects from the dynamic effects. In a second level of the model, the immediate price elasticities, the cumulative promotional price elasticity and the long-run regular price elasticity are correlated with various brand-specific and category-specific characteristics. The model is applied to seven years of data on weekly sales of 100 different brands in 25 product categories. We find many significant moderating effects on the elasticity of price promotions. Brands in categories that are characterized by high price differentiation and that constitute a lower share of budget are less sensitive to price discounts. Deep price discounts turn out to increase the immediate price sensitivity of customers. We also find significant effects for the cumulative elasticity. The immediate effect of a regular price change is often close to zero. The long-run effect of such a price decrease usually amounts to an increase in sales. This is especially true in categories characterized by a large price dispersion, frequent price promotions and hedonic, non-perishable products. 
Keywords:  Sales; Vector Autoregression; Marketing Mix; Promotional and Regular Price; Short- and Long-term Effects; Hierarchical Bayes 
Date:  2005–09–08 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureri:30007510&r=ecm 