
on Econometrics 
By:  Mueller, Ulrich 
Abstract:  The paper studies the asymptotic efficiency and robustness of hypothesis tests when models of interest are defined in terms of a weak convergence property. The null and local alternatives induce different limiting distributions for a random element, and a test is considered robust if it controls asymptotic size for all data generating processes for which the random element has the null limiting distribution. Under weak regularity conditions, asymptotically robust and efficient tests are then simply given by efficient tests of the limiting problem (that is, with the limiting random element assumed observed) evaluated at sample analogues. These tests typically coincide with suitably robustified versions of optimal tests in canonical parametric versions of the model. This paper thus establishes an alternative and broader sense of asymptotic efficiency for many previously derived tests in econometrics, such as tests for unit roots, parameter stability tests and tests about regression coefficients under weak instruments, and it provides a concrete limit on the potential for more powerful tests in less parametric setups. 
Keywords:  hypothesis tests; optimality; robustness; weak convergence 
JEL:  C14 C12 
Date:  2008–03 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:7741&r=ecm 
By:  Russell Davidson (McGill University); James G. MacKinnon (Queen's University) 
Abstract:  We study several tests for the coefficient of the single right-hand-side endogenous variable in a linear equation estimated by instrumental variables. We show that writing all the test statistics (Student's t, Anderson-Rubin, the LM statistic of Kleibergen and Moreira (K), and likelihood ratio (LR)) as functions of six random quantities leads to a number of interesting results about the properties of the tests under weak-instrument asymptotics. We then propose several new procedures for bootstrapping the three non-exact test statistics and also a new conditional bootstrap version of the LR test. These use more efficient estimates of the parameters of the reduced-form equation than existing procedures. When the best of these new procedures is used, both the K and conditional bootstrap LR tests have excellent performance under the null. However, power considerations suggest that the latter is probably the method of choice. 
Keywords:  bootstrap test, weak instruments, Anderson-Rubin test, conditional LR test, Wald test, K test 
JEL:  C10 C12 C15 C30 
Date:  2008–03 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1157&r=ecm 
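As a rough illustration of the Anderson-Rubin statistic discussed in the abstract above, the following sketch computes AR(β₀) for the minimal case of one instrument and no included exogenous regressors, on a simulated weak-instrument DGP. All parameter values are made up for illustration; this is not the paper's bootstrap procedure.

```python
import random, math

random.seed(0)

def ar_stat(y, Y, z, beta0):
    # Anderson-Rubin statistic for H0: beta = beta0, one instrument z,
    # no included exogenous regressors (deliberately minimal sketch).
    u = [yi - beta0 * Yi for yi, Yi in zip(y, Y)]
    n = len(u)
    zz = sum(zi * zi for zi in z)
    zu = sum(zi * ui for zi, ui in zip(z, u))
    fitted_ss = zu * zu / zz                            # u'P_Z u with one instrument
    resid_ss = sum(ui * ui for ui in u) - fitted_ss     # u'M_Z u
    return (n - 1) * fitted_ss / resid_ss               # ~ F(1, n-1) under H0

# simulate a weak-instrument DGP: z barely shifts the endogenous regressor
n, beta = 200, 1.0
z = [random.gauss(0, 1) for _ in range(n)]
v = [random.gauss(0, 1) for _ in range(n)]
e = [0.8 * vi + math.sqrt(1 - 0.8 ** 2) * random.gauss(0, 1) for vi in v]
Y = [0.1 * zi + vi for zi, vi in zip(z, v)]             # weak first stage
y = [beta * Yi + ei for Yi, ei in zip(Y, e)]
ar_true = ar_stat(y, Y, z, beta0=beta)                  # evaluated at the true beta
```

Because the statistic conditions on β₀ rather than on an estimate, it remains valid however weak the instrument is, which is the property the paper exploits.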
By:  Walter Sosa Escudero (Department of Economics, Universidad de San Andrés); Anil K. Bera (Department of Economics, University of Illinois) 
Abstract:  This paper derives unbalanced versions of the test statistics for first-order serial correlation and random individual effects summarized in Sosa Escudero and Bera (2001), and updates their xttest1 routine. The derived test statistics should be useful for applied researchers faced with the increasing availability of panel information where not every individual or country is observed for the full time span. Also, as was the case for the previously available tests, the test statistics proposed here are based on the OLS residuals, and hence are computationally very simple. 
Keywords:  error components model, unbalanced panel data, testing, misspecification. 
JEL:  C12 C23 C52 
Date:  2008–03 
URL:  http://d.repec.org/n?u=RePEc:dls:wpaper:0065&r=ecm 
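For context on the class of residual-based statistics the abstract above refers to, here is the balanced-panel Breusch-Pagan LM test for random individual effects, computed from OLS residuals. The paper's contribution is the unbalanced extension, which this sketch deliberately omits; the toy residuals are fabricated.

```python
import random

def bp_lm(resid):
    # resid[i][t] = OLS residual of unit i in period t (balanced panel).
    # Breusch-Pagan LM statistic for random individual effects,
    # asymptotically chi-square(1) under the null of no effects.
    N, T = len(resid), len(resid[0])
    num = sum(sum(e_i) ** 2 for e_i in resid)
    den = sum(e * e for e_i in resid for e in e_i)
    A = num / den - 1.0
    return (N * T) / (2.0 * (T - 1)) * A * A

# toy residuals with a visible unit-specific effect baked in
random.seed(1)
resid = [[random.gauss(ai, 1.0) for _ in range(5)]
         for ai in (random.gauss(0, 2) for _ in range(30))]
lm = bp_lm(resid)
```

The statistic only needs the pooled OLS residuals, which is what makes this family of tests "computationally very simple" as the abstract notes.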
By:  Massimiliano Caporin (Università di Padova); Michael McAleer (University of Western Australia) 
Abstract:  DAMGARCH extends the VARMA-GARCH model of Ling and McAleer (2003) by introducing multiple thresholds and a time-dependent structure in the asymmetry of the conditional variances. DAMGARCH models the shocks affecting the conditional variances on the basis of an underlying multivariate distribution. It is possible to model explicitly asset-specific shocks and common innovations by partitioning the multivariate density support. This paper presents the model structure, describes the implementation issues, and provides the conditions for the existence of a unique stationary solution, and for consistency and asymptotic normality of the quasi-maximum likelihood estimators. The paper also provides analytical expressions for the news impact surface implied by DAMGARCH and an empirical example. 
Keywords:  multivariate asymmetry, conditional variance, stationarity conditions, asymptotic theory, multivariate news impact curve 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:pad:wpaper:0064&r=ecm 
By:  Adrian R. Pagan (School of Economics, The University of New South Wales); M. Hashem Pesaran (Faculty of Economics, University of Cambridge) 
Abstract:  This paper considers the implications of the permanent/transitory decomposition of shocks for identification of structural models in the general case where the model might contain more than one permanent structural shock. It provides a simple and intuitive generalization of the influential work of Blanchard and Quah (1989), and shows that structural equations with known permanent shocks cannot contain error correction terms, thereby freeing up the latter to be used as instruments in estimating their parameters. The approach is illustrated by a re-examination of the identification schemes used by Wickens and Motto (2001), Shapiro and Watson (1988), King, Plosser, Stock, Watson (1991), Gali (1992, 1999) and Fisher (2006). 
Keywords:  Permanent shocks; structural identification; error correction models; IS-LM models 
JEL:  C30 C32 E10 
Date:  2008–03 
URL:  http://d.repec.org/n?u=RePEc:swe:wpaper:200804&r=ecm 
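To fix ideas on the Blanchard-Quah (1989) identification that the paper above generalizes, this sketch recovers the structural impact matrix of a bivariate VAR(1) by requiring the long-run impact matrix to be lower triangular (so the second shock is transitory for the first variable). The VAR coefficients and covariance are illustrative numbers, and the 2x2 linear algebra is written out by hand to keep the sketch dependency-free.

```python
import math

# 2x2 matrix helpers
def mat2(a, b, c, d): return [[a, b], [c, d]]
def mm(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def tr2(X): return [[X[0][0], X[1][0]], [X[0][1], X[1][1]]]
def inv2(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det], [-X[1][0] / det, X[0][0] / det]]
def chol2(S):
    l11 = math.sqrt(S[0][0]); l21 = S[1][0] / l11
    return [[l11, 0.0], [l21, math.sqrt(S[1][1] - l21 * l21)]]

# reduced-form VAR(1): y_t = A y_{t-1} + u_t, Var(u_t) = Sigma (made-up values)
A = mat2(0.5, 0.1, 0.2, 0.3)
Sigma = mat2(1.0, 0.3, 0.3, 0.8)

Psi = inv2(mat2(1 - A[0][0], -A[0][1], -A[1][0], 1 - A[1][1]))  # (I - A)^{-1}
# Blanchard-Quah: find B with B B' = Sigma and Psi B lower triangular
B = mm(inv2(Psi), chol2(mm(mm(Psi, Sigma), tr2(Psi))))
longrun = mm(Psi, B)          # long-run impact of the structural shocks
BBt = mm(B, tr2(B))           # should reproduce Sigma
```

The (1,2) entry of `longrun` is zero by construction, which is exactly the "only the first shock is permanent for the first variable" restriction.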
By:  Gorodnichenko, Yuriy (University of California, Berkeley) 
Abstract:  At the firm level, revenue and costs are well measured but prices and quantities are not. This paper shows that because of these data limitations, estimates of returns to scale at the firm level are for the revenue function, not the production function. Given this observation, the paper argues that, under weak assumptions, micro-level estimates of returns to scale are often inconsistent with profit maximization or imply implausibly large profits. The puzzle arises because popular estimators ignore heterogeneity and endogeneity in factor/product prices, assume perfect elasticity of factor supply curves, or neglect the restrictions imposed by profit maximization (cost minimization), so that estimators are inconsistent or poorly identified. The paper argues that simple structural estimators can address these problems. Specifically, the paper proposes a full-information estimator that models the cost and the revenue functions simultaneously and accounts for unobserved heterogeneity in productivity and factor prices symmetrically. The strength of the proposed estimator is illustrated by Monte Carlo simulations and an empirical application. Finally, the paper discusses a number of implications of estimating revenue functions rather than production functions and demonstrates that the profit share in revenue is a robust nonparametric economic diagnostic for estimates of returns to scale. 
Keywords:  production function, identification, returns to scale, covariance structures 
JEL:  C23 C33 D24 
Date:  2008–02 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp3368&r=ecm 
By:  Gilles Dufrenot (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales - CNRS : UMR6579); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I); Anne Peguin-Feissolle (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales - CNRS : UMR6579) 
Abstract:  This paper presents a 2-regime SETAR model with a different long-memory process in each regime. We briefly present the memory properties of this model and propose an estimation method. Such a process is applied to the absolute and squared returns of five stock indices. A comparison with simple FARIMA models is made using forecastability criteria. Our empirical results suggest that our model offers a competitive alternative framework for describing the persistent dynamics of the returns. 
Keywords:  SETAR; long memory; stock indices; forecasting 
Date:  2008–03–06 
URL:  http://d.repec.org/n?u=RePEc:hal:papers:halshs00185369_v1&r=ecm 
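To show the threshold mechanism behind the model above, this sketch simulates a 2-regime SETAR path in which the active regime depends on the lagged value. Caveat: the paper places a long-memory (FARIMA) process inside each regime; here a short-memory AR(1) stands in for each regime purely for illustration, with arbitrary parameter values.

```python
import random

random.seed(42)

def setar(n, phi_low=0.3, phi_high=0.9, threshold=0.0):
    # 2-regime SETAR sketch: the AR coefficient switches according to
    # whether the lagged value is below or above the threshold.
    # (The paper's within-regime dynamics are long-memory, not AR(1).)
    x = [0.0]
    for _ in range(n - 1):
        phi = phi_low if x[-1] <= threshold else phi_high
        x.append(phi * x[-1] + random.gauss(0, 1))
    return x

path = setar(1000)
```

Runs above the threshold inherit the high-persistence coefficient, which mimics the persistent episodes that motivate applying such models to absolute and squared returns.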
By:  Riccardo Borgoni; Peter W. F. Smith; Ann M. Berrington 
Abstract:  Simulating the outcome of an intervention is a central problem in many fields as this allows decision-makers to quantify the effect of any given strategy and, hence, to evaluate different schemes of actions. Simulation is particularly relevant in very large systems where the statistical model involves many variables that, possibly, interact with each other. In this case one usually has a large number of parameters whose interpretation becomes extremely difficult. Furthermore, in a real system, although one may have a unique target variable, there may be a number of variables which might, and often should, be logically considered predictors of the target outcome and, at the same time, responses of other variables of the system. An intervention taking place on a given variable, therefore, may affect the outcome either directly or indirectly through the way in which it affects other variables within the system. Graphical chain models are particularly helpful in depicting all of the paths through which an intervention may affect the final outcome. Furthermore, they identify all of the relevant conditional distributions and therefore they are particularly useful in driving the simulation process. Focussing on binary variables, we propose a method to simulate the effect of an intervention. Our approach, however, can be easily extended to continuous and mixed response variables. We apply the proposed methodology to assess the effect that a policy intervention may have on poorer health in early adulthood using prospective data provided by the 1970 British Birth Cohort Study (BCS70). 
Keywords:  chain graph, conditional approach, Gibbs Sampling, Simulation of interventions, age at motherhood, mental health 
JEL:  C15 I18 
Date:  2008–02 
URL:  http://d.repec.org/n?u=RePEc:mis:wpaper:20080301&r=ecm 
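The simulation idea in the abstract above can be illustrated on a toy three-node binary chain: a policy target X, a mediator M, and an outcome Y. All conditional probabilities are invented for illustration, and because the chain is so simple, plain forward sampling replaces the Gibbs sampling the paper uses for its larger system.

```python
import random

random.seed(3)

# Toy chain sketch: X (policy target) -> M (mediator) -> Y (outcome),
# with made-up conditional probabilities.
p_m_given_x = {0: 0.3, 1: 0.6}   # P(M=1 | X=x)
p_y_given_m = {0: 0.2, 1: 0.5}   # P(Y=1 | M=m)

def simulate(x_fixed, n=20000):
    # simulate the intervention do(X = x_fixed) by forward sampling the chain
    hits = 0
    for _ in range(n):
        m = 1 if random.random() < p_m_given_x[x_fixed] else 0
        y = 1 if random.random() < p_y_given_m[m] else 0
        hits += y
    return hits / n

effect = simulate(1) - simulate(0)   # intervention effect on P(Y=1)
```

The point of the graphical-model machinery is exactly this: once the relevant conditional distributions are identified, the effect of intervening on X propagates through M to Y by simulation rather than by reading off a single regression coefficient.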
By:  Juselius, Mikael (University of Helsinki, Department of Economics) 
Abstract:  This paper derives the cointegration spaces that are implied by linear rational expectations models when data are I(1). The cointegration implications are easy to calculate and can be readily applied to test if the models are consistent with the longrun properties of the data. However, the restrictions on cointegration only form a subset of all the crossequation restrictions that the models place on data. The approach is particularly useful in separating potentially data-consistent models from the remaining models within a large model family. Moreover, the approach provides useful information on the empirical shock structure of the data. 
Keywords:  rational expectations; cointegration 
JEL:  C52 
Date:  2008–03–11 
URL:  http://d.repec.org/n?u=RePEc:hhs:bofrdp:2008_006&r=ecm 
By:  Terje Skjerpen (Statistics Norway) 
Abstract:  Estimation of standard errors of Engel elasticities within the framework of a linear structural model formulated on two-wave panel data is considered. The complete demand system is characterized by measurement errors in total expenditure and by latent preference variation. The estimation of the parameters as well as the standard errors of the estimates is based on the assumption that the variables are normally distributed. Considering a concrete case, it is demonstrated that normality does not hold as a maintained assumption. In light of this, standard errors are estimated by means of bootstrapping. However, one obtains rather similar estimates of the standard errors of the Engel elasticities whether one sticks to classical normal inference or performs nonparametric bootstrapping. 
Keywords:  Engel elasticities; standard errors; classical normal theory; bootstrapping 
JEL:  C13 C14 C15 C33 D12 
Date:  2008–03 
URL:  http://d.repec.org/n?u=RePEc:ssb:dispap:532&r=ecm 
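For readers unfamiliar with the nonparametric bootstrap the abstract above compares against classical normal inference, here is the generic recipe applied to an OLS slope (a stand-in for an Engel elasticity): resample observation pairs with replacement, re-estimate, and take the standard deviation of the replications. The DGP and slope value are fabricated for illustration.

```python
import random, math

random.seed(7)

def ols_slope(x, y):
    # simple OLS slope of y on x (with intercept)
    n = len(x); mx = sum(x) / n; my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# toy data: log budget share on log total expenditure, true slope 0.5
x = [random.gauss(0, 1) for _ in range(200)]
y = [0.5 * xi + random.gauss(0, 0.5) for xi in x]
slope = ols_slope(x, y)

# nonparametric bootstrap: resample (x_i, y_i) pairs with replacement
B = 500
boot = []
for _ in range(B):
    idx = [random.randrange(200) for _ in range(200)]
    boot.append(ols_slope([x[i] for i in idx], [y[i] for i in idx]))
m = sum(boot) / B
se_boot = math.sqrt(sum((b - m) ** 2 for b in boot) / (B - 1))
```

The paper's finding is that, in its application, this bootstrap standard error and the classical normal-theory one turn out rather similar despite the normality assumption failing.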
By:  Buehn, Andreas (Dresden University of Technology); Schneider, Friedrich (University of Linz) 
Abstract:  The analysis of economic loss attributed to the shadow economy has attracted much attention in recent years by both academics and policy makers. Often, multiple indicators multiple causes (MIMIC) models are applied to time series data estimating the size and development of the shadow economy for a particular country. This type of model derives information about the relationship between cause and indicator variables and a latent variable, here the shadow economy, from covariance structures. As most macroeconomic variables do not satisfy stationarity, long-run information is lost when employing first differences. Arguably, this shortcoming is rooted in the lack of an appropriate MIMIC model which considers cointegration among variables. This paper develops a MIMIC model which estimates the cointegrating equilibrium relationship and the short-run error-correction dynamics, thereby retaining information for the long run. Using France as our example, we demonstrate that this approach allows researchers to obtain more accurate estimates about the size and development of the shadow economy. 
Keywords:  shadow economy, tax burden, regulation, unemployment, cointegration, error correction models, MIMIC models 
JEL:  O17 O5 D78 H2 H11 H26 
Date:  2008–01 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp3306&r=ecm 
By:  Neil Shephard; Torben G. Andersen 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:sbs:wpsefe:2008fe23&r=ecm 
By:  Rennen, G. (Tilburg University, Center for Economic Research) 
Abstract:  When building a Kriging model, the general intuition is that using more data will always result in a better model. However, we show that when we have a large non-uniform dataset, using a uniform subset can have several advantages. Reducing the time necessary to fit the model, avoiding numerical inaccuracies and improving the robustness with respect to errors in the output data are some aspects which can be improved by using a uniform subset. We furthermore describe several new and existing methods for selecting a uniform subset. These methods are tested and compared on several artificial datasets and one real-life dataset. The comparison shows how the selected subsets affect different aspects of the resulting Kriging model. As none of the subset selection methods performs best on all criteria, the best method to choose depends on how the different aspects are valued. The comparison made in this paper can help the user make a good choice. 
Keywords:  design of computer experiments; dispersion problem; Kriging model; large non-uniform datasets; radial basis functions; robustness; space filling; subset selection; uniformity 
JEL:  C0 C90 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200826&r=ecm 
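One common way to pick a roughly uniform (space-filling) subset from a non-uniform cloud, of the general kind the paper above compares, is a greedy maximin heuristic: repeatedly add the candidate farthest from the points already selected. This is an illustrative heuristic, not necessarily one of the paper's specific methods.

```python
import random, math

random.seed(5)

def maximin_subset(points, k):
    # Greedy maximin selection: at each step add the candidate whose distance
    # to the already-selected set is largest, spreading points out uniformly.
    chosen = [points[0]]
    rest = list(points[1:])
    while len(chosen) < k:
        best = max(rest, key=lambda p: min(math.dist(p, c) for c in chosen))
        chosen.append(best)
        rest.remove(best)
    return chosen

# a deliberately non-uniform cloud: dense cluster plus sparse background
cloud = [(random.gauss(0, 0.05), random.gauss(0, 0.05)) for _ in range(150)]
cloud += [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
subset = maximin_subset(cloud, 20)
```

Fitting the Kriging model on such a subset avoids the near-duplicate points in the dense cluster, which is one source of the numerical inaccuracies the abstract mentions.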
By:  Dominique Guegan (CES  Centre d'économie de la Sorbonne  CNRS : UMR8174  Université PanthéonSorbonne  Paris I); Cyril Caillault (FORTIS Investments  Fortis investments) 
Abstract:  Using nonparametric (copula) and parametric models, we show that the bivariate distribution of an Asian portfolio is not stable over the whole period under study. We suggest several dynamic models to compute two market risk measures, the Value at Risk and the Expected Shortfall: the RiskMetrics methodology, multivariate GARCH models, multivariate Markov-Switching models, the empirical histogram and dynamic copulas. We discuss the choice of the best method with respect to the policy management of bank supervisors. The copula approach seems to be a good compromise between all these models. It permits taking financial crises into account and obtaining a low capital requirement during the most important crises. 
Keywords:  Value at Risk; Expected Shortfall; Copula; RiskMetrics; Risk management; GARCH models; Switching models 
Date:  2008–03–06 
URL:  http://d.repec.org/n?u=RePEc:hal:papers:halshs00185374_v1&r=ecm 
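The two risk measures compared in the paper above are easy to state in their empirical-histogram form, which is one of the benchmarks the authors include: VaR is a quantile of the loss distribution and Expected Shortfall is the mean loss beyond it. The simulated returns below are illustrative; the dynamic copula and GARCH variants are beyond this sketch.

```python
import random

def var_es(returns, alpha=0.99):
    # Empirical (historical) Value at Risk and Expected Shortfall.
    losses = sorted(-r for r in returns)   # losses in ascending order
    k = int(alpha * len(losses))
    var = losses[k]                        # alpha-quantile of the loss
    tail = losses[k:]                      # losses at or beyond VaR
    es = sum(tail) / len(tail)             # mean loss in the tail
    return var, es

random.seed(9)
rets = [random.gauss(0.0005, 0.01) for _ in range(2000)]
var99, es99 = var_es(rets)
```

By construction ES is at least as large as VaR, which is why supervisors treat it as the more conservative of the two measures.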
By:  Guillaume Horny 
Abstract:  This article surveys the literature in which the time spent in a state is a random variable drawn from a continuous mixture of distributions. This literature grew out of the estimation of hazard functions and of methods for approximating integrals. We first present the mixed proportional hazards model and its properties. The implications of the main identification results are then discussed. We then present parametric, semiparametric and Bayesian estimation methods, together with the corresponding optimization methods. 
Keywords:  duration models, proportional hazards, penalized likelihood, partial likelihood, Bayesian methods 
JEL:  C41 C51 C61 C11 C14 
Date:  2008–02–01 
URL:  http://d.repec.org/n?u=RePEc:ctl:louvec:2007046&r=ecm 
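The central object of the survey above, the mixed proportional hazards model, has a closed-form marginal survivor function when the frailty is gamma distributed, which makes it easy to illustrate. With frailty variance θ, baseline integrated hazard Λ₀(t) and regressor index xβ, the survivor is S(t|x) = (1 + θ Λ₀(t) e^{xβ})^{-1/θ}. The parameter values below are illustrative, not taken from the survey.

```python
import math

def mph_survival(t, x, beta=0.5, theta=1.0, lam=1.0):
    # Mixed proportional hazards with gamma frailty (mean 1, variance theta)
    # and exponential baseline Lambda0(t) = lam * t.
    # Marginal survivor: S(t|x) = (1 + theta * Lambda0(t) * exp(x*beta))^(-1/theta)
    return (1.0 + theta * lam * t * math.exp(x * beta)) ** (-1.0 / theta)

s0 = mph_survival(1.0, 0.0)   # (1 + 1)^(-1) = 0.5 with these defaults
s1 = mph_survival(1.0, 1.0)   # higher index xb => lower survival
```

Mixing over the frailty is what breaks the proportional-hazards property at the population level, which is the source of the identification subtleties the survey discusses.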