
on Econometrics 
By:  Cizek, Pavel (Tilburg University, Center for Economic Research) 
Abstract:  This paper introduces a new class of regression estimators robust to outliers, measurement errors, and other data irregularities. The estimators are based on the two-step least weighted squares method, where weights are adaptively computed using the empirical distribution function of regression residuals obtained from an initial robust fit. The asymptotic distribution of the proposed estimators is derived under general conditions, allowing for time-series applications. Further, it is shown that the breakdown point of the proposed estimators equals that of the initial robust estimate. The main contribution of the work is that the proposed two-step procedures combine several desirable properties that existing estimators possess separately, but not jointly: asymptotic efficiency when the errors are normally distributed, a high breakdown point achieved without rejecting (trimming) observations, and independence of auxiliary tuning parameters. A Monte Carlo study shows that in most situations the two-step least weighted squares estimator outperforms both least squares and existing robust estimators in finite samples. 
Keywords:  least weighted squares; linear regression; robust statistics; two-step estimation 
JEL:  C13 C20 C21 C22 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:20068&r=ecm 
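The two-step idea described above — an initial robust fit, adaptive weights from the empirical distribution of the residuals, then weighted least squares — can be sketched in Python. The least-absolute-deviations initial fit and the linearly decreasing weight function below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def lad_fit(X, y, iters=50, eps=1e-8):
    """Initial robust fit: least absolute deviations via IRLS (an illustrative choice)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

def two_step_lws(X, y, weight_fn=lambda u: 1.0 - u):
    """Two-step least weighted squares sketch: weights depend on the empirical
    distribution (i.e. the ranks) of squared residuals from the initial robust fit."""
    r = y - X @ lad_fit(X, y)
    # empirical distribution function values of the squared residuals, in (0, 1)
    u = (np.argsort(np.argsort(r**2)) + 0.5) / len(y)
    w = weight_fn(u)  # smaller weight for observations with larger residuals
    return np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
```

With strictly positive, decreasing weights such as `1 - u`, no observation receives zero weight, mirroring the high-breakdown-without-trimming property the abstract emphasizes.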
By:  Kleijnen, Jack P.C. (Tilburg University, Center for Economic Research) 
Abstract:  This tutorial explains the basics of linear regression models, especially low-order polynomials, and the corresponding statistical designs, namely designs of resolution III, IV, V, and Central Composite Designs (CCDs). This tutorial assumes 'white noise', which means that the residuals of the fitted linear regression model are normally, independently, and identically distributed with zero mean. The tutorial gathers statistical results that are scattered throughout the literature on mathematical statistics, and presents these results in a form that is understandable to simulation analysts. 
Keywords:  metamodels; fractional factorial designs; Plackett-Burman designs; factor interactions; validation; cross-validation 
JEL:  C0 C1 C9 C15 C44 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200610&r=ecm 
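As a concrete instance of the designs the tutorial covers, a 2^(3-1) resolution III fractional factorial can be constructed and used to fit a first-order polynomial metamodel. The generator C = AB and the response values below are illustrative assumptions, not from the tutorial.

```python
import numpy as np

# 2^(3-1) resolution III design: full factorial in A and B, generator C = A*B
A = np.array([-1.0, 1.0, -1.0, 1.0])
B = np.array([-1.0, -1.0, 1.0, 1.0])
C = A * B
X = np.column_stack([np.ones(4), A, B, C])  # intercept plus three main effects

# hypothetical simulation responses observed at the four design points
y = np.array([3.1, 7.2, 4.9, 9.0])

# first-order polynomial metamodel fitted by least squares; the design is
# orthogonal, so X.T @ X = 4 * I and the effect estimates are uncorrelated
beta = np.linalg.solve(X.T @ X, X.T @ y)
```

Orthogonality is what makes such designs attractive under the tutorial's white-noise assumption: each effect estimate is simply the matching column's inner product with `y`, divided by the number of runs.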
By:  Rodney W. Strachan; Herman K. van Dijk 
Abstract:  Economic forecasts and policy decisions are often informed by empirical analysis based on econometric models. However, when several viable models exist, inference based upon a single model is of limited usefulness. Taking account of model uncertainty, a Bayesian model averaging procedure is presented which allows for unconditional inference within the class of vector autoregressive (VAR) processes. Several features of VAR processes are investigated. Measures on manifolds are employed in order to elicit uniform priors on subspaces defined by particular structural features of VARs. The features considered are the number and form of the equilibrium economic relations and deterministic processes. Posterior probabilities of these features are used in a model averaging approach for forecasting and impulse response analysis. The methods are applied to investigate stability of the "Great Ratios" in U.S. consumption, investment and income, and the presence and effects of permanent shocks in these series. The results obtained indicate the feasibility of the proposed method. 
Keywords:  Posterior probability; Grassmann manifold; Orthogonal group; Cointegration; Model averaging; Stochastic trend; Impulse response; Vector autoregressive model. 
JEL:  C11 C32 C52 
Date:  2006–02 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:06/5&r=ecm 
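The model-averaging step itself — turning marginal likelihoods and prior model probabilities into posterior weights and a pooled forecast — can be sketched as follows. The log marginal likelihoods and forecasts are hypothetical placeholders for the VAR-specific quantities the paper derives.

```python
import numpy as np

def bayesian_model_average(log_marglik, forecasts, prior_probs=None):
    """Posterior model probabilities via Bayes' rule, and the model-averaged forecast."""
    log_marglik = np.asarray(log_marglik, dtype=float)
    if prior_probs is None:  # uniform prior over the candidate models
        prior_probs = np.full(log_marglik.size, 1.0 / log_marglik.size)
    logw = log_marglik + np.log(prior_probs)
    logw -= logw.max()                # guard against overflow in exp
    w = np.exp(logw)
    w /= w.sum()                      # posterior model probabilities
    return w, float(w @ np.asarray(forecasts, dtype=float))

# three candidate VAR specifications (hypothetical numbers)
probs, pooled = bayesian_model_average(
    log_marglik=[-102.3, -101.1, -104.8],
    forecasts=[1.8, 2.1, 1.2],
)
```

The pooled forecast is a convex combination of the model-specific forecasts, so it always lies within their range; the same weights can average impulse responses.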
By:  Giovanni B. Concu (Risk and Sustainable Management Group, University of Queensland) 
Abstract:  This paper describes a Choice Modelling experiment set up to investigate the relationship between distance and willingness to pay for environmental quality changes. The issue is important for the estimation and transfer of benefits. So far the problem has been analysed using Contingent Valuation-type experiments, producing mixed results. The Choice Modelling experiment allows testing distance effects on parameters of environmental attributes that imply different trade-offs between use and non-use values. The sampling procedure is designed to provide a "geographically balanced" sample. Several specifications of the distance covariate are compared, and distance effects are shown to take complex shapes. Welfare analysis also shows that disregarding distance produces underestimation of individual and aggregated benefits and losses, seriously hindering the reliability of cost-benefit analyses. 
Keywords:  Choice Modelling techniques, distance, aggregation, sampling, functional forms. 
JEL:  Q51 Q58 
Date:  2005–12 
URL:  http://d.repec.org/n?u=RePEc:rsm:murray:m05_7&r=ecm 
By:  Massimo Franchi (Department of Economics, University of Copenhagen) 
Abstract:  We show that the order of integration of a vector autoregressive process is equal to the difference between the multiplicity of the unit root in the characteristic equation and the multiplicity of the unit root in the adjoint matrix polynomial. The equivalence with the standard I(1) and I(2) conditions (Johansen, 1996) is proved and polynomial cointegration discussed in the general setup. 
Keywords:  unit roots; order of integration; polynomial cointegration 
JEL:  C32 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:0605&r=ecm 
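The result can be checked symbolically on a small example: the order of integration equals the multiplicity of z = 1 in det Π(z) minus its multiplicity as a common factor of the adjoint (adjugate) matrix polynomial. The bivariate VAR(1) below is an illustrative example, not taken from the paper.

```python
from functools import reduce

import sympy as sp

z = sp.symbols('z')

def unit_root_multiplicity(expr):
    """Multiplicity of the root z = 1 of a polynomial in z (0 if z = 1 is not a root)."""
    return sp.roots(sp.Poly(expr, z)).get(sp.Integer(1), 0)

# VAR(1) x_t = A1 x_{t-1} + e_t with one unit root: Pi(z) = I - A1*z
A1 = sp.Matrix([[1, 0], [0, sp.Rational(1, 2)]])
Pi = sp.eye(2) - A1 * z

det_mult = unit_root_multiplicity(sp.det(Pi))   # multiplicity in the characteristic equation
adj = Pi.adjugate()
common = reduce(sp.gcd, list(adj))              # common polynomial factor of the adjugate entries
adj_mult = unit_root_multiplicity(common) if common.has(z) else 0
order_of_integration = det_mult - adj_mult      # 1 - 0 = 1: the process is I(1)
```

Here det Π(z) = (1 - z)(1 - z/2) has a simple unit root and the adjugate entries share no unit-root factor, so the difference is 1, matching the I(1) classification.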
By:  Gilmour S.G.; Goos P. 
Abstract:  Split-plot and other multi-stratum structures are widely used in factorial and response surface experiments, and residual maximum likelihood (REML) with generalized least squares (GLS) estimation is seen as the state-of-the-art method of data analysis for non-orthogonal designs. We analyze data from an experiment run to study the effects of five process factors on the drying rate for freeze-dried coffee and find that the main-plot variance component is estimated to be zero. We show that this is a typical property of REML-GLS estimation which is highly undesirable and can give misleading conclusions. In the classical approach it is possible to fix the main-plot variance at some positive value, but this is not satisfactory either. Instead, we recommend a Bayesian analysis, using an informative prior distribution for the main-plot variance component and implemented using Markov chain Monte Carlo sampling. Paradoxically, the Bayesian analysis is less dependent on prior assumptions than the REML-GLS analysis. Bayesian analyses of the coffee freeze-drying data give more realistic conclusions than REML-GLS analysis, providing support for our recommendation. 
Date:  2005–02 
URL:  http://d.repec.org/n?u=RePEc:ant:wpaper:2006005&r=ecm 
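A minimal version of the recommended approach — an informative inverse-gamma prior on the main-plot variance component, sampled by MCMC — can be sketched for a one-way random-effects model. The model, priors, and data below are illustrative assumptions, not the coffee freeze-drying analysis.

```python
import numpy as np

def gibbs_variance_components(y, groups, n_draws=2000, seed=0,
                              a_b=3.0, c_b=2.0, a_e=2.0, c_e=1.0):
    """Gibbs sampler for y_ij = b_i + e_ij with b_i ~ N(0, sig2_b), e_ij ~ N(0, sig2_e).
    Inverse-gamma(a, c) priors on both variances keep sig2_b strictly positive,
    unlike a REML point estimate, which can collapse to zero."""
    rng = np.random.default_rng(seed)
    m = groups.max() + 1
    n_i = np.bincount(groups, minlength=m)          # observations per main plot
    sig2_b, sig2_e = 1.0, 1.0
    draws = np.empty(n_draws)
    for t in range(n_draws):
        # main-plot effects b_i | rest: conjugate normal update
        prec = n_i / sig2_e + 1.0 / sig2_b
        mean = np.bincount(groups, weights=y, minlength=m) / sig2_e / prec
        b = mean + rng.standard_normal(m) / np.sqrt(prec)
        # variance components | rest: inverse-gamma updates
        sig2_b = (c_b + 0.5 * np.sum(b**2)) / rng.gamma(a_b + 0.5 * m)
        resid = y - b[groups]
        sig2_e = (c_e + 0.5 * np.sum(resid**2)) / rng.gamma(a_e + 0.5 * y.size)
        draws[t] = sig2_b
    return draws

# hypothetical split-plot-like data: 4 main plots, 5 observations each
data_rng = np.random.default_rng(1)
groups = np.repeat(np.arange(4), 5)
y = data_rng.standard_normal(20) + np.repeat(data_rng.standard_normal(4), 5)
sig2_b_draws = gibbs_variance_components(y, groups)
```

Every posterior draw of the main-plot variance is positive by construction, so the degenerate zero estimate that motivates the paper cannot occur; the informative prior governs how far from zero the posterior concentrates.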