
New Economics Papers on Econometrics 
By:  Hiroyuki Kasahara (Department of Economics, University of British Columbia); Katsumi Shimotsu (Faculty of Economics, University of Tokyo) 
Abstract:  This paper considers likelihood-based testing of the null hypothesis of m0 components against the alternative of m0+1 components in a finite mixture model. The number of components is an important parameter in applications of finite mixture models, yet testing the number of components has been a long-standing challenge because of its non-regularity. We develop a framework that facilitates the analysis of the likelihood function of finite mixture models and derive the asymptotic distribution of the likelihood ratio test statistic for testing the null hypothesis of m0 components against the alternative of m0+1 components. Furthermore, building on this framework, we propose a likelihood-based procedure for testing the number of components. The proposed test, which extends the EM approach of Li, Chen and Marriott (2009), does not use a penalty term and is implementable even when the likelihood ratio test is difficult to implement because of non-regularity and computational complexity. 
Date:  2012–11 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2012cf867&r=ecm 
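As a rough illustration of the quantity the abstract above tests, the following sketch computes a likelihood ratio statistic for m0 versus m0+1 Gaussian mixture components and calibrates it with a parametric bootstrap. This is a generic sketch, not the Kasahara-Shimotsu procedure; the mixture family, sample sizes and bootstrap design are illustrative assumptions.

```python
# Hypothetical sketch: parametric-bootstrap LR test of m0 vs. m0+1 Gaussian
# mixture components. Not the authors' procedure; illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

def lr_stat(x, m0, seed=0):
    """Return 2*(max loglik with m0+1 comps - max loglik with m0 comps) and the null fit."""
    x = x.reshape(-1, 1)
    g0 = GaussianMixture(m0, n_init=5, random_state=seed).fit(x)
    g1 = GaussianMixture(m0 + 1, n_init=5, random_state=seed).fit(x)
    return 2 * (g1.score(x) - g0.score(x)) * len(x), g0

def bootstrap_pvalue(x, m0, n_boot=199, seed=0):
    rng = np.random.default_rng(seed)
    lr_obs, g0 = lr_stat(x, m0, seed)
    draws = []
    for b in range(n_boot):
        xb, _ = g0.sample(len(x))          # simulate under the m0-component null
        draws.append(lr_stat(xb.ravel(), m0, seed)[0])
    return lr_obs, float(np.mean(np.array(draws) >= lr_obs))

# Example: two well-separated components, test m0 = 1 against 2
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 300)])
print(bootstrap_pvalue(x, m0=1))
```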
By:  Mette Asmild (Warwick Business School, University of Warwick); Jens Leth Hougaard (Institute of Food and Resource Economics, University of Copenhagen); Dorte Kronborg (Department of Finance, Copenhagen Business School) 
Abstract:  In this paper we examine the possibility of using the standard Kruskal-Wallis rank test to evaluate whether the distribution of efficiency scores resulting from Data Envelopment Analysis (DEA) is independent of the input (or output) mix. Recently, a general data generating process (DGP) suiting the DEA methodology has been formulated and some asymptotic properties of the DEA estimators have been established. In line with this generally accepted DGP, we formulate a conditional test for the assumption of mix independence. Since the DEA frontier is estimated, many standard assumptions for evaluating the test statistic are violated. Therefore, we propose to explore its statistical properties by means of simulation studies. The simulations are performed conditional on the observed input mixes. The method, as presented here, is applicable when comparing distributions of efficiency scores in two or more groups in models with multiple inputs and one output under constant returns to scale. The approach is illustrated in an empirical case of demolition projects, where we reject the assumption of mix independence. This means that, in this case, it is not meaningful to perform a complete ranking of the projects based on their efficiency scores. The example thus illustrates how common practice can be inappropriate. 
Keywords:  Data Envelopment Analysis (DEA), homogeneous efficiencies, small sample properties, Kruskal-Wallis, ranking, demolition projects 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:foi:msapwp:05_2012&r=ecm 
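The naive version of the rank test discussed above can be sketched as follows; the efficiency scores here are placeholder draws rather than actual DEA estimates, and the paper's point is precisely that the usual null distribution must be replaced by a simulated, conditional one because the DEA frontier is estimated.

```python
# Minimal sketch (assumed setup): apply the Kruskal-Wallis rank test to
# efficiency scores grouped by input mix. Scores below are placeholders.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
eff_group_a = rng.beta(5, 2, size=30)   # placeholder efficiency scores, mix A
eff_group_b = rng.beta(5, 2, size=25)   # placeholder efficiency scores, mix B
eff_group_c = rng.beta(4, 2, size=20)   # placeholder efficiency scores, mix C

stat, pval = kruskal(eff_group_a, eff_group_b, eff_group_c)
print(f"H = {stat:.3f}, naive p-value = {pval:.3f}")
```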
By:  Carolina Castagnetti (Department of Economics and Management, University of Pavia); Eduardo Rossi (Department of Economics and Management, University of Pavia); Lorenzo Trapani (Cass Business School, City University London) 
Abstract:  This paper develops an estimation and testing framework for a stationary large panel model with observable regressors and unobservable common factors. We allow for slope heterogeneity and for correlation between the common factors and the regressors. We propose a two-stage estimation procedure for the unobservable common factors and their loadings, based on applying Pesaran’s (2006) CCE estimator and the Principal Component estimator. We also develop two tests for the null of no factor structure: one for the null that the loadings are cross-sectionally homogeneous, and one for the null that the common factors are homogeneous over time. Our tests are based on the extremes of the estimated loadings and common factors. The test statistics have an asymptotic Gumbel distribution under the null, and have power against alternatives where only one loading or common factor differs from the others. Monte Carlo evidence shows that the tests have the correct size and good power. 
Keywords:  Large Panels, CCE Estimator, Principal Component Estimator, Testing for Factor Structure, Extreme Value Distribution. 
JEL:  C12 C33 
Date:  2012–09 
URL:  http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0002&r=ecm 
By:  Medel, Carlos A.; Salgado, Sergio C. 
Abstract:  We examine two questions: (i) Is the Bayesian Information Criterion (BIC) more parsimonious than the Akaike Information Criterion (AIC)?, and (ii) Is the BIC better than the AIC for forecasting purposes? Using simulated data, we provide statistical inference for both hypotheses individually and then jointly with a multiple hypotheses testing procedure to better control type-I error. Both testing procedures deliver the same result: the BIC shows an in- and out-of-sample superiority over the AIC only in a long-sample context. 
Keywords:  AIC; BIC; time-series models; overfitting; forecast comparison; joint hypothesis testing 
JEL:  C51 C53 C52 C22 
Date:  2012–10–25 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:42235&r=ecm 
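A minimal Monte Carlo sketch of the comparison described above (the DGP, lag grid and sample split are illustrative assumptions, not the authors' design): select an autoregressive lag order by AIC and by BIC, then compare parsimony and out-of-sample forecast accuracy.

```python
# Illustrative sketch: AIC vs. BIC order selection and forecast MSE on a
# simulated AR(1). Not the paper's Monte Carlo design.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def select_order(y, max_lag, crit):
    fits = [AutoReg(y, lags=p).fit() for p in range(1, max_lag + 1)]
    return int(np.argmin([getattr(f, crit) for f in fits])) + 1

rng = np.random.default_rng(0)
orders_aic, orders_bic, mse_aic, mse_bic = [], [], [], []
for _ in range(200):
    e = rng.normal(size=250)
    y = np.zeros(250)
    for t in range(1, 250):
        y[t] = 0.5 * y[t - 1] + e[t]                  # true DGP: AR(1)
    train, test = y[:200], y[200:]
    for crit, orders, mses in [("aic", orders_aic, mse_aic), ("bic", orders_bic, mse_bic)]:
        p = select_order(train, 6, crit)
        fc = AutoReg(train, lags=p).fit().forecast(steps=50)
        orders.append(p)
        mses.append(np.mean((test - fc) ** 2))

print("mean selected order  AIC:", np.mean(orders_aic), " BIC:", np.mean(orders_bic))
print("mean forecast MSE    AIC:", np.mean(mse_aic), " BIC:", np.mean(mse_bic))
```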
By:  Pilar Poncela; Esther Ruiz 
Abstract:  In the context of dynamic factor models (DFM), it is known that, if the cross-sectional and time dimensions tend to infinity, the Kalman filter yields consistent smoothed estimates of the underlying factors. When looking at asymptotic properties, the cross-sectional dimension needs to increase for the filter (or stochastic error) uncertainty to decrease, while the time dimension needs to increase for the parameter uncertainty to decrease. In this paper, assuming that the model specification is known, we separate the finite sample contribution of each of these two uncertainties to the total uncertainty associated with the estimation of the underlying factors. Assuming that the parameters are known, we show that, provided the serial dependence of the idiosyncratic noises is not very persistent and regardless of whether their contemporaneous correlations are weak or strong, the filter uncertainty is a non-increasing function of the cross-sectional dimension. Furthermore, in situations of empirical interest, once the cross-sectional dimension is beyond a relatively small number, the filter uncertainty only decreases marginally. Assuming weak contemporaneous correlations among the serially uncorrelated idiosyncratic noises, we prove the consistency not only of smoothed but also of real-time filtered estimates of the underlying factors in a simple case, extending the results to non-stationary DFM. In practice, the model parameters are unknown and have to be estimated, adding further uncertainty to the estimated factors. We use simulations to measure this uncertainty in finite samples and show that, for the sample sizes usually encountered in practice when DFM are fitted to macroeconomic variables, the contribution of the parameter uncertainty can represent a large percentage of the total uncertainty involved in factor extraction. All results are illustrated by estimating common factors of simulated time series. 
Keywords:  Common factors, Cross-sectional dimension, Filter uncertainty, Parameter uncertainty, Steady-state 
Date:  2012–10 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws122317&r=ecm 
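The factor-extraction step discussed above can be sketched with a one-factor dynamic factor model: as the cross-sectional dimension N grows, the Kalman-smoothed factor tracks the true factor more closely. The DGP and the use of statsmodels' DynamicFactor are illustrative assumptions, not the paper's setup.

```python
# Rough sketch: smoothed factor extraction for increasing cross-sectional
# dimension N in a simulated one-factor panel.
import numpy as np
from statsmodels.tsa.statespace.dynamic_factor import DynamicFactor

rng = np.random.default_rng(0)
T = 200
f = np.zeros(T)
for t in range(1, T):
    f[t] = 0.7 * f[t - 1] + rng.normal()            # common AR(1) factor

for N in (5, 20, 50):
    lam = rng.uniform(0.5, 1.5, size=N)
    X = np.outer(f, lam) + rng.normal(size=(T, N))  # panel with idiosyncratic noise
    res = DynamicFactor(X, k_factors=1, factor_order=1).fit(disp=False)
    fhat = res.smoothed_state[0]                    # Kalman-smoothed factor estimate
    # factors are identified only up to sign and scale, so compare via |correlation|
    print(f"N = {N:3d}: |corr(true factor, smoothed)| = {abs(np.corrcoef(f, fhat)[0, 1]):.3f}")
```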
By:  Michael W. McCracken; Giorgio Valente 
Abstract:  Economic value calculations are increasingly used to compare the predictive performance of competing models of asset returns. However, they lack a rigorous way to validate their evidence. This paper proposes a new methodology to test whether the utility gains accruing to investors using competing predictive models are equal to zero. Monte Carlo evidence indicates that our testing procedure, which can account for estimation error in the asymptotic variance of the test statistic, provides accurately sized and powerful tests in empirically relevant sample sizes. We apply the test statistics proposed in the paper to revisit the predictability of the US equity premium by means of various predictors. 
Keywords:  Forecasting 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2012049&r=ecm 
By:  Steigerwald, Douglas G; Bostwick, Valerie K 
Abstract:  For Markov regime-switching models, testing for the possible presence of more than one regime requires the use of a nonstandard test statistic. Carter and Steigerwald (forthcoming, Journal of Econometric Methods) derive in detail the analytic steps needed to implement the test of Markov regime-switching proposed by Cho and White (2007, Econometrica). We summarize the implementation steps and address the computational issues that arise. A new command to compute regime-switching critical values, rscv, is introduced and presented in the context of empirical research. 
Keywords:  Econometrics and Quantitative Economics, mixture model, regime switching, multiple equilibria 
Date:  2012–10–20 
URL:  http://d.repec.org/n?u=RePEc:cdl:ucsbec:qt3685g3qr&r=ecm 
By:  J. Martin van Zyl 
Abstract:  Random variables from the generalized Pareto distribution can be transformed to random variables from the Pareto distribution. Explicit expressions exist for the maximum likelihood estimators of the parameters of the Pareto distribution. The performance of estimating the shape parameter of the generalized Pareto distribution using transformed observations, based on the probability-weighted moment method, is tested. The transformation was found to improve the performance of the probability-weighted estimator, and it performs well with respect to bias and MSE. 
Date:  2012–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1210.7642&r=ecm 
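A hedged sketch of the building blocks mentioned above: a generalized Pareto draw X with shape xi > 0 and scale sigma transforms to a Pareto variable Y = 1 + xi*X/sigma with tail index 1/xi, for which the maximum likelihood estimator has a closed form. The paper studies plugging probability-weighted-moment estimates into such a transformation; here the true parameters are used purely for illustration.

```python
# Illustration only: GPD-to-Pareto transformation and the explicit Pareto MLE,
# using the true (rather than estimated) shape and scale parameters.
import numpy as np
from scipy.stats import genpareto

xi, sigma, n = 0.4, 2.0, 5000
rng = np.random.default_rng(0)
x = genpareto.rvs(c=xi, scale=sigma, size=n, random_state=rng)

y = 1.0 + xi * x / sigma                 # Pareto with tail index alpha = 1/xi, minimum 1
alpha_hat = n / np.sum(np.log(y))        # explicit Pareto ML estimator of alpha
print("true 1/xi =", 1 / xi, " estimated alpha =", round(alpha_hat, 3))
```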
By:  Naccarato, Alessia; Zurlo, Davide; Pieraccini, Luciano 
Abstract:  The Least Orthogonal Distance Estimator (LODE) of the structural parameters of Simultaneous Equation Models is based on minimizing the orthogonal distance between the Reduced Form (RF) and the Structural Form (SF) parameters. In this work we propose a new version (relative to Pieraccini and Naccarato, 2008) of the Full Information (FI) LODE based on the decomposition of a new structure of the variance-covariance matrix using the Singular Value Decomposition (SVD) instead of the Spectral Decomposition (SD). In this context Total Least Squares is applied. A simulation experiment comparing the performance of the new version of FI LODE with Three Stage Least Squares (3SLS) and Full Information Maximum Likelihood (FIML) is presented. Finally, a comparison between the new and old versions of FI LODE, together with a few words of conclusion, closes the paper. 
Keywords:  Least Orthogonal Distance Estimator; Simultaneous Equation Models; Total Least Squares 
JEL:  C51 
Date:  2012–09–17 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:42365&r=ecm 
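The SVD step that the new FI LODE version builds on can be illustrated with a generic total-least-squares sketch (this is not the LODE estimator itself): the TLS solution is read off the right singular vector associated with the smallest singular value of the augmented data matrix [X y].

```python
# Generic total-least-squares-via-SVD sketch; illustration of the SVD machinery,
# not the FI LODE estimator.
import numpy as np

def tls(X, y):
    Z = np.column_stack([X, y])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]                      # right singular vector for the smallest singular value
    return -v[:-1] / v[-1]          # TLS coefficient vector

rng = np.random.default_rng(0)
n, beta = 500, np.array([1.0, -2.0])
X_true = rng.normal(size=(n, 2))
y = X_true @ beta + 0.1 * rng.normal(size=n)
X_obs = X_true + 0.1 * rng.normal(size=(n, 2))   # regressors observed with error
print("OLS :", np.linalg.lstsq(X_obs, y, rcond=None)[0])
print("TLS :", tls(X_obs, y))
```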
By:  Daniela Marella; Paola Vicard 
Abstract:  In this paper Object-Oriented Bayesian networks are proposed as a tool to model measurement errors in a categorical variable due to the respondent. A mixed measurement error model is presented, and an Object-Oriented Bayesian network implementing such a model is introduced. The insertion of evidence represented by the observed value, and its propagation throughout the network, yields for each unit the probability distribution of the true value given the observed one. Two methods are used to predict the individual true value, and their performance is evaluated via simulation. 
Keywords:  Bayesian networks, Measurement errors, Respondent Error. 
JEL:  C11 C18 C80 C83 
Date:  2012–11 
URL:  http://d.repec.org/n?u=RePEc:rtr:wpaper:0167&r=ecm 
By:  Reza C. Daniels (SALDRU, School of Economics, University of Cape Town) 
Abstract:  This paper is concerned with conducting univariate multiple imputation for employee income data comprising continuously distributed observations, observations bounded by consecutive income brackets, and missing observations. A variable with this mixture of data types is a form of coarsening in the data. An interval-censored regression imputation procedure is used to generate plausible draws for the bounded and nonresponse subsets of income. We test the sensitivity of results to misspecification in the prediction equations of the imputation algorithm, and we test the stability of the results as the number of imputations increases from two to five to twenty. We find that, for missing data, imputed draws are very different for respondents who state that they don't know their income compared to those who refuse to answer. The upper tail of the income distribution is most sensitive to misspecification in the imputation algorithm, and we discuss how best to conduct multiple imputation to take this into account. Lastly, stability in parameter estimates of the income distribution is achieved with as few as two multiple imputations, due largely to (a) the small fraction of missing data, in combination with (b) reduced within- and between-imputation components of variance for imputed draws of the bracketed income subset, a function of the defined lower and upper bounds of the brackets that restrict the range of plausibility for imputed draws. This is a joint SALDRU and DataFirst working paper. 
Keywords:  Multiple Imputation, Coarse Data, Income Distribution 
JEL:  C15 C83 D31 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:ldr:wpaper:88&r=ecm 
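A stylized sketch of one imputation step for coarsened income data (not the paper's exact algorithm): given a fitted normal linear model for log income, bracketed observations are drawn from the predictive distribution truncated to their bracket bounds, and fully missing observations from the untruncated one. The fitted values, residual scale and bracket limits below are made-up numbers.

```python
# Stylized interval-censored imputation step: truncated-normal draws for
# bracketed income, unrestricted draws for item nonresponse.
import numpy as np
from scipy.stats import truncnorm

def impute_one(xb, sigma, lower, upper, rng):
    """Draw log income given linear predictor xb; bounds may be -inf / inf."""
    a, b = (lower - xb) / sigma, (upper - xb) / sigma
    return truncnorm.rvs(a, b, loc=xb, scale=sigma, random_state=rng)

rng = np.random.default_rng(0)
sigma = 0.6
xb_bracket = 8.5                                   # fitted value, bracketed respondent
draw_bracket = impute_one(xb_bracket, sigma, np.log(4000), np.log(8000), rng)
xb_missing = 8.2                                   # fitted value, item nonresponse
draw_missing = impute_one(xb_missing, sigma, -np.inf, np.inf, rng)
print(np.exp(draw_bracket), np.exp(draw_missing))  # imputed incomes in levels
```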
By:  Aslanidis, Nektarios; Martínez Ibáñez, Óscar 
Abstract:  In this paper we propose a parsimonious regime-switching approach to model the correlations between assets, the threshold conditional correlation (TCC) model. This method allows the dynamics of the correlations to change from one state (or regime) to another as a function of observable transition variables. Our model is similar in spirit to Silvennoinen and Teräsvirta (2009) and Pelletier (2006), but with the appealing feature that it does not suffer from the curse of dimensionality. In particular, estimation of the parameters of the TCC involves a simple grid search procedure. In addition, it is easy to guarantee a positive definite correlation matrix because the TCC estimator is given by the sample correlation matrix, which is positive definite by construction. The methodology is illustrated by evaluating the behaviour of international equities, government bonds and major exchange rates, first separately and then jointly. We also test and allow for different parts of the correlation matrix to be governed by different transition variables. For this, we estimate a multi-threshold TCC specification. Further, we evaluate the economic performance of the TCC model against a constant conditional correlation (CCC) estimator using a Diebold-Mariano type test. We conclude that threshold correlation modelling gives rise to a significant reduction in portfolio variance. 
Keywords:  Mathematical models; Economics 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:urv:wpaper:2072/203167&r=ecm 
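The grid-search idea behind the TCC model described above can be sketched in a simplified two-asset, single-threshold setting (the DGP and threshold grid are made up): for each candidate threshold of the transition variable, the regime correlations are the sample correlations, and the threshold is chosen to maximize the Gaussian log-likelihood of the standardized returns.

```python
# Simplified threshold-correlation grid search: two assets, one threshold.
import numpy as np

def biv_loglik(u, v, rho):
    """Log-likelihood of standardized pairs under a bivariate normal with correlation rho."""
    q = (u**2 - 2 * rho * u * v + v**2) / (1 - rho**2)
    return np.sum(-np.log(2 * np.pi) - 0.5 * np.log(1 - rho**2) - 0.5 * q)

rng = np.random.default_rng(0)
T = 1000
z = rng.normal(size=T)                              # observable transition variable
rho_true = np.where(z > 0.5, 0.8, 0.2)              # correlation switches with z
u = rng.normal(size=T)
v = rho_true * u + np.sqrt(1 - rho_true**2) * rng.normal(size=T)

best = None
for c in np.quantile(z, np.linspace(0.15, 0.85, 41)):    # candidate thresholds
    lo, hi = z <= c, z > c
    ll = sum(biv_loglik(u[m], v[m], np.corrcoef(u[m], v[m])[0, 1]) for m in (lo, hi))
    if best is None or ll > best[0]:
        best = (ll, c)
print("estimated threshold:", round(best[1], 3))
```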
By:  Eric R. Sims (Department of Economics, University of Notre Dame) 
Abstract:  A state space representation of a linearized DSGE model implies a VAR in terms of observable variables. The model is said to be noninvertible if there exists no linear rotation of the VAR innovations which can recover the economic shocks. Noninvertibility arises when the observed variables fail to perfectly reveal the state variables of the model. The imperfect observation of the state drives a wedge between the VAR innovations and the deep shocks, potentially invalidating conclusions drawn from structural impulse response analysis in the VAR. The principal contribution of this paper is to show that noninvertibility should not be thought of as an "either/or" proposition: even when a model has a noninvertibility, the wedge between VAR innovations and economic shocks may be small, and structural VARs may nonetheless perform reliably. As an increasingly popular example, so-called "news shocks" generate foresight about changes in future fundamentals such as productivity, taxes, or government spending, and lead to an unassailable missing state variable problem and hence noninvertible VAR representations. Simulation evidence from a medium-scale DSGE model augmented with news shocks about future productivity reveals that structural VAR methods often perform well in practice, in spite of a known noninvertibility. Impulse responses obtained from VARs closely correspond to the theoretical responses from the model, and the estimated VAR responses are successful in discriminating between alternative, nested specifications of the underlying DSGE model. Since the noninvertibility problem is, at its core, one of missing information, conditioning on more information, for example through factor-augmented VARs, is shown to either ameliorate or eliminate invertibility problems altogether. 
Keywords:  DSGE, VAR, News shocks 
JEL:  E C2 
Date:  2012–06 
URL:  http://d.repec.org/n?u=RePEc:nod:wpaper:013&r=ecm 
By:  Stéphane Goutte (LPMA - Laboratoire de Probabilités et Modèles Aléatoires, CNRS UMR7599, Université Paris VI - Pierre et Marie Curie, Université Paris VII - Paris Diderot) 
Abstract:  In this paper we discuss the calibration issues of regime-switching models built on mean-reverting and local volatility processes combined with two Markov regime-switching processes. In fact, the volatility structure of this model depends on a first exogenous Markov chain, whereas the drift structure depends on a conditional Markov chain with respect to the first one. The structure is also assumed to be Markovian, and both structure and regime are unobserved. Given this construction, we extend the classical Expectation-Maximization (EM) algorithm to apply to our regime-switching model. We apply it to economic data (the euro-dollar foreign exchange rate and the Brent oil price) to show that this modelling identifies well both mean-reverting and volatility regime switches. Moreover, it allows us to give economic interpretations of this regime classification, such as certain financial crises or economic policies. 
Keywords:  Markov regime switching; Expectation-Maximization algorithm; mean-reverting; local volatility; economic data. 
Date:  2012–10–31 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal00747479&r=ecm 
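The regime-classification idea can be illustrated with a plain two-regime Markov switching mean/variance model fitted by maximum likelihood in statsmodels. This is not the paper's conditional two-chain construction or its extended EM algorithm; it only shows volatility-regime classification on a simulated mean-reverting series.

```python
# Sketch: two-regime Markov switching model with switching variance, fitted to
# the first differences of a simulated mean-reverting series.
import numpy as np
import pandas as pd
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

rng = np.random.default_rng(0)
T = 600
state = (np.arange(T) // 150) % 2                      # alternating volatility regimes
sigma = np.where(state == 1, 2.0, 0.5)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.9 * y[t - 1] + sigma[t] * rng.normal()    # mean-reverting with switching vol

res = MarkovRegression(pd.Series(np.diff(y)), k_regimes=2, trend="c",
                       switching_variance=True).fit()
probs = res.smoothed_marginal_probabilities             # one column per regime
# regime labels are arbitrary; report how often each regime is the most likely one
print(probs.idxmax(axis=1).value_counts(normalize=True))
```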
By:  Tomás del Barrio Castro; Denise R. Osborn; A.M. Robert Taylor 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:man:sespap:1228&r=ecm 
By:  J.M.C. Santos Silva; Silvana Tenreyro; Kehai Wei 
Abstract:  Understanding and quantifying the determinants of the number of sectors or firms exporting in a given country is of relevance for the assessment of trade policies. Estimation of models for the number of sectors, however, poses a challenge because the dependent variable has both a lower and an upper bound, implying that the partial effects of the explanatory variables on the conditional mean of the dependent variable cannot be constant and must approach zero as the dependent variable approaches the bounds. We argue that ignoring these bounds by using OLS or count-data models that ignore the upper bound can lead to erroneous conclusions due to model misspecification. We propose a flexible specification that accounts for the doubly-bounded nature of the dependent variable. We empirically investigate the problem and the proposed solution, and find significant differences between estimates obtained with the proposed estimator and those obtained with standard approaches. 
Date:  2012–10–31 
URL:  http://d.repec.org/n?u=RePEc:esx:essedp:721&r=ecm 
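One hedged way to respect both bounds, sketched below (not necessarily the authors' flexible specification), is to model the number of exporting sectors out of a known total S with a binomial GLM and a logit link, so that fitted means stay in [0, S] and partial effects shrink toward zero near the bounds.

```python
# Sketch of a doubly-bounded regression: binomial GLM for the number of
# exporting sectors out of S. Variable names and DGP are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, S = 400, 120                                     # countries, total number of sectors
x = rng.normal(size=n)                              # placeholder regressor (e.g. log GDP)
p = 1 / (1 + np.exp(-(-1.0 + 0.8 * x)))             # true exporting probability
k = rng.binomial(S, p)                              # exporting sectors, bounded in [0, S]

X = sm.add_constant(x)
res = sm.GLM(np.column_stack([k, S - k]), X, family=sm.families.Binomial()).fit()
print(res.params)                                   # recovers roughly (-1.0, 0.8)
```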
By:  Meulders, Michel (HUBrussel, KU Leuven); Tuerlinckx, Francis (KU Leuven); Vanpaemel, Wolf (KU Leuven) 
Abstract:  Probabilistic feature models (PFMs) can be used to explain binary rater judgements about the associations between two types of elements (e.g., objects and attributes) on the basis of binary latent features. In particular, to explain observed object-attribute associations, PFMs assume that respondents classify both objects and attributes with respect to a, usually small, number of binary latent features, and that the observed object-attribute association is derived as a specific mapping of these classifications. Standard PFMs assume that the object-attribute association probability is the same according to all respondents, and that all observations are statistically independent. As both assumptions may be unrealistic, a multilevel latent class extension of PFMs is proposed which allows object and/or attribute parameters to differ across latent rater classes, and which allows dependencies between associations with a common object (attribute) to be modelled by assuming that the link between features and objects (attributes) is fixed across judgements. Formal relationships with existing multilevel latent class models for binary three-way data are described. As an illustration, the models are used to study rater differences in product perception and to investigate individual differences in the situational determinants of anger-related behavior. 
Keywords:  multilevel latent class model, latent feature, three-way data 
Date:  2012–10 
URL:  http://d.repec.org/n?u=RePEc:hub:wpecon:201238&r=ecm 
By:  Emma Berenguer-Carceles (Department of Finance and Accounting, Universidad Pablo de Olavide); Ricardo Gimeno (European Central Bank, Department of Capital Markets and Financial Structures); Juan M. Nave (Department of Finance and Accounting, Universidad de Castilla-La Mancha) 
Abstract:  In macroeconomics and finance, it is extremely useful to have knowledge of the Term Structure of Interest Rates (TSIR) and to be able to interpret the related data. However, independently of its particular application, the TSIR is not directly observable in the market, and a prior estimation of the yield curve is needed. There are two distinct approaches to modelling the term structure of interest rates. The first is essentially to measure the term structure using statistical techniques, applying interpolation or curve-fitting methods to construct yields. The second approach is based on models, known as dynamic models, which make explicit assumptions about the evolution of state variables and asset pricing models using either equilibrium or arbitrage arguments. There is no consensus on any particular methodology, and the choice between alternative curve models is, in part, subjective. Nevertheless, the interpolation or curve-fitting methods have shown good properties and are those used nowadays by the vast majority of central banks. The objective of this article is to provide a concise study to enhance knowledge of the Term Structure of Interest Rates and of the principal methodologies based on statistical techniques that have been proposed for its correct estimation. The most relevant empirical work comparing the principal methodologies is also presented. A better understanding of the principal methodologies and their limitations will provide researchers with an overview of the problem of estimating the yield curve and will enable them to choose the best model according to their objectives. 
Keywords:  Term Structure of Interest Rates; estimation methodologies. 
Date:  2012–10 
URL:  http://d.repec.org/n?u=RePEc:pab:fiecac:12.06&r=ecm 
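As a concrete example of the curve-fitting approach surveyed above, the sketch below fits a Nelson-Siegel curve, one of the parametric methods widely used by central banks, to made-up yields by ordinary least squares with the decay parameter held fixed.

```python
# Sketch: Nelson-Siegel fit by OLS on the factor loadings, decay parameter fixed.
# Maturities, yields and the decay value are illustrative, not real market data.
import numpy as np

def ns_loadings(tau, lam):
    x = tau / lam
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau), slope, slope - np.exp(-x)])

tau = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])        # maturities in years
yields = np.array([0.6, 0.7, 0.9, 1.3, 1.6, 2.1, 2.4, 2.7, 3.1, 3.2]) / 100

lam = 1.8                                                     # decay parameter, held fixed
beta, *_ = np.linalg.lstsq(ns_loadings(tau, lam), yields, rcond=None)
fitted = ns_loadings(tau, lam) @ beta
print("betas:", np.round(beta, 4))
print("max abs fitting error (bp):", round(1e4 * np.max(np.abs(fitted - yields)), 2))
```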
By:  Joan del Castillo; Jalila Daoudi; Isabel Serra 
Abstract:  In this article we show the relationship between the Pareto distribution and the gamma distribution. This shows that the latter, appropriately extended, explains some anomalies that arise in the practical use of extreme value theory. The results are useful for certain phenomena that are fitted by the Pareto distribution but, at the same time, present a deviation from this law for very large values. Two examples of data analysis with the new model are provided. The first is on the influence of climate variability on the occurrence of tropical cyclones; the second on the analysis of aggregate loss distributions associated with operational risk management. 
Date:  2012–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1211.0130&r=ecm 