
New Economics Papers on Econometrics 
By:  Corsi, Fulvio; Peluso, Stefano; Audrino, Francesco 
Abstract:  Motivated by the need for an unbiased and positive-semidefinite estimator of multivariate realized covariance matrices, we model noisy and asynchronous ultra-high-frequency asset prices in a state-space framework with missing data. We then estimate the covariance matrix of the latent states through a Kalman smoother and Expectation Maximization (KEM) algorithm. In the expectation step, by means of the Kalman filter with missing data, we reconstruct the smoothed and synchronized series of the latent price processes. In the maximization step, we search for covariance matrices that maximize the expected likelihood obtained with the reconstructed price series. Iterating between the two EM steps, we obtain a KEM-improved covariance matrix estimate which is robust to both asynchronicity and microstructure noise, and positive-semidefinite by construction. Extensive Monte Carlo simulations show the superior performance of the KEM estimator over several alternative covariance matrix estimates introduced in the literature. The application of the KEM estimator in practice is illustrated on a 10-dimensional US stock data set. 
Keywords:  High frequency data; Realized covariance matrix; Market microstructure noise; Missing data; Kalman filter; EM algorithm; Maximum likelihood 
JEL:  C13 C51 C52 C58 
Date:  2012–01 
URL:  http://d.repec.org/n?u=RePEc:usg:econwp:2012:02&r=ecm 
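The E-step/M-step loop described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' code: dimensions, variances, and the observation rate are invented, and the M-step here uses only the smoothed increments, omitting the smoother covariance terms a full EM update would include.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two latent log-price random walks with correlated increments, observed
# with microstructure noise and random missingness (asynchronous ticks).
# All sizes and variances below are illustrative, not the paper's design.
T, d = 400, 2
Q_true = np.array([[1.0, 0.6], [0.6, 1.0]]) * 1e-4   # increment covariance
R = np.eye(d) * 1e-5                                  # noise covariance
x_true = np.cumsum(rng.multivariate_normal(np.zeros(d), Q_true, size=T), axis=0)
y = x_true + rng.multivariate_normal(np.zeros(d), R, size=T)
obs = rng.random((T, d)) < 0.7                        # ~70% of ticks observed

def kalman_smooth(y, obs, Q, R):
    # Kalman filter + RTS smoother for x_t = x_{t-1} + w_t, y_t = x_t + v_t,
    # updating only with the components actually observed at each time.
    d = y.shape[1]
    xf, Pf = np.zeros(d), np.eye(d)
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    for t in range(len(y)):
        xp, Pp = xf.copy(), Pf + Q                    # prediction step
        xf, Pf = xp.copy(), Pp.copy()
        idx = np.flatnonzero(obs[t])
        if idx.size:                                  # update with observed rows
            H = np.eye(d)[idx]
            S = H @ Pp @ H.T + R[np.ix_(idx, idx)]
            K = Pp @ H.T @ np.linalg.inv(S)
            xf = xp + K @ (y[t, idx] - H @ xp)
            Pf = (np.eye(d) - K @ H) @ Pp
        xs_p.append(xp); Ps_p.append(Pp); xs_f.append(xf); Ps_f.append(Pf)
    xs = [xs_f[-1]]                                   # RTS backward pass
    for t in range(len(y) - 2, -1, -1):
        J = Ps_f[t] @ np.linalg.inv(Ps_p[t + 1])
        xs.insert(0, xs_f[t] + J @ (xs[0] - xs_p[t + 1]))
    return np.asarray(xs)

# EM iterations: the E-step smooths and synchronizes the latent prices, the
# (simplified) M-step re-estimates the increment covariance from them.
Q = np.eye(d) * 1e-4
for _ in range(10):
    dx = np.diff(kalman_smooth(y, obs, Q, R), axis=0)
    Q = dx.T @ dx / len(dx)

corr = Q[0, 1] / np.sqrt(Q[0, 0] * Q[1, 1])
```

Note that the resulting estimate is positive-semidefinite by construction, since the M-step forms Q as a cross-product matrix of reconstructed increments.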
By:  Yingying Dong (California State University, Irvine); Arthur Lewbel (Boston College); Thomas Tao Yang (Boston College) 
Abstract:  We discuss the relative advantages and disadvantages of four types of convenient estimators of binary choice models when regressors may be endogenous or mismeasured, or when errors are likely to be heteroskedastic. For example, such models arise when treatment is not randomly assigned and outcomes are binary. The estimators we compare are the two-stage least squares linear probability model, maximum likelihood estimation, control function estimators, and special regressor methods. We specifically focus on models and associated estimators that are easy to implement. Also, for calculating choice probabilities and regressor marginal effects, we propose the average index function (AIF), which, unlike the average structural function (ASF), is always easy to estimate. 
Keywords:  Binary choice, Binomial Response, Endogeneity, Measurement Error, Heteroskedasticity, discrete endogenous, censored, random coefficients, Identification, Latent Variable Model. 
JEL:  C25 C26 
Date:  2012–02–15 
URL:  http://d.repec.org/n?u=RePEc:boc:bocoec:789&r=ecm 
By:  Gary Koop; Dimitris Korobilis 
Abstract:  In this paper we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints with likelihood-based estimation of large systems, we rely on Kalman filter estimation with forgetting factors. We also draw on ideas from the dynamic model averaging literature and extend the TVP-VAR so that its dimension can change over time. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are, thus, computationally simple. An empirical application involving forecasting inflation, real output, and interest rates demonstrates the feasibility and usefulness of our approach. 
Keywords:  Bayesian VAR; forecasting; time-varying coefficients; state-space model 
JEL:  C11 C52 E27 E37 
Date:  2012–01 
URL:  http://d.repec.org/n?u=RePEc:gla:glaewp:2012_04&r=ecm 
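The forgetting-factor idea in this abstract replaces an explicit state-noise covariance with a simple inflation of the predicted coverage. Here is a minimal univariate sketch under invented parameters (a drifting AR(1) coefficient rather than a full TVP-VAR):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) whose coefficient drifts from 0.2 to 0.8 over the sample.
T = 500
phi_true = np.linspace(0.2, 0.8, T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true[t] * y[t - 1] + rng.normal(scale=0.5)

# Kalman filter with a forgetting factor: instead of specifying a state
# noise covariance, the predicted coverage P is inflated by 1/lam each
# period, so the filter gradually discounts old observations.
lam, sigma2 = 0.96, 0.25            # forgetting factor, measurement variance
phi, P = 0.0, 1.0
phi_path = np.zeros(T)
for t in range(1, T):
    P_pred = P / lam                # forgetting-factor prediction step
    x = y[t - 1]                    # regressor: lagged value
    S = x * P_pred * x + sigma2
    K = P_pred * x / S
    phi = phi + K * (y[t] - x * phi)
    P = (1 - K * x) * P_pred
    phi_path[t] = phi
```

With lam = 0.96 the effective estimation window is roughly 1/(1-lam) = 25 observations, which is why the filtered coefficient is able to track the drift.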
By:  Matthieu Droumaguet; Tomasz Wozniak 
Abstract:  Recent economic developments have shown the importance of spillover and contagion effects in financial markets as well as in macroeconomic reality. Such effects are not limited to relations between the levels of variables but also affect the volatility and the distributions. We propose a method of testing restrictions for Granger noncausality on all these levels in the framework of Markov-switching Vector Autoregressive Models. The conditions for Granger noncausality for these models were derived by Warne (2000). Due to the nonlinearity of the restrictions, classical tests have limited use. We, therefore, choose a Bayesian approach to testing. The inference consists of a novel Gibbs sampling algorithm for estimation of the restricted models, and of standard methods of computing the Posterior Odds Ratio. The analysis may be applied to financial and macroeconomic time series with complicated properties, such as changes of parameter values over time and heteroskedasticity. 
Keywords:  Granger Causality; Markov Switching Models; Hypothesis Testing; Posterior Odds Ratio; Gibbs Sampling 
JEL:  C11 C12 C32 C53 E32 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2012/06&r=ecm 
By:  Biørn, Erik (Dept. of Economics, University of Oslo) 
Abstract:  The Generalized Method of Moments (GMM) is discussed for handling the joint occurrence of fixed effects and random measurement errors in an autoregressive panel data model. Finite memory of disturbances, latent regressors and measurement errors is assumed. Two specializations of GMM are considered: (i) using instruments (IVs) in levels for a differenced version of the equation, (ii) using IVs in differences for an equation in levels. Index sets for leads and lags are convenient in examining how the potential IV set, satisfying orthogonality and rank conditions, changes when the memory pattern changes. The joint occurrence of measurement errors with long memory may sometimes give an IV set too small to make estimation possible. On the other hand, problems of ‘IV proliferation’ and ‘weak IVs’ may arise unless the time-series length is small. An application based on data for (log-transformed) capital stock and output from Norwegian manufacturing firms is discussed. Finite sample biases and IV quality are illustrated by Monte Carlo simulations. Overall, with respect to bias and IV strength, GMM inference using the level version of the equation seems superior to inference based on the equation in differences. 
Keywords:  Panel data; Measurement error; Dynamic modeling; ARMA model; GMM; Monte Carlo simulation 
JEL:  C21 C23 C31 C33 C51 E21 
Date:  2012–02–13 
URL:  http://d.repec.org/n?u=RePEc:hhs:osloec:2012_002&r=ecm 
By:  Claudio Pizzi (Department of Economics, University of Venice Ca' Foscari); Francesca Parpinel (Department of Economics, University of Venice Ca' Foscari) 
Abstract:  The well-known SETAR model introduced by Tong belongs to the wide class of TAR models that may be specified in several different ways. Here we propose to consider the delay parameter as endogenous, that is, we make it depend on both the past value and the specific past regime of the series. In particular, we consider a system that switches between two regimes, each of which is a linear autoregression of order p, according to the value assumed by a delayed self-variable compared with an asymmetric threshold; the peculiarity is that the switching rule also depends on the regime in which the system lies at time t-d. In this work we consider two identification procedures: the first follows the classical estimation for SETAR models; the second estimates the model using the Particle Swarm Optimization technique. 
Keywords:  Parameter Estimation, Threshold Autoregressive Models, Particle Swarm Optimization. 
JEL:  C13 C32 C51 C63 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ven:wpaper:2011_26&r=ecm 
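The "classical estimation" route mentioned in the abstract amounts to conditional least squares over a grid of delays and thresholds. The sketch below does this for a plain two-regime SETAR with invented coefficients; it does not implement the paper's endogenous-delay variant or the PSO search:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a two-regime SETAR: the regime at time t depends on whether the
# delayed value y_{t-d} exceeds a threshold r.  Coefficients, delay, and
# noise level are illustrative choices, not the paper's specification.
T, d_true, r_true = 600, 2, 0.0
phi = (0.7, -0.4)                   # AR(1) coefficient in each regime
y = np.zeros(T)
for t in range(3, T):
    reg = int(y[t - d_true] > r_true)
    y[t] = phi[reg] * y[t - 1] + rng.normal(scale=0.3)

def setar_ssr(y, d, r):
    # Conditional least squares given (d, r): fit an AR(1) without
    # intercept in each regime and return the total sum of squared residuals.
    t = np.arange(3, len(y))
    yt, x, z = y[t], y[t - 1], y[t - d]
    ssr = 0.0
    for reg in (0, 1):
        m = (z > r) == bool(reg)
        if m.sum() < 10:            # skip regimes with too few observations
            return np.inf
        b = (x[m] @ yt[m]) / (x[m] @ x[m])
        ssr += np.sum((yt[m] - b * x[m]) ** 2)
    return ssr

# Grid search over the delay and candidate thresholds (sample quantiles).
delays = (1, 2, 3)
thresholds = np.quantile(y, np.linspace(0.15, 0.85, 29))
ssr_best, d_hat, r_hat = min(
    (setar_ssr(y, d, r), d, r) for d in delays for r in thresholds)
```

A heuristic optimizer such as PSO would replace the exhaustive grid with a population of candidate (d, r, coefficient) particles evaluated on the same SSR objective.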
By:  Stanislav Anatolyev (New Economic School); Renat Khabibullin (Barclays Capital); Artem Prokhorov (Concordia University & CIREQ) 
Abstract:  We propose a new procedure for estimating a dynamic joint distribution of a group of assets in a sequential manner, starting from univariate marginals, continuing with pairwise bivariate distributions, then with triplewise trivariate distributions, etc., until the joint distribution for the whole group is constructed. The procedure uses principles and ideas from copula theory to arrive, at each step, at a higher-dimensional distribution utilizing the results from previous steps. The proposed procedure trades the dimensionality of the parameter space for numerous simpler estimations: even though there are more optimization problems to solve, each is of much lower dimension than the joint density estimation problem; in addition, the parameterization tends to be much more flexible. The paper demonstrates how to apply the new sequential technique to model a dynamic distribution of five DJIA constituents. 
Keywords:  multivariate distribution, univariate distribution, copula, asset returns 
JEL:  C13 
Date:  2012–03 
URL:  http://d.repec.org/n?u=RePEc:cfr:cefirw:w0167&r=ecm 
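The marginals-first logic of such a construction can be illustrated with a static Gaussian copula. The sketch below is a loose stand-in, not the authors' procedure: it ignores dynamics and recovers each pairwise copula correlation by Spearman-based moment matching rather than by likelihood estimation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Three dependent series sharing a Gaussian dependence structure but with
# very different marginals (the correlation matrix below is illustrative).
n = 3000
R_true = np.array([[1.0, 0.5, 0.3], [0.5, 1.0, 0.4], [0.3, 0.4, 1.0]])
z = rng.multivariate_normal(np.zeros(3), R_true, size=n)
x = np.column_stack([z[:, 0], np.exp(z[:, 1]), z[:, 2] ** 3])

def spearman(a, b):
    # Rank correlation; invariant to the monotone marginal transforms above.
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Pairwise step of a sequential build: recover each Gaussian-copula
# correlation from the Spearman rank correlation via rho = 2 sin(pi*rs/6).
R_hat = np.eye(3)
for i in range(3):
    for j in range(i + 1, 3):
        rs = spearman(x[:, i], x[:, j])
        R_hat[i, j] = R_hat[j, i] = 2 * np.sin(np.pi * rs / 6)
```

Because the marginals are handled first and only pairwise dependence parameters are estimated afterwards, each optimization stays low-dimensional, which is the trade-off the abstract describes.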
By:  Hsu, Chih-Chiang; Lin, Chang-Ching; Yin, Shou-Yung 
Abstract:  This paper develops panel stochastic frontier models with unobserved common correlated effects. The common correlated effects provide a way of modeling cross-sectional dependence and represent heterogeneous impacts on individuals resulting from unobserved common shocks. Traditional panel stochastic frontier models do not distinguish between common correlated effects and technical inefficiency. In this paper, we propose a modified maximum likelihood estimator (MLE) that does not require estimating unobserved common correlated effects. We show that the proposed method can control the common correlated effects and obtain consistent estimates of parameters and technical efficiency for the panel stochastic frontier model. Our Monte Carlo simulations show that the modified MLE has satisfactory finite sample properties under a significant degree of cross-sectional dependence for relatively small T. The proposed method is also illustrated in applications based on a cross-country comparison of the efficiency of banking industries. 
Keywords:  fixed effects; common correlated effects; factor structure; cross-sectional dependence; stochastic frontier 
JEL:  C23 
Date:  2012–03 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:37313&r=ecm 
By:  Tatsuya Kubokawa (Faculty of Economics, University of Tokyo); Éric Marchand (Département de mathématiques, Université de Sherbrooke); William E. Strawderman (Department of Statistics and Biostatistics, Rutgers University); Jean-Philippe Turcotte (Département de mathématiques, Université de Sherbrooke) 
Abstract:  This paper is concerned with estimation of a predictive density with parametric constraints under Kullback-Leibler loss. When an invariance structure is embedded in the problem, general and unified conditions for the minimaxity of the best equivariant predictive density estimator are derived. These conditions are applied to check minimaxity in various restricted parameter spaces in location and/or scale families. Further, it is shown that the generalized Bayes estimator against the uniform prior over the restricted space is minimax and dominates the best equivariant estimator in a location family when the parameter is restricted to an interval of the form [a0, ∞). Similar findings are obtained for scale parameter families. Finally, the presentation is accompanied by various observations and illustrations, such as normal, exponential location, and gamma model examples. 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2012cf843&r=ecm 
By:  Raffaella Calabrese (Geary Dynamics Lab, Geary Institute, University College Dublin) 
Abstract:  In many settings, the variable of interest is a proportion with a high concentration of data at the boundaries. This paper proposes a regression model for a fractional variable with nontrivial probability masses at the extremes. In particular, the dependent variable is assumed to be a mixed random variable, obtained as the mixture of a Bernoulli and a beta random variable. The extreme values of zero and one are modelled by a logistic regression model. The values belonging to the interval (0,1) are assumed beta distributed and their mean and dispersion are jointly modelled by using two link functions. The regression model proposed here accommodates skewness and heteroscedastic errors. Finally, an application to the loan recovery process of Italian banks is also provided. 
Keywords:  proportions, mixed random variable, beta regression, skewness, heteroscedasticity 
JEL:  B14 
Date:  2012–03–14 
URL:  http://d.repec.org/n?u=RePEc:ucd:wpaper:201209&r=ecm 
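The mixed discrete/continuous likelihood this abstract describes is easy to write down: point masses at 0 and 1 plus a rescaled beta density on the interior. The sketch below evaluates that likelihood at fixed (invented) parameters; the paper additionally links the probabilities, mean, and dispersion to covariates via logistic and beta regressions.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(3)

def beta_logpdf(y, mu, phi):
    # Beta density in mean/precision form: a = mu*phi, b = (1-mu)*phi.
    a, b = mu * phi, (1 - mu) * phi
    return (lgamma(a + b) - lgamma(a) - lgamma(b)
            + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))

def mixture_loglik(y, p0, p1, mu, phi):
    # Mixed random variable: point masses at the boundaries, beta density
    # (rescaled by the remaining probability) on the interior (0, 1).
    if y == 0.0:
        return np.log(p0)
    if y == 1.0:
        return np.log(p1)
    return np.log(1 - p0 - p1) + beta_logpdf(y, mu, phi)

# Simulate a recovery-rate-like sample: 20% zeros, 10% ones, beta interior.
n, p0, p1, mu, phi = 2000, 0.2, 0.1, 0.6, 5.0
u = rng.random(n)
y = np.where(u < p0, 0.0,
     np.where(u < p0 + p1, 1.0, rng.beta(mu * phi, (1 - mu) * phi, n)))
ll = sum(mixture_loglik(v, p0, p1, mu, phi) for v in y)
```

In the full model, p0 and p1 would come from a logistic regression and (mu, phi) from the two link functions mentioned in the abstract, and the log-likelihood above would be maximized over those regression coefficients.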
By:  Michele De Nadai (University of Padova); Arthur Lewbel (Boston College) 
Abstract:  Measurement errors are often correlated, as in surveys where respondents' biases or tendencies to err affect multiple reported variables. We extend Schennach (2007) to identify moments of the conditional distribution of a true Y given a true X when both are measured with error, the measurement errors in Y and X are correlated, and the true unknown model of Y given X has nonseparable model errors. We also provide a nonparametric sieve estimator of the model, and apply it to nonparametric Engel curve estimation. Measurement errors are ubiquitous in expenditure data, and in our application measurement errors on the expenditures of a good Y are by construction correlated with measurement errors in total expenditures X, which is a feature of demand data that has been ignored in almost all previous demand applications. 
Keywords:  Engel curve; errors-in-variables model; Fourier transform; generalized function; sieve estimation. 
JEL:  C10 C14 D12 
Date:  2012–01–15 
URL:  http://d.repec.org/n?u=RePEc:boc:bocoec:790&r=ecm 
By:  Manuel Frondel; Colin Vance 
Abstract:  Interaction effects capture the impact of one explanatory variable x1 on the marginal effect of another explanatory variable x2. To explore interaction effects, so-called interaction terms x1x2 are typically included in estimation specifications. While in linear models the effect of a marginal change in the interaction term is equal to the interaction effect, this equality generally does not hold in nonlinear specifications (Ai and Norton, 2003). This paper provides a general derivation of interaction effects in both linear and nonlinear models and calculates the formulae of the interaction effects resulting from Heckman's sample selection model as well as the Two-Part Model, two regression models commonly applied to data with a large fraction of either missing or zero values in the dependent variable, respectively. Drawing on a survey of automobile use from Germany, we argue that while it is important to test for the significance of interaction effects, their size conveys limited substantive content. More meaningful, and also easier to grasp, are the conditional marginal effects pertaining to two variables that are assumed to interact. 
Keywords:  Truncated regression models; interaction terms 
JEL:  C34 
Date:  2012–01 
URL:  http://d.repec.org/n?u=RePEc:rwi:repape:0309&r=ecm 
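The Ai-Norton point referenced above is concrete in a probit: the interaction effect is the cross-partial of the choice probability, which differs from the interaction coefficient times the density. A small worked example with invented coefficients:

```python
from math import erf, exp, pi, sqrt

# Probit with an interaction term:
# P(y=1|x) = Phi(b0 + b1*x1 + b2*x2 + b12*x1*x2).  Coefficients below are
# illustrative; the point is the gap between b12*phi(z) and the true
# cross-partial d^2 P / (dx1 dx2).
b0, b1, b2, b12 = 0.1, 0.5, -0.3, 0.4

def Phi(z):                         # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def phi(z):                         # standard normal density
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def interaction_effect(x1, x2):
    # Cross-partial of Phi(z(x1, x2)), using phi'(z) = -z * phi(z).
    z = b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2
    g1 = b1 + b12 * x2              # dz / dx1
    g2 = b2 + b12 * x1              # dz / dx2
    return b12 * phi(z) - z * phi(z) * g1 * g2

x1, x2 = 1.0, 1.0
z = b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2
naive = b12 * phi(z)                # "coefficient times density" shortcut
true_ie = interaction_effect(x1, x2)
```

Here the shortcut overstates the interaction effect; at other evaluation points it can even have the wrong sign, which is why the paper recommends reporting conditional marginal effects instead.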
By:  Frisén, Marianne (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University) 
Abstract:  Spatial surveillance is a special case of multivariate surveillance. Thus, in this review of spatial outbreak methods, the relation to general multivariate surveillance approaches is discussed. Different outbreak models are needed for different public health applications. We will discuss methods for the detection of: 1) Spatial clusters of increased incidence, 2) Increased incidence at only one (unknown) location, 3) Simultaneous increase at all locations, 4) Outbreaks with a time lag between the onsets in different regions. Spatial outbreaks are characterized by the relation between the times of the onsets of the outbreaks at different locations. The sufficient reduction plays an important role in finding a likelihood ratio method. The change at the outbreak may be a step change from the non-epidemic period to an increased incidence level. However, errors in the estimation of the baseline have great influence, and nonparametric methods are of interest. For seasonal influenza in Sweden, the outbreak was characterized by a monotonic increase following the constant non-epidemic level. A semiparametric generalized likelihood ratio surveillance method was used. Appropriate evaluation metrics are important since they should agree with the aim of the application. Evaluation in spatial and other multivariate surveillance requires special concern. 
Keywords:  Monitoring; Influenza; Sufficiency; Semiparametric; Generalized likelihood; Timeliness; Predicted value 
JEL:  C10 
Date:  2012–03–05 
URL:  http://d.repec.org/n?u=RePEc:hhs:gunsru:2012_001&r=ecm 
By:  Luiz Renato Regis de Oliveira Lima; Wagner Piazza Gaglianone 
Abstract:  Decision makers often observe point forecasts of the same variable computed, for instance, by commercial banks, the IMF, and the World Bank, but the econometric models used by such institutions are unknown. This paper shows how to use the information contained in point forecasts to compute optimal density forecasts. Our idea builds upon the combination of point forecasts under general loss functions and unknown forecast error distributions. We use real-time data to forecast the density of future inflation in the U.S., and our results indicate that the proposed method materially improves the real-time accuracy of density forecasts vis-à-vis those obtained from the (unknown) individual econometric models. 
Keywords:  forecast combination, quantile regression, density forecast 
JEL:  C13 C14 C51 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:ppg:ppgewp:5&r=ecm 
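The basic move, going from several point forecasts to a density, can be sketched crudely. The example below is only a stand-in for the paper's quantile-regression approach: it combines two simulated forecasters by least squares and then attaches empirical quantiles of the combination errors, leaving the error distribution unspecified. All series and magnitudes are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two biased, noisy point forecasters of the same target series.
n = 400
y = rng.normal(2.0, 1.0, n)
f1 = y + 0.5 + rng.normal(0.0, 0.8, n)    # upward-biased forecaster
f2 = y - 0.3 + rng.normal(0.0, 0.6, n)    # downward-biased forecaster

# Step 1: combine the point forecasts (least squares here, a simple
# stand-in for combination under a general loss function).
X = np.column_stack([np.ones(n), f1, f2])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ w

# Step 2: turn the combined point forecast into a density forecast by
# adding the empirical quantiles of past combination errors.
taus = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
x_new = np.array([1.0, 2.5, 1.8])         # intercept + a new pair of forecasts
density_q = x_new @ w + np.quantile(resid, taus)
```

Fitting the combination weights separately at each quantile level, as quantile regression does, would let the weights themselves vary across the predictive distribution instead of shifting a single point forecast.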
By:  Helmut Luetkepohl 
Abstract:  Multivariate simultaneous equations models were used extensively for macroeconometric analysis when Sims (1980) advocated vector autoregressive (VAR) models as alternatives. At that time longer and more frequently observed macroeconomic time series called for models which described the dynamic structure of the variables. VAR models lend themselves to this purpose. They typically treat all variables as a priori endogenous. Thereby they account for Sims' critique that the exogeneity assumptions for some of the variables in simultaneous equations models are ad hoc and often not backed by fully developed theories. Restrictions, including exogeneity of some of the variables, may be imposed on VAR models based on statistical procedures. VAR models are natural tools for forecasting. Their setup is such that current values of a set of variables are partly explained by past values of the variables involved. They can also be used for economic analysis, however, because they describe the joint generation mechanism of the variables involved. Structural VAR analysis attempts to investigate structural economic hypotheses with the help of VAR models. Impulse response analysis, forecast error variance decompositions, historical decompositions and the analysis of forecast scenarios are the tools which have been proposed for disentangling the relations between the variables in a VAR model. Traditionally, VAR models are designed for stationary variables without time trends. Trending behavior can be captured by including deterministic polynomial terms. In the 1980s the discovery of the importance of stochastic trends in economic variables and the development of the concept of cointegration by Granger (1981), Engle and Granger (1987), Johansen (1995) and others have shown that stochastic trends can also be captured by VAR models. 
If there are trends in some of the variables it may be desirable to separate the longrun relations from the shortrun dynamics of the generation process of a set of variables. Vector error correction models offer a convenient framework for separating longrun and shortrun components of the data generation process (DGP). In the present chapter levels VAR models are considered where cointegration relations are not modelled explicitly although they may be present. Specific issues related to trending variables will be mentioned occasionally throughout the chapter. The advantage of levels VAR models over vector error correction models is that they can also be used when the cointegration structure is unknown. Cointegration analysis and error correction models are discussed specifically in the next chapter. 
JEL:  C32 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2011/30&r=ecm 
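The core mechanics this chapter surveys, least-squares estimation of a levels VAR and impulse responses, fit in a few lines. A minimal bivariate sketch with an invented coefficient matrix:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a stationary bivariate VAR(1): y_t = A y_{t-1} + u_t.
A_true = np.array([[0.5, 0.1], [0.2, 0.4]])
T = 800
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(size=2) * 0.5

# Multivariate least squares: regress current values on lagged values,
# equation by equation (lstsq handles both columns at once).
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# (Reduced-form) impulse responses: A^h propagates a unit shock h periods
# ahead; structural identification would additionally require orthogonalizing
# the shocks, e.g. via a Cholesky factor of the residual covariance.
H = 10
irf = np.stack([np.linalg.matrix_power(A_hat, h) for h in range(H + 1)])
```

Because the largest eigenvalue of A_true is well inside the unit circle, the estimated impulse responses die out, which is the stationary case the chapter treats before turning to stochastic trends and cointegration.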
By:  Arthur Lewbel (Boston College); Krishna Pendakur (Simon Fraser University) 
Abstract:  We propose a generalization of random coefficients models, in which the regression model is additive (or additive with interactions) rather than linear, and each regressor is multiplied by an unobserved error. We show nonparametric identification of the model. In addition to providing a natural generalization of random coefficients, we provide economic motivations for the model based on demand system estimation. In these applications, the random coefficients can be interpreted as random utility parameters that take the form of Engel scales or Barten scales, which in the past were estimated as deterministic preference heterogeneity or household technology parameters. We apply these results to consumer surplus and related welfare calculations. 
Keywords:  unobserved heterogeneity, nonseparable errors, random utility parameters, random coefficients, equivalence scales, consumer surplus, welfare calculations. 
JEL:  C14 D12 D13 C21 
Date:  2012–02–15 
URL:  http://d.repec.org/n?u=RePEc:boc:bocoec:791&r=ecm 
By:  Josef Teichmann; Mario V. W\"uthrich 
Abstract:  We present an arbitrage-free nonparametric yield curve prediction model which takes the full (discretized) yield curve as state variable. We believe that absence of arbitrage is an important model feature in the case of highly correlated data, as is the case for interest rates. Furthermore, the model structure allows one to clearly separate the tasks of estimating the volatility structure and of calibrating market prices of risk. The empirical part includes tests on modeling assumptions, backtesting and a comparison with the Vasiček short rate model. 
Date:  2012–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1203.2017&r=ecm 
By:  Rothe, Christoph (Toulouse School of Economics) 
Abstract:  This paper proposes a decomposition of the composition effect, i.e. the part of the observed between-group difference in the distribution of some economic outcome that can be explained by differences in the distribution of covariates. Our decomposition contains three types of components: (i) the "direct contributions" of each covariate due to between-group differences in the respective marginal distributions, (ii) several "two-way" and "higher-order" interaction effects due to the interplay between two or more covariates' marginal distributions, and (iii) a "dependence effect" accounting for between-group differences in dependence patterns among the covariates. Our method can be used to decompose differences in arbitrary distributional features, such as quantiles or inequality measures, and allows for general nonlinear relationships between the outcome and the covariates. It can easily be implemented in practice using standard econometric techniques. An application to wage data from the US illustrates the empirical relevance of the decomposition's components. 
Keywords:  counterfactual distribution, decomposition methods 
JEL:  C13 C18 C21 J31 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp6397&r=ecm 
By:  Tolga Omay (Cankaya University, Department of International Trade Management); Mubariz Hasanov (Hacettepe University, Department of Economics); Nuri Uçar (Hacettepe University, Department of Economics) 
Abstract:  In this paper, we propose a nonlinear cointegration test for heterogeneous panels where the alternative hypothesis is an exponential smooth transition (ESTAR) model. We apply our test to investigate the cointegration relationship between energy consumption and economic growth for the G7 countries over the period 1977–2007. Moreover, we estimate a nonlinear Panel Vector Error Correction Model in order to analyze the direction of the causality between energy consumption and economic growth. Using nonlinear causality tests, we analyze the causality relationships in low and high economic growth regimes. Furthermore, we address the cross-section dependency problem in both the nonlinear panel cointegration test and the nonlinear Panel Vector Error Correction Model. 
Keywords:  Nonlinear panel cointegration; nonlinear Panel Vector Error Correction Model; cross-section dependency 
JEL:  C12 C22 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:hac:hacwop:20130&r=ecm 
By:  Walter Krämer; Philip Mess 
Abstract:  We extend the well-established link between structural change and estimated persistence from GARCH to stochastic volatility (SV) models. Whenever structural changes in some model parameters increase the empirical autocorrelations of the squares of the underlying time series, the persistence in volatility implied by the estimated model parameters follows suit. This explains why stochastic volatility often appears more persistent when estimated from a larger sample, since a longer sample is more likely to contain a structural change. 
Keywords:  Persistence; stochastic volatility; structural change 
JEL:  C32 C58 
Date:  2012–01 
URL:  http://d.repec.org/n?u=RePEc:rwi:repape:0310&r=ecm 
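The mechanism in this abstract, a variance break inflating the autocorrelation of squares, can be reproduced in a toy simulation. The design below is illustrative (i.i.d. regimes rather than an estimated SV model): neither regime has any true volatility dynamics, yet the pooled sample spanning the break shows spuriously persistent squared returns.

```python
import numpy as np

rng = np.random.default_rng(6)

def acf1(x):
    # First-order sample autocorrelation.
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

# Two i.i.d. regimes with different variances: no volatility dynamics
# within either regime, only a one-time structural change between them.
n = 5000
r1 = rng.normal(0.0, 1.0, n)
r2 = rng.normal(0.0, 2.0, n)
pooled = np.concatenate([r1, r2])     # sample spanning the break

within = max(acf1(r1 ** 2), acf1(r2 ** 2))   # ~0: no true persistence
across = acf1(pooled ** 2)                   # positive: break mimics persistence
```

Any persistence measure fitted to the pooled squares, whether a GARCH parameter sum or an SV autoregressive coefficient, would pick up this spurious autocorrelation, which is the paper's explanation for rising estimated persistence in longer samples.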