
on Econometrics 
By:  Russell Davidson (McGill University); James G. MacKinnon (Queen's University) 
Abstract:  Despite much recent work on the finite-sample properties of estimators and tests for linear regression models with a single endogenous regressor and weak instruments, little attention has been paid to tests for overidentifying restrictions in these circumstances. We study asymptotic tests for overidentification in models estimated by instrumental variables and by limited-information maximum likelihood. We show that all the test statistics, like the ones used for inference on the regression coefficient, are functions of only six quadratic forms in the two endogenous variables of the model. They are closely related to the well-known test statistic of Anderson and Rubin. The distributions of the overidentification statistics are shown to have an ill-defined limit as the strength of the instruments tends to zero along with a parameter related to the correlation between the disturbances of the two equations of the model. Simulation experiments demonstrate that this makes it impossible to perform reliable inference near the point at which the limit is ill-defined. Several bootstrap procedures are proposed. They alleviate the problem and allow reliable inference when the instruments are not too weak. 
Keywords:  Sargan test, Basmann test, Anderson-Rubin test, weak instruments, bootstrap P value 
JEL:  C10 C12 C15 C30 
Date:  2013–09 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1318&r=ecm 
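The Sargan statistic discussed above has a compact textbook form: n times the uncentered R-squared from regressing the 2SLS residuals on the instruments. A minimal numpy sketch; the simulated design, instrument strength, and degree of endogeneity below are illustrative choices, not taken from the paper:

```python
import numpy as np

def sargan_statistic(y, X, Z):
    """Sargan test of overidentifying restrictions for linear IV.

    y : (n,) outcome, X : (n, k) endogenous regressors, Z : (n, l)
    instruments with l > k. Returns n times the uncentered R-squared from
    regressing the 2SLS residuals on Z; asymptotically chi-squared with
    l - k degrees of freedom when the instruments are valid.
    """
    n = len(y)
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)       # projection onto span(Z)
    X_hat = Pz @ X                               # first-stage fitted values
    beta = np.linalg.solve(X_hat.T @ X, X_hat.T @ y)   # 2SLS coefficient
    u = y - X @ beta                             # 2SLS residuals
    return n * (u @ Pz @ u) / (u @ u)

rng = np.random.default_rng(0)
n = 2000
Z = rng.standard_normal((n, 3))                  # 3 instruments, 1 regressor
v = rng.standard_normal(n)
x = Z @ np.array([1.0, 1.0, 1.0]) + v            # strong first stage
u = 0.5 * v + rng.standard_normal(n)             # endogeneity via common shock
y = 2.0 * x + u
J = sargan_statistic(y, x[:, None], Z)           # compare to chi-squared(2)
```

Under weak instruments the chi-squared reference distribution becomes unreliable, which is where the bootstrap procedures proposed in the paper come in.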
By:  Emmanuela Bernardini (Banca d'Italia); Gianluca Cubadda (University of Rome "Tor Vergata") 
Abstract:  This paper proposes a strategy to detect and impose reduced-rank restrictions in medium vector autoregressive models. In this framework, it is known that Canonical Correlation Analysis (CCA) does not perform well because inversions of large covariance matrices are required. We propose a method that combines the richness of reduced-rank regression with the simplicity of naive univariate forecasting methods. In particular, we suggest using a proper shrinkage estimator of the autocovariance matrices involved in the computation of CCA, thus obtaining a method that is asymptotically equivalent to CCA but numerically more stable in finite samples. Simulations and empirical applications document the merits of the proposed approach both in forecasting and in structural analysis. 
Keywords:  Reduced rank regression; vector autoregressive models; shrinkage estimation; macroeconomic forecasting. 
JEL:  C32 
Date:  2013–10–03 
URL:  http://d.repec.org/n?u=RePEc:rtv:ceisrp:289&r=ecm 
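A simple way to see the kind of regularization the abstract describes is linear shrinkage of a nearly singular sample covariance matrix toward its diagonal. The shrinkage intensity and target below are illustrative stand-ins, not the paper's estimator:

```python
import numpy as np

def shrink_to_diagonal(S, lam):
    """Linear shrinkage of a covariance matrix toward its diagonal.

    A stand-in for the regularized autocovariance estimators fed into
    canonical correlation analysis: the shrunk matrix is better
    conditioned, so it can be inverted stably when the dimension is large
    relative to the sample size.
    """
    return (1.0 - lam) * S + lam * np.diag(np.diag(S))

rng = np.random.default_rng(1)
T, N = 60, 40                          # short sample, many variables
X = rng.standard_normal((T, N))
S = np.cov(X, rowvar=False)            # badly conditioned sample covariance
S_shrunk = shrink_to_diagonal(S, lam=0.5)

cond_raw = np.linalg.cond(S)
cond_shrunk = np.linalg.cond(S_shrunk)  # substantially smaller
```

The shrunk matrix stays positive definite while its condition number drops, which is what makes the downstream CCA computations numerically stable.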
By:  Tsunehiro Ishihara (Department of Economics, Hitotsubashi University); Yasuhiro Omori (Faculty of Economics, University of Tokyo); Manabu Asai (Faculty of Economics, Soka University) 
Abstract:  A multivariate stochastic volatility model with dynamic correlation and the cross leverage effect is described, and an efficient estimation method using Markov chain Monte Carlo is proposed. The time-varying covariance matrices are guaranteed to be positive definite by using a matrix exponential transformation. Of particular interest is our approach for sampling a set of latent matrix logarithm variables from their conditional posterior distribution, where we construct the proposal density based on an approximating linear Gaussian state space model. The proposed model and its extended models with fat-tailed error distributions are applied to trivariate returns data (daily stocks, bonds, and exchange rates) of Japan. Further, a model comparison is conducted, including constant correlation multivariate stochastic volatility models with leverage. 
Date:  2013–09 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2013cf904&r=ecm 
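The positive-definiteness guarantee from the matrix exponential transformation is easy to verify: any symmetric matrix maps to a symmetric positive definite one. A short sketch via eigendecomposition; the latent state here is an arbitrary symmetric matrix, not drawn from the paper's model:

```python
import numpy as np

def expm_sym(A):
    """Matrix exponential of a symmetric matrix via eigendecomposition.

    Any symmetric A (e.g. a latent matrix-log volatility state) maps to
    exp(A) = V diag(exp(w)) V', which is symmetric positive definite by
    construction, so the implied covariance matrix is always valid
    whatever values the latent state takes.
    """
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
A = 0.5 * (B + B.T)              # arbitrary symmetric "matrix log" state
Sigma = expm_sym(A)              # implied covariance matrix, always valid
```

Note that exp of the zero matrix is the identity, so the transformation nests an uncorrelated unit-variance benchmark.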
By:  DANIEL PREVE (City University of Hong Kong); XIJIA LIU (Uppsala University) 
Abstract:  In this note we consider certain measure of location-based estimators (MLBEs) for the slope parameter in a linear regression model with a single stochastic regressor. The median-unbiased MLBEs are interesting as they can be robust to heavy-tailed samples and, hence, preferable to the ordinary least squares estimator (LSE). Two different cases are considered as we investigate the statistical properties of the MLBEs. In the first case, the regressor and error are assumed to follow a symmetric stable distribution. In the second, other types of regressions, with potentially contaminated errors, are considered. For both cases the consistency and exact finite-sample distributions of the MLBEs are established. Some results for the corresponding limiting distributions are also provided. In addition, we illustrate how our results can be extended to include certain heteroskedastic and multiple regressions. Finite-sample properties of the MLBEs in comparison to the LSE are investigated in a simulation study. 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:skb:wpaper:cofie022013&r=ecm 
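A hedged illustration of the location-based idea: for a no-intercept model y = beta * x + u, the median of the ratios y_i / x_i is a median-unbiased slope estimate that survives Cauchy (symmetric stable) errors, under which least squares does not converge. The note's MLBEs are more general; this sketch conveys only the flavor:

```python
import numpy as np

def median_ratio_slope(x, y):
    """Median-of-ratios slope estimator for y = beta * x + u (no intercept).

    Each ratio y_i / x_i equals beta + u_i / x_i, and u_i / x_i has median
    zero when the errors are symmetric about zero, so the sample median is
    a median-unbiased, heavy-tail-robust estimate of beta. (Illustrative
    only; the paper's estimators are more general than this.)
    """
    return np.median(y / x)

rng = np.random.default_rng(7)
n, beta = 5000, 1.5
x = rng.standard_normal(n)
u = rng.standard_cauchy(n)            # symmetric stable errors, no moments
y = beta * x + u
est = median_ratio_slope(x, y)        # close to 1.5
ols = (x @ y) / (x @ x)               # the LSE; inconsistent under Cauchy errors
```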
By:  Tommaso Proietti (University of Rome "Tor Vergata"); Alessandra Luati (University of Bologna) 
Abstract:  In this chapter we consider a class of parametric spectrum estimators based on a generalized linear model for exponential random variables with power link. The power transformation of the spectrum of a stationary process can be expanded in a Fourier series, with the coefficients representing generalised autocovariances. Direct Whittle estimation of the coefficients is generally infeasible, as they are subject to constraints (the autocovariances need to be a positive semidefinite sequence). The problem can be overcome by using an ARMA representation for the power transformation of the spectrum. Estimation is carried out by maximising the Whittle likelihood, whereas the selection of a spectral model, as a function of the power transformation parameter and the ARMA orders, can be carried out by information criteria. The proposed methods are applied to the estimation of the inverse autocorrelation function and the related problem of selecting the optimal interpolator, and for the identification of spectral peaks. More generally, they can be applied to spectral estimation with possibly misspecified models. 
Date:  2013–10–03 
URL:  http://d.repec.org/n?u=RePEc:rtv:ceisrp:290&r=ecm 
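Whittle estimation, the workhorse of the chapter, maximizes a frequency-domain approximation to the Gaussian likelihood built from the periodogram. A minimal example for an AR(1) spectrum with unit innovation variance, a deliberately simpler model than the chapter's generalized linear spectral family:

```python
import numpy as np

def whittle_negloglik(phi, x):
    """Negative Whittle log-likelihood for a zero-mean AR(1) with unit
    innovation variance, evaluated at the positive Fourier frequencies.
    """
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, n // 2) / n
    dft = np.fft.fft(x)[1 : n // 2]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)          # periodogram
    f = 1.0 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * freqs)) ** 2)
    return np.sum(np.log(f) + I / f)

# simulate an AR(1) with phi = 0.6 and discard a burn-in
rng = np.random.default_rng(3)
n = 2000
e = rng.standard_normal(n + 100)
x = np.zeros(n + 100)
for t in range(1, n + 100):
    x[t] = 0.6 * x[t - 1] + e[t]
x = x[100:]

# grid-search Whittle estimate of the AR coefficient
grid = np.linspace(-0.9, 0.9, 181)
phi_hat = grid[np.argmin([whittle_negloglik(p, x) for p in grid])]
```

In the chapter the same criterion is maximized over the parameters of an ARMA representation for a power transformation of the spectrum; here a one-parameter grid search suffices to show the mechanics.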
By:  Angelo Mele (Johns Hopkins University  Carey Business School) 
Abstract:  This paper proposes approximate variational inference methods for estimation of a strategic model of social interactions. Players interact in an exogenous network and sequentially choose a binary action. The utility of an action is a function of the choices of neighbors in the network. I prove that the interaction process can be represented as a potential game and it converges to a unique stationary equilibrium distribution. However, exact inference for this model is infeasible because of a computationally intractable likelihood, which cannot be evaluated even when there are few players. To overcome this problem, I propose variational approximations for the likelihood that allow approximate inference. This technique can be applied to any discrete exponential family, and therefore it is a general tool for inference in models with a large number of players. The methodology is illustrated with several simulated datasets and compared with MCMC methods. 
Keywords:  Variational approximations, Bayesian Estimation, Social Interactions 
JEL:  D85 C13 C73 
Date:  2013–09 
URL:  http://d.repec.org/n?u=RePEc:net:wpaper:1316&r=ecm 
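The variational idea can be sketched with a mean-field approximation for a binary-action game on a fixed network: each player's marginal probability of choosing action 1 solves a logistic fixed point in the neighbors' marginals. The utility specification and parameter values below are hypothetical and far simpler than the paper's model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_field(adj, alpha, beta, iters=200):
    """Mean-field variational approximation for a binary-action network game.

    Assumes a hypothetical utility alpha + beta * (number of neighbors
    choosing 1) with logistic shocks; each player's marginal probability mu_i
    is iterated to the fixed point mu = sigmoid(alpha + beta * A mu),
    replacing an intractable joint distribution with a product of marginals.
    """
    mu = np.full(adj.shape[0], 0.5)
    for _ in range(iters):
        mu = sigmoid(alpha + beta * adj @ mu)
    return mu

# small ring network of 6 players
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[i, (i - 1) % n] = 1.0
mu = mean_field(adj, alpha=-1.0, beta=1.2)   # approximate choice marginals
```

The appeal, as the abstract notes, is that this fixed-point computation scales to many players, whereas the exact likelihood cannot be evaluated even for a few.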
By:  Stephan B. Bruns (Max Planck Institute of Economics, Jena) 
Abstract:  Meta-regression models are increasingly utilized to integrate empirical results across studies while controlling for the potential threats of data mining and publication bias. We propose extended meta-regression models and evaluate their performance in identifying genuine empirical effects by means of a comprehensive simulation study for various scenarios that are prevalent in empirical economics. We show that the meta-regression models proposed here systematically outperform the prior gold standard of meta-regression analysis of regression coefficients. Most meta-regression models are robust to the presence of publication bias, but data-mining bias leads to seriously inflated type I errors and has to be addressed explicitly. 
Keywords:  Meta-regression, meta-analysis, publication bias, data mining, Monte Carlo simulation 
JEL:  C12 C15 C40 
Date:  2013–09–27 
URL:  http://d.repec.org/n?u=RePEc:jrp:jrpwrp:2013040&r=ecm 
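The baseline that the paper's extended models build on can be illustrated by the standard FAT-PET meta-regression of reported t-statistics on precision. The simulated study design below is illustrative, not one of the paper's scenarios:

```python
import numpy as np

def fat_pet(estimates, ses):
    """FAT-PET meta-regression: t_i = b0 + b1 * (1/se_i) + error.

    b0 far from zero signals publication bias (funnel asymmetry test,
    FAT); b1 estimates the genuine underlying effect (precision-effect
    test, PET). This is the standard baseline, not the paper's extensions.
    """
    t = estimates / ses
    X = np.column_stack([np.ones_like(ses), 1.0 / ses])
    b, *_ = np.linalg.lstsq(X, t, rcond=None)
    return b   # [b0 (bias term), b1 (genuine effect)]

rng = np.random.default_rng(6)
m, true_effect = 400, 0.5
ses = rng.uniform(0.1, 1.0, m)                     # study precision varies
estimates = true_effect + ses * rng.standard_normal(m)  # no publication bias
b0, b1 = fat_pet(estimates, ses)                   # b1 near 0.5, b0 near 0
```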
By:  Sukjin Han (Department of Economics, University of Texas at Austin); Edward J. Vytlacil (Department of Economics, New York University) 
Abstract:  This paper provides identification results for a class of models specified by a triangular system of two equations with binary endogenous variables. The joint distribution of the latent error terms is specified through a parametric copula structure, including the normal copula as a special case, while the marginal distributions of the latent error terms are allowed to be arbitrary but known. This class of models includes bivariate probit models as a special case. The paper demonstrates that an exclusion restriction is necessary and sufficient for global identification of the model parameters, with the excluded variable allowed to be binary. Based on this result, identification is achieved in a full model in which the common exogenous regressors present in both equations and the excluded instruments may be more general than discretely distributed. 
Keywords:  Identification, triangular threshold crossing model, bivariate probit model, endogenous variables, binary response, copula, exclusion restriction 
JEL:  C35 C36 
Date:  2013–09 
URL:  http://d.repec.org/n?u=RePEc:tex:wpaper:130908&r=ecm 
By:  Zinn, Jesse 
Abstract:  The weighted updating model is a generalization of Bayesian updating that allows for biased beliefs by weighting the functions that constitute Bayes' rule with real exponents. I provide an axiomatic basis for this framework and show that weighting a distribution affects the information entropy of the resulting distribution. This result provides the interpretation that weighted updating models biases in which individuals misjudge the information content of data. I augment the base model in two ways, allowing it to account for additional biases. The first augmentation allows for discrimination between data. The second allows the weights to vary over time. I also find a set of sufficient conditions for the uniqueness of parameter estimation through maximum likelihood, with log-concavity playing a key role. An application shows that self-attribution bias can lead to optimism bias. 
Keywords:  Bayesian Updating, Cognitive Biases, Learning, Uncertainty 
JEL:  C02 D03 
Date:  2013–09–30 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:50310&r=ecm 
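The core of the weighted updating model is easy to state: the posterior is proportional to prior^a times likelihood^b, with a = b = 1 recovering Bayes' rule. A discrete-state sketch of the entropy effect the abstract mentions; the prior and likelihood values are made up for illustration:

```python
import numpy as np

def weighted_update(prior, lik, a=1.0, b=1.0):
    """Weighted updating: posterior proportional to prior**a * lik**b.

    a = b = 1 recovers Bayes' rule; b < 1 underweights the data, b > 1
    overweights it (a discrete-state illustration of the model).
    """
    p = prior ** a * lik ** b
    return p / p.sum()

def entropy(p):
    return -np.sum(p * np.log(p))

prior = np.array([0.25, 0.25, 0.25, 0.25])
lik = np.array([0.7, 0.2, 0.05, 0.05])       # likelihood of the observed datum

bayes = weighted_update(prior, lik)           # standard Bayesian posterior
under = weighted_update(prior, lik, b=0.5)    # data underweighted
over = weighted_update(prior, lik, b=2.0)     # data overweighted
```

Underweighting the likelihood leaves the posterior more diffuse (higher entropy) than the Bayesian posterior, while overweighting concentrates it, which is the sense in which the weights model a mistaken assessment of the data's information content.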
By:  Ismael MOURIFIÉ 
Abstract:  This paper considers the evaluation of the average treatment effect (ATE) in a triangular system with binary dependent variables. I impose a threshold crossing model on both the endogenous regressor and the outcome. No parametric functional form or distributional assumptions are imposed. Shaikh and Vytlacil (2011, SV) proposed bounds on the ATE that are sharp only under a restrictive condition on the support of the covariates and the instruments, which rules out a wide range of models and many relevant applications. In some cases, when SV's support condition fails, their bounds retrieve the same empirical content as the model with an unrestricted endogenous regressor. In this setting, I provide a methodology that allows the construction of sharp bounds on the ATE by efficiently using variation in the covariates without imposing support restrictions. 
Keywords:  partial identification, threshold crossing model, triangular system, average treatment effect, endogeneity, social program evaluation. 
JEL:  C14 C31 C35 
Date:  2013–10–01 
URL:  http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa498&r=ecm 
By:  Heckman, James J. (University of Chicago); Pinto, Rodrigo (University of Chicago) 
Abstract:  Haavelmo's seminal 1943 paper is the first rigorous treatment of causality. In it, he distinguished the definition of causal parameters from their identification. He showed that causal parameters are defined using hypothetical models that assign variation to some of the inputs determining outcomes while holding all other inputs fixed. He thus formalized and made operational Marshall's (1890) ceteris paribus analysis. We embed Haavelmo's framework into the recursive framework of Directed Acyclic Graphs (DAGs) used in one influential recent approach to causality (Pearl, 2000) and in the related literature on Bayesian nets (Lauritzen, 1996). We compare an approach based on Haavelmo's methodology with a standard approach in the causal literature of DAGs, the "do-calculus" of Pearl (2009). We discuss the limitations of DAGs, and in particular of the do-calculus of Pearl, in securing identification of economic models. We extend our framework to consider models for simultaneous causality, a central contribution of Haavelmo (1944). In general cases, DAGs cannot be used to analyze models for simultaneous causality, but Haavelmo's approach naturally generalizes to cover it. 
Keywords:  causality, identification, do-calculus, directed acyclic graphs, simultaneous treatment effects 
JEL:  C10 C18 
Date:  2013–09 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp7628&r=ecm 
By:  Line Elvstrøm Ekner (Department of Economics, Copenhagen University); Emil Nejstgaard (Department of Economics, Copenhagen University) 
Abstract:  We propose a new and simple parametrization of the so-called speed of transition parameter of the logistic smooth transition autoregressive (LSTAR) model. The new parametrization highlights that a consequence of the well-known identification problem of the speed of transition parameter is that the threshold autoregression (TAR) is a limiting case of the LSTAR process. We demonstrate how this fact impedes numerical optimization of the original parametrization, whereas this is not the case for the new parametrization. Next, we show that information criteria provide a tool to choose between an LSTAR model and a TAR model, a choice previously based solely on economic theory. Re-estimation of two published applications illustrates the usefulness of our findings. 
Date:  2013–09–19 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:1307&r=ecm 
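The limiting-case relationship between LSTAR and TAR is visible directly in the logistic transition function: as the speed-of-transition parameter grows, the smooth transition approaches a step. A short sketch using the standard LSTAR notation, not the paper's new parametrization:

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """LSTAR transition function G(s; gamma, c) = 1 / (1 + exp(-gamma*(s - c))).

    gamma (the "speed of transition") controls smoothness: small gamma gives
    a gradual regime change, and as gamma grows G approaches the indicator
    1{s > c}, i.e. the TAR model as a limiting case. The argument is clipped
    only to avoid floating-point overflow in exp for very large gamma.
    """
    z = np.clip(gamma * (np.asarray(s, dtype=float) - c), -50.0, 50.0)
    return 1.0 / (1.0 + np.exp(-z))

s = np.linspace(-2, 2, 401)                       # transition variable
smooth = logistic_transition(s, gamma=1.0, c=0.0)   # gradual regime change
sharp = logistic_transition(s, gamma=100.0, c=0.0)  # effectively a TAR step
step = (s > 0).astype(float)
```

At large gamma the likelihood is nearly flat in gamma (doubling 100 barely changes G anywhere), which is the numerical-optimization problem the paper's reparametrization addresses.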
By:  Yolanda F. RebolloSanz (Department of Economics, Universidad Pablo de Olavide); Ainara González de San Román (Instituto de Empresa) 
Abstract:  The main contribution of this paper is to provide researchers with a new estimation method suitable for censored models with two high-dimensional fixed effects. This new estimation method is based on a sequence of least squares regressions. In practice, use of this method can result in significant savings in computing time, and it is applicable to datasets where the number of fixed effects makes standard estimation techniques infeasible. In addition, the paper both analyses the theoretical properties of the procedure and evaluates its practical performance by means of a Monte Carlo simulation study. Finally, it describes an application to the Spanish economy using a Spanish longitudinal matched employer-employee dataset which provides wage information on the working population over a 13-year period. In particular, this paper contributes to the empirical literature on wage determination by providing the first decomposition of individual wages for Spain that takes into account both worker and firm effects after adjusting for censoring. This empirical exercise shows that the biases encountered when censoring is not taken into account can be large enough to overestimate the role of firm effects in wage dispersion. In our empirical research, individual heterogeneity explains more than 60% of wage dispersion. 
Keywords:  fixed effects, algorithm, wage decomposition, censoring, simulation, assortative matching 
JEL:  I21 I24 J16 J31 
Date:  2013–09 
URL:  http://d.repec.org/n?u=RePEc:pab:wpaper:13.05&r=ecm 
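The "sequence of least squares regressions" idea for two high-dimensional fixed effects can be sketched, without the paper's censoring correction, as alternating demeaning (alternating projections) over worker and firm groups:

```python
import numpy as np

def two_way_demean(y, worker, firm, iters=200):
    """Alternately remove worker-group and firm-group means.

    Converges to the residual of y on the two full sets of group dummies,
    the building block of two-way fixed-effects estimation when explicit
    dummy matrices are too large to form. (The paper adds a censoring
    correction on top of such least-squares sweeps; this is the uncensored
    core only.)
    """
    r = y.astype(float).copy()
    for _ in range(iters):
        r -= np.bincount(worker, r)[worker] / np.bincount(worker)[worker]
        r -= np.bincount(firm, r)[firm] / np.bincount(firm)[firm]
    return r

rng = np.random.default_rng(4)
n, W, F = 500, 50, 20                       # 500 jobs, 50 workers, 20 firms
worker = rng.integers(0, W, n)
firm = rng.integers(0, F, n)
y = rng.standard_normal(W)[worker] + rng.standard_normal(F)[firm] \
    + rng.standard_normal(n)                # worker effect + firm effect + noise
r = two_way_demean(y, worker, firm)         # both sets of group means near zero
```

Each sweep is just a grouped mean, so memory and time scale with the number of observations rather than with the number of fixed effects.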
By:  Vincent BOUCHER; Ismael MOURIFIÉ 
Abstract:  We explore the asymptotic properties of pairwise stable networks (Jackson and Wolinsky, 1996). Specifically, we want to recover a set of parameters from the individuals' utility functions using the observation of a single pairwise stable network. We develop a pseudo maximum likelihood estimator and show that it is consistent and asymptotically normally distributed under a very weak version of homophily. The approach is compelling as it provides explicit, easy-to-check conditions on the admissible set of preferences. Moreover, the method is easily implementable using preprogrammed estimators available in most statistical packages. We provide an application of our method using the Add Health database. 
Keywords:  social network, pairwise stability, spatial econometrics 
JEL:  C13 D85 
Date:  2013–10–01 
URL:  http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa499&r=ecm 
By:  Fleten, Stein-Erik; Paraschiv, Florentina; Schürle, Michel 
Abstract:  We propose a novel regime-switching approach for the simulation of electricity spot prices that is inspired by the class of fundamental models and takes into account the relation between spot and forward prices. Additionally, the model is able to reproduce spikes and negative prices. Market prices are derived given an observed forward curve. We distinguish between a base regime and an upper as well as a lower spike regime. The model parameters are calibrated using historical hourly price forward curves for EEX Phelix and the dynamics of hourly spot prices. We further evaluate different time series models, such as ARMA and GARCH, that are usually applied for modeling electricity prices, and conclude that the proposed regime-switching model performs better. 
Keywords:  electricity prices, regime-switching model, negative prices, spikes, price forward curves 
Date:  2013–07 
URL:  http://d.repec.org/n?u=RePEc:usg:sfwpfi:2013:11&r=ecm 
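A bare-bones version of a three-regime switching simulation, with a base regime and upper and lower spike regimes as described above; the transition probabilities and price levels are invented for illustration, not calibrated to EEX data:

```python
import numpy as np

def simulate_regime_switching(P, means, sds, n, rng):
    """Simulate prices from a three-regime Markov-switching model.

    Regime 0 is the base regime; regimes 1 and 2 are upper and lower spike
    regimes, so the path shows both positive spikes and negative prices.
    (Fixed regime means here; the paper instead ties prices to observed
    forward curves.)
    """
    states = np.zeros(n, dtype=int)
    for t in range(1, n):
        states[t] = rng.choice(len(P), p=P[states[t - 1]])
    prices = means[states] + sds[states] * rng.standard_normal(n)
    return states, prices

P = np.array([[0.96, 0.02, 0.02],      # base -> base / up-spike / down-spike
              [0.50, 0.50, 0.00],      # spikes are short-lived
              [0.50, 0.00, 0.50]])
means = np.array([40.0, 150.0, -20.0])   # EUR/MWh levels, purely illustrative
sds = np.array([5.0, 30.0, 10.0])
rng = np.random.default_rng(8)
states, prices = simulate_regime_switching(P, means, sds, 5000, rng)
```

Even this toy chain reproduces the two stylized facts the abstract emphasizes: occasional sharp spikes and a nonzero share of negative prices.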
By:  Michael Bleaney; Zhiyong Li 
Abstract:  The performance of bid-ask spread estimators is investigated using simulation experiments. All estimators are much more accurate if the data are sampled at high frequency. In high-frequency data, the Huang-Stoll estimator, which requires order flow information, generally outperforms Roll-type estimators based on price information only. The exception is when there is feedback trading (order flows respond to past price movements), when the Huang-Stoll estimator is seriously biased. When only low-frequency (e.g. daily) data are available, the Corwin-Schultz estimator based on daily high and low prices is usually less inaccurate than the Huang-Stoll and Roll estimators. An important and empirically relevant exception is when the spread varies within the day; in this case the Corwin-Schultz estimator significantly overestimates the true spread. 
Date:  2013–09 
URL:  http://d.repec.org/n?u=RePEc:not:notecp:13/05&r=ecm 
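Of the estimators compared above, Roll's is the simplest to sketch: bid-ask bounce induces negative first-order serial covariance in price changes, and the spread is twice the square root of its magnitude. A simulation check with illustrative parameter values:

```python
import numpy as np

def roll_spread(p):
    """Roll estimator: 2 * sqrt(-autocovariance of price changes).

    Bid-ask bounce makes successive transaction-price changes negatively
    correlated; the strength of that negative serial covariance identifies
    the spread from prices alone, with no order-flow data needed.
    """
    dp = np.diff(p)
    gamma = np.cov(dp[1:], dp[:-1])[0, 1]
    return 2.0 * np.sqrt(-gamma) if gamma < 0 else 0.0

rng = np.random.default_rng(5)
n, true_spread = 100_000, 0.10
mid = np.cumsum(0.01 * rng.standard_normal(n))    # efficient (mid) price
side = rng.choice([-1.0, 1.0], size=n)            # buy/sell direction
p = mid + 0.5 * true_spread * side                # observed trade prices
est = roll_spread(p)                              # recovers roughly 0.10
```

With independent trade directions the autocovariance of price changes is minus the squared half-spread, which is exactly what the estimator inverts; feedback trading breaks that independence, which is where the abstract's caveats begin.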