nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒07‒29
eighteen papers chosen by
Sune Karlsson
Örebro University

  1. Specification Tests with Weak and Invalid Instruments By Doko Tchatoka, Firmin Sabro
  2. Identification and estimation of thresholds in the fixed effects ordered logit model By Gregori Baetschmann
  3. The Estimation of Spatial Autoregressive Models with Missing data of the Dependent Variables By Matthias Koch
  4. Testing Tobler's law in spatial panels: a test for spatial dependence robust against common factors By Giovanni Millo
  5. Estimating spatial weighting matrices in cross-regressive models by entropy techniques By Esteban Fernandez-Vazquez
  6. Efficient Maximum Likelihood Estimation of Spatial Autoregressive Models with Normal but Heteroskedastic Disturbances By Takahisa Yokoi
  7. Testing for co-non-linearity By Håvard Hungnes
  8. Econometric Analysis of Present Value Models When the Discount Factor Is near One By Kenneth D. West
  9. On the Validity of Durbin-Wu-Hausman Tests for Assessing Partial Exogeneity Hypotheses with Possibly Weak Instruments By Doko Tchatoka, Firmin
  10. The Timing of Earnings Sampling over the Life-Cycle and IV Identification of the Return to Schooling By Belzil, Christian; Hansen, Jörgen
  11. Testing local versions of correlation coefficients By Stamatis Kalogirou
  12. Forecasting Inflation With a Random Walk By Pablo Pincheira; Carlos Medel
  13. A vector of Dirichlet processes By Fabrizio Leisen; Antonio Lijoi; Dario Spanò
  14. Estimation of power laws for city size data: An optimal methodology By Faustino Prieto; José María Sarabia
  15. Controlling the danger of false discoveries in estimating multiple treatment effects By Dan Wunderli
  16. A Semi-Compensatory Residential Choice Model with Flexible Error Structure By Sigal Kaplan; Yoram Shiftan; Shlomo Bekhor
  17. The Long of It: Odds that Investor Sentiment Spuriously Predicts Anomaly Returns By Robert F. Stambaugh; Jianfeng Yu; Yu Yuan
  18. ARMAX(p,r,q) Parameter Identifiability Without Coprimeness By Leon Wegge

  1. By: Doko Tchatoka, Firmin Sabro
    Abstract: We investigate the size of the Durbin-Wu-Hausman tests for exogeneity when instrumental variables violate the strict exogeneity assumption. We show that these tests are severely size distorted even for a small correlation between the structural error and the instruments. We then propose a bootstrap procedure for correcting their size. The proposed procedure does not require identification assumptions and remains valid even for moderate correlations between the structural error and the instruments, so it can be described as robust to both weak and invalid instruments. (A toy numerical sketch of a bootstrapped exogeneity test follows this entry.)
    Keywords: Exogeneity tests; weak instruments; instrument endogeneity; bootstrap technique
    JEL: C12 C30 C15 C1
    Date: 2012–07–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:40185&r=ecm
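    A minimal numerical sketch of a residual-bootstrap Durbin-Wu-Hausman test in this spirit, in Python. This is an illustration only, not the authors' procedure: the control-function form of the test, the single-instrument setup, and the null-imposed resampling scheme are all assumptions made here.
      import numpy as np

      def dwh_stat(y, x, z):
          """DWH via the control-function regression: regress x on z, then y
          on (x, vhat); the t-statistic on vhat tests exogeneity of x."""
          Z = np.column_stack([np.ones(len(z)), z])
          vhat = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
          X = np.column_stack([np.ones(len(x)), x, vhat])
          b, *_ = np.linalg.lstsq(X, y, rcond=None)
          e = y - X @ b
          s2 = e @ e / (len(y) - X.shape[1])
          cov = s2 * np.linalg.inv(X.T @ X)
          return b[2] / np.sqrt(cov[2, 2])        # t-stat on vhat

      def bootstrap_pvalue(y, x, z, B=999, seed=0):
          rng = np.random.default_rng(seed)
          t0 = dwh_stat(y, x, z)
          n = len(y)
          # Impose the null (x exogenous): OLS of y on x, resample residuals.
          X0 = np.column_stack([np.ones(n), x])
          b0, *_ = np.linalg.lstsq(X0, y, rcond=None)
          e0 = y - X0 @ b0
          hits = 0
          for _ in range(B):
              ystar = X0 @ b0 + rng.choice(e0, size=n, replace=True)
              hits += abs(dwh_stat(ystar, x, z)) >= abs(t0)
          return (1 + hits) / (B + 1)             # bootstrap p-value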
  2. By: Gregori Baetschmann
    Abstract: The paper proposes a new estimator for the fixed effects ordered logit model. In contrast to existing methods, the new procedure allows estimation of the thresholds. The empirical relevance and simplicity of implementation are illustrated in an application to the effect of unemployment on life satisfaction.
    Keywords: Ordered response, panel data, correlated heterogeneity, incidental parameters
    JEL: C23 C25 J28 J64
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:046&r=ecm
  3. By: Matthias Koch
    Abstract: This paper considers several estimation methods for SAR models when observations on the dependent variable are missing. First we show, by example and then in general, how missing observations can change the model and thereby cause the failure of the 'traditional' estimation methods. To estimate the SAR model with missing data we propose several estimation methods, including GMM, NLS and OLS, and suggest deriving some of the estimators from a model approximation. A Monte Carlo simulation compares the estimation methods with respect to their numerical properties and sample-size behaviour. (A small sketch of the SAR data-generating process follows this entry.)
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa10p173&r=ecm
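    A small Python sketch of the mechanics of the problem, not of the paper's estimators: data from a SAR process, with the point that an observed unit's spatial lag W*y depends on neighbours' possibly missing y's. The ring neighbourhood, parameter values and zero-imputation below are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(1)
      n, rho, beta = 400, 0.5, 2.0
      W = np.zeros((n, n))                       # row-standardized ring neighbourhood
      for i in range(n):
          W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
      x = rng.normal(size=n)
      # SAR DGP: y = (I - rho*W)^{-1} (x*beta + eps)
      y = np.linalg.solve(np.eye(n) - rho * W, beta * x + rng.normal(size=n))

      obs = rng.random(n) > 0.3                  # ~30% of y unobserved
      lag_full = W @ y                           # infeasible: uses missing y's
      lag_naive = W @ np.where(obs, y, 0.0)      # zero-imputes missing neighbours
      for lag in (lag_full, lag_naive):
          Z = np.column_stack([lag[obs], x[obs]])
          # naive OLS of y on (W*y, x); OLS is inconsistent for SAR anyway,
          # and zero-imputation distorts the spatial lag on top of that
          print(np.linalg.lstsq(Z, y[obs], rcond=None)[0])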
  4. By: Giovanni Millo
    Abstract: In the spatial econometrics literature, spatial error dependence is characterized by spatial autoregressive processes, which relate every observation in the cross-section to any other with distance-decaying intensity: i.e., dependence obeys Tobler's First Law of Geography ('everything is related to everything else, but near things are more related than distant things'). In the literature on factor models, conversely, the degree of correlation between cross-sectional units depends only on factor loadings. Standard spatial correlation tests have power against both types of dependence, although the economic meaning of the two can differ substantially; it may therefore be useful to devise a test for detecting 'distance-related' dependence in the presence of a 'factor-type' one. Pesaran's CD is a test for global cross-sectional dependence with good properties. The CD(p) variant takes only p-th order neighbouring units into account to test for local cross-sectional dependence. The pattern of CD(p) as p increases can be informative about the type of dependence in the errors, but the test's power changes as new pairs of observations are taken into account. I propose a bootstrap test based on the values taken by the CD(p) test under permutations of the neighbourhood matrix, i.e. when 'resampling the neighbours'. I provide Monte Carlo evidence that it can detect spatial-type dependence in the errors of a typical spatial panel irrespective of the presence of an unobserved factor structure. (A sketch of the CD statistic follows this entry.)
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa10p752&r=ecm
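    A Python sketch of Pesaran's CD statistic on a T x N matrix of panel residuals; restricting the summed pairs to neighbouring units gives a CD(p)-style local variant. The permutation bootstrap over neighbourhood matrices proposed in the paper is not reproduced here.
      import numpy as np

      def cd_stat(E, pairs=None):
          """E: T x N residual matrix; pairs: optional list of (i, j) index
          pairs (e.g. p-th order neighbours); default is all i < j."""
          T, N = E.shape
          R = np.corrcoef(E, rowvar=False)       # pairwise residual correlations
          if pairs is None:                      # global CD: all distinct pairs
              pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
          s = sum(R[i, j] for i, j in pairs)
          return np.sqrt(T / len(pairs)) * s     # approx N(0,1) under independence

      E = np.random.default_rng(2).normal(size=(50, 20))
      print(cd_stat(E))                                         # global CD
      print(cd_stat(E, pairs=[(i, i + 1) for i in range(19)]))  # 1st-order "neighbours"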
  5. By: Esteban Fernandez-Vazquez
    Abstract: The traditional approach to estimating spatial models rests on a preconceived spatial weights matrix to measure spatial interaction among locations. The a priori assumptions used to define this matrix are supposed to be in line with the “true” spatial relationships among the locations in the dataset. Another possibility consists of using information present in the sample data to specify an empirical matrix of spatial weights. In this paper we propose estimating spatial cross-regressive models by generalized maximum entropy (GME). This technique allows combining assumptions about the spatial interconnections among the locations studied with information from the sample data. Hence, the spatial component of the model estimated by the proposed techniques is not purely preconceived but incorporates empirical information. We compare some traditional methodologies with the proposed GME estimator by means of Monte Carlo simulations in several scenarios and show that the entropy-based estimation techniques can outperform traditional approaches. An empirical case is also studied to illustrate the implementation of the proposed techniques in a real-world example.
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa10p503&r=ecm
  6. By: Takahisa Yokoi
    Abstract: Likelihood functions of spatial autoregressive models with normal but heteroskedastic disturbances have already been derived [Anselin (1988, ch. 6)], but no implementation of maximum likelihood estimation is generally available for the heteroskedastic case. This is why less efficient IV-based methods, 'robust 2-SLS' estimation for example, must be applied when disturbance terms may be heteroskedastic. In this paper, we develop a new computer program for maximum likelihood estimation and confirm the efficiency of our estimator in heteroskedastic-disturbance cases using Monte Carlo simulations. (A sketch of such a likelihood follows this entry.)
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa10p536&r=ecm
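    A Python sketch of the negative log-likelihood for y = rho*W*y + X*beta + e with independent normal e_i of unequal variance. The exponential skedastic function sigma_i^2 = exp(g0 + g1*z_i) is an assumption made for this sketch; the paper's parametrization may differ.
      import numpy as np
      from scipy.optimize import minimize

      def neg_loglik(theta, y, X, W, z):
          n = len(y)
          rho, g0, g1, beta = theta[0], theta[1], theta[2], theta[3:]
          sig2 = np.exp(g0 + g1 * z)             # heteroskedastic variances
          e = y - rho * (W @ y) - X @ beta
          sign, logdet = np.linalg.slogdet(np.eye(n) - rho * W)  # Jacobian term
          if sign <= 0:
              return np.inf                      # outside admissible rho range
          return -(logdet - 0.5 * np.sum(np.log(2 * np.pi * sig2))
                   - 0.5 * np.sum(e ** 2 / sig2))

      # usage: res = minimize(neg_loglik, x0, args=(y, X, W, z), method="Nelder-Mead")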
  7. By: Håvard Hungnes (Statistics Norway)
    Abstract: This article introduces the concept of co-non-linearity. Co-non-linearity is an example of a common feature in time series (Engle and Kozicki, 1993, J. Bus. Econ. Statist.) and an extension of the concept of common nonlinear components (Anderson and Vahid, 1998, J. Econometrics). If some time series follow a non-linear process but there exists a linear relationship between the levels of these series that removes the non-linearity, then this relationship is said to be a co-non-linear relationship. In this article I show how to determine the number of such co-non-linear relationships. Furthermore, I show how to formulate hypothesis tests on the co-non-linear relationships in a full maximum likelihood framework.
    Keywords: Common features; non-linearity; reduced rank regression
    JEL: C32 E43
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:ssb:dispap:699&r=ecm
  8. By: Kenneth D. West
    Abstract: This paper develops asymptotic econometric theory to help understand data generated by a present value model with a discount factor near one. A leading application is to exchange rate models. A key assumption of the asymptotic theory is that the discount factor approaches 1 as the sample size grows. The finite-sample approximation implied by the asymptotic theory is quantitatively consistent with modest departures from random walk behavior and with imprecise estimation of a well-studied regression relating spot and forward exchange rates. (A canonical present value relation is sketched after this entry.)
    JEL: C58 F31 F37 G12 G15 G17
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:18247&r=ecm
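    As background, a canonical present value relation of the kind studied here is, in LaTeX (a standard textbook rendering with an assumed constant c, not necessarily the paper's exact specification):
      y_t = c + (1 - b)\sum_{j=0}^{\infty} b^{\,j}\,\mathrm{E}_t\, x_{t+j}
    where b is the discount factor and E_t denotes the conditional expectation; the paper's asymptotics let b approach 1 as the sample size grows, which pushes y_t toward near-random-walk behavior.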
  9. By: Doko Tchatoka, Firmin
    Abstract: We investigate the validity of the standard specification tests for assessing the exogeneity of subvectors in the linear IV regression. Our results show that ignoring the endogeneity of the regressors whose exogeneity is not being tested leads to invalid tests (level is not controlled). When the fitted values from the first-stage regression of these regressors are used as instruments under the partial null hypothesis of interest, as suggested by Hausman and Taylor (1980, 1981), some versions of these tests are invalid when identification is weak and the number of instruments is moderate. Moreover, all tests become overly conservative and have no power when the number of instruments increases, even for moderate identification strength.
    Keywords: Partial exogeneity; size distortions; weak identification
    JEL: C12 C15 C30 C01
    Date: 2012–07–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:40184&r=ecm
  10. By: Belzil, Christian (Ecole Polytechnique, Paris); Hansen, Jörgen (Concordia University)
    Abstract: We show that within a life-cycle skill accumulation model, IV identification of the return-to-schooling parameter is achieved either at any point in the life-cycle where the level of skills accumulated beyond school completion by compliers is exactly equal to the post-schooling skill level of non-compliers (the Skill-Equality condition), or when the skill ratio is equal to the relative population proportions of non-compliers over compliers (the Weighted-Skill-Ratio condition). As a consequence, it is generally impossible to tie IV identification to any specific phase of the life-cycle, and there cannot exist a generally acceptable "optimal" age at which to sample earnings for IV estimation. The practical example developed in the paper shows precisely how an instrument may achieve identification at a multiplicity of ages, and how different instruments may achieve identification with specific sampling designs and fail to do so with others. Within a life-cycle skill accumulation data generating process, identification of the return to schooling requires not only implicit assumptions about the underlying model, but also assumptions about the validity of the specific age sampling distribution implied by the data.
    Keywords: returns to schooling, instrumental variable methods, dynamic discrete choice, dynamic programming
    JEL: B4 C1 C3
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp6724&r=ecm
  11. By: Stamatis Kalogirou
    Abstract: The aim of this paper is to define and test local versions of standard correlation coefficients. This research is motivated by the increasing number of applications using local versions of exploratory and explanatory spatial data analysis methods, a common example being local regression. Methods such as Geographically Weighted Regression argue that it is necessary to check for spatial non-stationarity in the relationships between a geographic phenomenon and its determinants, because the response to a stimulus can vary across space. For example, the relationship between education level and unemployment could vary across EU regions. Local regression claims to account for local relationships that may be hidden or missed when a global regression is applied. However, statistical inference in local regression methods is still an open field for basic research. In this paper a local version of the Pearson correlation coefficient is defined and tested on spatial data. This provides a simple tool for statistical inference, supporting a more careful interpretation of the results of a local regression model. Furthermore, it offers a technique for testing the existence of local correlation between two variables representing spatial data in the absence of a global correlation, and vice versa. The application includes pairs of usually correlated variables, such as income and high educational attainment, as well as uncorrelated variables. (A sketch of a kernel-weighted local correlation follows this entry.)
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa10p529&r=ecm
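    A Python sketch of one plausible construction of a local Pearson coefficient: a weighted correlation at each location with Gaussian kernel weights on inter-point distances. The kernel, the bandwidth h, and the function name are assumptions of this sketch; the paper's significance testing is not reproduced.
      import numpy as np

      def local_pearson(x, y, coords, h):
          """Return one kernel-weighted correlation per location in coords (n x 2)."""
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
          K = np.exp(-(d / h) ** 2)              # row i = weights around focal point i
          out = np.empty(len(x))
          for i, w in enumerate(K):
              w = w / w.sum()                    # normalize weights
              xm, ym = w @ x, w @ y              # weighted means
              cxy = w @ ((x - xm) * (y - ym))    # weighted covariance
              cxx = w @ ((x - xm) ** 2)
              cyy = w @ ((y - ym) ** 2)
              out[i] = cxy / np.sqrt(cxx * cyy)
          return out

      rng = np.random.default_rng(3)
      coords = rng.uniform(size=(200, 2))
      x, y = rng.normal(size=200), rng.normal(size=200)
      print(local_pearson(x, y, coords, h=0.2)[:5])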
  12. By: Pablo Pincheira; Carlos Medel
    Abstract: The use of different time-series models to generate forecasts is fairly common in the forecasting literature in general, and in the inflation-forecasting literature in particular. When the predicted variable is stationary, the use of processes with unit roots may seem counterintuitive. Nevertheless, in this paper we demonstrate that forecasting a stationary variable with driftless unit-root-based forecasts generates bounded mean squared prediction errors at every single horizon. We also show via simulations that persistent stationary processes may be better predicted by unit-root-based forecasts than by forecasts coming from a model that is correctly specified but subject to a higher degree of parameter uncertainty. Finally, we provide an empirical illustration in the context of CPI inflation forecasts for three industrialized countries. (A small simulation sketch follows this entry.)
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:chb:bcchwp:669&r=ecm
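    A toy Python simulation in the spirit of the second result: one-step forecasts of a persistent AR(1) from a driftless random walk versus an AR(1) re-estimated on a short rolling window. The parameter values and window length are illustrative choices, not the paper's design.
      import numpy as np

      rng = np.random.default_rng(4)
      phi, T, R, reps = 0.95, 120, 40, 2000      # R = rolling estimation window
      mspe_rw = mspe_ar = 0.0
      for _ in range(reps):
          y = np.zeros(T)
          for t in range(1, T):
              y[t] = phi * y[t - 1] + rng.normal()
          for t in range(R, T - 1):
              yw = y[t - R:t + 1]
              # OLS AR(1) slope on the window (no intercept, for simplicity)
              phihat = (yw[:-1] @ yw[1:]) / (yw[:-1] @ yw[:-1])
              mspe_rw += (y[t + 1] - y[t]) ** 2          # random-walk forecast
              mspe_ar += (y[t + 1] - phihat * y[t]) ** 2  # estimated-AR forecast
      print(mspe_rw / mspe_ar)   # ratios near (or below) 1 favour the random walk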
  13. By: Fabrizio Leisen; Antonio Lijoi; Dario Spanò
    Abstract: Random probability vectors are of great interest especially in view of their application to statistical inference. Indeed, they can be used for determining the de Finetti mixing measure in the representation of the law of a partially exchangeable array of random elements taking values in a separable and complete metric space. In this paper we describe a construction of a vector of Dirichlet processes based on the normalization of completely random measures that are jointly infinitely divisible. After deducing the form of the Laplace exponent of the vector of gamma completely random measures, we study some of their distributional properties. Our attention focuses in particular on the dependence structure and on the specific partition probability function induced by the proposed vector. (A background sketch of a single Dirichlet process follows this entry.)
    Keywords: Bayesian inference, Dirichlet process, Gauss hypergeometric function, Multivariate Levy measure, Partial exchangeability, Partition probability function
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws122115&r=ecm
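    For background only, a Python sketch of sampling one Dirichlet process DP(alpha, G0) by Sethuraman's stick-breaking representation; the paper's vector of dependent Dirichlet processes built from jointly infinitely divisible random measures is not reproduced here. The truncation level and base measure are assumptions of the sketch.
      import numpy as np

      def dp_sample(alpha=2.0, n_atoms=500, seed=5):
          rng = np.random.default_rng(seed)
          v = rng.beta(1.0, alpha, size=n_atoms)         # stick-breaking proportions
          # weights w_k = v_k * prod_{l<k} (1 - v_l), truncated at n_atoms
          w = v * np.concatenate([[1.0], np.cumprod(1 - v)[:-1]])
          atoms = rng.normal(size=n_atoms)               # draws from G0 = N(0, 1)
          return atoms, w                                # a discrete random measure

      atoms, w = dp_sample()
      print(atoms[np.argmax(w)], w.max())   # heaviest atom of this realisation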
  14. By: Faustino Prieto; José María Sarabia
    Abstract: Power laws appear widely in many branches of economics, geography, demography and other social sciences. In particular, the upper tail of city size distributions appears to follow power laws, as many researchers have shown for different countries and different periods of time. A crucial point in the estimation of these laws is the correct choice of the truncation point. The aim of this paper is to investigate how to choose this truncation point in an optimal way. A new methodology based on the Akaike information criterion is proposed. An extensive simulation study is carried out to demonstrate the existence of this optimal point under different assumptions about the underlying population. Several kinds of populations are considered, including lognormal and heavy-tailed populations. Finally, the methodology is used for the optimal estimation of power laws in city size data sets for the USA and Spain for several years. (A naive sketch of an AIC-based tail fit follows this entry.)
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa10p581&r=ecm
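    A naive Python sketch of the idea: fit a Pareto (power-law) tail by maximum likelihood at each candidate truncation point and score each fit with AIC. Making AIC comparable across the differing tail sample sizes is exactly the methodological issue the paper addresses; the per-observation normalisation below is a crude stand-in, not the paper's criterion.
      import numpy as np

      def pareto_aic(x, xmin):
          tail = x[x >= xmin]
          n = len(tail)
          alpha = n / np.sum(np.log(tail / xmin))   # ML estimate of the exponent
          loglik = (n * np.log(alpha) + n * alpha * np.log(xmin)
                    - (alpha + 1) * np.sum(np.log(tail)))
          return 2 * 1 - 2 * loglik                 # one free parameter

      # lognormal "city sizes" as a test population, as in the paper's simulations
      x = np.sort(np.random.default_rng(6).lognormal(mean=10, sigma=1, size=5000))
      candidates = x[-1000::50]                     # candidate truncation points
      best = min(candidates, key=lambda c: pareto_aic(x, c) / np.sum(x >= c))
      print(best)                                   # chosen truncation point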
  15. By: Dan Wunderli
    Abstract: I expose the risk of false discoveries in the context of multiple treatment effects. A false discovery is a nonexistent effect that is falsely labeled as statistically significant by its individual t-value. Labeling nonexistent effects as statistically significant has wide-ranging academic and policy-related implications, such as costly false conclusions from policy evaluations. I re-examine an empirical labor market model using state-of-the-art multiple testing methods and provide simulation evidence. By merely using individual t-values at conventional significance levels, the risk of labeling probably nonexistent treatment effects as statistically significant is unacceptably high. Individual t-values even label a number of treatment effects as significant where multiple testing indicates false discoveries. Tests of a joint null hypothesis, such as the well-known F-test, control the risk of false discoveries only to a limited extent and do not optimally allow for rejecting individual hypotheses. Multiple testing methods control the risk of false discoveries in general while allowing for individual decisions in the sense of rejecting individual hypotheses. (A sketch using a standard multiple-testing routine follows this entry.)
    Keywords: False discoveries, multiple error rates, multiple treatment effects, labor market
    JEL: C12 C14 C21 C31 C41 J08 J64
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:060&r=ecm
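    A Python sketch of the contrast between individual t-tests and a familywise-error-controlling procedure, using statsmodels' multipletests with the Holm method; the paper's specific state-of-the-art procedures and labor-market application are not reproduced. The simulated p-value mix is an assumption of the sketch.
      import numpy as np
      from statsmodels.stats.multitest import multipletests

      rng = np.random.default_rng(7)
      # 20 nonexistent effects (p-values uniform under H0) plus 2 real ones.
      pvals = np.concatenate([rng.uniform(size=20), [1e-4, 3e-4]])

      naive = pvals < 0.05                           # individual 5% t-tests
      reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
      print(naive.sum(), reject.sum())   # naive testing flags more "discoveries"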
  16. By: Sigal Kaplan; Yoram Shiftan; Shlomo Bekhor
    Abstract: Spatial choices entailing many alternatives (e.g., residence, trip destination) are typically represented by compensatory models based on utility maximization with exogenous choice set generation, which might lead to incorrect choice sets and hence to biased demand elasticity estimates. Semi-compensatory models show promise in increasing the accuracy of choice set specification by integrating choice set formation within discrete choice models. These models represent a two-stage process consisting of an elimination-based choice set formation upon satisfying criteria thresholds followed by utility-based choice. However, they are subject to simplifying assumptions that impede their application in urban planning. This paper proposes a novel semi-compensatory model that alleviates the simplifying assumptions concerning (i) the number of alternatives, (ii) the representation of choice set formation, and (iii) the error structure. The proposed semi-compensatory model represents a sequence of choice set formation based on the conjunctive heuristic with correlated thresholds, followed by utility-based choice accommodating alternative nested substitution patterns across the alternatives and random taste variation across the population. The proposed model is applied to off-campus rental apartment choice of students. The population sample for model estimation consists of 1,893 residential choices from 631 students, who participated in a stated-preference web-based survey of rental apartment choice. The survey comprised a two-stage choice experiment supplemented by a questionnaire, which elicited socio-economic characteristics, attitudes and preferences. During the experiment, respondents searched an apartment dataset by a list of thresholds for pre-defined criteria and then ranked their three most preferred apartments from the resulting choice set. The survey website seamlessly recorded the chosen apartments and their respective thresholds. Results show (i) the estimated model for a realistic universal realm of 200 alternatives, (ii) the representation of correlated thresholds as a function of individual characteristics, and (iii) the feasibility and importance of introducing a flexible error structure into semi-compensatory models.
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa10p65&r=ecm
  17. By: Robert F. Stambaugh; Jianfeng Yu; Yu Yuan
    Abstract: Extremely long odds accompany the chance that spurious-regression bias accounts for investor sentiment's observed role in stock-return anomalies. We replace investor sentiment with a simulated persistent series in regressions reported by Stambaugh, Yu and Yuan (2012), who find higher long-short anomaly profits following high sentiment, due entirely to the short leg. Among 200 million simulated regressors, we find none that support those conclusions as strongly as investor sentiment. The key is consistency across anomalies. Obtaining just the predicted signs for the regression coefficients across the 11 anomalies examined in the above study occurs only once for every 43 simulated regressors. (A placebo-simulation sketch follows this entry.)
    JEL: C18 G12 G14
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:18231&r=ecm
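    A Python sketch of the placebo logic: draw persistent AR(1) pseudo-sentiment series and ask how often one series yields the same slope sign across K return series. The returns here are independent placeholders, so the benchmark is roughly 2 * 0.5^11 (about 1 in 1,024); the paper's 1-in-43 figure reflects correlation across real anomaly returns, which this sketch does not model.
      import numpy as np

      rng = np.random.default_rng(8)
      T, K, reps, phi = 300, 11, 20000, 0.98
      returns = rng.normal(size=(T, K))        # placeholder "anomaly" returns

      S = np.zeros((reps, T))                  # reps AR(1) pseudo-sentiment paths
      eps = rng.normal(size=(reps, T))
      for t in range(1, T):
          S[:, t] = phi * S[:, t - 1] + eps[:, t]
      Sd = S - S.mean(axis=1, keepdims=True)   # demean each path
      betas = Sd @ returns                     # sign of OLS slope = sign of s'r
      same = np.all(betas > 0, axis=1) | np.all(betas < 0, axis=1)
      print(same.mean(), 2 * 0.5 ** K)         # both roughly 1/1024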
  18. By: Leon Wegge (Department of Economics, University of California Davis)
    Keywords: Theoretical Econometrics
    Date: 2012–07–25
    URL: http://d.repec.org/n?u=RePEc:cda:wpaper:12-17&r=ecm

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.