nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒09‒13
fourteen papers chosen by
Sune Karlsson
Orebro University

  1. Goodness-of-fit Test for Specification of Semiparametric Copula Dependence Models By Shulin Zhang; Ostap Okhrin; Qian M. Zhou; Peter X.-K. Song
  2. Large sample properties of the matrix exponential spatial specification with an application to FDI By Nicolas Debarsy; Fei Jin; Lung-Fei Lee
  3. Generalised instrumental variable models By Andrew Chesher; Adam Rosen
  4. Large panel data models with cross-sectional dependence: a survey By Alexander Chudik; M. Hashem Pesaran
  5. Covariates and causal effects: the problem of context By Dionissi Aliprantis
  6. Segmentation procedure based on Fisher's exact test and its application to foreign exchange rates By Aki-Hiro Sato; Hideki Takayasu
  7. How wrong can you be, without noticing? Further evidence on specification errors in the Conditional Logit By Tomás del Barrio Casto; William Nilsson; Andrés J. Picazo-Tadeo
  8. Identification Robust Inference with Singular Variance By Nicky Grant
  9. Accounting for uncertainty in willingness to pay for environmental benefits By Daziano, Ricardo A.; Achtnicht, Martin
  10. End of sample vs. real time data: perspectives for analysis of expectations By Emilia Tomczyk
  11. Inflation fan charts and different dimensions of uncertainty. What if macroeconomic uncertainty is high? By Halina Kowalczyk
  12. Modeling the impact of forecast-based regime switches on macroeconomic time series By Bel, K.; Paap, R.
  13. Evaluation of development programs : randomized controlled trials or regressions ? By Elbers, Chris; Gunning, Jan Willem
  14. The Economic Valuation of Variance Forecasts: An Artificial Option Market Approach By Radovan Parrák

  1. By: Shulin Zhang; Ostap Okhrin; Qian M. Zhou; Peter X.-K. Song
    Abstract: This paper concerns goodness-of-fit testing for semiparametric copula models. Our contribution is twofold. First, we propose a new test constructed via the comparison between "in-sample" and "out-of-sample" pseudolikelihoods, which avoids the use of any probability integral transformations. Under the null hypothesis that the copula model is correctly specified, we show that the proposed test statistic converges in probability to a constant equal to the dimension of the parameter space, and we establish the asymptotic normality of the test. Second, we introduce a hybrid mechanism to combine several test statistics, so that the resulting test achieves desirable power relative to the tests involved. This hybrid method is particularly appealing when there is no single dominant optimal test. We conduct comprehensive simulation experiments to compare the proposed new test and hybrid approach with the best "blanket" test shown in Genest et al. (2009). For illustration, we apply the proposed tests to analyze three real datasets.
    Keywords: hybrid test; in-and-out-of sample likelihood; power; tail dependence.
    JEL: C12 C22 C32 C52 G15
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2013-041&r=ecm
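    The in-and-out-of-sample pseudolikelihood comparison described in this abstract can be illustrated with a small sketch. It is not the authors' implementation: it assumes a bivariate Gaussian copula fitted by maximum pseudolikelihood and uses a simple K-fold split as a stand-in for the paper's exact out-of-sample construction.

```python
# Sketch (not the authors' code): in-sample vs. out-of-sample pseudolikelihood
# comparison for a bivariate Gaussian copula, with K-fold cross-validation as a
# stand-in for the paper's exact construction.
import numpy as np
from scipy.stats import norm, rankdata
from scipy.optimize import minimize_scalar

def pseudo_obs(x):
    """Rank-based pseudo-observations in (0, 1)."""
    n = x.shape[0]
    return np.column_stack([rankdata(x[:, j]) / (n + 1) for j in range(x.shape[1])])

def gauss_copula_loglik(rho, u):
    """Log pseudolikelihood of a bivariate Gaussian copula at parameter rho."""
    z = norm.ppf(u)
    z1, z2 = z[:, 0], z[:, 1]
    return np.sum(-0.5 * np.log(1 - rho**2)
                  - (rho**2 * (z1**2 + z2**2) - 2 * rho * z1 * z2) / (2 * (1 - rho**2)))

def fit_rho(u):
    res = minimize_scalar(lambda r: -gauss_copula_loglik(r, u),
                          bounds=(-0.99, 0.99), method="bounded")
    return res.x

def in_out_statistic(x, k=5, seed=0):
    """In-sample minus out-of-sample log pseudolikelihood (K-fold version)."""
    u = pseudo_obs(x)
    n = u.shape[0]
    rho_full = fit_rho(u)
    in_sample = gauss_copula_loglik(rho_full, u)
    folds = np.random.default_rng(seed).permutation(n) % k
    out_sample = 0.0
    for j in range(k):
        rho_j = fit_rho(u[folds != j])            # refit without fold j
        out_sample += gauss_copula_loglik(rho_j, u[folds == j])
    return in_sample - out_sample                 # roughly the parameter dimension (1) under a correct copula

rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=500)
print(in_out_statistic(z))
```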
  2. By: Nicolas Debarsy (LEO - Laboratoire d'économie d'Orleans - CNRS : UMR7322 - Université d'Orléans, CERPE - Centre de recherche en Economie Régionale et Politique Economique - Facultés Universitaires Notre Dame de la Paix (FUNDP) - Namur); Fei Jin (SUFE - School of Economics - Shanghai University of Finance and Economics); Lung-Fei Lee (Department of Economics - Ohio State University - Ohio State University)
    Abstract: This paper considers the large sample properties of the matrix exponential spatial specification (MESS) and compares its properties with those of the spatial autoregressive (SAR) model. We find that the quasi-maximum likelihood estimator (QMLE) for the MESS is consistent under heteroskedasticity, a property not shared by the QMLE of the SAR model. For the MESS in both homoskedastic and heteroskedastic cases, consistency is proved and asymptotic distributions are derived. We also consider properties of the generalized method of moments estimator (GMME). In the homoskedastic case, we derive a best GMME that is as efficient as the maximum likelihood estimator under normality and can be asymptotically more efficient than the QMLE under non-normality. In the heteroskedastic case, an optimal GMME can be more efficient than the QMLE asymptotically, and a possible best GMME is also discussed. For the general model that has MESS in both the dependent variable and disturbances, labeled MESS(1,1), the QMLE can be consistent under unknown heteroskedasticity when the spatial weights matrices in the two MESS processes are commutative. Properties of the QMLE and GMME for the general model are also considered. The QML approach for the MESS model has a computational advantage over that of the SAR model, and this simplicity carries over to MESS models with any finite order of spatial matrices. No parameter range needs to be imposed for the model to be stable. Furthermore, the Delta method is used to derive test statistics for the impacts of exogenous variables on the dependent variable. Results of Monte Carlo experiments on the finite sample properties of the estimators are reported. Finally, the MESS(1,1) is applied to Belgium's outward FDI data and we observe that the dominant motivation of Belgium's outward FDI lies in finding cheaper factor inputs.
    Keywords: Spatial autocorrelation; MESS; QML; GMM; Heteroskedasticity; Delta method; FDI
    Date: 2013–09–04
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00858174&r=ecm
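    The computational advantage of the MESS quasi-maximum likelihood approach mentioned above comes from the Jacobian term: for a zero-diagonal weights matrix, det(exp(alpha*W)) = exp(alpha*tr(W)) = 1, so the QMLE reduces to nonlinear least squares in alpha. Below is a minimal sketch under the assumed notation exp(alpha*W) y = X beta + eps; it is illustrative only and not the authors' code.

```python
# Sketch of the MESS quasi-maximum likelihood idea. Because the Jacobian term
# vanishes for a zero-diagonal W, the QMLE reduces to nonlinear least squares
# in alpha with beta concentrated out.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize_scalar

def mess_qmle(y, X, W):
    def ssr(alpha):
        Sy = expm(alpha * W) @ y                       # transformed dependent variable
        beta = np.linalg.lstsq(X, Sy, rcond=None)[0]   # concentrated-out beta
        resid = Sy - X @ beta
        return resid @ resid
    alpha_hat = minimize_scalar(ssr, bounds=(-2.0, 2.0), method="bounded").x
    Sy = expm(alpha_hat * W) @ y
    beta_hat = np.linalg.lstsq(X, Sy, rcond=None)[0]
    return alpha_hat, beta_hat

# Toy example with a row-normalized ring weight matrix (illustrative only).
rng = np.random.default_rng(0)
n = 200
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta0, alpha0 = np.array([1.0, 2.0]), 0.4
y = np.linalg.solve(expm(alpha0 * W), X @ beta0 + rng.normal(size=n))
print(mess_qmle(y, X, W))                              # recovers alpha0 and beta0 approximately
```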
  3. By: Andrew Chesher (Institute for Fiscal Studies and University College London); Adam Rosen (Institute for Fiscal Studies and University College London)
    Abstract: The ability to allow for flexible forms of unobserved heterogeneity is an essential ingredient in modern microeconometrics. In this paper we extend the application of instrumental variable (IV) models to a wide class of problems in which multiple values of unobservable variables can be associated with particular combinations of observed endogenous and exogenous variables. In our Generalised Instrumental Variable (GIV) models, in contrast to traditional IV models, the mapping from unobserved heterogeneity to endogenous variables need not admit a unique inverse. The class of GIV models allows unobservables to be multivariate and to enter nonseparably into the determination of endogenous variables, thereby removing strong practical limitations on the role of unobserved heterogeneity. Important examples include models with discrete or mixed continuous/discrete outcomes and continuous unobservables, and models with excess heterogeneity where many combinations of different values of multiple unobserved variables, such as random coefficients, can deliver the same realisations of outcomes. We use tools from random set theory to study identification in such models and provide a sharp characterisation of the identified set of structures admitted. We demonstrate the application of our analysis to a continuous outcome model with an interval-censored endogenous explanatory variable.
    Keywords: instrumental variables, endogeneity, excess heterogeneity, limited information, set identification, partial identification, random sets, incomplete models
    JEL: C10 C14 C24 C26
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:43/13&r=ecm
  4. By: Alexander Chudik; M. Hashem Pesaran
    Abstract: This paper provides an overview of the recent literature on estimation and inference in large panel data models with cross-sectional dependence. It reviews panel data models with strictly exogenous regressors as well as dynamic models with weakly exogenous regressors. The paper begins with a review of the concepts of weak and strong cross-sectional dependence, and discusses the exponent of cross-sectional dependence that characterizes the different degrees of cross-sectional dependence. It considers a number of alternative estimators for static and dynamic panel data models, distinguishing between factor and spatial models of cross-sectional dependence. The paper also provides an overview of tests of independence and weak cross-sectional dependence.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:fip:feddgw:153&r=ecm
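    One family of estimators reviewed in surveys of this literature is the common correlated effects (CCE) approach, which augments each unit's regression with cross-sectional averages of the dependent variable and regressors to absorb unobserved common factors. The sketch below is a generic illustration of that idea, not code from the paper.

```python
# Minimal sketch of the common correlated effects mean group (CCEMG) idea:
# augment each unit's regression with cross-sectional averages, then average
# the unit-specific slopes. Assumes a balanced panel y[i, t], x[i, t];
# illustrative only.
import numpy as np

def ccemg(y, x):
    N, T = y.shape
    ybar, xbar = y.mean(axis=0), x.mean(axis=0)              # cross-sectional averages
    betas = []
    for i in range(N):
        Z = np.column_stack([np.ones(T), x[i], ybar, xbar])  # unit regression with CA augmentation
        b = np.linalg.lstsq(Z, y[i], rcond=None)[0]
        betas.append(b[1])                                   # slope on the unit's own regressor
    return np.mean(betas)

# Toy factor-structure panel: one common factor loads on both y and x.
rng = np.random.default_rng(2)
N, T, beta = 50, 100, 1.5
f = rng.normal(size=T)
gamma_y, gamma_x = rng.normal(size=(N, 1)), rng.normal(size=(N, 1))
x = gamma_x * f + rng.normal(size=(N, T))
y = beta * x + gamma_y * f + rng.normal(size=(N, T))
print(ccemg(y, x))                                           # close to 1.5 despite the common factor
```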
  5. By: Dionissi Aliprantis
    Abstract: This paper is concerned with understanding how causal effects can be identified in past data and then used to predict the future in light of the problem of context, or the fact that treatment always influences the outcome variable in combination with covariates. Structuralist and experimentalist views of econometric methodology can be reconciled by adopting notation capable of distinguishing between effects independent of and dependent on context, or direct and net effects. By showing that identification of direct and net effects imposes distinct assumptions on selection into covariates (i.e., exclusion restrictions) and explicitly constructing predictions based on past effects, the paper is able to characterize the tradeoff researchers face. Relative to direct effects, net effects can be identified in the past from more general data-generating processes (DGPs), but they can predict the future of less general DGPs. Predicting the future with either type of effect requires knowledge of direct effects. To highlight implications for applied work, I discuss why Local Average Treatment Effects and Marginal Treatment Effects of educational attainment are net effects and are therefore difficult to interpret, even when identified with a perfectly randomized treatment.
    Keywords: Statistical methods; Econometric models
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:fip:fedcwp:1310&r=ecm
  6. By: Aki-Hiro Sato; Hideki Takayasu
    Abstract: This study proposes a segmentation procedure for univariate time series based on Fisher's exact test. We show that an adequate change point can be detected as the split with the minimum p-value, and that the proposed procedure detects change points in an artificial time series. We apply the proposed method recursively to find segments of foreign exchange rates. The method is also applied to randomly shuffled time series, and we conclude that randomly shuffled data can serve as a reference level for the null hypothesis.
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1309.0602&r=ecm
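    A minimal sketch of the segmentation idea described above: binarize the series into up and down moves, scan candidate split points, and take the split with the smallest Fisher exact-test p-value as the change point. The 2x2 table construction below is an assumption made for illustration; the paper's exact setup may differ.

```python
# Sketch of a Fisher's-exact-test change-point scan on a binarized series.
import numpy as np
from scipy.stats import fisher_exact

def change_point(x, min_seg=20):
    ups = (np.diff(x) > 0).astype(int)                 # 1 = up move, 0 = down move
    best_t, best_p = None, 1.0
    for t in range(min_seg, len(ups) - min_seg):
        left, right = ups[:t], ups[t:]
        table = [[left.sum(),  len(left)  - left.sum()],
                 [right.sum(), len(right) - right.sum()]]
        _, p = fisher_exact(table)                     # exact test on up/down counts before vs. after t
        if p < best_p:
            best_t, best_p = t, p
    return best_t, best_p                              # split with the minimum p-value

# Artificial series with a shift in drift at t = 300.
rng = np.random.default_rng(3)
x = np.cumsum(np.r_[rng.normal(0.3, 1, 300), rng.normal(-0.3, 1, 300)])
print(change_point(x))
```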
  7. By: Tomás del Barrio Casto (University of the Balearic Islands, Palma de Mallorca); William Nilsson (University of the Balearic Islands, Palma de Mallorca); Andrés J. Picazo-Tadeo (University of Valencia)
    Abstract: Discrete choice models such as the conditional logit model are widely used tools in applied econometrics and, particularly, in the field of environmental valuation and welfare measurement, where they provide policymakers with sound information for making strategic choices. Monte Carlo simulations are used in this study to analyze biases due to omitted relevant variables and functional form misspecification in the conditional logit model. Using an easy-to-estimate specification test is effective in reducing the risk of large biases. One somewhat discouraging result, however, is that a moderate bias can be found even when the omitted variable is orthogonal to the included explanatory variables. This result is particularly interesting in view of the increasing interest in using randomized experiments to obtain causal interpretations of key parameters. Randomization, with independence between included and omitted variables, does not guarantee unbiased estimates in the conditional logit model.
    Keywords: Environmental valuation, Welfare measurements, Choice experiments, Monte Carlo analysis, Specification tests
    JEL: C51 D69 C99 C15
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:eec:wpaper:1318&r=ecm
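    A stripped-down Monte Carlo in the spirit of the experiment described above (not the authors' design): choices are generated from a conditional logit with two orthogonal attributes, one attribute is omitted at the estimation stage, and the bias in the remaining coefficient is recorded.

```python
# Stripped-down illustration of omitted-variable bias in the conditional logit:
# the omitted attribute is orthogonal to the included one, yet the estimated
# coefficient is attenuated.
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_choices(rng, n=1000, J=3, b1=1.0, b2=1.0):
    x1 = rng.normal(size=(n, J))
    x2 = rng.normal(size=(n, J))                       # orthogonal to x1 by construction
    eps = rng.gumbel(size=(n, J))
    choice = np.argmax(b1 * x1 + b2 * x2 + eps, axis=1)
    return x1, choice

def fit_conditional_logit(x1, choice):
    """Maximum likelihood for a one-attribute conditional logit (x2 omitted)."""
    n = len(choice)
    def negloglik(b):
        v = b * x1
        return -(v[np.arange(n), choice] - np.log(np.exp(v).sum(axis=1))).sum()
    return minimize_scalar(negloglik, bounds=(-5, 5), method="bounded").x

rng = np.random.default_rng(4)
estimates = [fit_conditional_logit(*simulate_choices(rng)) for _ in range(200)]
print(np.mean(estimates))   # noticeably below the true value of 1.0 despite orthogonality
```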
  8. By: Nicky Grant
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:man:sespap:1315&r=ecm
  9. By: Daziano, Ricardo A.; Achtnicht, Martin
    Abstract: Previous literature on the distribution of willingness to pay has focused on its heterogeneity distribution without addressing exact interval estimation. In this paper we derive and analyze Bayesian confidence sets for quantifying uncertainty in the determination of willingness to pay for carbon dioxide abatement. We use two empirical case studies: household decisions on energy-efficient heating versus insulation, and purchase decisions for ultralow-emission vehicles. We first show that deriving credible sets from the posterior distribution of the willingness to pay is straightforward in the case of deterministic consumer heterogeneity. However, when individual estimates are used, as with the random parameters of the mixed logit model, it is complex to define the distribution of interest for the interval estimation problem. This latter problem is actually more involved than determining the moments of the heterogeneity distribution of the willingness to pay using frequentist econometrics. The solution we propose is to derive and then summarize the distribution of point estimates of the individual willingness to pay under different loss functions.
    Keywords: Discrete Choice Models, Willingness to Pay, Credible Sets
    JEL: C25 D12 Q51
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:zbw:zewdip:13059&r=ecm
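    For the deterministic-heterogeneity case mentioned above, a credible set for willingness to pay can be read directly off posterior draws of the coefficient ratio. In the minimal sketch below, normal draws stand in for the output of an actual Bayesian sampler; all numbers are placeholders.

```python
# Minimal sketch: a credible interval for willingness to pay computed as the
# ratio of posterior draws of an attribute coefficient to the (negative) price
# coefficient. The normal "posterior draws" are placeholders for MCMC output.
import numpy as np

rng = np.random.default_rng(5)
beta_attr  = rng.normal(0.8, 0.1, size=20_000)     # e.g. CO2 abatement attribute
beta_price = rng.normal(-2.0, 0.2, size=20_000)    # price coefficient (negative)

wtp = -beta_attr / beta_price                      # one WTP value per posterior draw
lo, hi = np.percentile(wtp, [2.5, 97.5])           # equal-tailed 95% credible set
print(f"posterior mean WTP = {wtp.mean():.3f}, 95% credible set = [{lo:.3f}, {hi:.3f}]")
```

    Under random parameters (mixed logit), the corresponding object is instead a distribution of individual-level WTP point estimates, which the abstract proposes to summarize under different loss functions.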
  10. By: Emilia Tomczyk (Warsaw School of Economics)
    Abstract: A data revision is defined as an adjustment published after the initial announcement has been made; it may reflect correction of errors, availability of new information, etc. When economists use a database, they may not even be aware that some of the values have been revised, perhaps repeatedly, and that the corrected numbers may differ significantly from the original ones. I propose to test whether including information on data revisions helps to model properties of expectations, improve quantification procedures, or adjust tests of rationality to data vintage. This paper presents a review of the literature and of databases available for real-time analysis, and offers an introduction to the empirical analysis of the influence of data vintage on tests of expectations.
    Keywords: end of sample (EOS) data, real time (RTV) data, data revisions, economic databases, expectations
    JEL: C82 D84
    Date: 2013–01–13
    URL: http://d.repec.org/n?u=RePEc:wse:wpaper:68&r=ecm
  11. By: Halina Kowalczyk (National Bank of Poland, Economic Institute)
    Abstract: The paper discusses problems associated with communicating uncertainty by means of ‘fan charts’, used in many central banks for presenting density forecasts of inflation and other macroeconomic variables. Limitations of fan charts in the case of high macroeconomic uncertainty are shown. Issues related to the definition of uncertainty are addressed, stressing the need to distinguish between statistical model errors and uncertainty due to lack of knowledge. Modifications of the standard methods of constructing fan charts are suggested. The proposed approach is based on two distributions, one of which is subjective and describes possible macroeconomic scenarios, while the other describes model errors. Total uncertainty is represented as a mixture distribution or a density convolution. The proposed approach, although a mix of judgment and statistics, preserves information about scenarios and allows different types of uncertainty to be separated in the analysis.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:nbp:nbpmis:157&r=ecm
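    The two-distribution construction described above can be sketched numerically: draw from a subjective scenario mixture, add model error, and read off the quantiles that would form one slice of the fan. All distributions and numbers below are illustrative placeholders.

```python
# Sketch of the two-distribution idea: total uncertainty as the convolution of
# a subjective scenario distribution (here a two-component normal mixture) with
# a statistical model-error distribution.
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Subjective scenario distribution: baseline vs. adverse inflation scenario.
scenario = np.where(rng.random(n) < 0.7,
                    rng.normal(2.5, 0.5, n),    # baseline scenario
                    rng.normal(4.0, 0.8, n))    # adverse scenario

model_error = rng.normal(0.0, 0.6, n)           # statistical model error
total = scenario + model_error                  # scenario mixture convolved with model error

# Quantiles that would define one horizon's slice of the fan chart.
print(np.percentile(total, [5, 25, 50, 75, 95]))
```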
  12. By: Bel, K.; Paap, R.
    Abstract: Forecasts of key macroeconomic variables may lead to policy changes by governments, central banks and other economic agents. Policy changes in turn lead to structural changes in macroeconomic time series models. To describe this phenomenon we introduce a logistic smooth transition autoregressive model in which the regime switches depend on the forecast of the time series of interest. This forecast can either be an exogenous expert forecast or an endogenous forecast generated by the model. Results of an application of the model to US inflation show that (i) forecasts lead to regime changes and have an impact on the level of inflation; (ii) a relatively large forecast results in actions which eventually lower the inflation rate; and (iii) a counterfactual scenario in which forecasts during the oil crises of the 1970s are assumed to be correct leads to lower inflation than observed.
    Keywords: forecasting; nonlinear time series; inflation; regime switching
    Date: 2013–08–08
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765040884&r=ecm
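    A minimal simulation of the forecast-driven smooth-transition idea described above: the weight on the second regime is a logistic function of a forecast of the series, here taken to be the first regime's one-step-ahead prediction. Parameter values are illustrative assumptions, not the paper's estimated specification.

```python
# Minimal simulation of a logistic smooth transition AR model whose regime
# weight depends on a forecast of the series itself.
import numpy as np

def simulate_fstar(T=500, gamma=5.0, c=2.0, seed=7):
    rng = np.random.default_rng(seed)
    phi_low  = (0.3, 0.8)    # (intercept, AR coefficient) in the low-forecast regime
    phi_high = (0.1, 0.4)    # stabilizing regime triggered by high forecasts
    y = np.zeros(T)
    for t in range(1, T):
        forecast = phi_low[0] + phi_low[1] * y[t - 1]          # stand-in expert/endogenous forecast
        G = 1.0 / (1.0 + np.exp(-gamma * (forecast - c)))      # logistic transition weight
        mean_low  = phi_low[0]  + phi_low[1]  * y[t - 1]
        mean_high = phi_high[0] + phi_high[1] * y[t - 1]
        y[t] = (1 - G) * mean_low + G * mean_high + rng.normal(scale=0.5)
    return y

y = simulate_fstar()
print(round(y.mean(), 2), round(y.max(), 2))   # the forecast-triggered regime keeps the level in check
```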
  13. By: Elbers, Chris; Gunning, Jan Willem
    Abstract: Can project evaluation methods be used to evaluate programs, that is, complex interventions involving multiple activities? A program evaluation cannot be based simply on separate evaluations of its components if interactions between the activities are important. In this paper a measure is proposed, the total program effect (TPE), which is an extension of the average treatment effect on the treated (ATET). It explicitly takes into account that in the real world (with heterogeneous treatment effects) individual treatment effects and program assignment are often correlated. The TPE can also deal with the common situation in which such a correlation results from decisions on (intended) program participation not being taken centrally. In this context RCTs are less suitable, even for the simplest interventions. The TPE can be estimated by applying regression techniques to observational data from a representative sample of the targeted population. The approach is illustrated with an evaluation of a health insurance program in Vietnam.
    Keywords: Poverty Monitoring & Analysis, Health Monitoring & Evaluation, Science Education, Scientific Research & Science Parks, Statistical & Mathematical Sciences
    Date: 2013–09–01
    URL: http://d.repec.org/n?u=RePEc:wbk:wbrwps:6587&r=ecm
  14. By: Radovan Parrák (Institute of Economic Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic)
    Abstract: In this paper we compare two distinct volatility forecasting approaches: GARCH models are contrasted with models that forecast proxies of volatility directly. More precisely, the focus is on the economic valuation of the accuracy of one-day-ahead volatility forecasts. Profits from trading one-day at-the-money straddles on a hypothetical (artificial) market are used to assess relative forecasting accuracy. Our contribution lies in developing a novel approach to the economic valuation of volatility forecasts, an artificial option market with a single market price, and in comparing it with established approaches. We further compare the relative intra- and inter-group forecasting accuracy of the competing model families, and we measure the economic value of the richer information provided by high-frequency data. To preview the results, we show that the economic valuation of volatility forecasts yields a meaningful and robust ranking, and that this ranking is similar to the ranking implied by established statistical methods. Moreover, the performance of models that forecast volatility proxies directly depends strongly on the proxy used, and, as a corollary, the use of high-frequency data to predict future volatility is of considerable economic value.
    Keywords: GARCH, Realized volatility, economic loss function, volatility forecasting
    JEL: C58
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:fau:wpaper:wp2013_09&r=ecm
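    The straddle-based valuation described above can be sketched as follows: each forecaster prices a one-day at-the-money straddle from its own variance forecast, trades against a single market price, and is scored by realized profit. Black-Scholes pricing with zero rates and all parameter values are illustrative assumptions, not the paper's setup.

```python
# Sketch of the artificial-option-market scoring idea: two variance forecasters
# price a one-day at-the-money straddle, trade against a single market quote,
# and are ranked by realized trading profit.
import numpy as np
from scipy.stats import norm

def straddle_price(S, K, sigma, tau):
    """Black-Scholes call + put with zero interest rate."""
    d1 = (np.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    call = S * norm.cdf(d1) - K * norm.cdf(d2)
    put = K * norm.cdf(-d2) - S * norm.cdf(-d1)
    return call + put

def trading_profit(forecast_vol, market_vol, realized_ret, S=100.0, tau=1/252):
    """Buy the straddle if the forecast prices it above the market, else sell."""
    K = S
    own = straddle_price(S, K, forecast_vol, tau)
    mkt = straddle_price(S, K, market_vol, tau)
    payoff = abs(S * np.exp(realized_ret) - K)     # straddle payoff at expiry
    side = 1.0 if own > mkt else -1.0
    return side * (payoff - mkt)

rng = np.random.default_rng(8)
true_vol = 0.20                                     # annualized volatility of the underlying
market_vol = 0.25                                   # single (mispriced) market quote
rets = rng.normal(0.0, true_vol * np.sqrt(1/252), size=1000)
good = sum(trading_profit(0.20, market_vol, r) for r in rets)   # accurate forecaster
bad  = sum(trading_profit(0.30, market_vol, r) for r in rets)   # inaccurate forecaster
print(good, bad)                                    # accurate forecasts earn the larger profit
```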

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.