nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒03‒05
eighteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Inference on Local Average Treatment Effects for Misclassified Treatment By YANAGI, Takahide
  2. A Power Booster Factor for Out-of-Sample Tests of Predictability By Pincheira, Pablo
  3. Empirical likelihood for high frequency data By Lorenzo Camponovo; Yukitoshi Matsushita; Taisuke Otsu
  4. A goodness-of-fit test for Generalized Error Distribution By Daniele Coin
  5. Weighted-Average Least Squares Estimation of Generalized Linear Models By Giuseppe de Luca; Jan Magnus; Franco Peracchi
  6. BAMLSS: Bayesian Additive Models for Location, Scale and Shape (and Beyond) By Nikolaus Umlauf; Nadja Klein; Achim Zeileis
  7. Calculating Joint Bands for Impulse Response Functions using Highest Density Regions By Winker, Peter; Lütkepohl, Helmut; Staszewska-Bystrova, Anna
  8. A panel cointegration rank test with structural breaks and cross-sectional dependence By Karaman Örsal, Deniz Dilan; Arsova, Antonia
  9. Robust and Consistent Estimation of Generators in Credit Risk By Greig Smith; Goncalo dos Reis
  10. Considering the use of random fields in the Modifiable Areal Unit Problem By Michal Bernard Pietrzak; Bartosz Ziemkiewicz
  11. A note on identification in discrete choice models with partial observability By Fosgerau, Mogens; Ranjan, Abhishek
  12. On the Initialization of Adaptive Learning in Macroeconomic Models By Michele Berardi; Jaqueson K Galimberti
  13. Testing the Assumptions of Sequential Bifurcation for Factor Screening (revision of CentER DP 2015-034) By Shi, Wen; Kleijnen, J.P.C.
  14. Methods for strengthening a weak instrument in the case of a persistent treatment By Michel Berthélemy; Petyo Bonev; Damien Dussaux; Magnus Söderberg
  15. Full Information Estimation of Household Income Risk and Consumption Insurance By Arpita Chatterjee; James Morley; Aarti Singh
  16. Modeling euro area bond yields using a time-varying factor model By Adam, Tomáš; Lo Duca, Marco
  17. Saving Behaviour and Biomarkers: A High-Dimensional Bayesian Analysis of British Panel Data By Sarah Brown; Pulak Ghosh; Daniel Gray; Bhuvanesh Pareek; Jennifer Roberts
  18. Towards a pragmatic approach to compositional data analysis By Michael Greenacre

  1. By: YANAGI, Takahide
    Abstract: We develop point-identification and inference methods for the local average treatment effect when the binary treatment contains a measurement error. The standard instrumental variable estimator is inconsistent for the parameter since the measurement error is non-classical by construction. Our proposed analysis corrects the problem by identifying the distribution of the measurement error using an exogenous variable, such as a covariate or instrument. The moment conditions derived from the identification lead to generalized method of moments estimation with asymptotically valid inference. Monte Carlo simulations demonstrate the desirable finite-sample performance of the proposed procedure.
    Keywords: misclassification, instrumental variable, non-differential measurement error, nonparametric method, causal inference
    JEL: C14 C21 C25
    Date: 2017–02–21
    URL: http://d.repec.org/n?u=RePEc:hit:econdp:2017-02&r=ecm
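    Illustration: the standard Wald/IV estimand for the local average treatment effect, written with the observed (possibly misclassified) treatment D*; the measurement-error correction developed in the paper is not reproduced here. A minimal LaTeX sketch:
      \hat{\beta}_{IV} = \frac{E[Y \mid Z=1] - E[Y \mid Z=0]}{E[D^{*} \mid Z=1] - E[D^{*} \mid Z=0]}
    When D* mismeasures the true treatment D, the denominator no longer equals the share of compliers, so this estimator is inconsistent for the LATE.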
  2. By: Pincheira, Pablo
    Abstract: In this paper we introduce a “power booster factor” for out-of-sample tests of predictability. The relevant econometric environment is one in which the econometrician wants to compare the population Mean Squared Prediction Errors (MSPE) of two models: a large nesting model and a smaller nested model. Although our factor can be used to improve the power of many out-of-sample tests of predictability, in this paper we focus on boosting the power of the widely used test developed by Clark and West (2006, 2007). Our new test multiplies the Clark and West t-statistic by a factor that should be close to one under the null hypothesis that the smaller nested model is the true model, but greater than one under the alternative hypothesis that the larger nesting model is more adequate. We use Monte Carlo simulations to explore the size and power of our approach. Our simulations reveal that the new test is well sized and powerful. In particular, it tends to be less undersized and more powerful than the test by Clark and West (2006, 2007). Although most of the gains in power are associated with size improvements, we also obtain gains in size-adjusted power. Finally, we present an empirical application in which more rejections of the null hypothesis are obtained with our new test.
    Keywords: Time-series, forecasting, inference, inflation, exchange rates, random walk, out-of-sample
    JEL: C22 C52 C53 C58 E17 E27 E37 E47 F37
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:77027&r=ecm
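    Illustration: a minimal sketch of the Clark and West (2007) adjusted-MSPE statistic that the proposed booster factor multiplies; the factor itself is not reproduced here, and the array names are illustrative.
      import numpy as np
      from scipy import stats

      def clark_west_tstat(y, f_small, f_big):
          """Clark-West (2007) adjusted-MSPE t-statistic for nested forecast comparison.

          y       : realized values
          f_small : out-of-sample forecasts from the nested (parsimonious) model
          f_big   : out-of-sample forecasts from the larger nesting model
          """
          e_small, e_big = y - f_small, y - f_big
          # adjusted loss differential: the MSPE difference is corrected for the
          # noise introduced by estimating the extra parameters of the large model
          f_adj = e_small**2 - (e_big**2 - (f_small - f_big)**2)
          t = np.sqrt(len(f_adj)) * f_adj.mean() / f_adj.std(ddof=1)
          return t, 1 - stats.norm.cdf(t)   # one-sided p-value, standard normal critical values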
  3. By: Lorenzo Camponovo; Yukitoshi Matsushita; Taisuke Otsu
    Abstract: Against the background of the increasing availability of high frequency financial data, various volatility measures and related statistical theory have been developed in the recent literature. This paper introduces the method of empirical likelihood to conduct statistical inference on volatility measures under high frequency data environments. We propose a modified empirical likelihood statistic that is asymptotically pivotal under the infill asymptotics, where the number of high frequency observations in a fixed time interval increases to infinity. Our empirical likelihood approach is extended to be robust to the presence of jumps and microstructure noise. We also provide an empirical likelihood test to detect the presence of jumps. Furthermore, we establish a Bartlett correction, a higher-order refinement, for a general nonparametric likelihood statistic. Simulations and a real data example illustrate the usefulness of our approach.
    Keywords: High frequency data, Volatility, Empirical likelihood
    JEL: C12 C14 C58
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:591&r=ecm
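    Illustration: the generic empirical likelihood ratio for a moment condition E[g(X_i, θ)] = 0, on which the modified statistic for volatility measures builds; the infill-asymptotics modification itself is not reproduced here.
      \mathcal{R}(\theta) = \max\Big\{ \prod_{i=1}^{n} n p_i : p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i\, g(X_i,\theta) = 0 \Big\}
    Under standard sampling, -2 \log \mathcal{R}(\theta_0) is asymptotically chi-squared; the paper modifies the statistic so that it remains asymptotically pivotal under infill asymptotics.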
  4. By: Daniele Coin (Bank of Italy)
    Abstract: The Generalized Error Distribution (GED) is a widely used and flexible family of symmetric probability distributions. Thanks to its properties, it is becoming more and more popular in many fields of science, and it is therefore important to determine whether a sample is drawn from a GED; this is usually done using a graphical approach. In this paper we present a new goodness-of-fit test for the GED that performs well in detecting non-GED distributions when the alternative distribution is either skewed or a mixture. A comparison between well-known tests and this new procedure is performed through a simulation study. We have developed a function that performs the analysis described in this paper in the R environment. The computational time required by this procedure is negligible.
    Keywords: exponential power distribution, kurtosis, normal standardized Q-Q plot
    JEL: C14 C15 C63
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1096_17&r=ecm
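    Illustration: one common parameterization of the Generalized Error (exponential power) density that the test targets; the shape parameter ν governs the tails, with ν = 2 giving the Gaussian shape and ν = 1 the Laplace shape.
      f(x; \mu, \sigma, \nu) = \frac{\nu}{2\sigma\,\Gamma(1/\nu)} \exp\left\{ -\left( \frac{|x-\mu|}{\sigma} \right)^{\nu} \right\}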
  5. By: Giuseppe de Luca (University of Palermo, Italy); Jan Magnus (VU Amsterdam, The Netherlands); Franco Peracchi (University Rome Tor Vergata, Italy)
    Abstract: The weighted-average least squares (WALS) approach, introduced by Magnus et al. (2010) in the context of Gaussian linear models, has been shown to enjoy important advantages over other strictly Bayesian and strictly frequentist model averaging estimators when accounting for problems of uncertainty in the choice of the regressors. In this paper we extend the WALS approach to deal with uncertainty about the specification of the linear predictor in the wider class of generalized linear models (GLMs). We study the large-sample properties of the WALS estimator for GLMs under a local misspecification framework that allows the development of asymptotic model averaging theory. We also investigate the finite sample properties of this estimator by a Monte Carlo experiment whose design is based on the real empirical analysis of attrition in the first two waves of the Survey of Health, Ageing and Retirement in Europe (SHARE).
    Keywords: WALS; model averaging; generalized linear models; Monte Carlo; attrition
    JEL: C51 C25 C13 C11
    Date: 2017–02–27
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20170029&r=ecm
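    Illustration: the generic form of a model-averaging estimator over candidate specifications m = 1, ..., M of the linear predictor; the particular semi-Bayesian weights used by WALS are not reproduced here.
      \hat{\beta}_{MA} = \sum_{m=1}^{M} \lambda_m \hat{\beta}_m, \qquad \lambda_m \ge 0, \quad \sum_{m=1}^{M} \lambda_m = 1
    where the data-dependent weights \lambda_m trade off the candidate models.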
  6. By: Nikolaus Umlauf; Nadja Klein; Achim Zeileis
    Abstract: Bayesian analysis provides a convenient setting for the estimation of complex generalized additive regression models (GAMs). Since computational power has increased tremendously in the past decade, it is now possible to tackle complicated inferential problems, e.g., with Markov chain Monte Carlo simulation, on virtually any modern computer. This is one of the reasons why Bayesian methods have become increasingly popular, leading to a number of highly specialized and optimized estimation engines, with attention shifting from conditional mean models to probabilistic distributional models capturing location, scale, shape (and other aspects) of the response distribution. In order to embed the many different approaches suggested in the literature and software, a unified modeling architecture for distributional GAMs is established that exploits the general structure of these models and encompasses many different response distributions, estimation techniques (posterior mode or posterior mean), and model terms (fixed, random, smooth, spatial, ...). It is shown that within this framework implementing algorithms for complex regression problems, as well as integrating already existing software, is relatively straightforward. The usefulness is emphasized with two complex and computationally demanding case studies: a large daily precipitation climatology based on more than 1.2 million observations from more than 50 meteorological stations, and a Cox model for continuous time with space-time interactions on a data set with over five thousand "individuals".
    Keywords: GAMLSS, distributional regression, MCMC, BUGS, R, software
    JEL: I18 J22 J38
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:inn:wpaper:2017-05&r=ecm
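    Illustration: the distributional (GAMLSS-type) structure covered by the unified architecture, in which each parameter of the response distribution receives its own additive predictor; fixed, random, smooth and spatial terms all enter through the functions f_jk.
      y_i \sim \mathcal{D}(\theta_{i1}, \dots, \theta_{iK}), \qquad g_k(\theta_{ik}) = \eta_{ik} = \beta_{0k} + \sum_{j=1}^{J_k} f_{jk}(x_i), \quad k = 1, \dots, K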
  7. By: Winker, Peter; Lütkepohl, Helmut; Staszewska-Bystrova, Anna
    Abstract: This paper proposes a new non-parametric method of constructing joint confidence bands for impulse response functions of vector autoregressive models. The estimation uncertainty is captured by means of bootstrapping, and the highest density region (HDR) approach is used to construct the bands. A Monte Carlo comparison of the HDR bands with existing alternatives shows that the former are competitive with the bootstrap-based Bonferroni and Wald confidence regions. The relative tightness of the HDR bands, combined with their good coverage properties, makes them attractive for applications. An application to corporate bond spreads for Germany highlights the potential for empirical work.
    JEL: C32 C53 C58
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:zbw:vfsc16:145537&r=ecm
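    Illustration: a minimal sketch of building a joint band from bootstrap impulse-response draws by retaining the highest-density paths; here the joint density is approximated by a fitted multivariate normal, which merely stands in for the density estimate used in the paper.
      import numpy as np
      from scipy.stats import multivariate_normal

      def hdr_band(irf_draws, coverage=0.90):
          """irf_draws: (B, H) array of bootstrap impulse responses over H horizons."""
          mean = irf_draws.mean(axis=0)
          cov = np.cov(irf_draws, rowvar=False)
          # score every bootstrap path by its (approximate) joint density
          logdens = multivariate_normal(mean, cov, allow_singular=True).logpdf(irf_draws)
          # keep the `coverage` share of paths with the highest density ...
          retained = irf_draws[logdens >= np.quantile(logdens, 1 - coverage)]
          # ... and report their pointwise envelope as the joint band
          return retained.min(axis=0), retained.max(axis=0)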
  8. By: Karaman Örsal, Deniz Dilan; Arsova, Antonia
    Abstract: This paper proposes a new likelihood-based panel cointegration rank test which allows for a linear time trend with heterogeneous breaks and cross-sectional dependence. It is based on a novel modification of the inverse normal method, which combines the p-values of the individual likelihood-ratio trace statistics of Trenkler et al. (2007). We call this new test a correlation augmented inverse normal (CAIN) test. It infers the unknown correlation between the probits of the individual p-values from an estimate of the average absolute correlation between the VAR processes' innovations, which is readily observable in practice. A Monte Carlo study demonstrates that this simple test is robust to various degrees of cross-sectional dependence generated by common factors. It has better size and power properties than other meta-analytic tests in panels with dimensions typically encountered in macroeconometric analysis.
    JEL: C12 C15 C33
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:zbw:vfsc16:145822&r=ecm
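    Illustration: the correlation-augmented inverse normal combination underlying the CAIN idea, where p_i are the individual trace-test p-values and ρ̂ denotes the correlation between their probits, which the paper infers from the average absolute correlation of the VAR innovations (the exact mapping is not reproduced here).
      t_i = \Phi^{-1}(p_i), \qquad Z = \frac{\sum_{i=1}^{N} t_i}{\sqrt{N + N(N-1)\hat{\rho}}} \;\to\; N(0,1) \ \text{under } H_0
    Setting ρ̂ = 0 recovers the standard inverse normal (Stouffer) combination for independent panel units.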
  9. By: Greig Smith; Goncalo dos Reis
    Abstract: Bond rating Transition Probability Matrices (TPMs) are built over a one-year time-frame, and for many practical purposes, like the assessment of risk in portfolios, one needs to compute the TPM for a smaller time interval. In the context of continuous time Markov chains (CTMC), several deterministic and statistical algorithms have been proposed to estimate the generator matrix. We focus on the Expectation-Maximization (EM) algorithm of Bladt and Sørensen (2005) for a CTMC with an absorbing state for such estimation. This work's contribution is fourfold. Firstly, we provide directly computable closed-form expressions for quantities appearing in the EM algorithm; previously, these quantities had to be estimated numerically, and the closed forms yield considerable computational speedups. Secondly, we prove convergence to a single set of parameters under reasonable conditions. Thirdly, we derive a closed-form expression for the error estimate in the EM algorithm, which allows approximate confidence intervals for the estimates to be constructed. Finally, we provide a numerical benchmark of our results against other known algorithms, in particular on several problems related to credit risk. The EM algorithm we propose, padded with the new formulas (and error criteria), is very competitive and outperforms other known algorithms in several metrics.
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1702.08867&r=ecm
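    Illustration: the basic relationship between a CTMC generator matrix and the transition probability matrix over a shorter horizon, which is what makes an estimated generator useful in practice; the EM estimation of the generator itself is not reproduced, and the numbers below are hypothetical.
      import numpy as np
      from scipy.linalg import expm

      def tpm_from_generator(Q, dt):
          """Transition probability matrix over horizon dt for a CTMC with generator Q.

          Q : (K, K) generator with rows summing to zero and nonnegative off-diagonals;
              an absorbing (default) state corresponds to a row of zeros.
          """
          return expm(dt * Q)

      Q = np.array([[-0.10,  0.08,  0.02],
                    [ 0.05, -0.15,  0.10],
                    [ 0.00,  0.00,  0.00]])   # last state absorbing
      P_quarter = tpm_from_generator(Q, 0.25)  # quarterly TPM from an annual-scale generator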
  10. By: Michal Bernard Pietrzak (Nicolaus Copernicus University, Poland); Bartosz Ziemkiewicz (Nicolaus Copernicus University, Poland)
    Abstract: The research focuses on the modifiable areal unit problem (MAUP), of which two aspects are considered: the scale problem and the aggregation problem. In this article we consider the use of random field theory to address the scale problem, defined as the variability of analysis results caused by a change in the aggregation scale. Empirical study of the scale problem calls for simulation: within the simulation analysis, realisations of random fields defined over irregular regions are generated. First, the internal structure of spatial processes is analysed. Next, we consider the theoretical foundations for random fields defined over irregular regions. The assumed properties of the random fields are based on characteristics established for economic phenomena. The outcome is a procedure for generating a vector of random fields with specified properties, which is also used in the simulations addressing the scale problem. The research is funded by the National Science Centre, Poland, under research project no. 2015/17/B/HS4/01004.
    Keywords: spatial econometrics, Scale Problem, random fields, Modifiable Areal Unit Problem, simulations
    JEL: C10 C15 C21
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:pes:wpaper:2016:no40&r=ecm
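    Illustration: a minimal sketch of the scale problem using a simulated Gaussian random field with exponential covariance on a regular grid; aggregating the same realisation to coarser units changes the dispersion of the unit-level values. Regular blocks stand in for the irregular regions discussed in the paper.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 32                                   # grid is n x n
      xs, ys = np.meshgrid(np.arange(n), np.arange(n))
      coords = np.column_stack([xs.ravel(), ys.ravel()])
      d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
      cov = np.exp(-d / 5.0)                   # exponential covariance, range 5
      field = np.linalg.cholesky(cov + 1e-8 * np.eye(n * n)) @ rng.standard_normal(n * n)
      field = field.reshape(n, n)

      def aggregate(f, block):
          """Mean of the field over square blocks of side `block`."""
          m = f.shape[0] // block
          return f[:m * block, :m * block].reshape(m, block, m, block).mean(axis=(1, 3))

      for block in (1, 4, 8):                  # finer to coarser aggregation scales
          print(block, aggregate(field, block).std())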
  11. By: Fosgerau, Mogens; Ranjan, Abhishek
    Abstract: This note establishes a new identification result for additive random utility discrete choice models (ARUM). A decision-maker associates a random utility U_j + m_j to each alternative in a finite set j ∈ {1, ..., J}, where U = (U_1, ..., U_J) is unobserved by the researcher and random with an unknown joint distribution, while the perturbation m = (m_1, ..., m_J) is observed. The decision-maker chooses the alternative that yields the maximum random utility, which leads to a choice probability system m → (Pr(1|m), ..., Pr(J|m)). Previous research has shown that the choice probability system is identified from the observation of the relationship m → Pr(1|m). We show that the complete choice probability system is identified from observation of a relationship m → ∑_{j=1}^{s} Pr(j|m), for any s < J.
    Keywords: ARUM; random utility discrete choice; identification
    JEL: C25 D11
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:76800&r=ecm
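    Illustration: a small simulation of the choice probability system m → (Pr(1|m), ..., Pr(J|m)) in the ARUM setup of the abstract, with an arbitrary (here correlated normal) joint distribution for the unobserved utilities; the identification argument itself is not reproduced.
      import numpy as np

      rng = np.random.default_rng(1)
      J, R = 3, 100_000
      # unobserved random utilities U with an arbitrary correlated joint distribution
      cov = np.array([[1.0, 0.5, 0.2],
                      [0.5, 1.0, 0.3],
                      [0.2, 0.3, 1.0]])
      U = rng.multivariate_normal(np.zeros(J), cov, size=R)

      def choice_probs(m):
          """Pr(j | m): frequency with which alternative j maximizes U_j + m_j."""
          chosen = np.argmax(U + np.asarray(m), axis=1)
          return np.bincount(chosen, minlength=J) / R

      print(choice_probs([0.0, 0.0, 0.0]))
      print(choice_probs([0.5, 0.0, -0.5]))   # shifting m shifts the choice probabilities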
  12. By: Michele Berardi (KOF Swiss Economic Institute, ETH Zurich, Switzerland); Jaqueson K Galimberti (KOF Swiss Economic Institute, ETH Zurich, Switzerland)
    Abstract: We review and evaluate methods previously adopted in the applied literature on adaptive learning in order to initialize agents’ beliefs. Previous methods are classified into three broad classes: equilibrium-related, training sample-based, and estimation-based. We conduct several simulations comparing the accuracy of the initial estimates provided by these methods and how they affect the accuracy of other estimated model parameters. We find evidence against their joint estimation with standard moment conditions: as the accuracy of estimated initials tends to deteriorate with the sample size, spillover effects also degrade the accuracy of the estimates of the model’s structural parameters. We show how this problem can be attenuated by penalizing the variance of estimation errors. Even so, the joint estimation of learning initials with other model parameters is still subject to severe distortions in small samples. We find that equilibrium-related and training sample-based initials are less prone to these issues. We also demonstrate the empirical relevance of our results by estimating a New Keynesian Phillips curve with learning, where we find that our estimation approach provides robustness to the initialization of learning. This allows us to conclude that under adaptive learning the degree of price stickiness is lower than under rational expectations, whereas the fraction of backward-looking price setters increases.
    Keywords: Expectations, Adaptive learning, Initialization, Algorithms, Hybrid New Keynesian Phillips curve
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:kof:wpskof:16-422&r=ecm
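    Illustration: the recursive least squares updating that underlies adaptive learning in a linear forecasting rule with regressors x_t and beliefs φ_t; the initialization question studied in the paper concerns the choice of φ_0 and R_0 (and, for constant-gain learning, the gain γ_t = γ).
      \phi_t = \phi_{t-1} + \gamma_t R_t^{-1} x_t \left( y_t - x_t'\phi_{t-1} \right), \qquad R_t = R_{t-1} + \gamma_t \left( x_t x_t' - R_{t-1} \right)
    with a decreasing gain γ_t = 1/t or a constant gain γ_t = γ.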
  13. By: Shi, Wen; Kleijnen, J.P.C. (Tilburg University, Center For Economic Research)
    Abstract: Sequential bifurcation (or SB) is an efficient and effective factor-screening method; i.e., SB quickly identifies the important factors (inputs) in experiments with simulation models that have very many factors—provided the SB assumptions are valid. The specific SB assumptions are: (i) a second-order polynomial is an adequate approximation (a valid metamodel) of the implicit input/output function of the underlying simulation model; (ii) the directions (signs) of the first-order effects are known (so the first-order polynomial approximation is monotonic); (iii) so-called “heredity” applies; i.e., if an input has no important first-order effect, then this input has no important second-order effects. Moreover—like many other statistical methods—SB assumes Gaussian simulation outputs if the simulation model is stochastic (random). A generalization of SB called “multiresponse SB” (or MSB) uses the same assumptions, but allows for simulation models with multiple types of responses (outputs). To test whether these assumptions hold, we develop new methods. We evaluate these methods through Monte Carlo experiments and a case study.
    Keywords: sensitivity analysis; experimental design; meta-modeling; validation; regression; simulation
    JEL: C0 C1 C9 C15 C44
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:tiu:tiucen:763fd6f8-b618-4b06-a284-5a1b27840044&r=ecm
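    Illustration: a minimal sketch of the sequential bifurcation recursion under the assumptions listed in the abstract (first-order metamodel, known positive signs of the effects); `simulate` is a hypothetical black-box wrapper around the simulation model, and y(k) is its output with factors 1..k at their high level and the rest at their low level.
      def y(k, K, simulate):
          """Output with factors 1..k high and k+1..K low (factors coded -1/+1)."""
          return simulate([+1] * k + [-1] * (K - k))

      def sequential_bifurcation(lo, hi, K, simulate, threshold):
          """Return indices of factors in {lo+1, ..., hi} with important first-order effects.

          Under the first-order metamodel, the aggregate effect of the group
          {lo+1, ..., hi} equals (y(hi) - y(lo)) / 2; unimportant groups are
          discarded whole, important groups are bisected and screened recursively.
          """
          if (y(hi, K, simulate) - y(lo, K, simulate)) / 2 < threshold:
              return []
          if hi - lo == 1:
              return [hi]
          mid = (lo + hi) // 2
          return (sequential_bifurcation(lo, mid, K, simulate, threshold)
                  + sequential_bifurcation(mid, hi, K, simulate, threshold))

      # call as: sequential_bifurcation(0, K, K, simulate, threshold)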
  14. By: Michel Berthélemy; Petyo Bonev; Damien Dussaux; Magnus Söderberg
    Abstract: When evaluating policy treatments that are persistent and endogenous, available instrumental variables often exhibit more variation over time than the treatment variable. This leads to a weak instrumental variable problem, resulting in uninformative confidence intervals. The authors of this paper propose two new estimation approaches that strengthen the instrument. They derive their theoretical properties and show in Monte Carlo simulations that they outperform standard IV-estimators. The authors use these procedures to estimate the effect of public utility divestiture in the US nuclear energy sector. Their results show that divestiture significantly increases production efficiency.
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:lsg:lsgwps:wp265&r=ecm
  15. By: Arpita Chatterjee; James Morley; Aarti Singh
    Abstract: We develop a panel unobserved components model of household income and consumption that can be estimated using full information methods. Maximum likelihood estimates for a simple version of this model suggest similar income risk, but higher consumption insurance, relative to the partial information moments-based estimates of Blundell, Pistaferri, and Preston (2008) when using the same panel dataset. Bayesian model comparison supports this simple version of the model, which only allows a spillover from permanent income to permanent consumption, but assumes no cointegration and no persistence in transitory components. At the same time, consumption insurance and income risk estimates are highly robust across different specifications.
    Keywords: panel unobserved components; Bayesian model comparison; permanent income; household consumption behavior
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:syd:wpaper:2017-04&r=ecm
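    Illustration: the permanent/transitory income process with partial consumption insurance, in the spirit of Blundell, Pistaferri, and Preston (2008), that this kind of state-space model builds on; the specific restrictions selected by the Bayesian model comparison are not reproduced here.
      y_{it} = P_{it} + \varepsilon_{it}, \qquad P_{it} = P_{i,t-1} + \eta_{it}, \qquad \Delta c_{it} = \phi\, \eta_{it} + \psi\, \varepsilon_{it} + \xi_{it}
    where 1 - φ and 1 - ψ measure the degree of insurance against permanent and transitory income shocks.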
  16. By: Adam, Tomáš; Lo Duca, Marco
    Abstract: In this paper, we study the dynamics and drivers of sovereign bond yields in euro area countries using a factor model with time-varying loading coefficients and stochastic volatility, which allows for capturing changes in the pricing mechanism of bond yields. Our key contribution is exploring both the global and the local dimensions of bond yield determinants in individual euro area countries using a time-varying model. Using the reduced-form results, we show the decoupling of periphery euro area bond yields from core-country yields following the financial crisis and the extent of their subsequent re-integration. In addition, by means of a structural analysis based on identification via sign restrictions, we present time-varying impulse responses of bond yields to EA and US monetary policy shocks and to confidence shocks.
    Keywords: bayesian estimation, bond yield, factor model, sovereign debt crisis, stochastic volatility
    JEL: C11 G01 E58
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20172012&r=ecm
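    Illustration: a generic factor model with time-varying loadings and stochastic volatility of the kind described in the abstract; the sign-restriction identification is not reproduced here.
      y_{it} = \lambda_{it}' f_t + \varepsilon_{it}, \qquad \lambda_{it} = \lambda_{i,t-1} + v_{it}, \qquad \varepsilon_{it} \sim N\!\left(0, e^{h_{it}}\right), \quad h_{it} = h_{i,t-1} + w_{it}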
  17. By: Sarah Brown (Department of Economics, University of Sheffield); Pulak Ghosh (Indian Institute of Management (IIMB), Bangalore, India); Daniel Gray (Department of Economics, University of Sheffield); Bhuvanesh Pareek (Indian Institute of Management (IIM), Indore, India); Jennifer Roberts (Department of Economics, University of Sheffield)
    Abstract: Using British panel data, we explore the relationship between saving behaviour and health, as measured by an extensive range of biomarkers, which are rarely available in large nationally representative surveys. The effects of these objective measures of health are compared with commonly used self-assessed health measures. We develop a semi-continuous high-dimensional Bayesian modelling approach, which allows different data-generating processes for the decision to save and the amount saved. We find that composite biomarker measures of health, as well as individual biomarkers, are significant determinants of saving. Our results suggest that objective biomarker measures of health have differential impacts on saving behaviour compared to self-reported health measures, suggesting that objective health measures can further our understanding of the effect of health on financial behaviours.
    Keywords: bayesian modelling; biomarkers; household finances; saving; two-part model.
    JEL: D12 D14
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:shf:wpaper:2017005&r=ecm
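    Illustration: the two-part (semi-continuous) structure referred to in the abstract, with separate data-generating processes for the decision to save and the amount saved; the high-dimensional Bayesian prior specification is not reproduced here.
      \Pr(s_i > 0 \mid x_i) = \Phi(x_i'\beta), \qquad \log s_i \mid (s_i > 0, x_i) = x_i'\gamma + u_i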
  18. By: Michael Greenacre
    Abstract: Compositional data are nonnegative data with the property of closure: that is, each set of values on their components, or so-called parts, has a fixed sum, usually 1 or 100%. The approach to compositional data analysis originated by John Aitchison uses ratios of parts as the fundamental starting point for description and modeling. I show that a compositional data set can be effectively replaced by a set of ratios, one less than the number of parts, and that these ratios describe an acyclic connected graph of all the parts. Contrary to recent literature, I show that the additive log-ratio transformation can be an excellent substitute for the original data set, as shown in an archaeological data set as well as in three other examples. I propose further that a smaller set of ratios of parts can be determined, either by expert choice or by automatic selection, which explains as much variance as required for all practical purposes. These part ratios can then be validly summarized and analyzed by conventional univariate methods, as well as multivariate methods, where the ratios are preferably log-transformed.
    Keywords: compositional data, log-ratio transformation, log-ratio analysis, log-ratio distance, multivariate analysis, ratios, subcompositional coherence, univariate statistics.
    JEL: Z32 C19 C38 C55
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1554&r=ecm
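    Illustration: the additive log-ratio (alr) transformation discussed in the abstract, which replaces a D-part composition by D - 1 log-ratios against a reference part; the choice of reference part (here the last) is up to the analyst.
      import numpy as np

      def alr(x, ref=-1):
          """Additive log-ratio transform of a composition x (parts summing to a constant).

          Returns the D-1 log-ratios log(x_j / x_ref), dropping the reference part itself.
          """
          x = np.asarray(x, dtype=float)
          ratios = np.log(x / x[ref])
          return np.delete(ratios, ref if ref >= 0 else x.size + ref)

      print(alr([0.2, 0.3, 0.5]))   # two log-ratios against the last part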

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.