nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒03‒23
eleven papers chosen by
Sune Karlsson
Örebro universitet

  1. Moment restrictions and identification in linear dynamic panel data models By Tue Gorgens; Chirok Han; Sen Xue
  2. Estimation of counterfactual distributions with a continuous endogenous treatment By Santiago Pereda Fernández
  3. VAR Models with Non-Gaussian Shocks By Ching-Wai Chiu; Haroon Mumtaz; Gabor Pinter
  4. D-trace Precision Matrix Estimation Using Adaptive Lasso Penalties By Avagyan, Vahe; Nogales, Francisco J.; Alonso, Andrés M.
  5. On tail dependence coefficients of transformed multivariate Archimedean copulas By Elena Di Bernardino; Didier Rullière
  6. Functional outlier detection with a local spatial depth By Lillo, Rosa E.; Galeano, Pedro; Sguera, Carlo
  7. Using Split Samples to Improve Inference about Causal Effects By Marcel Fafchamps; Julien Labonne
  8. Solution and Estimation Methods for DSGE Models By Jesús Fernández-Villaverde; Juan F. Rubio Ramírez; Frank Schorfheide
  9. Testing for a Common Volatility Process and Information Spillovers in Bivariate Financial Time Series Models By Chen, J.; Kobayashi, M.; McAleer, M.J.
  10. On the Stability of the Excess Sensitivity of Aggregate Consumption Growth in the US By Gerdie Everaert; Lorenzo Pozzi; Ruben Schoonackers
  11. Attenuation bias when measuring inventive performance By Zwick, Thomas; Frosch, Katharina

  1. By: Tue Gorgens; Chirok Han; Sen Xue
    Abstract: This paper investigates the relationship between moment restrictions and identification in simple linear AR(1) dynamic panel data models with fixed effects under standard minimal assumptions. The number of time periods is assumed to be small. The assumptions imply linear and quadratic moment restrictions which can be used for GMM estimation. The paper makes three points. First, contrary to common belief, the linear moment restrictions may fail to identify the autoregressive parameter even when it is known to be less than 1. Second, the quadratic moment restrictions provide full or partial identification in many of the cases where the linear moment restrictions do not. Third, the first moment restrictions can also be important for identification. Practical implications of the findings are illustrated using Monte Carlo simulations.
    Keywords: Dynamic panel data models, fixed effects, identification, generalized method of moments, Arellano-Bond estimator.
    JEL: C23
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:acb:cbeeco:2016-633&r=ecm
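    Illustration: the sketch below (an editor's illustration, not taken from the paper) simulates a simple AR(1) panel with fixed effects and checks that the textbook linear (Arellano-Bond) and quadratic (Ahn-Schmidt) moment restrictions hold at the true autoregressive parameter; the parameter values and sample sizes are arbitrary assumptions.

      # Simulate y_it = rho*y_{i,t-1} + alpha_i + u_it with u_it iid, independent of
      # alpha_i and of the initial-condition noise, and verify the moment restrictions.
      import numpy as np

      rng = np.random.default_rng(0)
      N, T, rho = 200_000, 4, 0.5                      # large N, short T (illustrative values)

      alpha = rng.normal(size=N)
      y = np.empty((N, T + 1))
      y[:, 0] = alpha + rng.normal(size=N)             # initial condition correlated with alpha_i
      u = rng.normal(size=(N, T))
      for t in range(1, T + 1):
          y[:, t] = rho * y[:, t - 1] + alpha + u[:, t - 1]

      def d_resid(t):
          # First-differenced residual: Delta u_it = Delta y_it - rho * Delta y_{i,t-1}
          return (y[:, t] - y[:, t - 1]) - rho * (y[:, t - 1] - y[:, t - 2])

      # Linear moments: E[y_{i,s} * Delta u_{i,t}] = 0 for s <= t-2
      for t in range(2, T + 1):
          for s in range(0, t - 1):
              print(f"linear    t={t}, s={s}: {np.mean(y[:, s] * d_resid(t)): .4f}")

      # Quadratic moments: E[(y_{i,T} - rho*y_{i,T-1}) * Delta u_{i,t}] = 0 for t = 2,...,T-1
      v_T = y[:, T] - rho * y[:, T - 1]                # equals alpha_i + u_iT
      for t in range(2, T):
          print(f"quadratic t={t}:      {np.mean(v_T * d_resid(t)): .4f}")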
  2. By: Santiago Pereda Fernández (Bank of Italy)
    Abstract: Policy makers are often interested in the distributional effects that a policy would have. In this paper I propose a method to estimate such effects when the treatment variable is endogenous, continuous, and has a heterogeneous effect. I consider a triangular system of equations in which the unobservables are related by a copula that captures the endogeneity of the model. The copula is nonparametrically identified by inverting the quantile processes conditional on a vector of covariates. I estimate both conditional quantile processes using existing quantile regression methods, and propose a parametric and a nonparametric estimator of the copula, showing the asymptotic properties of the estimators. I consider three kinds of counterfactual experiments: changing the distribution of the treatment, changing the distribution of the instrument, and changing the determination of the treatment, discussing the estimation for each counterfactual. I illustrate these methods by estimating several counterfactuals that affect the distribution of the share of food consumption.
    Keywords: copula, counterfactual distribution, endogeneity, policy analysis, quantile regression, unconditional distributional effects
    JEL: C31 C36
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1053_16&r=ecm
  3. By: Ching-Wai Chiu (Bank of England); Haroon Mumtaz (School of Economics and Finance, Queen Mary University of London); Gabor Pinter (Bank of England; Centre for Macroeconomics (CFM))
    Abstract: We introduce a Bayesian VAR model with non-Gaussian disturbances that are modelled with a finite mixture of normal distributions. Importantly, we allow for regime switching among the different components of the mixture of normals. Our model is highly flexible and can capture distributions that are fat-tailed, skewed and even multimodal. We show that our model can generate large out-of-sample forecast gains relative to standard forecasting models, especially during tranquil periods. Our model forecasts are also competitive with those generated by the conventional VAR model with stochastic volatility.
    JEL: C11 C32 C52
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:cfm:wpaper:1609&r=ecm
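    Illustration: a minimal simulation (editor's sketch, not the paper's Bayesian sampler) of a bivariate VAR(1) whose shocks are drawn from a two-component mixture of normals with Markov switching between the components; the coefficient matrix, component moments and transition probabilities are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      T = 500
      A = np.array([[0.5, 0.1],
                    [0.0, 0.7]])                       # VAR(1) coefficients (assumed)

      means = [np.zeros(2), np.array([-0.5, 0.5])]     # "tranquil" and "turbulent" components
      covs  = [0.5 * np.eye(2), np.array([[4.0, 1.0], [1.0, 4.0]])]
      P = np.array([[0.95, 0.05],                      # Markov transitions between the
                    [0.10, 0.90]])                     # mixture components

      y = np.zeros((T, 2))
      regime = 0
      for t in range(1, T):
          regime = rng.choice(2, p=P[regime])          # regime switching among components
          eps = rng.multivariate_normal(means[regime], covs[regime])
          y[t] = A @ y[t - 1] + eps

      # The implied shock distribution is fat-tailed relative to a single Gaussian
      print("excess kurtosis per series:", ((y - y.mean(0)) ** 4).mean(0) / y.var(0) ** 2 - 3)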
  4. By: Avagyan, Vahe; Nogales, Francisco J.; Alonso, Andrés M.
    Abstract: Accurate estimation of the precision matrix plays a crucial role in the current age of high-dimensional data explosion. One of the most prominent and commonly used techniques for this problem is l1 norm (Lasso) penalization of a given loss function, which guarantees sparsity of the precision matrix estimator for properly selected penalty parameters. However, the l1 norm penalization often fails to control the bias of the resulting estimator because of its overestimation behavior. In this paper, we introduce two adaptive extensions of the recently proposed l1 norm penalized D-trace loss minimization method. The proposed approaches aim to reduce the bias in the estimator. Extensive numerical results, using both simulated and real datasets, show the advantage of our proposed estimators.
    Keywords: High-dimensionality; Gene expression data; Gaussian Graphical Model; D-trace loss; Adaptive lasso
    Date: 2015–10–01
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:21775&r=ecm
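    Illustration: a rough proximal-gradient sketch (editor's illustration, not the authors' algorithm) of minimizing the D-trace loss of Zhang and Zou (2014) with an adaptive lasso penalty; the pilot estimator, step size and weight exponent below are assumptions made for the example.

      # D-trace loss: L(Theta) = 0.5 * tr(Theta S Theta) - tr(Theta), penalized by
      # lam * sum_{i != j} w_ij |Theta_ij| with adaptive weights w_ij = 1/|Theta0_ij|^gamma.
      import numpy as np

      def adaptive_dtrace(S, lam=0.1, gamma=1.0, n_iter=500, tol=1e-6):
          p = S.shape[0]
          theta0 = np.linalg.inv(S + 0.1 * np.eye(p))      # pilot (ridge-type) estimate
          W = 1.0 / (np.abs(theta0) ** gamma + 1e-8)       # adaptive weights
          np.fill_diagonal(W, 0.0)                         # leave the diagonal unpenalized
          eta = 1.0 / np.linalg.norm(S, 2)                 # step size at 1/Lipschitz constant
          Theta = theta0.copy()
          for _ in range(n_iter):
              grad = 0.5 * (S @ Theta + Theta @ S) - np.eye(p)
              Z = Theta - eta * grad
              Theta_new = np.sign(Z) * np.maximum(np.abs(Z) - eta * lam * W, 0.0)
              Theta_new = 0.5 * (Theta_new + Theta_new.T)  # keep the iterate symmetric
              if np.max(np.abs(Theta_new - Theta)) < tol:
                  return Theta_new
              Theta = Theta_new
          return Theta

      rng = np.random.default_rng(2)
      X = rng.multivariate_normal(np.zeros(3), [[1, .5, 0], [.5, 1, 0], [0, 0, 1]], size=2000)
      print(np.round(adaptive_dtrace(np.cov(X, rowvar=False), lam=0.05), 2))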
  5. By: Elena Di Bernardino (CEDRIC - Centre d'Etude et De Recherche en Informatique du Cnam - Conservatoire National des Arts et Métiers [CNAM]); Didier Rullière (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1)
    Abstract: This paper studies the impact of a class of transformations of copulas on their upper and lower multivariate tail dependence coefficients. In particular we focus on multivariate Archimedean copulas. In the first part of this paper, we calculate multivariate tail dependence coefficients when the generator of the considered copula exhibits some regular variation properties, and we investigate the behaviour of these coefficients in cases that are close to tail independence. This first part builds on previous work of Charpentier and Segers (2009) and extends some results of Juri and Wüthrich (2003) and De Luca and Rivieccio (2012). We also introduce a new Regular Index Function (RIF) exhibiting some interesting properties. In the second part of the paper we analyse the impact of a large class of transformations of dependence structures on the upper and lower multivariate tail dependence coefficients. These results are based on the transformations exploited by Di Bernardino and Rullière (2013). We extend some bivariate results of Durante et al. (2010) to a multivariate setting by calculating multivariate tail dependence coefficients for transformed copulas. We obtain new results under specific conditions involving regularly varying hazard rates of components of the transformation. In the third part, we show the utility of using transformed Archimedean copulas, as they make it possible to build Archimedean generators exhibiting any chosen pair of lower and upper tail dependence coefficients. The interest of such a study is also illustrated through applications in bivariate settings. Finally, we discuss possible applications to Markov chains with a specific dependence structure.
    Keywords: regular variation, tail dependence coefficients, Archimedean copulas, transformations of Archimedean copulas
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-00992707&r=ecm
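    Illustration: a small numerical check (editor's illustration, limited to the untransformed bivariate case) of a tail dependence coefficient for one standard Archimedean family: for a Clayton copula with parameter theta, the lower tail dependence coefficient is lambda_L = lim_{q->0} C(q,q)/q = 2^(-1/theta).

      def clayton_cdf(u, v, theta):
          # Clayton copula, Archimedean with generator psi(t) = (1 + t)^(-1/theta)
          return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

      theta = 2.0
      for q in (1e-2, 1e-4, 1e-6):
          print(f"q = {q:.0e}:  C(q,q)/q = {clayton_cdf(q, q, theta) / q:.6f}")
      print("closed form 2^(-1/theta) =", 2 ** (-1 / theta))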
  6. By: Lillo, Rosa E.; Galeano, Pedro; Sguera, Carlo
    Abstract: This paper proposes methods to detect outliers in functional datasets. We are interested in challenging scenarios where functional samples are contaminated by outliers that may be difficult to recognize. The task of identifying atypical curves is carried out using the recently proposed kernelized functional spatial depth (KFSD). KFSD is a local depth that can be used to order the curves of a sample from the most to the least central. Since outliers are usually among the least central curves, we introduce three new procedures that provide a threshold value for KFSD such that curves with depth values lower than the threshold are detected as outliers. The results of a simulation study show that our proposals generally outperform a battery of competitors. Finally, we consider a real application with environmental data consisting of levels of nitrogen oxides.
    Keywords: Smoothed resampling; Nitrogen oxides; Kernelized functional spatial depth; Functional outlier detection; Functional depths
    Date: 2014–06
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws141410&r=ecm
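    Illustration: a simplified sketch (editor's illustration) of a plain, non-kernelized functional spatial depth on discretized curves with a naive fixed-quantile threshold; the paper's KFSD is a kernelized, local refinement and chooses its thresholds by smoothed resampling, so the grid, contamination and 5% cut-off below are assumptions.

      import numpy as np

      def spatial_depth(curves):
          # curves: (n, m) array, each row a curve evaluated on a common grid
          n = curves.shape[0]
          depths = np.empty(n)
          for i in range(n):
              diffs = np.delete(curves, i, axis=0) - curves[i]        # x_j - x_i
              units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
              depths[i] = 1.0 - np.linalg.norm(units.mean(axis=0))    # 1 - ||average unit vector||
          return depths

      rng = np.random.default_rng(3)
      grid = np.linspace(0, 1, 50)
      curves = np.sin(2 * np.pi * grid) + 0.1 * rng.normal(size=(100, 50))
      curves[:5] += 1.0                                               # contaminate five curves

      d = spatial_depth(curves)
      flagged = np.where(d <= np.quantile(d, 0.05))[0]                # naive threshold
      print("flagged as outliers:", flagged)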
  7. By: Marcel Fafchamps; Julien Labonne
    Abstract: We discuss a method aimed at reducing the risk that spurious results are published. Researchers send their datasets to an independent third party who randomly generates training and testing samples. Researchers perform their analysis on the training sample and, once the paper is accepted for publication, the analysis is applied to the testing sample; it is those results that are published. Simulations indicate that, under empirically relevant settings, the proposed method significantly reduces type I error and delivers adequate power. The method, which can be combined with pre-analysis plans, reduces the risk that relevant hypotheses are left untested.
    JEL: C12 C18
    Date: 2016–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:21842&r=ecm
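    Illustration: a bare-bones sketch (editor's illustration) of the split-sample workflow: an independent third party holds the random seed and releases only the training rows; once the specification is fixed, it is estimated a single time on the held-out testing rows. The 50/50 split, the toy data and the OLS specification are assumptions for the example.

      import numpy as np

      rng = np.random.default_rng(12345)               # seed held by the independent third party

      def ols(y, X):
          X = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * resid.var(ddof=X.shape[1]))
          return beta, se

      n = 2000
      X = rng.normal(size=(n, 2))                      # treatment and covariate (toy data)
      y = 0.3 * X[:, 0] + rng.normal(size=n)

      idx = rng.permutation(n)
      train, test = idx[: n // 2], idx[n // 2:]        # third-party random split

      beta_tr, _ = ols(y[train], X[train])             # exploratory stage: specification search
      beta_te, se_te = ols(y[test], X[test])           # confirmatory stage: the published results
      print("training:", np.round(beta_tr, 3), " testing:", np.round(beta_te, 3),
            " testing s.e.:", np.round(se_te, 3))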
  8. By: Jesús Fernández-Villaverde; Juan F. Rubio Ramírez; Frank Schorfheide
    Abstract: This paper provides an overview of solution and estimation techniques for dynamic stochastic general equilibrium (DSGE) models. We cover the foundations of numerical approximation techniques as well as statistical inference and survey the latest developments in the field.
    JEL: C11 C13 C32 C52 C61 C63 E32 E52
    Date: 2016–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:21862&r=ecm
  9. By: Chen, J.; Kobayashi, M.; McAleer, M.J.
    Abstract: The paper considers whether financial returns have a common volatility process in the framework of the stochastic volatility models suggested by Harvey et al. (1994). We propose a stochastic volatility version of the ARCH test proposed by Engle and Susmel (1993), who investigated whether international equity markets have a common volatility process. The paper also tests the hypothesis of frictionless cross-market hedging, which implies perfectly correlated volatility changes, as suggested by Fleming et al. (1998). In deriving the Lagrange multiplier test statistic, the paper uses the technique of Chesher (1984) for differentiating an integral that contains a degenerate density function.
    Keywords: Volatility comovement, Cross-market hedging, Spillovers, Contagion
    JEL: C12 C58 G01 G11
    Date: 2016–02–29
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:79925&r=ecm
  10. By: Gerdie Everaert; Lorenzo Pozzi; Ruben Schoonackers
    Abstract: This paper investigates the degree of time variation in the excess sensitivity of aggregate consumption growth to anticipated aggregate disposable income growth using quarterly US data over the period 1953-2014. Our empirical framework allows for stickiness in aggregate consumption growth and takes into account measurement error and time aggregation. Our empirical specification is cast into a Bayesian state space model and estimated using Markov Chain Monte Carlo (MCMC) methods. We use a Bayesian model selection approach to deal with the non-regular test for the null hypothesis of no time variation in the excess sensitivity parameter. Anticipated disposable income growth is calculated by incorporating an instrumental variables estimation approach into our MCMC algorithm. Our results suggest that the excess sensitivity parameter in the US is stable at around 0.24 over the entire sample period.
    Keywords: Excess sensitivity, time-variation, consumption, income, MCMC, Bayesian model selection
    JEL: E21 C11 C22 C26
    Date: 2016–01
    URL: http://d.repec.org/n?u=RePEc:rug:rugwps:16/917&r=ecm
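    Illustration: a stripped-down, constant-parameter excess sensitivity regression (editor's sketch in the spirit of Campbell and Mankiw) estimated by two-stage least squares with lagged income growth as the instrument; it ignores the paper's time variation, stickiness, measurement error and Bayesian state space machinery, and all parameter values below are assumptions.

      # dc_t = mu + lambda * dy_t + e_t, where e_t is correlated with the income surprise,
      # so OLS is biased while 2SLS with dy_{t-1} as instrument recovers lambda.
      import numpy as np

      rng = np.random.default_rng(4)
      T, lam = 5000, 0.25

      v = rng.normal(size=T)                           # income growth innovations
      dy = np.zeros(T)
      for t in range(1, T):
          dy[t] = 0.3 * dy[t - 1] + v[t]               # partly predictable income growth
      dc = lam * dy + 0.5 * v + 0.5 * rng.normal(size=T)

      def fit(y, X):
          X = np.column_stack([np.ones(len(y)), X])
          return np.linalg.lstsq(X, y, rcond=None)[0]  # [intercept, slope]

      y, x, z = dc[2:], dy[2:], dy[1:-1]               # instrument: once-lagged income growth
      print("OLS  (biased by the surprise):", round(fit(y, x)[1], 3))
      fs = fit(x, z)                                   # first stage
      x_hat = fs[0] + fs[1] * z                        # anticipated income growth (fitted values)
      print("2SLS (excess sensitivity):    ", round(fit(y, x_hat)[1], 3))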
  11. By: Zwick, Thomas; Frosch, Katharina
    Abstract: Most previous results on the determinants of inventive performance are biased because inventive performance is measured with error. This measurement error causes attenuation bias: for example, the coefficients on age and education as drivers of patenting success are biased and their standard errors inflated when inventive performance is measured over short observation periods. The reason for measurement error in inventive performance is that patents are typically applied for in waves.
    Keywords: Measurement error, inventive performance, observation period
    JEL: C33 C52 O31
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:zbw:zewdip:16014&r=ecm
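    Illustration: the generic attenuation-bias mechanism (editor's textbook illustration with classical measurement error in a right-hand-side variable); the paper's setting, where patent-based performance measures over short observation windows are the mismeasured quantity, is more specific, so this only conveys the basic intuition.

      import numpy as np

      rng = np.random.default_rng(5)
      n, beta = 100_000, 1.0

      x_true = rng.normal(size=n)                      # accurately measured driver
      y = beta * x_true + rng.normal(size=n)

      for sigma_e in (0.0, 0.5, 1.0):                  # more noise ~ shorter observation window
          x_obs = x_true + sigma_e * rng.normal(size=n)
          X = np.column_stack([np.ones(n), x_obs])
          b = np.linalg.lstsq(X, y, rcond=None)[0][1]
          reliability = 1.0 / (1.0 + sigma_e ** 2)     # var(x) / (var(x) + var(noise))
          print(f"noise sd {sigma_e:.1f}: OLS slope {b:.3f}  (theory: {beta * reliability:.3f})")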

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.