nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒02‒04
seventeen papers chosen by
Sune Karlsson
Örebro universitet

  1. Simple methods for consistent estimation of dynamic panel data sample selection models By Majid M. Al-Sadoon; Sergi Jiménez-Martín; Jose M. Labeaga
  2. Estimation of a Multiplicative Correlation Structure in the Large Dimensional Case By Hafner, C.; Linton, O.; Tang, H.
  3. Bootstrap Procedures for Detecting Multiple Persistence Shifts in a Heteroskedastic Time Series By Mohitosh Kejriwal; Xuewen Yu
  4. Robust Measures of Microstructure Noise By Merrick Li, Z.; Linton, O.
  5. Evaluating time-varying treatment effects in latent Markov models: An application to the effect of remittances on poverty dynamics By Tullio, Federico; Bartolucci, Francesco
  6. Variational Bayesian inference in large Vector Autoregressions with hierarchical shrinkage By Deborah Gefang; Gary Koop; Aubrey Poon
  7. Nonparametric estimation of infinite order regression and its application to the risk-return tradeoff By Hong, S-Y.; Linton, O.
  8. Efficient Estimation of Nonparametric Regression in the Presence of Dynamic Heteroskedasticity By Linton, O.; Xiao, Z.
  9. Orthogonal Statistical Learning By Dylan J. Foster; Vasilis Syrgkanis
  10. The Lower Regression Function and Testing Expectation Dependence Dominance Hypotheses By Linton, O.; Whang, Y-J.; Yen, Y.
  11. Temporal Logistic Neural Bag-of-Features for Financial Time Series Forecasting Leveraging Limit Order Book Data By Nikolaos Passalis; Anastasios Tefas; Juho Kanniainen; Moncef Gabbouj; Alexandros Iosifidis
  12. Taming the Factor Zoo: A Test of New Factors By Guanhao Feng; Stefano Giglio; Dacheng Xiu
  13. Will the real eigensystem VAR please stand up? A univariate primer By Leo Krippner
  14. Average Gaps and Oaxaca–Blinder Decompositions: A Cautionary Tale about Regression Estimates of Racial Differences in Labor Market Outcomes By Sloczynski, Tymon
  15. Keynesian Models, Detrending, and the Method of Moments By MAO TAKONGMO, Charles Olivier
  16. Modeling temporal treatment effects with zero inflated semi-parametric regression models: the case of local development policies in France By Hervé Cardot; Antonio Musolesi
  17. A new causality test for the analysis of the export-growth nexus By Furuoka, Fumitaka

  1. By: Majid M. Al-Sadoon; Sergi Jiménez-Martín; Jose M. Labeaga
    Abstract: We analyse the properties of generalised method of moments-instrumental variables (GMM-IV) estimators of AR(1) dynamic panel data sample selection models. We show the consistency of the first-differenced GMM-IV estimator uncorrected for sample selection of Arellano and Bond (1991) (a property also shared by the Anderson and Hsiao, 1982, proposal). Alternatively, the system GMM-IV estimator (Arellano and Bover, 1995, and Blundell and Bond, 1998) shows a moderate bias. We perform a Monte Carlo study to evaluate the finite sample properties of the proposed estimators. Our results confirm the absence of bias of the Arellano and Bond estimator under a variety of circumstances, as well as the small bias of the system estimator, mostly due to the correlation between the individual heterogeneity components in both the outcome and selection equations. However, we must not discard the system estimator because, in small samples, its performance is similar to or even better than that of the Arellano-Bond estimator. These results hold in dynamic models with exogenous, predetermined or endogenous covariates. They are especially relevant for practitioners using unbalanced panels when either there is selection of unknown form or when selection is difficult to model.
    Keywords: Panel data, sample selection, dynamic model, generalized method of moments
    JEL: J52 C23 C24
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1631&r=all
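    Illustration: a minimal numpy sketch of the first-differenced Anderson-Hsiao IV idea the abstract builds on, regressing Delta y_it on Delta y_i,t-1 with y_i,t-2 as instrument. The simulated design is an assumption made for illustration, not the authors' code.

      import numpy as np

      rng = np.random.default_rng(0)
      N, T, rho = 500, 6, 0.5

      # AR(1) panel with individual effects: y_it = rho*y_i,t-1 + alpha_i + e_it
      alpha = rng.normal(size=N)
      y = np.zeros((N, T))
      y[:, 0] = alpha + rng.normal(size=N)
      for t in range(1, T):
          y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(size=N)

      # First-differencing removes alpha_i; instrument Delta y_{t-1} with y_{t-2},
      # which predates the differenced error Delta e_t.
      dy = np.diff(y, axis=1)          # columns are Delta y_1 ... Delta y_{T-1}
      dep = dy[:, 1:].ravel()          # Delta y_t,  t = 2..T-1
      lag = dy[:, :-1].ravel()         # Delta y_{t-1}
      z = y[:, :-2].ravel()            # instrument y_{t-2}

      rho_iv = (z @ dep) / (z @ lag)   # just-identified IV estimate
      print(f"true rho = {rho}, Anderson-Hsiao IV estimate = {rho_iv:.3f}")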
  2. By: Hafner, C.; Linton, O.; Tang, H.
    Abstract: We propose a Kronecker product model for correlation or covariance matrices in the large dimension case. The number of parameters of the model increases logarithmically with the dimension of the matrix. We propose a minimum distance (MD) estimator based on a log-linear property of the model, as well as a one-step estimator, which is a one-step approximation to the quasi-maximum likelihood estimator (QMLE). We establish the rate of convergence and a central limit theorem (CLT) for our estimators in the large dimensional case. A specification test and tools for Kronecker product model selection and inference are provided. In a Monte Carlo study where a Kronecker product model is correctly specified, our estimators exhibit superior performance. In an empirical application to portfolio choice for S&P 500 daily returns, we demonstrate that certain Kronecker product models are good approximations to the general covariance matrix.
    Keywords: Correlation matrix, Kronecker product, Matrix logarithm, Multiway array data, Portfolio choice, Sparsity
    JEL: C58 G11
    Date: 2018–09–28
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1878&r=all
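    Illustration: a small sketch of the log-linear property the minimum distance estimator exploits, log(A (x) B) = log(A) (x) I + I (x) log(B) for positive definite A and B, so the matrix logarithm of a Kronecker-structured matrix is linear in the logs of the factors. Dimensions and the random-correlation generator are illustrative choices.

      import numpy as np
      from scipy.linalg import logm

      rng = np.random.default_rng(1)

      def random_correlation(k):
          # Random positive-definite correlation matrix via a normalized Wishart draw
          X = rng.normal(size=(k + 2, k))
          S = X.T @ X
          d = np.sqrt(np.diag(S))
          return S / np.outer(d, d)

      A, B = random_correlation(2), random_correlation(3)
      C = np.kron(A, B)                   # 6x6 Kronecker-structured correlation matrix

      # Log-linearity: log(A (x) B) = log(A) (x) I + I (x) log(B)
      lhs = logm(C)
      rhs = np.kron(logm(A), np.eye(3)) + np.kron(np.eye(2), logm(B))
      print("max abs deviation:", np.abs(lhs - rhs).max())   # numerically ~0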
  3. By: Mohitosh Kejriwal; Xuewen Yu
    Abstract: This paper proposes new bootstrap procedures for detecting multiple persistence shifts in a time series driven by nonstationary volatility. The assumed volatility process can accommodate discrete breaks, smooth transition variation as well as trending volatility. We develop wild bootstrap sup-Wald tests of the null hypothesis that the process is either stationary [I(0)] or has a unit root [I(1)] throughout the sample. We also propose a sequential procedure to estimate the number of persistence breaks based on ordering the regime-specific bootstrap p-values. The asymptotic validity of the advocated procedures is established both under the null of stability and a variety of persistence change alternatives. Monte Carlo simulations support the use of a non-recursive scheme for generating the I(0) bootstrap samples and a partially recursive scheme for generating the I(1) bootstrap samples, especially when the data generating process contains an I(1) segment. A comparison with existing tests illustrates the finite sample improvements offered by our methods in terms of both size and power. An application to OECD inflation rates is included.
    Keywords: heteroskedasticity, multiple structural changes
    JEL: C22
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:pur:prukra:1308&r=all
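    Illustration: a sketch of the wild bootstrap step at the core of the proposed tests: residuals are multiplied by independent Rademacher draws, which preserves a nonstationary variance profile. The sup-Wald statistic and the recursive versus non-recursive sample generation schemes are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(2)

      def wild_bootstrap_samples(resid, n_boot=999):
          # Each residual is flipped in sign at random, so the (possibly trending
          # or breaking) volatility profile of the series is preserved.
          n = resid.shape[0]
          signs = rng.choice([-1.0, 1.0], size=(n_boot, n))
          return signs * resid   # each row is one bootstrap error series

      # Example: residuals whose volatility trends upward over the sample
      resid = rng.normal(size=300) * np.linspace(0.5, 2.0, 300)
      boot = wild_bootstrap_samples(resid)
      print(boot.shape)  # (999, 300); feed each row into the chosen test statistic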
  4. By: Merrick Li, Z.; Linton, O.
    Abstract: We introduce a new nonparametric method to measure microstructure noise, the deviation of the observed asset prices from the fundamental values caused by market imperfections. Using high-frequency data, we provide joint estimators of arbitrary finite moments of microstructure noise, which could be serially dependent and nonstationary. We characterize the limit distributions of the proposed estimators and construct robust confidence intervals under infill asymptotics. We further demonstrate a consistency property of our new estimators without any specification on the data frequencies. As an economic application, we propose two liquidity measures that gauge the instantaneous and average bid-ask spread with potentially autocorrelated order flows, and such measures can be interpreted as an intermediary’s inventory risks to meet liquidity demand. Statistical applications include several model-free tests for the intraday patterns and the zero autocorrelations hypotheses of microstructure noise.
    Date: 2019–01–13
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1908&r=all
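    Illustration: a sketch of a classic benchmark for the second moment of microstructure noise, sum of squared returns over 2n, which the paper generalizes to arbitrary moments of serially dependent and nonstationary noise. All simulation settings are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(3)

      # Observed log-price = efficient price (Brownian) + i.i.d. noise;
      # one trading day sampled every second.
      n, sigma, noise_sd = 23400, 0.2, 0.005
      dt = 1.0 / n
      efficient = np.cumsum(sigma * np.sqrt(dt) * rng.normal(size=n))
      observed = efficient + noise_sd * rng.normal(size=n)

      # When noise dominates at the highest frequency, E[noise^2] ~ sum(r^2)/(2n),
      # up to an upward bias of sigma^2*dt/2 that vanishes as sampling gets finer.
      r = np.diff(observed)
      noise_var_hat = (r ** 2).sum() / (2 * len(r))
      print(f"true noise variance {noise_sd**2:.2e}, estimate {noise_var_hat:.2e}")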
  5. By: Tullio, Federico; Bartolucci, Francesco
    Abstract: To assess the effectiveness of remittances on the poverty level of recipient households, we propose a causal inference approach that may be applied with longitudinal data and time-varying treatments. The method relies on the integration of a propensity-score-based technique, inverse propensity weighting, with a general Latent Markov (LM) framework. It is particularly useful when the interest is in an individual characteristic that is not directly observable and the analysis is focused on: (i) clustering individuals in a finite number of classes according to this latent characteristic and (ii) modelling its evolution across time depending on the received treatment. Parameter estimation is based on a two-step procedure in which individual weights are computed for each time period based on predetermined covariates and a weighted version of the standard LM model likelihood based on such weights is maximised by means of an expectation-maximisation algorithm. Finite-sample properties of the estimator are studied by simulation. The application is focused on the effect of remittances on the poverty status of Ugandan households, based on a longitudinal survey spanning the period 2009-2014 in which the response variables are indicators of deprivation.
    Keywords: Causal inference; Expectation-maximisation algorithm; Potential outcomes; Weighted Maximum Likelihood
    JEL: C33 I32
    Date: 2019–01–14
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:91459&r=all
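    Illustration: a sketch of the first step of the two-step procedure, computing per-period inverse propensity weights from predetermined covariates. The weighted latent Markov likelihood step is not shown, and the design below is hypothetical.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(4)

      n = 1000
      X = rng.normal(size=(n, 3))                    # predetermined covariates at time t
      p_true = 1 / (1 + np.exp(-(X @ np.array([0.5, -0.3, 0.2]))))
      D = rng.binomial(1, p_true)                    # treatment: remittance receipt

      # Fit the propensity score and form IPW weights, one per household-period
      ps = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]
      w = np.where(D == 1, 1 / ps, 1 / (1 - ps))
      print(w[:5].round(2))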
  6. By: Deborah Gefang; Gary Koop; Aubrey Poon
    Abstract: Many recent papers in macroeconomics have used large Vector Autoregressions (VARs) involving a hundred or more dependent variables. With so many parameters to estimate, Bayesian prior shrinkage is vital in achieving reasonable results. Computational concerns currently limit the range of priors used and render difficult the addition of empirically important features such as stochastic volatility to the large VAR. In this paper, we develop variational Bayes methods for large VARs which overcome the computational hurdle and allow for Bayesian inference in large VARs with a range of hierarchical shrinkage priors and with time-varying volatilities. We demonstrate the computational feasibility and good forecast performance of our methods in an empirical application involving a large quarterly US macroeconomic data set.
    Keywords: Variational inference, Vector Autoregression, Stochastic Volatility, Hierarchical Prior, Forecasting
    JEL: C11 C32 C53
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2019-08&r=all
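    Illustration: a mean-field variational Bayes (coordinate ascent) sketch for a single Gaussian regression equation, the building block that the paper scales up to large VARs with hierarchical shrinkage and time-varying volatility. Priors and hyperparameters are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(5)

      # Model: y = X b + e, e ~ N(0, 1/tau); priors b ~ N(0, I/prior_prec),
      # tau ~ Gamma(a0, b0); factorized posterior q(b) q(tau).
      n, k = 200, 5
      X = rng.normal(size=(n, k))
      b_true = np.array([1.0, -0.5, 0.0, 0.0, 2.0])
      y = X @ b_true + rng.normal(size=n)

      a0, b0, prior_prec = 2.0, 2.0, 1.0
      XtX, Xty = X.T @ X, X.T @ y
      E_tau = 1.0
      for _ in range(50):                       # CAVI iterations
          S = np.linalg.inv(prior_prec * np.eye(k) + E_tau * XtX)   # q(b) covariance
          m = E_tau * S @ Xty                                       # q(b) mean
          a = a0 + n / 2
          b = b0 + 0.5 * (np.sum((y - X @ m) ** 2) + np.trace(XtX @ S))
          E_tau = a / b                                             # q(tau) mean
      print("VB posterior mean:", m.round(2))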
  7. By: Hong, S-Y.; Linton, O.
    Abstract: This paper studies nonparametric estimation of the infinite order regression [see code in paper] with stationary and weakly dependent data. We propose a Nadaraya-Watson type estimator that operates with an infinite number of conditioning variables. The established theories are applied to examine the intertemporal risk-return relation for the aggregate stock market, and some new empirical evidence is reported. With a bandwidth sequence that shrinks the effects of long lags, the influence of all conditioning information is modelled in a natural and flexible way, and the issues of omitted information bias and specification error are effectively handled. Asymptotic properties of the estimator are shown under a wide range of static and dynamic regression frameworks, thereby allowing various kinds of conditioning variables to be used. We establish pointwise/uniform consistency and CLTs. It is shown that the convergence rates are at best logarithmic, and depend on the smoothness of the regression, the distribution of the marginal regressors and their dependence structure in a non-trivial way via the Lambert W function. The empirical studies on S&P 500 daily data from 1950 to 2016 using our estimator report an overall positive risk-return relation. We also find evidence of strong time variation and counter-cyclical behaviour in risk aversion. These conclusions are attributable to the inclusion of otherwise neglected information in our method.
    JEL: C10 C58 G10
    Date: 2018–06–05
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1877&r=all
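    Illustration: a stylized Nadaraya-Watson estimator with a product kernel over many lags and bandwidths that grow with the lag, so that the influence of distant lags is shrunk, in the spirit of (but much simpler than) the paper's estimator. All settings are illustrative.

      import numpy as np

      rng = np.random.default_rng(6)

      def nw_many_lags(y, x_lags, x0, h0=0.5, decay=1.5):
          # Product Gaussian kernel; bandwidth h_j = h0 * decay**j grows with the
          # lag j, damping the contribution of long-lagged conditioning variables.
          h = h0 * decay ** np.arange(x_lags.shape[1])
          u = (x_lags - x0) / h
          w = np.exp(-0.5 * (u ** 2).sum(axis=1))
          return (w @ y) / w.sum()

      # AR(1) series; predict y_t from its own d most recent lags
      n, d = 500, 10
      z = np.zeros(n)
      for t in range(1, n):
          z[t] = 0.6 * z[t - 1] + rng.normal()
      X = np.column_stack([z[d - j - 1:n - j - 1] for j in range(d)])  # column j = z_{t-j-1}
      y = z[d:]
      print(nw_many_lags(y, X, X[-1]))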
  8. By: Linton, O.; Xiao, Z.
    Abstract: We study the efficient estimation of nonparametric regression in the presence of heteroskedasticity. We focus our analysis on local polynomial estimation of nonparametric regressions with conditional heteroskedasticity in a time series setting. We introduce a weighted local polynomial regression smoother that takes account of the dynamic heteroskedasticity. We show that, although traditionally it is advised that one should not weight for heteroskedasticity in nonparametric regressions, in many popular nonparametric regression models our method has lower asymptotic variance than the usual unweighted procedures. We conduct a Monte Carlo investigation that confirms the efficiency gain over conventional nonparametric regression estimators in finite samples.
    Keywords: Efficiency; Heteroskedasticity; Local Polynomial Estimation; Nonparametric Regression.
    JEL: C13 C14
    Date: 2019–01–15
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1907&r=all
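    Illustration: a sketch of a local linear smoother with an extra inverse-variance weight, the kind of heteroskedasticity weighting the paper studies. The true conditional variance is used here for clarity; in practice it would be estimated (e.g. from squared residuals).

      import numpy as np

      rng = np.random.default_rng(7)

      def local_linear(x, y, x0, h, var_weights=None):
          # Kernel weights, optionally multiplied by 1/sigma^2(x_i)
          k = np.exp(-0.5 * ((x - x0) / h) ** 2)
          w = k if var_weights is None else k * var_weights
          X = np.column_stack([np.ones_like(x), x - x0])
          WX = X * w[:, None]
          beta = np.linalg.solve(X.T @ WX, WX.T @ y)
          return beta[0]                          # intercept = fitted m(x0)

      # Heteroskedastic design: noise variance rises with |x|
      x = rng.uniform(-2, 2, 800)
      sigma2 = 0.1 + x ** 2
      y = np.sin(x) + rng.normal(size=800) * np.sqrt(sigma2)

      m_unw = local_linear(x, y, 1.0, h=0.3)
      m_w = local_linear(x, y, 1.0, h=0.3, var_weights=1 / sigma2)
      print(f"unweighted {m_unw:.3f}, weighted {m_w:.3f}, truth {np.sin(1.0):.3f}")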
  9. By: Dylan J. Foster; Vasilis Syrgkanis
    Abstract: We provide excess risk guarantees for statistical learning in the presence of an unknown nuisance component. We analyze a two-stage sample splitting meta-algorithm that takes as input two arbitrary estimation algorithms: one for the target model and one for the nuisance model. We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the first stage error on the excess risk bound achieved by the meta-algorithm is of second order. Our general theorem is agnostic to the particular algorithms used for the target and nuisance and only makes an assumption on their individual performance. This enables the use of a plethora of existing results from statistical learning and machine learning literature to give new guarantees for learning with a nuisance component. Moreover, by focusing on excess risk rather than parameter estimation, we can give guarantees under weaker assumptions than in previous works and accommodate the case where the target parameter belongs to a complex nonparametric class. When the nuisance and target parameters belong to arbitrary classes, we characterize conditions on the metric entropy such that oracle rates (rates of the same order as if we knew the nuisance model) are achieved. We also analyze the rates achieved by specific estimation algorithms such as variance-penalized empirical risk minimization, neural network estimation and sparse high-dimensional linear model estimation. We highlight the applicability of our results via four applications of primary importance: 1) heterogeneous treatment effect estimation, 2) offline policy optimization, 3) domain adaptation, and 4) learning with missing data.
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1901.09036&r=all
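    Illustration: a sketch of the two-stage sample-splitting meta-algorithm specialized to a partially linear model, where the residual-on-residual moment is Neyman orthogonal, so first-stage nuisance errors enter the target estimate only at second order. The learners and the simulated design are illustrative choices, not the paper's general setting.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(8)

      # Partially linear model: Y = theta*D + g(X) + e, with true theta = 1.5
      n = 2000
      X = rng.normal(size=(n, 5))
      D = np.sin(X[:, 0]) + rng.normal(size=n)          # treatment depends on X
      Y = 1.5 * D + X[:, 1] ** 2 + rng.normal(size=n)

      # Stage 1: fit nuisances E[Y|X] and E[D|X] on one half of the sample
      half = n // 2
      mY = RandomForestRegressor(n_estimators=200).fit(X[:half], Y[:half])
      mD = RandomForestRegressor(n_estimators=200).fit(X[:half], D[:half])

      # Stage 2: orthogonal residual-on-residual moment on the held-out half
      rY = Y[half:] - mY.predict(X[half:])
      rD = D[half:] - mD.predict(X[half:])
      theta_hat = (rD @ rY) / (rD @ rD)
      print(f"theta_hat = {theta_hat:.3f} (true 1.5)")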
  10. By: Linton, O.; Whang, Y-J.; Yen, Y.
    Abstract: We provide an estimator of the lower regression function and provide large sample properties for inference. We also propose a test of the hypothesis of positive expectation dependence and derive its limiting distribution under the null hypothesis and provide consistent critical values. We apply our methodology to several empirical questions.
    Date: 2018–10–23
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1880&r=all
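    Illustration: a sketch assuming, as in the expectation dependence literature, that the lower regression function is m(x) = E[Y | X <= x]; positive expectation dependence then requires E[Y] - m(x) >= 0 for all x. The definition and design are assumptions here, since the abstract does not spell them out.

      import numpy as np

      rng = np.random.default_rng(14)

      x_grid = np.linspace(-2, 2, 9)
      X = rng.normal(size=5000)
      Y = 0.6 * X + rng.normal(size=5000)      # Y positively dependent on X

      # Plug-in estimator: sample mean of Y over the subsample with X <= x
      m_lower = np.array([Y[X <= x].mean() for x in x_grid])
      print(np.round(Y.mean() - m_lower, 3))   # nonnegative across the grid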
  11. By: Nikolaos Passalis; Anastasios Tefas; Juho Kanniainen; Moncef Gabbouj; Alexandros Iosifidis
    Abstract: Time series forecasting is a crucial component of many important applications, ranging from forecasting the stock markets to energy load prediction. The high-dimensionality, velocity and variety of the data collected in these applications pose significant and unique challenges that must be carefully addressed for each of them. In this work, a novel Temporal Logistic Neural Bag-of-Features approach that can be used to tackle these challenges is proposed. The proposed method can be effectively combined with deep neural networks, leading to powerful deep learning models for time series analysis. However, combining existing BoF formulations with deep feature extractors poses significant challenges: the distribution of the input features is not stationary, tuning the hyper-parameters of the model can be especially difficult and the normalizations involved in the BoF model can cause significant instabilities during the training process. The proposed method is capable of overcoming these limitations by employing a novel adaptive scaling mechanism and replacing the classical Gaussian-based density estimation involved in the regular BoF model with a logistic kernel. The effectiveness of the proposed approach is demonstrated using extensive experiments on a large-scale financial time series dataset that consists of more than 4 million limit orders.
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1901.08280&r=all
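    Illustration: a sketch of bag-of-features pooling with a logistic (sigmoid) kernel in place of the usual Gaussian one. The exact kernel, the adaptive scaling mechanism, and the end-to-end training of the paper are not reproduced; the functional form below is an assumption.

      import numpy as np

      rng = np.random.default_rng(9)

      def logistic_bof(features, codebook, scale):
          # features: (T, d) time steps; codebook: (K, d) codewords.
          # Logistic kernel in the feature-codeword distance; 'scale' stands in
          # for the adaptive scaling factor of the paper.
          dist = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
          member = 1.0 / (1.0 + np.exp((dist - 1.0) / scale))
          member /= member.sum(axis=1, keepdims=True)   # per-step soft assignment
          return member.mean(axis=0)                    # (K,) histogram over time

      T, d, K = 50, 4, 8
      hist = logistic_bof(rng.normal(size=(T, d)), rng.normal(size=(K, d)), scale=0.5)
      print(hist.round(3), hist.sum())                  # histogram sums to 1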
  12. By: Guanhao Feng; Stefano Giglio; Dacheng Xiu
    Abstract: We propose a model-selection method to systematically evaluate the contribution to asset pricing of any new factor, above and beyond what a high-dimensional set of existing factors explains. Our methodology explicitly accounts for potential model-selection mistakes, unlike the standard approaches that assume perfect variable selection, which rarely occurs in practice and produces a bias due to the omitted variables. We apply our procedure to a set of factors recently discovered in the literature. While most of these new factors are found to be redundant relative to the existing factors, a few — such as profitability — have statistically significant explanatory power beyond the hundreds of factors proposed in the past. In addition, we show that our estimates and their significance are stable, whereas the model selected by simple LASSO is not.
    JEL: C01 C12 C23 C52 C58 G00 G1 G10 G12
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:25481&r=all
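    Illustration: a double-selection style sketch in the spirit of the paper's test: the lasso selects controls from a large factor zoo in two auxiliary regressions, and the new factor's contribution is then judged by least squares on the union of selected controls. This is a stylized stand-in, not the authors' estimator.

      import numpy as np
      from sklearn.linear_model import LassoCV, LinearRegression

      rng = np.random.default_rng(10)

      n, p = 600, 100
      Zoo = rng.normal(size=(n, p))                     # existing factors
      g = Zoo[:, 0] * 0.8 + rng.normal(size=n)          # candidate new factor
      y = 1.0 * Zoo[:, 0] + 0.5 * Zoo[:, 1] + 0.3 * g + rng.normal(size=n)

      # Lasso selection in both auxiliary regressions guards against
      # model-selection mistakes in either one alone
      sel_y = np.flatnonzero(LassoCV(cv=5).fit(Zoo, y).coef_)
      sel_g = np.flatnonzero(LassoCV(cv=5).fit(Zoo, g).coef_)
      keep = np.union1d(sel_y, sel_g)

      W = np.column_stack([g, Zoo[:, keep]])
      coef = LinearRegression().fit(W, y).coef_
      print(f"estimated loading on new factor: {coef[0]:.3f} (true 0.3)")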
  13. By: Leo Krippner
    Abstract: I introduce the essential aspects of the eigensystem vector autoregression (EVAR), which allows VARs to be specified and estimated directly in terms of their eigensystem, using univariate examples for clarity. The EVAR guarantees non-explosive dynamics and, if included, non-redundant moving-average components. In the empirical application, constraining the EVAR eigenvalues to be real and positive leads to “desirable” impulse response functions and improved out-of-sample forecasts.
    Keywords: vector autoregression, moving average, lag polynomial
    JEL: C22 C32 C53
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2019-01&r=all
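    Illustration: the univariate logic of the EVAR, parameterizing an AR(2) by the eigenvalues of its companion matrix; keeping the eigenvalues inside the unit circle (and real and positive, as in the empirical application) delivers non-explosive, monotone impulse responses by construction. A minimal sketch, not the paper's estimation code.

      import numpy as np

      def ar2_from_eigenvalues(lam1, lam2):
          # (1 - lam1*L)(1 - lam2*L) y_t = e_t  implies
          # phi1 = lam1 + lam2 and phi2 = -lam1*lam2
          return lam1 + lam2, -lam1 * lam2

      phi1, phi2 = ar2_from_eigenvalues(0.9, 0.5)
      print(phi1, phi2)                      # 1.4, -0.45

      # Impulse response from powers of the companion matrix
      C = np.array([[phi1, phi2], [1.0, 0.0]])
      irf = [np.linalg.matrix_power(C, h)[0, 0] for h in range(10)]
      print(np.round(irf, 3))                # monotone decay, non-explosive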
  14. By: Sloczynski, Tymon (Brandeis University)
    Abstract: In this paper I demonstrate, both theoretically and empirically, that the interpretation of regression estimates of between-group differences in economic outcomes depends on the relative sizes of subpopulations under study. When the disadvantaged group is small, regression estimates are similar to its average loss. When this group is instead a numerical majority, regression estimates are similar to the average gain for advantaged individuals. I analyze black–white test score gaps using ECLS-K data and black–white wage gaps using CPS, NLSY79, and NSW data, documenting that the interpretation of regression estimates varies dramatically across applications. Methodologically, I also develop a new version of the Oaxaca–Blinder decomposition whose unexplained component recovers a parameter referred to as the average outcome gap. Under a particular conditional independence assumption, this estimand is equivalent to the average treatment effect (ATE). Finally, I provide treatment-effects reinterpretations of the Reimers, Cotton, and Fortin decompositions.
    Keywords: black-white gaps, decomposition methods, test scores, treatment effects, wages
    JEL: C21 I24 J15 J31 J71
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp12041&r=all
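    Illustration: a minimal two-fold Oaxaca-Blinder decomposition with group-a coefficients as the reference. The deliberately unequal group sizes mimic the situation the paper warns about, where the relative sizes of the subpopulations drive the interpretation of regression gap estimates; the design is hypothetical.

      import numpy as np

      rng = np.random.default_rng(11)

      def oaxaca_blinder(y_a, X_a, y_b, X_b):
          # gap = explained (covariates, at group-a coefficients)
          #     + unexplained (coefficients, at group-b covariate means)
          beta_a = np.linalg.lstsq(X_a, y_a, rcond=None)[0]
          beta_b = np.linalg.lstsq(X_b, y_b, rcond=None)[0]
          gap = y_a.mean() - y_b.mean()
          explained = (X_a.mean(0) - X_b.mean(0)) @ beta_a
          unexplained = X_b.mean(0) @ (beta_a - beta_b)
          return gap, explained, unexplained

      n_a, n_b = 4000, 400                       # advantaged group is the large majority
      X_a = np.column_stack([np.ones(n_a), rng.normal(0.5, 1, n_a)])
      X_b = np.column_stack([np.ones(n_b), rng.normal(0.0, 1, n_b)])
      y_a = X_a @ np.array([1.0, 0.8]) + rng.normal(size=n_a)
      y_b = X_b @ np.array([0.6, 0.8]) + rng.normal(size=n_b)
      print(oaxaca_blinder(y_a, X_a, y_b, X_b))  # gap = explained + unexplained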
  15. By: MAO TAKONGMO, Charles Olivier
    Abstract: One important question in the Keynesian literature is whether we should detrend data when estimating the parameters of a Keynesian model using the moment method. It has been common in the literature to detrend data in the same way the model is detrended. Doing so works relatively well with linear models, in part because in such a case the information that disappears from the data after the detrending process is usually related to the parameters that also disappear from the detrended model. Unfortunately, in heavily non-linear Keynesian models, parameters rarely disappear from detrended models, but information does disappear from the detrended data. Using a simple real business cycle model, we show that both the moment method estimators of parameters and the estimated responses of endogenous variables to a technological shock can be seriously inaccurate when the data used in the estimation process are detrended. Using a dynamic stochastic general equilibrium model and U.S. data, we show that detrending the data before estimating the parameters may result in a seriously misleading response of endogenous variables to monetary shocks. We suggest building the moment conditions using raw data, irrespective of the trend observed in the data.
    Keywords: RBC models, DSGE models, Trend.
    JEL: C12 C13 C15 E17 E51
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:91709&r=all
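    Illustration: a stylized version of the paper's caution, comparing a simple lag-1 moment estimator of AR(1) persistence on raw versus HP-detrended data; the HP cycle typically understates the persistence that the raw moment recovers. The design and filter settings are illustrative assumptions.

      import numpy as np
      from statsmodels.tsa.filters.hp_filter import hpfilter

      rng = np.random.default_rng(15)

      n, rho = 400, 0.9
      y = np.zeros(n)
      for t in range(1, n):
          y[t] = rho * y[t - 1] + rng.normal()

      def rho_moment(z):
          # Moment condition: rho = E[y_t y_{t-1}] / E[y_t^2]
          z = z - z.mean()
          return (z[1:] @ z[:-1]) / (z @ z)

      cycle, _ = hpfilter(y, lamb=1600)   # detrended (cyclical) component
      print(f"raw-data estimate {rho_moment(y):.2f}, "
            f"HP-cycle estimate {rho_moment(cycle):.2f}")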
  16. By: Hervé Cardot (Université de Bourgogne Franche-Comté); Antonio Musolesi (Università degli Studi di Ferrara)
    Abstract: A semi-parametric approach is proposed to estimate the variation along time of the effects of two distinct public policies that were devoted to boost rural development in France over a similar period of time. At a micro data level, it is often observed that the dependent variable, such as local employment, does not vary along time, so that we face a kind of zero-inflated phenomenon that cannot be dealt with by a continuous response model. We introduce a conditional mixture model which combines a mass at zero and a continuous response. The suggested zero-inflated semi-parametric statistical approach combines the flexibility and modularity of additive models with the ability of panel data to deal with selection bias and to allow for the estimation of dynamic treatment effects. In this multiple treatment analysis, we find evidence of interesting patterns of temporal treatment effects with relevant nonlinear policy effects. The adopted semi-parametric modeling also offers the possibility of making a counterfactual analysis at an individual level. The methodology is illustrated and compared with parametric linear approaches on a few municipalities for which the mean evolution of the potential outcomes is estimated under the different possible treatments.
    Keywords: Additive Models; Semi-parametric Regression; Mixture of Distributions; Panel Data; Policy Evaluation; Temporal Effects; Multiple Treatments; Local Development
    JEL: C14 C23 C54 O18
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:srt:wpaper:0219&r=all
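    Illustration: a parametric two-part stand-in for the zero-inflated model, one component for the mass at zero and one for the continuous response, each depending on treatment. The paper's additive semi-parametric specification and panel structure are not reproduced; the design is hypothetical.

      import numpy as np
      from sklearn.linear_model import LogisticRegression, LinearRegression

      rng = np.random.default_rng(12)

      n = 3000
      D = rng.binomial(1, 0.5, n)                         # treated municipality
      X = rng.normal(size=(n, 2))
      p_move = 1 / (1 + np.exp(-(-0.5 + 0.8 * D + X[:, 0])))   # P(outcome changes)
      moved = rng.binomial(1, p_move).astype(bool)
      y = np.zeros(n)                                     # mass at zero otherwise
      y[moved] = 0.4 * D[moved] + X[moved, 1] + rng.normal(size=moved.sum())

      W = np.column_stack([D, X])
      part1 = LogisticRegression().fit(W, moved)          # mass-at-zero component
      part2 = LinearRegression().fit(W[moved], y[moved])  # continuous component
      print("P(nonzero) coef on D:", part1.coef_[0][0].round(2),
            "| continuous-part coef on D:", part2.coef_[0].round(2))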
  17. By: Furuoka, Fumitaka
    Abstract: This paper proposes a new causality test, a Fisher-type causality test, to examine the export-growth nexus empirically. To demonstrate the new test procedure, the Fisher causality test is applied to the export-growth nexus in four Asian economies, namely Indonesia, the Philippines, Hong Kong and Japan. The new causality test detects a complex situation in the export-growth nexus in Asia: there is unidirectional causality from economic growth to exports in Indonesia, bidirectional causality between exports and economic growth in the Philippines, and no causal relationship between exports and economic growth in Hong Kong or Japan.
    Keywords: Exports, economic growth, causality test, Asia
    JEL: C22 F43
    Date: 2018–12–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:91467&r=all
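    Illustration: the Fisher p-value combination principle that a Fisher-type test builds on, -2 * sum(log p_i) ~ chi-squared with 2k degrees of freedom under the joint null of k independent tests. The exact construction of the paper's causality test may differ, and the p-values below are made up for illustration.

      import numpy as np
      from scipy import stats

      def fisher_combine(pvals):
          # Fisher's method; assumes the combined tests are independent
          stat = -2.0 * np.sum(np.log(pvals))
          return stat, stats.chi2.sf(stat, df=2 * len(pvals))

      # Example: combine p-values from causality tests run at several lag orders
      pvals = np.array([0.04, 0.12, 0.03, 0.20])
      stat, p = fisher_combine(pvals)
      print(f"Fisher statistic {stat:.2f}, combined p-value {p:.4f}")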

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.