nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒03‒18
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Kernel Based Estimation of Spectral Risk Measures By Suparna Biswas; Rituparna Sen
  2. Average Derivative Estimation Under Measurement Error By Hao Dong; Taisuke Otsu; Luke Taylor
  3. Forecasting the Realized Variance in the Presence of Intraday Periodicity By Dumitru, Ana-Maria; Hizmeri, Rodrigo; Izzeldin, Marwan
  4. Estimation and inference for spatial models with heterogeneous coefficients: an application to U.S. house prices By Michele Aquaro; Natalia Bailey; M. Hashem Pesaran
  5. Shapley regressions: a framework for statistical inference on machine learning models By Joseph, Andreas
  6. Forecasting Volatility in Cryptocurrency Markets By Mawuli Segnon; Stelios Bekiros
  7. Nonparametric Homogeneity Pursuit in Functional-Coefficient Models By Jia Chen; Degui Li; Lingling Wei; Wenyang Zhang
  8. Nonparametric estimation and bootstrap inference on trends in atmospheric time series: an application to ethane By Marina Friedrich; Eric Beutner; Hanno Reuvers; Stephan Smeekes; Jean-Pierre Urbain; Whitney Bader; Bruno Franco; Bernard Lejeune; Emmanuel Mahieu
  9. Forecasting bubbles with mixed causal-noncausal autoregressive models By Voisin, Elisa; Hecq, Alain
  10. Financial Applications of Gaussian Processes and Bayesian Optimization By Joan Gonzalvez; Edmond Lezmi; Thierry Roncalli; Jiali Xu
  11. Breaking Ties: Regression Discontinuity Design Meets Market Design By Atila Abdulkadiroglu; Joshua D. Angrist; Yusuke Narita; Parag A. Pathak
  12. Some Dynamic and Steady-State Properties of Threshold Autoregressions with Applications to Stationarity and Local Explosivity By Ahmed, M. F.; Satchell, S.
  13. High-dimensional sparse financial networks through a regularised regression model By Bernardi, Mauro; Costola, Michele
  14. Asymptotic F Tests under Possibly Weak Identification By Julian Martinez-Iriarte; Yixiao Sun; Xuexin Wang
  15. Reconsideration of a simple approach to quantile regression for panel data: a comment on the Canay (2011) fixed effects estimator By Galina Besstremyannaya; Sergei Golovan

  1. By: Suparna Biswas; Rituparna Sen
    Abstract: Spectral risk measures (SRMs) belong to the family of coherent risk measures. A natural estimator for the class of SRMs has the form of an $L$-statistic. Various authors have studied and derived the asymptotic properties of this estimator based on the empirical distribution function, but no estimator of SRMs built on a distribution function estimator other than the empirical cdf has been studied. We propose a kernel-based estimator of SRMs. We investigate the large sample properties of general $L$-statistics for the i.i.d. case and apply them to our kernel-based estimator, proving that it is strongly consistent and asymptotically normal. We compare the finite sample performance of the kernel-based estimator with that of the empirical estimator using Monte Carlo simulation, where the appropriate choice of smoothing parameter and the user's coefficient of risk aversion play an important role. Based on our simulation study, we estimate the exponential SRM of four futures indices (Nikkei 225, DAX, FTSE 100 and Hang Seng) using our proposed kernel-based estimator.
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1903.03304&r=all
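    A rough illustration of the two estimators being compared, in Python (a sketch only: the smoothing scheme, the Gaussian kernel, and the risk-aversion coefficient k=20 are assumptions, not necessarily the paper's construction). The SRM is the $L$-functional $\int_0^1 \phi(p)q(p)dp$ with exponential risk spectrum $\phi$, computed once with empirical-cdf weights and once from a kernel-smoothed quantile function.
      import numpy as np
      from scipy.stats import norm
      from scipy.integrate import trapezoid

      def spectrum_exp(p, k=20.0):
          # Exponential risk spectrum with risk-aversion coefficient k.
          return k * np.exp(-k * (1.0 - p)) / (1.0 - np.exp(-k))

      def srm_empirical(losses, k=20.0):
          # L-statistic with empirical-cdf weights: the i-th order statistic
          # gets the integral of the spectrum over ((i-1)/n, i/n].
          x = np.sort(losses)
          grid = np.arange(len(x) + 1) / len(x)
          w = np.diff(np.exp(-k * (1.0 - grid))) / (1.0 - np.exp(-k))
          return np.sum(w * x)

      def srm_kernel(losses, k=20.0, h=0.05):
          # Kernel-smoothed variant: Gaussian-kernel quantile estimator,
          # then the spectral integral evaluated numerically.
          x = np.sort(losses)
          n = len(x)
          p = np.linspace(0.001, 0.999, 999)
          w = (norm.cdf((np.arange(1, n + 1) / n - p[:, None]) / h)
               - norm.cdf((np.arange(n) / n - p[:, None]) / h))
          return trapezoid(spectrum_exp(p, k) * (w @ x), p)

      rng = np.random.default_rng(0)
      losses = rng.standard_t(df=5, size=1000)   # heavy-tailed loss sample
      print(srm_empirical(losses), srm_kernel(losses))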
  2. By: Hao Dong (Southern Methodist University); Taisuke Otsu (London School of Economics and Political Science); Luke Taylor (Aarhus University)
    Abstract: In this paper, we derive the asymptotic properties of average derivative estimators when the regressors are contaminated with classical measurement error and the density of this error is unknown. Average derivatives of conditional mean functions are used extensively in economics and statistics, most notably in semiparametric index models. As well as ordinary smooth measurement error, we provide results for supersmooth error distributions. This is a particularly important class of error distribution as it includes the popular Gaussian density. We show that under this ill-posed inverse problem, despite using nonparametric deconvolution techniques and an estimated error characteristic function, we are able to achieve a $\sqrt{n}$ rate of convergence for the average derivative estimator. Interestingly, if the measurement error density is symmetric, the asymptotic variance of the average derivative estimator is the same irrespective of whether the error density is estimated or not.
    Keywords: Average derivative estimator, deconvolution, unknown error distribution, supersmooth error.
    JEL: C14
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:smu:ecowpa:1901&r=all
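    The deconvolution ingredient can be sketched as follows (illustration only: it assumes a known Laplace, i.e. ordinary smooth, error law and a particular damping kernel, whereas the paper estimates an unknown and possibly supersmooth error characteristic function):
      import numpy as np
      from scipy.integrate import trapezoid

      def deconvolution_density(w, x_grid, h, scale):
          # Deconvolution kernel density estimate of X from W = X + eps:
          # invert phi_W(t) / phi_eps(t), damped by the Fourier transform
          # of a kernel supported on [-1, 1].
          t = np.linspace(-1.0 / h, 1.0 / h, 1001)
          phi_w = np.mean(np.exp(1j * t[:, None] * w[None, :]), axis=1)
          phi_eps = 1.0 / (1.0 + (scale * t) ** 2)   # Laplace error, known scale
          ratio = phi_w * (1.0 - (t * h) ** 2) ** 3 / phi_eps
          dens = [trapezoid(np.exp(-1j * t * x) * ratio, t).real / (2 * np.pi)
                  for x in x_grid]
          return np.maximum(dens, 0.0)

      rng = np.random.default_rng(1)
      x_true = rng.normal(size=1000)
      w_obs = x_true + rng.laplace(scale=0.3, size=1000)   # contaminated regressor
      f_hat = deconvolution_density(w_obs, np.linspace(-3, 3, 61), h=0.2, scale=0.3)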
  3. By: Dumitru, Ana-Maria; Hizmeri, Rodrigo; Izzeldin, Marwan
    Abstract: This paper examines the impact of intraday periodicity on forecasting realized volatility using a heterogeneous autoregressive model (HAR) framework. We show that periodicity inflates the variance of the realized volatility and biases jump estimators. This combined effect adversely affects forecasting. To account for this, we propose a periodicity-adjusted model, HARP, where predictors are built from the periodicity-filtered data. We demonstrate empirically (using 30 stocks from various business sectors and the SPY for the period 2000–2016) and via Monte Carlo simulations that the HARP models produce significantly better forecasts, especially at the 1-day and 5-day-ahead horizons.
    Keywords: realized volatility, forecast, intraday periodicity, heterogeneous autoregressive models
    JEL: C14 C22 C58 G17
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:esprep:193631&r=all
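    The HAR benchmark is simple to state in code; a minimal sketch (the HARP variant would build the same three predictors from periodicity-filtered intraday data, which is not reproduced here):
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      def har_forecast(rv, horizon=1):
          # rv: pandas Series of daily realized variance.
          df = pd.DataFrame({"d": rv,                      # daily component
                             "w": rv.rolling(5).mean(),    # weekly component
                             "m": rv.rolling(22).mean()})  # monthly component
          data = pd.concat([rv.shift(-horizon).rename("y"), df], axis=1).dropna()
          fit = sm.OLS(data["y"], sm.add_constant(data[["d", "w", "m"]])).fit()
          last = sm.add_constant(df.dropna().iloc[[-1]], has_constant="add")
          return fit, float(np.asarray(fit.predict(last))[0])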
  4. By: Michele Aquaro; Natalia Bailey; M. Hashem Pesaran
    Abstract: This paper considers the problem of identification, estimation and inference in the case of spatial panel data models with heterogeneous spatial lag coefficients, with and without (weakly) exogenous regressors, and subject to heteroskedastic errors. A quasi maximum likelihood (QML) estimation procedure is developed and the conditions for identification of spatial coefficients are derived. Regularity conditions are established for the QML estimators of individual spatial coefficients, as well as their means (the mean group estimators), to be consistent and asymptotically normal. Small sample properties of the proposed estimators are investigated by Monte Carlo simulations for Gaussian and non-Gaussian errors, and with spatial weight matrices of differing degrees of sparsity. The simulation results are in line with the paper's key theoretical findings even for panels with moderate time dimensions, irrespective of the number of cross section units. An empirical application to U.S. house price changes during the 1975-2014 period shows a significant degree of heterogeneity in spill-over effects over the 338 Metropolitan Statistical Areas considered.
    Keywords: spatial panel data models, heterogeneous spatial lag coefficients, identification, quasi maximum likelihood (QML) estimators, non-Gaussian errors, house price changes, Metropolitan Statistical Areas
    JEL: C21 C23
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_7542&r=all
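    The mean group step is straightforward once the unit-specific estimates are in hand; a sketch (the QML estimation producing the unit-specific spatial lag coefficients, the paper's core contribution, is omitted):
      import numpy as np

      def mean_group(psi_hat):
          # psi_hat: array of unit-specific spatial lag estimates from the QML step.
          # Returns the mean group estimate and its nonparametric standard error.
          psi = np.asarray(psi_hat)
          n = len(psi)
          mg = psi.mean()
          se = np.sqrt(np.sum((psi - mg) ** 2) / (n * (n - 1)))
          return mg, se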
  5. By: Joseph, Andreas (Bank of England)
    Abstract: Machine learning models often excel in the accuracy of their predictions but are opaque due to their non-linear and non-parametric structure. This makes statistical inference challenging and disqualifies them from many applications where model interpretability is crucial. This paper proposes the Shapley regression framework as an approach for statistical inference on non-linear or non-parametric models. Inference is performed based on the Shapley value decomposition of a model, a pay-off concept from cooperative game theory. I show that universal approximators from machine learning are estimation consistent and introduce hypothesis tests for individual variable contributions, model bias and parametric functional forms. The inference properties of state-of-the-art machine learning models — like artificial neural networks, support vector machines and random forests — are investigated using numerical simulations and real-world data. The proposed framework is unique in the sense that it is identical to the conventional case of statistical inference on a linear model if the model is linear in parameters. This makes it a well-motivated extension to more general models and strengthens the case for the use of machine learning to inform decisions.
    Keywords: Machine learning; statistical inference; Shapley values; numerical simulations; macroeconomics; time series
    JEL: C45 C52 C71 E47
    Date: 2019–03–08
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0784&r=all
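    A minimal sketch of the two ingredients. Exact Shapley decomposition is feasible only for a handful of features, and replacing out-of-coalition features by background means is one common approximation, not necessarily the paper's; the regression step then applies standard inference to the components.
      import numpy as np
      from itertools import combinations
      from math import factorial
      import statsmodels.api as sm

      def shapley_components(predict, X, background):
          # Exact Shapley decomposition of predictions over feature coalitions.
          n, p = X.shape
          phi = np.zeros((n, p))
          def value(S):
              Z = np.tile(background, (n, 1))
              Z[:, list(S)] = X[:, list(S)]
              return predict(Z)
          for k in range(p):
              others = [j for j in range(p) if j != k]
              for r in range(p):
                  for S in combinations(others, r):
                      w = factorial(r) * factorial(p - r - 1) / factorial(p)
                      phi[:, k] += w * (value(S + (k,)) - value(S))
          return phi

      # Shapley regression: regress the outcome on the components, e.g.
      #   phi = shapley_components(model.predict, X, X.mean(axis=0))
      #   res = sm.OLS(y, sm.add_constant(phi)).fit()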
  6. By: Mawuli Segnon; Stelios Bekiros
    Abstract: In this paper, we revisit the stylized facts of cryptocurrency markets and propose various approaches for modeling the dynamics governing the mean and variance processes. We first provide the statistical properties of our proposed models and study in detail their forecasting performance and adequacy by means of point and density forecasts. We adopt two loss functions and the model confidence set (MCS) test to evaluate the predictive ability of the models, and the likelihood ratio test to assess their adequacy. Our results confirm that cryptocurrency markets are characterized by regime shifting, long memory and multifractality. We find that the Markov switching multifractal (MSM) and FIGARCH models outperform other GARCH-type models in forecasting bitcoin return volatility. Furthermore, combined forecasts improve upon forecasts from individual models.
    Keywords: Bitcoin, Multifractal processes, GARCH processes, Model confidence set, Likelihood ratio test
    JEL: C22 C53 C58
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:cqe:wpaper:7919&r=all
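    The GARCH-type benchmarks can be reproduced with the Python `arch` package; a sketch (the MSM model is not covered by `arch`, and the mean, lag and distribution settings below are assumptions):
      from arch import arch_model

      def variance_forecasts(returns, horizon=5):
          # returns: daily log-returns in percent. Fits GARCH(1,1) and FIGARCH
          # with Student-t errors; returns h-step-ahead variance forecasts.
          out = {}
          for vol in ("GARCH", "FIGARCH"):
              res = arch_model(returns, mean="Constant", vol=vol,
                               p=1, q=1, dist="t").fit(disp="off")
              out[vol] = res.forecast(horizon=horizon).variance.iloc[-1]
          return out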
  7. By: Jia Chen; Degui Li; Lingling Wei; Wenyang Zhang
    Abstract: This paper explores the homogeneity of coefficient functions in nonlinear models with functional coefficients and identifies the underlying semiparametric modelling structure. With initial kernel estimates of coefficient functions, we combine the classic hierarchical clustering method with a generalised version of the information criterion to estimate the number of clusters, each of which has a common functional coefficient, and determine the membership of each cluster. To identify a possible semi-varying coefficient modelling framework, we further introduce a penalised local least squares method to determine zero coefficients, non-zero constant coefficients and functional coefficients which vary with an index variable. Through the nonparametric kernel-based cluster analysis and the penalised approach, we can substantially reduce the number of unknown parametric and nonparametric components in the models, thereby achieving the aim of dimension reduction. Under some regularity conditions, we establish the asymptotic properties for the proposed methods including the consistency of the homogeneity pursuit. Numerical studies, including Monte-Carlo experiments and an empirical application, are given to demonstrate the finite-sample performance of our methods.
    Keywords: Functional-coefficient models, Hierarchical agglomerative clustering, Homogeneity, Information criterion, Nonparametric estimation, Penalised method
    JEL: C13 C14
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:yor:yorken:19/03&r=all
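    The clustering step can be sketched with standard tools (a rough analogue only: the paper's generalised information criterion is replaced by a BIC-type proxy, and the preliminary kernel estimates of the coefficient functions are taken as given):
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      def cluster_curves(curves, max_k=8):
          # curves: rows are preliminary estimates of coefficient functions
          # evaluated on a common grid. Hierarchical clustering plus a
          # BIC-type criterion picks the number of homogeneous groups.
          n, m = curves.shape
          Z = linkage(curves, method="average")
          best = None
          for k in range(1, max_k + 1):
              lab = fcluster(Z, t=k, criterion="maxclust")
              rss = sum(((curves[lab == g] - curves[lab == g].mean(axis=0)) ** 2).sum()
                        for g in range(1, k + 1))
              ic = n * m * np.log(rss / (n * m) + 1e-12) + k * np.log(n * m)
              if best is None or ic < best[0]:
                  best = (ic, k, lab)
          return best[1], best[2]   # number of clusters and memberships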
  8. By: Marina Friedrich; Eric Beutner; Hanno Reuvers; Stephan Smeekes; Jean-Pierre Urbain; Whitney Bader; Bruno Franco; Bernard Lejeune; Emmanuel Mahieu
    Abstract: Understanding the development of trends and identifying trend reversals in decadal time series is becoming more and more important. Many climatological and atmospheric time series are characterized by autocorrelation, heteroskedasticity and seasonal effects. Additionally, missing observations due to instrument failure or unfavorable measurement conditions are common in such series. This is why it is crucial to apply methods which work reliably under these circumstances. The goal of this paper is to provide a toolbox which can be used to determine the presence and form of changes in trend functions using parametric as well as nonparametric techniques. We consider bootstrap inference on broken linear trends and smoothly varying nonlinear trends. In particular, for the broken trend model, we propose a bootstrap method for inference on the break location and the corresponding changes in slope. For the smooth trend model we construct simultaneous confidence bands around the nonparametrically estimated trend. Our autoregressive wild bootstrap approach, combined with a seasonal filter, is able to handle all issues mentioned above. We apply our methods to a set of atmospheric ethane series with a focus on the measurements obtained above the Jungfraujoch in the Swiss Alps. Ethane is the most abundant non-methane hydrocarbon in the Earth's atmosphere, an important precursor of tropospheric ozone and a good indicator of oil and gas production as well as transport. Its monitoring is therefore crucial for the characterization of air quality and of the transport of tropospheric pollution.
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1903.05403&r=all
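    A pointwise-band sketch of the autoregressive wild bootstrap around a nonparametric trend (the bandwidth h and AR parameter gamma below are assumptions; the paper additionally handles missing data, applies a seasonal filter and constructs simultaneous bands):
      import numpy as np

      def ll_trend(y, h):
          # Local linear trend estimate at each point of an equally spaced sample.
          T, t = len(y), np.arange(len(y)) / len(y)
          fit = np.empty(T)
          for i in range(T):
              sw = np.exp(-0.25 * ((t - t[i]) / h) ** 2)   # sqrt of Gaussian weights
              X = np.column_stack([np.ones(T), t - t[i]])
              beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
              fit[i] = beta[0]
          return fit

      def awb_bands(y, h=0.1, gamma=0.8, B=499, level=0.95, seed=0):
          rng = np.random.default_rng(seed)
          trend = ll_trend(y, h)
          resid = y - trend
          T = len(y)
          boot = np.empty((B, T))
          for b in range(B):
              xi = rng.normal(0.0, np.sqrt(1.0 - gamma ** 2), T)
              u = np.zeros(T)
              for s in range(1, T):
                  u[s] = gamma * u[s - 1] + xi[s] * resid[s]   # AR wild bootstrap errors
              boot[b] = ll_trend(trend + u, h)
          lo, hi = np.quantile(boot, [(1 - level) / 2, (1 + level) / 2], axis=0)
          return trend, lo, hi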
  9. By: Voisin, Elisa; Hecq, Alain
    Abstract: This paper investigates one-step ahead density forecasts of mixed causal-noncausal models. We compare the sample-based and the simulations-based approaches respectively developed by Gouriéroux and Jasiak (2016) and Lanne, Luoto, and Saikkonen (2012). We focus on explosive episodes and therefore on predicting the turning points of bubble bursts. We suggest using both methods to construct investment strategies based on the probabilities induced by the assumed model and by past behaviour. We illustrate our analysis on the nickel price series.
    Keywords: Noncausal models, forecasting, predictive densities, bubbles, simulations-based forecasts
    JEL: C22 C53 C58
    Date: 2019–03–13
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:92734&r=all
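    For intuition, a purely noncausal AR(1) with heavy-tailed errors already generates the bubble build-up-and-crash patterns in question; a minimal simulation sketch (parameter values are assumptions):
      import numpy as np

      def simulate_noncausal_ar1(T=500, rho=0.8, seed=0):
          # y_t = rho * y_{t+1} + eps_t, simulated by running the recursion
          # backwards in time from a terminal burn-in segment.
          rng = np.random.default_rng(seed)
          eps = rng.standard_t(df=2, size=T + 200)   # heavy-tailed errors
          y = np.zeros(T + 200)
          for t in range(T + 198, -1, -1):
              y[t] = rho * y[t + 1] + eps[t]
          return y[:T]   # discard the segment near the terminal condition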
  10. By: Joan Gonzalvez; Edmond Lezmi; Thierry Roncalli; Jiali Xu
    Abstract: In the last five years, the financial industry has been impacted by the emergence of digitalization and machine learning. In this article, we explore two methods that have undergone rapid development in recent years: Gaussian processes and Bayesian optimization. Gaussian processes can be seen as a generalization of Gaussian random vectors and are associated with the development of kernel methods. Bayesian optimization is an approach for performing derivative-free global optimization in low dimensions, and uses Gaussian processes to locate the global maximum of a black-box function. The first part of the article reviews these two tools and shows how they are connected. In particular, we focus on Gaussian process regression, which is the core of Bayesian machine learning, and on the issue of hyperparameter selection. The second part is dedicated to two financial applications. We first consider the modeling of the term structure of interest rates; more precisely, we test the fitting method and compare the GP prediction with the random walk model. The second application is the construction of trend-following strategies, in particular the online estimation of trend and covariance windows.
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1903.04841&r=all
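    The term-structure application reduces to a few lines with scikit-learn; a sketch with made-up yields and an assumed RBF-plus-noise kernel (the paper discusses kernel choice and hyperparameter selection in much more detail):
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      # Fit a GP to observed zero-coupon yields and interpolate the curve.
      maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])[:, None]
      yields = np.array([2.4, 2.5, 2.6, 2.5, 2.45, 2.45, 2.6, 2.65, 2.9, 3.0])

      kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-3)
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
      gpr.fit(maturities, yields)    # hyperparameters set by marginal likelihood

      grid = np.linspace(0.25, 30, 120)[:, None]
      mean, sd = gpr.predict(grid, return_std=True)   # posterior mean and uncertainty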
  11. By: Atila Abdulkadiroglu (Duke University); Joshua D. Angrist (MIT); Yusuke Narita (Cowles Foundation, Yale University); Parag A. Pathak (MIT)
    Abstract: Centralized school assignment algorithms must distinguish between applicants with the same preferences and priorities. This is done with randomly assigned lottery numbers, nonlottery tie-breakers like test scores, or both. The New York City public high school match illustrates the latter, using test scores, grades, and interviews to rank applicants to screened schools, combined with lottery tie-breaking at unscreened schools. We show how to identify causal effects of school attendance in such settings. Our approach generalizes regression discontinuity designs to allow for multiple treatments and multiple running variables, some of which are randomly assigned. Lotteries generate assignment risk at screened as well as unscreened schools. Centralized assignment also identifies screened school effects away from screened school cutoffs. These features of centralized assignment are used to assess the predictive value of New York City’s school report cards. Grade A schools improve SAT math scores and increase the likelihood of graduating, though by less than OLS estimates suggest. Selection bias in OLS estimates is egregious for Grade A screened schools.
    Keywords: Causal Inference, Natural Experiment, Local Propensity Score, Instrumental Variables, Unified Enrollment, School Report Card, School Value Added
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2170&r=all
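    The building block being generalized is the local linear RD estimate; a textbook single-cutoff sketch (the paper's contribution is precisely the extension to multiple treatments and hybrid lottery/test-score tie-breakers, which this does not capture):
      import numpy as np
      import statsmodels.api as sm

      def rd_effect(running, outcome, cutoff, bw):
          # Sharp-RD jump via local linear regression on each side of the cutoff.
          r = running - cutoff
          keep = np.abs(r) <= bw
          D = (r >= 0).astype(float)
          X = sm.add_constant(np.column_stack([D[keep], r[keep], D[keep] * r[keep]]))
          fit = sm.OLS(outcome[keep], X).fit(cov_type="HC1")
          return fit.params[1], fit.bse[1]   # jump at the cutoff and its std. error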
  12. By: Ahmed, M. F.; Satchell, S.
    Abstract: The purpose of this paper is to investigate the dynamics and steady-state properties of threshold autoregressive models with exogenous states that follow Markovian processes; these processes are widely used in applied economics although their statistical properties have not been explored in detail. We use characteristic functions to carry out the analysis, which allows us to describe limiting distributions for processes not previously considered in the literature. We also calculate analytical expressions for some moments. Furthermore, we see that we can have locally explosive processes that are explosive in one regime whilst being strongly stationary overall. This is explored through simulation analysis, where we also show how the distribution changes as the explosive state becomes more frequent although the overall process remains stationary. In doing so, we are able to relate our analysis to asset prices which exhibit similar distributional properties.
    Keywords: Threshold Auto-regression, Markov process
    JEL: C22 C32 C53
    Date: 2019–03–06
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1923&r=all
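    A locally explosive but overall stationary process is easy to simulate; a sketch with assumed parameter values (the chosen transition matrix keeps the weighted drift pi_0*log|rho_0| + pi_1*log|rho_1| negative):
      import numpy as np

      def simulate_tar_markov(T=100_000, rho=(0.5, 1.05),
                              P=((0.95, 0.05), (0.30, 0.70)), seed=0):
          # y_t = rho[s_t] * y_{t-1} + eps_t with a 2-state exogenous Markov
          # state s_t; state 1 is locally explosive (|rho| > 1).
          rng = np.random.default_rng(seed)
          P = np.asarray(P)
          y = np.zeros(T)
          s = 0
          for t in range(1, T):
              s = rng.choice(2, p=P[s])
              y[t] = rho[s] * y[t - 1] + rng.normal()
          return y

      y = simulate_tar_markov()
      print(np.mean(np.abs(y) > 10))   # heavy tails from the explosive regime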
  13. By: Bernardi, Mauro; Costola, Michele
    Abstract: We propose a shrinkage and selection methodology specifically designed for network inference from high dimensional data, through a regularised linear regression model with a Spike-and-Slab prior on the parameters. The approach extends to the case where the error terms are heteroscedastic by adding an ARCH-type equation through an approximate Expectation-Maximisation algorithm. The proposed model accounts for two sets of covariates. The first set contains predetermined variables which are not penalised in the model (i.e., the autoregressive component and common factors), while the second set contains all the (lagged) financial institutions in the system, each included with a given probability. The financial linkages are expressed in terms of inclusion probabilities, resulting in a weighted directed network where the adjacency matrix is built "row by row". In the empirical application, we estimate the network over time using a rolling window approach on 1248 world financial firms (banks, insurers, brokers and other financial services), both active and dead, from 29 December 2000 to 6 October 2017 at a weekly frequency. Findings show that over time the shape of the out-degree distribution exhibits the typical behavior of financial stress indicators and is a significant predictor of market returns at the first lag (one week) and the fourth lag (one month).
    Keywords: VAR estimation, Financial Networks, Bayesian inference, Sparsity, Spike-and-Slab prior, Stochastic Search Variable Selection, Expectation-Maximisation
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:safewp:244&r=all
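    A heavily simplified stand-in conveys the row-by-row construction (lasso replaces the paper's Spike-and-Slab EM, edges come from nonzero coefficients rather than inclusion probabilities, and the unpenalised autoregressive and factor terms are omitted):
      import numpy as np
      from sklearn.linear_model import LassoCV

      def sparse_network(returns):
          # returns: T x N array of firm returns. Builds a directed adjacency
          # matrix one row (one receiving firm) at a time from lagged returns.
          X = returns[:-1]               # lagged returns of all firms
          N = returns.shape[1]
          A = np.zeros((N, N))
          for i in range(N):
              fit = LassoCV(cv=5).fit(X, returns[1:, i])   # penalised row regression
              A[i] = fit.coef_
          np.fill_diagonal(A, 0.0)       # drop the own-lag from the graph
          return (A != 0).astype(int)    # adjacency of the directed network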
  14. By: Julian Martinez-Iriarte; Yixiao Sun; Xuexin Wang
    Abstract: This paper develops asymptotic F tests robust to weak identification and temporal dependence. The test statistics are modified versions of the S statistic of Stock and Wright (2000) and the K statistic of Kleibergen (2005), both of which are based on the continuous updating generalized method of moments. In the former case, the modification involves only a multiplicative degree-of-freedom adjustment. In the latter case, the modification involves an additional multiplicative adjustment that uses a J statistic for testing overidentification. By adopting fixed-smoothing asymptotics, we show that both the modified S statistic and the modified K statistic are asymptotically F-distributed. The asymptotic F theory accounts for the estimation errors in the underlying heteroskedasticity and autocorrelation robust variance estimators, which the asymptotic chi-squared theory ignores. Monte Carlo simulations show that the F approximations are much more accurate than the corresponding chi-squared approximations in finite samples.
    Keywords: Heteroskedasticity and autocorrelation robust variance, continuous updating GMM, F distribution, fixed-smoothing asymptotics, weak identification
    JEL: C12 C14 C32 C36
    Date: 2019–03–12
    URL: http://d.repec.org/n?u=RePEc:wyi:wpaper:002400&r=all
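    For a feel of the size correction, compare a degrees-of-freedom adjusted F critical value with the chi-squared one (illustrative numbers and a generic adjustment in the spirit of the fixed-smoothing literature for series variance estimators, not the paper's exact statistics):
      from scipy.stats import chi2, f

      q, K = 3, 12   # moment restrictions; basis functions in the HAR variance estimator
      chi2_cv = chi2.ppf(0.95, q) / q                      # chi-squared value for W/q
      f_cv = f.ppf(0.95, q, K - q + 1) * K / (K - q + 1)   # d.o.f.-adjusted F value
      print(chi2_cv, f_cv)   # the F-based value is larger, correcting size distortion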
  15. By: Galina Besstremyannaya (Centre for Economic and Financial Research at New Economic School); Sergei Golovan (New Economic School)
    Abstract: Estimation of individual effects in quantile regression can be difficult in large panel datasets, but a solution is apparently offered by the computationally simple estimator of Ivan Canay (2011, The Econometrics Journal) for quantile-independent individual effects. The Canay estimator is widely used by practitioners and is often cited in the theoretical literature. However, our paper discusses two fallacies in Canay's approach. We formally prove that Canay's assumptions can entail severe bias, or even non-existence of the limiting distribution, for the estimator of the vector of coefficients, leading to incorrect inference. A second problem is an incorrect asymptotic standard error for the estimator of the constant term. In an attempt to improve Canay's estimator, we propose a simple correction which may reduce the bias. Regarding the constant term, we focus on the fact that finding a $\sqrt{nT}$ consistent first-step estimator may be problematic. Finally, we give recommendations to practitioners for different values of the ratio n/T, and conduct a meta-review of applied papers that use Canay's estimator.
    Keywords: Quantile regression, Panel data, Fixed effects, Inference
    JEL: C21 C23
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:cfr:cefirw:w0249&r=all
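    For reference, the estimator under discussion is a simple two-step procedure; a sketch of it in Python (the comment's point is that inference based on it can be misleading; its proposed correction is not reproduced here):
      import numpy as np
      import statsmodels.api as sm

      def canay_two_step(y, X, ids, tau=0.5):
          # Step 1: within (fixed effects) OLS for the slopes; alpha_i from unit means.
          uids = np.unique(ids)
          Xd, yd = X.astype(float).copy(), y.astype(float).copy()
          for i in uids:
              m = ids == i
              Xd[m] -= X[m].mean(axis=0)
              yd[m] -= y[m].mean()
          beta = np.linalg.lstsq(Xd, yd, rcond=None)[0]
          alpha = np.array([(y[ids == i] - X[ids == i] @ beta).mean() for i in uids])
          # Step 2: pooled quantile regression on the outcome net of the fixed effects.
          y_net = y - alpha[np.searchsorted(uids, ids)]
          return sm.QuantReg(y_net, sm.add_constant(X)).fit(q=tau).params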

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.