
on Econometrics 
By:  Lux, Thomas 
Abstract:  Estimation of agent-based models is currently an intense area of research. Recent contributions have to a large extent resorted to simulation-based methods, mostly using some form of simulated method of moments estimation (SMM). There is, however, an entire branch of statistical methods that appears promising but has, to our knowledge, never been applied to estimate agent-based models in economics and finance: Markov chain Monte Carlo methods designed for state space models or models with latent variables. This latter class of models seems particularly relevant, as agent-based models typically consist of some latent and some observable variables, since not all characteristics of agents are usually observable. Indeed, one might often be interested not only in estimating the parameters of a model, but also in inferring the time development of some latent variable. However, agent-based models, when interpreted as latent variable models, would typically be characterized by nonlinear dynamics and non-Gaussian fluctuations and would thus require a computational approach to statistical inference. Here we resort to Sequential Monte Carlo (SMC) estimation based on a particle filter. This approach is used to numerically approximate the conditional densities that enter the likelihood function of the problem. With this approximation we simultaneously obtain parameter estimates and filtered state probabilities for the unobservable variable(s) that drive(s) the dynamics of the observable time series. In our examples, the observable series will be asset returns (or prices), while the unobservable variables will be some measure of agents' aggregate sentiment. We apply SMC to two selected agent-based models of speculative dynamics with somewhat different flavor. The empirical application to a selection of financial data includes an explicit comparison of the goodness-of-fit of both models. 
Keywords:  agent-based models, estimation, Markov chain Monte Carlo, particle filter 
JEL:  G12 C15 C58 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:zbw:cauewp:201707&r=ecm 
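The filtering approach the abstract describes can be sketched with a bootstrap particle filter. The stochastic-volatility-style state-space model below (a latent AR(1) "sentiment" driving the variance of observed returns) is a hypothetical stand-in for illustration, not either of the paper's two agent-based models; all function names and parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T, phi=0.95, sigma=0.2):
    # Latent sentiment follows an AR(1); observed returns are Gaussian
    # with state-dependent volatility (a stochastic-volatility-style model).
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + sigma * rng.normal()
    y = np.exp(x / 2) * rng.normal(size=T)
    return x, y

def particle_loglik(y, phi, sigma, n_particles=500):
    # Bootstrap particle filter: propagate particles through the state
    # equation, weight by the observation density, resample, and
    # accumulate the log-likelihood from the mean incremental weights.
    T = len(y)
    particles = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), n_particles)
    loglik = 0.0
    filtered_mean = np.zeros(T)
    for t in range(T):
        particles = phi * particles + sigma * rng.normal(size=n_particles)
        sd = np.exp(particles / 2)
        w = np.exp(-0.5 * (y[t] / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        loglik += np.log(w.mean() + 1e-300)
        w = w / w.sum()
        filtered_mean[t] = np.sum(w * particles)      # filtered state estimate
        idx = rng.choice(n_particles, n_particles, p=w)  # multinomial resampling
        particles = particles[idx]
    return loglik, filtered_mean

x_true, y = simulate(200)
ll, x_hat = particle_loglik(y, phi=0.95, sigma=0.2)
```

Placing `particle_loglik` inside an optimizer over `(phi, sigma)` yields the simultaneous parameter estimates and filtered state paths the abstract refers to; the resampling step keeps the particle approximation from degenerating.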
By:  Burak Eroglu (Istanbul Bilgi University); Kemal Caglar Gogebakan (Bilkent University); Mirza Trokic (Bilkent University and IHS Markit) 
Abstract:  This paper introduces a nonparametric variance ratio testing procedure for seasonal unit roots by utilizing the fractional integration operator. This procedure includes unit root tests at the zero, Nyquist, harmonic and joint frequencies. Unlike the widely used seasonal unit root tests of Hylleberg et al. (1990) [HEGY], the proposed tests are free from any nuisance and tuning parameters. Furthermore, we develop a new bootstrap technique for the fractional seasonal variance ratio tests by utilizing wavelet filters. This technique allows practitioners to test for seasonal unit roots without estimating a parametric regression model. The Monte Carlo simulation evidence reveals that our proposed fractional seasonal variance ratio [FSVR] tests and their wavelet-based bootstrap counterparts have desirable size and power properties. 
Keywords:  Seasonal unit roots; Fractional integration; Wavelets; Wavestrapping 
JEL:  C14 C22 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:bli:wpaper:1707&r=ecm 
By:  Frölich, Markus; Huber, Martin 
Abstract:  This paper proposes a fully nonparametric kernel method to account for observed covariates in regression discontinuity designs (RDD), which may increase precision of treatment effect estimation. It is shown that conditioning on covariates reduces the asymptotic variance and allows estimating the treatment effect at the rate of one-dimensional nonparametric regression, irrespective of the dimension of the continuously distributed elements in the conditioning set. Furthermore, the proposed method may decrease bias and restore identification by controlling for discontinuities in the covariate distribution at the discontinuity threshold, provided that all relevant discontinuously distributed variables are controlled for. To illustrate the estimation approach and its properties, we provide a simulation study and an empirical application to an Austrian labor market reform. 
Keywords:  Treatment effect; causal effect; complier; LATE; nonparametric regression; endogeneity 
JEL:  C13 C14 C21 
Date:  2017–11–20 
URL:  http://d.repec.org/n?u=RePEc:fri:fribow:fribow00489&r=ecm 
By:  Dou, Baojun; Parrella, Maria Lucia; Yao, Qiwei 
Abstract:  We consider a class of spatio-temporal models which extend popular econometric spatial autoregressive panel data models by allowing the scalar coefficients for each location (or panel) to differ from each other. To overcome the innate endogeneity, we propose a generalized Yule–Walker estimation method which applies least squares estimation to a Yule–Walker equation. The asymptotic theory is developed under the setting that both the sample size and the number of locations (or panels) tend to infinity, under a general setting for stationary and α-mixing processes which includes spatial autoregressive panel data models driven by i.i.d. innovations as special cases. The proposed methods are illustrated using both simulated and real data. 
Keywords:  α-mixing; dynamic panels; high dimensionality; least squares estimation; spatial autoregression; stationarity 
JEL:  C13 C23 C32 
Date:  2016–10–01 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:67151&r=ecm 
By:  Tue Gorgens; Sanghyeok Lee 
Abstract:  In this paper we consider estimation of dynamic models of recurring events (event histories) in continuous time using censored data. We develop maximum simulated likelihood estimators where missing data are integrated out using Monte Carlo and importance sampling methods. We allow for random effects and integrate out the unobserved heterogeneity using a quadrature rule. In Monte Carlo experiments, we find that maximum simulated likelihood estimation is practically feasible and performs better than both listwise deletion and auxiliary modelling of initial conditions. 
Keywords:  Duration analysis; survival analysis; failure-time analysis; reliability analysis; event history analysis; hazard rates; data censoring; panel data; initial conditions; random effects; maximum simulated likelihood; Monte Carlo integration; importance sampling. 
JEL:  C33 C41 C51 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:acb:cbeeco:2017655&r=ecm 
By:  Güriş, Burak 
Abstract:  Traditional unit root tests tend to spuriously indicate nonstationarity in the presence of structural breaks and nonlinearity. To eliminate this problem, this paper proposes a new flexible Fourier form nonlinear unit root test, which incorporates structural breaks and nonlinearity jointly into the test procedure. Structural breaks are modeled by means of a Fourier function, and nonlinear adjustment is modeled by means of an Exponential Smooth Threshold Autoregressive (ESTAR) model. The simulation results indicate that the proposed unit root test is more powerful than the Kruse (2011) and KSS (2003) tests. 
Keywords:  Flexible Fourier Form, Unit Root Test, Nonlinearity 
JEL:  C12 C22 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:82260&r=ecm 
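The idea of combining a Fourier function with ESTAR-type adjustment can be sketched as a KSS (2003)-style test regression augmented with a single Fourier frequency. Güriş's actual specification, lag augmentation, and critical values may differ; `fourier_kss_stat` is an illustrative name, and the null distribution of this statistic is nonstandard, so the paper's simulated critical values would be needed in practice:

```python
import numpy as np

def fourier_kss_stat(y, k=1):
    # Illustrative test regression: Fourier terms absorb smooth breaks,
    # the cubic lagged level captures ESTAR-type nonlinear adjustment:
    #   dy_t = c + g1*sin(2*pi*k*t/T) + g2*cos(2*pi*k*t/T) + d*y_{t-1}^3 + e_t
    # The statistic is the t-ratio on d; large negative values reject the
    # unit-root null.
    T = len(y)
    dy = np.diff(y)
    t = np.arange(2, T + 1)
    X = np.column_stack([
        np.ones(T - 1),
        np.sin(2 * np.pi * k * t / T),
        np.cos(2 * np.pi * k * t / T),
        y[:-1] ** 3,
    ])
    beta, _, _, _ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[-1] / np.sqrt(cov[-1, -1])

rng = np.random.default_rng(1)
rw = np.cumsum(rng.normal(size=300))   # random walk: unit-root null holds
stat_rw = fourier_kss_stat(rw)
```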
By:  Susan Athey; Mohsen Bayati; Nikolay Doudchenko; Guido Imbens; Khashayar Khosravi 
Abstract:  In this paper we develop new methods for estimating causal effects in settings with panel data, where a subset of units are exposed to a treatment during a subset of periods, and the goal is estimating counterfactual (untreated) outcomes for the treated unit/period combinations. We develop a class of estimators that uses the observed elements of the matrix of control outcomes corresponding to untreated unit/periods to predict the "missing" elements of the matrix, corresponding to treated units/periods. The approach estimates a matrix that well-approximates the original (incomplete) matrix, but has lower complexity according to a matrix norm, where we consider the family of Schatten norms based on the singular values of the matrix. The proposed methods have attractive computational properties. From a technical perspective, we generalize results from the matrix completion literature by allowing the patterns of missing data to have a time series dependency structure. We also present new insights concerning the connections between the interactive fixed effects models and the literatures on program evaluation under unconfoundedness as well as on synthetic control methods. If there are few time periods and many units, our method approximates a regression approach where counterfactual outcomes are estimated through a regression of current outcomes on lagged outcomes for the same unit. In contrast, if there are few units and many periods, our proposed method approximates a synthetic control estimator where counterfactual outcomes are estimated through a regression of the lagged outcomes for the treated unit on lagged outcomes for the control units. The advantage of our proposed method is that it moves seamlessly between these two different approaches, utilizing both cross-sectional and within-unit patterns in the data. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1710.10251&r=ecm 
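The matrix-completion step can be illustrated with a Soft-Impute-style iteration, in which the proximal step for the nuclear norm (the Schatten-1 member of the family the abstract mentions) soft-thresholds singular values. This is a generic sketch under simplified assumptions, not the authors' exact estimator:

```python
import numpy as np

def soft_impute(Y, mask, lam, n_iter=200):
    # Fill missing (treated) entries with the current estimate, then
    # shrink singular values by lam: the proximal operator of the
    # nuclear norm, used here as the low-complexity penalty.
    Z = np.where(mask, Y, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(np.where(mask, Y, Z), full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt
    return Z

rng = np.random.default_rng(2)
# Hypothetical rank-2 matrix of control outcomes; True in `mask` marks
# observed (untreated) cells, False marks treated cells to be imputed.
A = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 30))
mask = rng.random(A.shape) > 0.3
A_hat = soft_impute(A, mask, lam=0.1)
err = np.linalg.norm((A_hat - A)[~mask]) / np.linalg.norm(A[~mask])
```

With untreated cells observed and treated cells masked, the completed entries play the role of the counterfactual untreated outcomes.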
By:  Li, Dong; Tong, Howell 
Abstract:  Threshold models have been popular for modelling nonlinear phenomena in diverse areas, in part due to their simple fitting and often clear model interpretation. A commonly used approach to fit a threshold model is the (conditional) least squares method, for which the standard grid search typically requires O(n) operations for a sample of size n; this is substantial for large n, especially in the context of panel time series. This paper proposes a novel method, the nested subsample search algorithm, which reduces the number of least squares operations drastically to O(log n) for large sample size. We demonstrate its speed and reliability via Monte Carlo simulation studies with finite samples. Possible extension to maximum likelihood estimation is indicated. 
Keywords:  Least squares estimation; maximum likelihood estimation; nested subsample search algorithm; standard grid search algorithm; threshold model 
JEL:  C1 
Date:  2016–10–01 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:68880&r=ecm 
By:  Lopes Moreira Da Veiga, María Helena; Gonçalves Mazzeu, Joao Henrique; Mariti, Massimo B. 
Abstract:  This paper models and forecasts the crude oil ETF volatility index (OVX). The motivation lies in the evidence that the OVX has been used in recent years as an important alternative measure to track and analyze the volatility of future oil prices. The analysis of the OVX suggests that it presents features similar to those of the daily market volatility index. The main characteristic is long range dependence, which is modeled either by autoregressive fractionally integrated moving average (ARFIMA) models or by heterogeneous autoregressive (HAR) specifications. Regarding the latter family of models, we first propose extensions of the HAR model that are based on the net and scale measures of oil price changes. The aim is to improve the HAR model by including predictors that better capture the impact of oil price changes on the economy. Second, we test the forecasting performance of the new proposals and benchmarks with the model confidence set (MCS) and the Generalized AutoContouR (GACR) tests in terms of point forecasts and density forecasting, respectively. Our main findings are as follows: the new asymmetric proposals have superior predictive ability over the heterogeneous autoregressive leverage (HARL) model under two known loss functions. Regarding density forecasting, the best model is the one that includes the scale measure as a proxy of oil price changes and considers a flexible distribution for the errors. 
Keywords:  Scale oil price changes; OVX; Net oil price changes; Forecasting OVX; Leverage; Heterogeneous autoregression 
JEL:  C53 C52 C51 Q40 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:25985&r=ecm 
By:  Demetrescu, Matei; Leppin, Julian Sebastian; Reitz, Stefan 
Abstract:  (Panel) Smooth Transition Regressions substantially gained in popularity due to their flexibility in modeling regression coefficients as homogeneous or heterogeneous functions of transition variables. In the estimation process, however, researchers typically face a tradeoff in the sense that a single (homogeneous) transition function may yield biased estimates if the true model is heterogeneous, while the latter specification is accompanied by convergence problems and longer estimation time, rendering their application less appealing. This paper proposes a Lagrange multiplier test indicating whether the homogeneous smooth transition regression model is appropriate against the competing heterogeneous alternative. The empirical size and power of the test are evaluated by Monte Carlo simulations. 
Keywords:  STR model, multivariate, nonlinear models, testing 
JEL:  C52 C22 C12 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:zbw:ifwkwp:2094&r=ecm 
By:  Lester Mackey; Vasilis Syrgkanis; Ilias Zadik 
Abstract:  Double machine learning provides $\sqrt{n}$-consistent estimates of parameters of interest even when high-dimensional or nonparametric nuisance parameters are estimated at an $n^{1/4}$ rate. The key is to employ \emph{Neyman-orthogonal} moment equations which are first-order insensitive to perturbations in the nuisance parameters. We show that the $n^{1/4}$ requirement can be improved to $n^{1/(2k+2)}$ by employing a $k$-th order notion of orthogonality that grants robustness to more complex or higher-dimensional nuisance parameters. In the partially linear model setting popular in causal inference, we use Stein's lemma to show that we can construct second-order orthogonal moments if and only if the treatment residual is not normally distributed. We conclude by demonstrating the robustness benefits of an explicit doubly-orthogonal estimation procedure for the treatment effect. 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1711.00342&r=ecm 
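The first-order (Neyman-orthogonal) baseline that the paper improves upon can be sketched for the partially linear model with cross-fitting. The data-generating process and the quadratic-expansion nuisance learner below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical DGP for the partially linear model Y = theta*D + g(X) + eps.
n, theta = 2000, 2.0
X = rng.normal(size=n)
D = 0.5 * X + 0.25 * X**2 + rng.normal(size=n)   # treatment equation
Y = theta * D + X + 0.5 * X**2 + rng.normal(size=n)

def features(x):
    # Quadratic expansion as a stand-in for an arbitrary ML learner.
    return np.column_stack([np.ones(len(x)), x, x**2])

def dml_plm(Y, D, X, n_folds=2):
    # Cross-fitting: nuisance regressions E[Y|X] and E[D|X] are fit on
    # the other folds and predicted on the held-out fold; theta comes
    # from regressing the Y-residual on the D-residual, which is the
    # Neyman-orthogonal moment for this model.
    folds = np.array_split(rng.permutation(len(Y)), n_folds)
    rY, rD = np.zeros(len(Y)), np.zeros(len(Y))
    for k, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != k])
        Ztr, Zte = features(X[train]), features(X[test])
        rY[test] = Y[test] - Zte @ np.linalg.lstsq(Ztr, Y[train], rcond=None)[0]
        rD[test] = D[test] - Zte @ np.linalg.lstsq(Ztr, D[train], rcond=None)[0]
    return (rD @ rY) / (rD @ rD)

theta_hat = dml_plm(Y, D, X)
```

Note that in this DGP the treatment residual is Gaussian, which by the abstract's Stein's-lemma result is precisely the case where second-order orthogonal moments cannot be constructed.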
By:  Vojnovic, Milan; Yun, Seyoung 
Abstract:  We consider the maximum likelihood parameter estimation problem for a generalized Thurstone choice model, where choices are from comparison sets of two or more items. We provide tight characterizations of the mean square error, as well as necessary and sufficient conditions for correct classification when each item belongs to one of two classes. These results provide insights into how the estimation accuracy depends on the choice of a generalized Thurstone choice model and the structure of comparison sets. We find that for a priori unbiased structures of comparisons, e.g., when comparison sets are drawn independently and uniformly at random, the number of observations needed to achieve a prescribed estimation accuracy depends on the choice of a generalized Thurstone choice model. For a broad set of generalized Thurstone choice models, which includes all popular instances used in practice, the estimation error is shown to be largely insensitive to the cardinality of comparison sets. On the other hand, we find that there exist generalized Thurstone choice models for which the estimation error decreases much faster with the cardinality of comparison sets. 
JEL:  C1 
Date:  2016–06–20 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:85703&r=ecm 
By:  McCracken, Michael W. (Federal Reserve Bank of St. Louis); McGillicuddy, Joseph (Federal Reserve Bank of St. Louis) 
Abstract:  When constructing unconditional point forecasts, both direct and iterated multistep (DMS and IMS) approaches are common. However, in the context of producing conditional forecasts, IMS approaches based on vector autoregressions (VAR) are far more common than simpler DMS models. This is despite the fact that there are theoretical reasons to believe that DMS models are more robust to misspecification than are IMS models. In the context of unconditional forecasts, Marcellino, Stock, and Watson (MSW, 2006) investigate the empirical relevance of these theories. In this paper, we extend that work to conditional forecasts. We do so based on linear bivariate and trivariate models estimated using a large dataset of macroeconomic time series. Over comparable samples, our results reinforce those in MSW: the IMS approach is typically a bit better than DMS, with significant improvements only at longer horizons. In contrast, when we focus on the Great Moderation sample we find a marked improvement in the DMS approach relative to IMS. The distinction is particularly clear when we forecast nominal rather than real variables, where the relative gains can be substantial. 
Keywords:  Prediction; forecasting; out-of-sample 
JEL:  C12 C32 C52 C53 
Date:  2017–11–01 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2017040&r=ecm 
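The DMS/IMS distinction is easy to see in a univariate AR(1) toy example (unconditional forecasts only; the paper's conditional, VAR-based setting is richer, and the model and parameters below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
T, h = 500, 4
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * y[t - 1] + rng.normal()   # AR(1) data-generating process

def ols(x, z):
    # OLS of z on a constant and x; returns (intercept, slope).
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, z, rcond=None)[0]

# IMS: estimate the one-step model once and iterate it forward h periods.
a, b = ols(y[:-1], y[1:])
ims = y[-1]
for _ in range(h):
    ims = a + b * ims

# DMS: regress y_{t+h} directly on y_t and forecast in a single step.
c, d = ols(y[:-h], y[h:])
dms = c + d * y[-1]
```

Under correct specification both approaches estimate the same object (here d is close to b**4); under misspecification the direct regression targets the h-step projection directly, which is the robustness argument for DMS.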
By:  Burak Eroglu (Istanbul Bilgi University) 
Abstract:  In this paper, I propose a wavelet based cointegration test for fractionally integrated time series. This proposed test is nonparametric and asymptotically invariant to different forms of short run dynamics. The use of wavelets allows one to take advantage of the wavelet based bootstrapping method particularly known as wavestrapping. In this regard, I introduce a new wavestrapping algorithm for multivariate time series processes, specifically for cointegration tests. The Monte Carlo simulations indicate that this new wavestrapping procedure can alleviate the severe size distortions which are generally observed in cointegration tests with time series containing innovations that possess highly negative MA parameters. Additionally, I apply the proposed methodology to analyse the long run comovements in the credit default swap market of European Union countries. 
Keywords:  Fractional integration; Cointegration; Wavelet; Wavestrapping 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:bli:wpaper:1706&r=ecm 
By:  Francisco (F.) Blasques (VU Amsterdam; Tinbergen Institute, The Netherlands); Andre (A.) Lucas (VU Amsterdam; Tinbergen Institute, The Netherlands); Andries van Vlodrop (VU Amsterdam; Tinbergen Institute, The Netherlands) 
Abstract:  We study optimality properties in finite samples for time-varying volatility models driven by the score of the predictive likelihood function. Available optimality results for this class of models suffer from two drawbacks. First, they are only asymptotically valid when evaluated at the pseudo-true parameter. Second, they only provide an optimality result 'on average' and do not provide conditions under which such optimality prevails. We show in a finite sample setting that score-driven volatility models have optimality properties when they matter most. Score-driven models perform best when the data is fat-tailed and robustness is important. Moreover, they perform better when filtered volatilities differ most across alternative models, such as in periods of financial distress. These results are confirmed by an empirical application based on U.S. stock returns. 
Keywords:  Volatility models; score-driven dynamics; finite samples; Kullback-Leibler divergence; optimality. 
JEL:  C01 C18 C20 
Date:  2017–11–24 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20170111&r=ecm 
By:  Robinson, Peter; Taylor, Luke 
Abstract:  This article develops statistical methodology for semiparametric models for multiple time series of possibly high dimension N. The objective is to obtain precise estimates of unknown parameters (which characterize autocorrelations and cross-autocorrelations) without fully parameterizing other distributional features, while imposing a degree of parsimony to mitigate a curse of dimensionality. The innovations vector is modelled as a linear transformation of independent but possibly non-identically distributed random variables, whose distributions are nonparametric. In such circumstances, Gaussian pseudo-maximum likelihood estimates of the parameters are typically √n-consistent, where n denotes series length, but asymptotically inefficient unless the innovations are in fact Gaussian. Our parameter estimates, which we call ‘adaptive,’ are asymptotically as first-order efficient as maximum likelihood estimates based on correctly specified parametric innovations distributions. The adaptive estimates use nonparametric estimates of score functions (of the elements of the underlying vector of independent random variables) that involve truncated expansions in terms of basis functions; these have advantages over the kernel-based score function estimates used in most of the adaptive estimation literature. Our parameter estimates are also √n-consistent and asymptotically normal. A Monte Carlo study of finite sample performance of the adaptive estimates, employing a variety of parameterizations, distributions and choices of N, is reported. 
JEL:  J1 
Date:  2017–02–08 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:68345&r=ecm 
By:  Giannone, Domenico (Federal Reserve Bank of New York); Lenza, Michele (European Central Bank and ECARES); Primiceri, Giorgio E. (Northwestern University, CEPR, and NBER) 
Abstract:  We propose a class of prior distributions that discipline the long-run predictions of vector autoregressions (VARs). These priors can be naturally elicited using economic theory, which provides guidance on the joint dynamics of macroeconomic time series in the long run. Our priors for the long run are conjugate, and can thus be easily implemented using dummy observations and combined with other popular priors. In VARs with standard macroeconomic variables, a prior based on the long-run predictions of a wide class of theoretical models yields substantial improvements in the forecasting performance. 
Keywords:  Bayesian vector autoregression; forecasting; overfitting; initial conditions; hierarchical model 
JEL:  C11 C32 C33 E37 
Date:  2017–11–01 
URL:  http://d.repec.org/n?u=RePEc:fip:fednsr:832&r=ecm 
By:  Wayne Yuan Gao 
Abstract:  We consider an index model of dyadic link formation with a homophily effect index and a degree heterogeneity index. We provide nonparametric identification results in a single large network setting for the potentially nonparametric homophily effect function, the unobserved individual fixed effects and the unknown distribution of idiosyncratic pairwise shocks, up to normalization for each possible true value of the unknown parameters. Departing from the popular practice of restricting the norm of unknown parameters to be unity, we instead impose scale normalization on an arbitrary interquantile range, which proves particularly convenient for characterizing the identification relationships in our model, as quantiles provide direct linkages between the observable conditional probabilities and the unknown index values. We then use an inductive "in-fill" and "out-expansion" algorithm to establish our main identification results. We also provide a formal analysis of normalization that is without loss of generality in a precise sense, and discuss its implications concerning the interpretation of the results and counterfactual analyses of the model. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1710.11230&r=ecm 
By:  Yuriy Gorodnichenko; Byoungchan Lee 
Abstract:  We propose and study properties of several estimators of variance decomposition in the local projections framework. We find for empirically relevant sample sizes that, after being bias-corrected with the bootstrap, our estimators perform well in simulations. We also illustrate the workings of our estimators empirically for monetary policy and productivity shocks. 
JEL:  C53 E37 E47 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:23998&r=ecm 