
on Econometrics 
By:  Torben G. Andersen (Northwestern University and CREATES); Rasmus T. Varneskov (Copenhagen Business School and CREATES) 
Abstract:  This paper studies the properties of standard predictive regressions in model economies, characterized through persistent vector autoregressive dynamics for the state variables and the associated series of interest. In particular, we consider a setting where all, or a subset, of the variables may be fractionally integrated, and note that this induces a spurious regression problem. We then propose a new inference and testing procedure, the local spectrum (LCM) approach, for the joint significance of the regressors, which is robust against the variables having different integration orders. The LCM procedure is based on (semi)parametric fractional filtering and band spectrum regression using a suitably selected set of frequency ordinates. We establish the asymptotic properties and explain how they differ from and extend existing procedures. Using these new inference and testing techniques, we explore the implications of assuming VAR dynamics in predictive regressions for the realized return variation. Standard least squares predictive regressions indicate that popular financial and macroeconomic variables carry valuable information about return volatility. In contrast, we find no significant evidence using our robust LCM procedure, indicating that prior conclusions may be premature. In fact, if anything, our results suggest the reverse causality, i.e., rising volatility predates adverse innovations to key macroeconomic variables. Simulations are employed to illustrate the relevance of the theoretical arguments for finite-sample inference. 
Keywords:  Endogeneity Bias, Fractional Integration, Frequency Domain Inference, Hypothesis Testing, Spurious Inference, Stochastic Volatility, VAR Models 
JEL:  C13 C14 C32 C52 C53 G12 
Date:  2018–02–27 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201809&r=all 
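The band spectrum regression idea at the heart of the LCM procedure can be illustrated with a minimal sketch: regress one series on another using only a selected band of frequency ordinates of their discrete Fourier transforms. This is a generic band spectrum estimator, not the authors' LCM procedure, and the series, band size, and seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
x = rng.standard_normal(n)
y = 0.5 * x + rng.standard_normal(n)          # true slope 0.5

# Discrete Fourier transforms of regressand and regressor
wy = np.fft.fft(y)
wx = np.fft.fft(x)

# Band spectrum regression: run least squares using only a chosen
# band of frequency ordinates (here ordinates 1..m, excluding the
# zero frequency), discarding frequencies outside the band.
m = 64
band = np.arange(1, m + 1)
beta_band = (np.sum(wx[band].conj() * wy[band])
             / np.sum(np.abs(wx[band]) ** 2)).real
```

Restricting the regression to a frequency band is what allows components with different (or spurious) low-frequency behavior to be excluded from the estimation.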
By:  Torben G. Andersen (Northwestern University and CREATES); Nicola Fusari (The Johns Hopkins University Carey Business School); Viktor Todorov (Northwestern University); Rasmus T. Varneskov (Northwestern University and CREATES) 
Abstract:  We develop parametric inference procedures for large panels of noisy option data in the setting where the underlying process is of pure-jump type, i.e., evolves only through a sequence of jumps. The panel consists of options written on the underlying asset with a (different) set of strikes and maturities available across observation times. We consider the asymptotic setting in which the cross-sectional dimension of the panel increases to infinity while its time span remains fixed. The information set is further augmented with high-frequency data on the underlying asset. Given a parametric specification for the risk-neutral asset return dynamics, the option prices are nonlinear functions of a time-invariant parameter vector and a time-varying latent state vector (or factors). Furthermore, no-arbitrage restrictions impose a direct link between some of the quantities that may be identified from the return and option data. These include the so-called jump activity index as well as the time-varying jump intensity. We propose penalized least squares estimation, in which we minimize the L_2 distance between observed and model-implied options and further penalize the deviation of model-implied quantities from their model-free counterparts measured via the high-frequency returns. We derive the joint asymptotic distribution of the parameters, factor realizations and high-frequency measures, which is mixed Gaussian. The different components of the parameter and state vector can exhibit different rates of convergence depending on the relative informativeness of the high-frequency return data and the option panel. 
Keywords:  Inference, Jump Activity, Large Data Sets, Nonlinear Factor Model, Options, Panel Data, Stable Convergence, Stochastic Jump Intensity 
JEL:  C51 C52 G12 
Date:  2018–01–10 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201804&r=all 
By:  Torben G. Andersen (Northwestern University and CREATES); Nicola Fusari (The Johns Hopkins University Carey Business School); Viktor Todorov (Northwestern University); Rasmus T. Varneskov (Northwestern University and CREATES) 
Abstract:  We provide a unifying inference theory for parametric nonlinear factor models based on a panel of noisy observations. The panel has a large cross-section and a time span that may be either small or large. Moreover, we incorporate an additional source of information provided by noisy observations on some known functions of the factor realizations. The estimation is carried out via penalized least squares, i.e., by minimizing the L_2 distance between observations from the panel and their model-implied counterparts, augmented by a penalty for the deviation of the extracted factors from the noisy signals for them. When the time dimension is fixed, the limit distribution of the parameter vector is mixed Gaussian with conditional variance depending on the path of the factor realizations. On the other hand, when the time span is large, the convergence rate is faster and the limit distribution is Gaussian with a constant variance. In this case, however, we incur an incidental parameter problem since, at each point in time, we need to recover the concurrent factor realizations. This leads to an asymptotic bias that is absent in the setting with a fixed time span. In either scenario, the limit distribution of the estimates for the factor realizations is mixed Gaussian, but it is related to the limiting distribution of the parameter vector only in the scenario with a fixed time horizon. Although the limit behavior is very different for small versus large time spans, we develop a feasible inference theory that applies, without modification, in either case. Hence, the user need not take a stand on the relative size of the time dimension of the panel. Similarly, we propose a time-varying data-driven weighting of the penalty in the objective function, which enhances efficiency by adapting to the relative quality of the signal for the factor realizations. 
Keywords:  Asymptotic Bias, Incidental Parameter Problem, Inference, Large Data Sets, Nonlinear Factor Model, Options, Panel Data, Stable Convergence, Stochastic Volatility 
JEL:  C51 C52 G12 
Date:  2018–01–10 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201803&r=all 
By:  JeanJacques Forneron 
Abstract:  This paper proposes a Sieve Simulated Method of Moments (Sieve-SMM) estimator for the parameters and the distribution of the shocks in nonlinear dynamic models where the likelihood and the moments are not tractable. An important concern with SMM, which matches sample moments with simulated moments, is that a parametric distribution is required, but economic quantities that depend on this distribution, such as welfare and asset prices, can be sensitive to misspecification. The Sieve-SMM estimator addresses this issue by flexibly approximating the distribution of the shocks with a Gaussian-and-tails mixture sieve. The asymptotic framework provides consistency, rate of convergence and asymptotic normality results, extending existing sieve estimation theory to a new framework with more general dynamics and latent variables. Monte Carlo simulations illustrate the finite sample properties of the estimator. Two empirical applications highlight the importance of the distribution of the shocks for estimates and counterfactuals. 
Date:  2019–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1902.01456&r=all 
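The simulated-method-of-moments machinery that Sieve-SMM builds on can be sketched for a toy MA(1) model with a parametric Gaussian shock distribution (i.e., without the sieve part); the model, the two matched moments, and all sample sizes are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)
n = 2000
theta0 = 0.5
e = rng.standard_normal(n + 1)
y = e[1:] + theta0 * e[:-1]            # MA(1) data, true theta = 0.5

# Sample moments to match: variance and first-order autocovariance
m_data = np.array([np.var(y), np.mean(y[1:] * y[:-1])])

# Fixed simulation draws (common random numbers keep the objective
# smooth in the parameter)
es = rng.standard_normal((10, n + 1))

def smm_obj(theta):
    """Distance between sample moments and simulated moments."""
    d = 0.0
    for e_s in es:
        ys = e_s[1:] + theta * e_s[:-1]
        m_sim = np.array([np.var(ys), np.mean(ys[1:] * ys[:-1])])
        d += np.sum((m_data - m_sim) ** 2)
    return d / len(es)

theta_hat = minimize_scalar(smm_obj, bounds=(0.0, 0.99),
                            method='bounded').x
```

The sieve part of the paper replaces the Gaussian shock distribution above with a flexible mixture whose dimension grows with the sample size.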
By:  Liang, Chong; Schienle, Melanie 
Abstract:  We provide a shrinkage-type methodology that allows for simultaneous model selection and estimation of vector error correction models (VECM) when the dimension is large and can increase with the sample size. Model determination is treated as a joint selection problem of cointegrating rank and autoregressive lags under respective practically valid sparsity assumptions. We show consistency of the selection mechanism of the resulting Lasso-VECM estimator under very general assumptions on dimension, rank and error terms. Moreover, with the computational complexity of a linear programming problem only, the procedure remains computationally tractable in high dimensions. We demonstrate the effectiveness of the proposed approach by a simulation study and an empirical application to recent CDS data after the financial crisis. 
Keywords:  High-dimensional time series, VECM, Cointegration rank and lag selection, Lasso, Credit Default Swap 
JEL:  C32 C52 
Date:  2019 
URL:  http://d.repec.org/n?u=RePEc:zbw:kitwps:124&r=all 
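The Lasso shrinkage that drives the selection step can be sketched in its generic form, a proximal-gradient (ISTA) Lasso on a sparse linear regression; this is not the Lasso-VECM estimator itself, and the design, penalty level, and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:2] = [1.5, -1.0]                        # sparse truth
y = X @ beta0 + 0.5 * rng.standard_normal(n)

# Lasso via proximal gradient (ISTA): gradient step on the squared
# loss, then soft-thresholding, which sets small coefficients to
# exactly zero and hence performs selection.
lam = 0.1
L = np.linalg.norm(X, 2) ** 2 / n              # Lipschitz constant
b = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ b - y) / n
    b = b - grad / L
    b = np.sign(b) * np.maximum(np.abs(b) - lam / L, 0.0)
```

The exact-zero property of the soft-threshold step is what makes simultaneous selection and estimation possible in one pass.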
By:  Yunus Emre Ergemen (Aarhus University and CREATES); Carlos Velasco (Universidad Carlos III de Madrid) 
Abstract:  We consider large N, T panel data models with fixed effects, a common factor allowing for cross-section dependence, and persistent data and shocks, which are assumed fractionally integrated. In a basic setup, the main interest is in the fractional parameter of the idiosyncratic component, which is estimated in first differences after factor removal by projection on the cross-section average. The pooled conditional-sum-of-squares estimate is root-NT consistent, but the normal asymptotic distribution might not be centered, requiring the time series dimension to grow faster than the cross-section size for correction. We develop tests of homogeneity of dynamics, including the degree of integration, that have nontrivial power under local departures from the null hypothesis affecting a nonnegligible fraction of cross-section units. A simulation study shows that our estimates and tests have good performance even in moderately small panels. 
Keywords:  Fractional integration, panel data, factor models, long memory, homogeneity test 
JEL:  C22 C23 
Date:  2018–03–12 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201811&r=all 
By:  Niels Haldrup (Aarhus University and CREATES); Carsten P. T. Rosenskjold (Aarhus University and CREATES) 
Abstract:  The prototypical Lee-Carter mortality model is characterized by a single common time factor that loads differently across age groups. In this paper we propose a factor model for the term structure of mortality in which multiple factors are designed to influence the age groups differently via parametric loading functions. We identify four different factors: a factor common to all age groups, factors for infant and adult mortality, and a factor for the "accident hump" that primarily affects mortality of relatively young adults and late teenagers. Since the factors are identified via restrictions on the loading functions, they are not designed to be orthogonal but can be dependent and may cointegrate when they have unit roots. We suggest two estimation procedures similar to those for the dynamic Nelson-Siegel term structure model. First, a two-step nonlinear least squares procedure based on cross-section regressions together with a separate model to estimate the dynamics of the factors. Second, a fully specified model estimated by maximum likelihood via the Kalman filter recursions after the model is put in state space form. We demonstrate the methodology on US and French mortality data. We find that the model provides a good fit of the relevant factors, and in a forecast comparison with a range of benchmark models it is found that, especially at longer horizons, variants of the parametric factor model have excellent forecast performance. 
Keywords:  Mortality Forecasting, Term Structure of Mortality, Factor Modelling, Cointegration 
JEL:  C1 C22 J10 J11 G22 
Date:  2018–01–12 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201806&r=all 
By:  Su, Liangjun (School of Economics, Singapore Management University); Miao, Ke (School of Economics, Singapore Management University); Jin, Sainan (School of Economics, Singapore Management University) 
Abstract:  We consider estimation and inference in approximate factor models with random missing values. We show that, given the low-rank structure of the common component, we can estimate the factors and factor loadings consistently with the missing values replaced by zeros. We establish the asymptotic distributions of the resulting estimators and of those based on the EM algorithm. We also propose a cross-validation-based method to determine the number of factors in factor models with or without missing values and justify its consistency. Simulations demonstrate that our cross-validation method is robust to fat tails in the error distribution and significantly outperforms some existing popular methods in terms of the percentage of correct determinations of the number of factors. An application to factor-augmented regression models shows that a proper treatment of the missing values can improve the out-of-sample forecasts of some macroeconomic variables. 
Keywords:  Cross-validation; Expectation-Maximization (EM) algorithm; Factor models; Matrix completion; Missing at random; Principal component analysis; Singular value decomposition 
JEL:  C23 C33 C38 
Date:  2019–01–15 
URL:  http://d.repec.org/n?u=RePEc:ris:smuesw:2019_004&r=all 
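The zero-replacement idea can be sketched with principal components on a low-rank panel whose missing entries are set to zero; rescaling by the observation rate recovers the common component. This omits the EM refinement and the inference theory, and all sizes and the rescaling device are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, r = 100, 60, 2
F = rng.standard_normal((T, r))          # factors
Lam = rng.standard_normal((N, r))        # loadings
C = F @ Lam.T                            # low-rank common component
X = C + 0.1 * rng.standard_normal((T, N))

# Delete 20% of the entries at random, then replace them by zeros
p_obs = 0.8
mask = rng.random(X.shape) < p_obs
X0 = np.where(mask, X, 0.0)

# Principal components (rank-r SVD) on the zero-filled panel; the
# expectation of X0 is p_obs * C, so dividing by the observation
# rate undoes the attenuation from zero-filling.
U, s, Vt = np.linalg.svd(X0, full_matrices=False)
common = (U[:, :r] * s[:r]) @ Vt[:r] / p_obs
```

The low-rank structure is what lets the averaging in the SVD wash out the noise injected by the zeroed-out cells.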
By:  Conrad, Christian; Schienle, Melanie 
Abstract:  We consider the problem of testing for an omitted multiplicative long-term component in GARCH-type models. Under the alternative, there is a two-component model with a short-term GARCH component that fluctuates around a smoothly time-varying long-term component driven by the dynamics of an explanatory variable. We suggest a Lagrange Multiplier statistic for testing the null hypothesis that the variable has no explanatory power. We derive the asymptotic theory for our test statistic and investigate its finite sample properties by Monte Carlo simulation. Our test also covers the mixed-frequency case, in which the returns are observed at a higher frequency than the explanatory variable. The usefulness of our procedure is illustrated by empirical applications to S&P 500 return data. 
Keywords:  GARCH-MIDAS, LM test, Long-Term Volatility, Mixed-Frequency Data, Volatility Component Models 
JEL:  C53 C58 E32 G12 
Date:  2019 
URL:  http://d.repec.org/n?u=RePEc:zbw:kitwps:121&r=all 
By:  Changli He (Tianjin University of Finance and Economics); Jian Kang (Tianjin University of Finance and Economics); Timo Teräsvirta (CREATES and Aarhus University, C.A.S.E., Humboldt-Universität zu Berlin); Shuhua Zhang (Tianjin University of Finance and Economics) 
Abstract:  In this paper we introduce an autoregressive model with seasonal dummy variables in which the coefficients of the seasonal dummies vary smoothly and deterministically over time. The error variance of the model is seasonally heteroskedastic and multiplicatively decomposed, the decomposition being similar to that in well-known ARCH and GARCH models. This variance is also allowed to be smoothly and deterministically time-varying. Under regularity conditions, consistency and asymptotic normality of the maximum likelihood estimators of the parameters of this model are proved. A test of constancy of the seasonal coefficients is derived. The test is generalised to specifying the parametric structure of the model. A test of constancy over time of the heteroskedastic error variance is also presented. The purpose of building this model is to use it for describing changing seasonality in the well-known monthly central England temperature series. More specifically, the idea is to find out in which way and by how much the monthly temperatures are varying over time during the period of more than 240 years, if they do. Misspecification tests are applied to the estimated model and the findings are discussed. 
Keywords:  global warming, nonlinear time series, changing seasonality, smooth transition, testing constancy 
JEL:  C22 C51 C52 Q54 
Date:  2018–04–25 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201815&r=all 
By:  Torben G. Andersen (Northwestern University, NBER, and CREATES); Martin Thyrsgaard (Aarhus University and CREATES); Viktor Todorov (Northwestern University) 
Abstract:  We develop a nonparametric test for deciding whether return volatility exhibits time-varying intraday periodicity using a long time series of high-frequency data. Our null hypothesis, commonly adopted in work on volatility modeling, is that volatility follows a stationary process combined with a constant time-of-day periodic component. We first construct time-of-day volatility estimates and studentize the high-frequency returns with these periodic components. If the intraday volatility periodicity is invariant over time, then the distribution of the studentized returns should be identical across the trading day. Consequently, the test is based on comparing the empirical characteristic function of the studentized returns across the trading day. The limit distribution of the test depends on the error in recovering volatility from discrete return data and the empirical process error associated with estimating volatility moments through their sample counterparts. Critical values are computed via an easy-to-implement simulation. In an empirical application to S&P 500 index returns, we find strong evidence of variation in the intraday volatility pattern driven in part by the current level of volatility. When market volatility is elevated, the period preceding the market close constitutes a significantly higher fraction of the total daily integrated volatility than is the case during low market volatility regimes. 
Keywords:  high-frequency data, periodicity, semimartingale, specification test, stochastic volatility 
JEL:  C51 C52 G12 
Date:  2018–01–12 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201805&r=all 
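The studentization step and the characteristic-function comparison can be sketched under the null of a constant intraday pattern; this is not the paper's test statistic or critical values, and the grid of evaluation points and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
days, m = 500, 78                        # 500 days, 78 intraday bins
# Deterministic diurnal volatility pattern (constant across days)
tod = 1.0 + 0.5 * np.cos(2 * np.pi * np.arange(m) / m)
r = rng.standard_normal((days, m)) * tod # simulated intraday returns

# Step 1: time-of-day volatility estimates (RMS across days per bin),
# then studentize the returns with these periodic components
sigma_tod = np.sqrt(np.mean(r ** 2, axis=0))
z = r / sigma_tod

# Step 2: compare the empirical characteristic functions of the
# studentized returns in the first and second halves of the day;
# under a constant periodicity they should coincide
u = np.linspace(0.5, 2.0, 8)

def ecf(x):
    return np.mean(np.exp(1j * np.outer(u, x)), axis=1)

diff = np.max(np.abs(ecf(z[:, : m // 2].ravel())
                     - ecf(z[:, m // 2 :].ravel())))
```

A time-varying pattern would leave residual heterogeneity in the studentized returns across the day, pushing `diff` away from zero.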
By:  Alexander Heinemann 
Abstract:  This paper studies joint inference on conditional volatility parameters and innovation moments by means of the bootstrap to test for the existence of moments of GARCH(p,q) processes. We propose a residual bootstrap to mimic the joint distribution of the quasi-maximum likelihood estimators and the empirical moments of the residuals, and we prove its validity. A bootstrap-based test for the existence of moments is proposed that provides asymptotically correctly sized tests without losing its consistency property. It is simple to implement and extends to other GARCH-type settings. A simulation study demonstrates the test's size and power properties in finite samples, and an empirical application illustrates the testing approach. 
Date:  2019–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1902.01808&r=all 
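The residual bootstrap can be sketched for an ARCH(1) rather than a general GARCH(p,q): fit by Gaussian QML, resample the standardized residuals, rebuild the series with the fitted recursion, and re-estimate, collecting a residual moment jointly with the parameters. The model, bootstrap size, and fourth-moment target are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, omega0, alpha0 = 2000, 0.2, 0.3
y = np.empty(n)
eps = rng.standard_normal(n)
yprev = 0.0
for t in range(n):                       # simulate an ARCH(1)
    y[t] = np.sqrt(omega0 + alpha0 * yprev ** 2) * eps[t]
    yprev = y[t]

def sigma2(w, a, y):
    """Conditional variances; the unconditional variance w/(1-a)
    stands in for the unobserved presample squared return."""
    return w + a * np.concatenate([[w / (1 - a)], y[:-1] ** 2])

def qml(params, y):
    """Gaussian quasi-log-likelihood (negative, up to constants)."""
    s2 = sigma2(params[0], params[1], y)
    return np.sum(np.log(s2) + y ** 2 / s2)

bnds = [(1e-4, None), (0.0, 0.95)]
w_hat, a_hat = minimize(qml, x0=[0.1, 0.1], args=(y,),
                        bounds=bnds, method='L-BFGS-B').x
resid = y / np.sqrt(sigma2(w_hat, a_hat, y))   # standardized residuals

# Residual bootstrap: resample residuals, rebuild the series with
# the fitted recursion, re-estimate, and record the fourth moment
m4_boot = []
for _ in range(50):
    e_star = rng.choice(resid, size=n, replace=True)
    y_star = np.empty(n)
    yprev = 0.0
    for t in range(n):
        y_star[t] = np.sqrt(w_hat + a_hat * yprev ** 2) * e_star[t]
        yprev = y_star[t]
    fb = minimize(qml, x0=[w_hat, a_hat], args=(y_star,),
                  bounds=bnds, method='L-BFGS-B').x
    m4_boot.append(np.mean((y_star / np.sqrt(sigma2(fb[0], fb[1],
                                                    y_star))) ** 4))
```

The bootstrap distribution of the fourth residual moment, taken jointly with the re-estimated parameters, is the kind of object the paper's moment-existence test is built on.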
By:  Ulrich Hounyo (University at Albany, State University of New York, and CREATES); Rasmus T. Varneskov (Copenhagen Business School and CREATES) 
Abstract:  We study inference for the local innovations of Itô semimartingales. Specifically, we construct a resampling procedure for the empirical CDF of high-frequency innovations that have been standardized using a nonparametric estimate of their stochastic scale (volatility) and truncated to rid them of the effect of "large" jumps. Our locally dependent wild bootstrap (LDWB) accommodates issues related to the stochastic scale and jumps as well as accounts for a special blockwise dependence structure induced by sampling errors. We show that the LDWB replicates first- and second-order limit theory from the usual empirical process and the stochastic scale estimate, respectively, as well as an asymptotic bias. Moreover, we design the LDWB sufficiently generally to establish asymptotic equivalence between it and a nonparametric local block bootstrap, also introduced here, up to second-order distribution theory. Finally, we introduce LDWB-aided Kolmogorov-Smirnov tests for local Gaussianity as well as local von Mises statistics, with and without bootstrap inference, and establish their asymptotic validity using the second-order distribution theory. The finite sample performance of CLT and LDWB-aided local Gaussianity tests is assessed in a simulation study as well as two empirical applications. Whereas the CLT test is oversized, even in large samples, the sizes of the LDWB tests are accurate, even in small samples. The empirical analysis verifies this pattern, in addition to providing new insights about the distributional properties of equity indices, commodities, exchange rates and popular macro finance variables. 
Keywords:  Bootstrap inference, High-frequency data, Itô semimartingales, Kolmogorov-Smirnov test, Stable processes, von Mises statistics 
JEL:  C12 C14 C15 G1 
Date:  2018–04–26 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201816&r=all 
By:  Bormann, Carsten; Schienle, Melanie 
Abstract:  An accurate assessment of tail inequalities and tail asymmetries of financial returns is key for risk management and portfolio allocation. We propose a new test procedure for detecting the full extent of such structural differences in the dependence of bivariate extreme returns. We decompose the testing problem into piecewise multiple comparisons of Cramér-von Mises distances of tail copulas. In this way, tail regions that cause differences in extreme dependence can be located and consequently be targeted by financial strategies. We derive the asymptotic properties of the test and provide a bootstrap approximation for finite samples. Moreover, we account for the multiplicity of the piecewise tail copula comparisons by adjusting individual p-values according to multiple testing techniques. Monte Carlo simulations demonstrate the test's superior finite-sample properties for common financial tail risk models, both in the i.i.d. and the sequentially dependent case. During the last 90 years in US stock markets, our test detects up to 20% more tail asymmetries than competing tests. This can be attributed to the presence of nonstandard tail dependence structures. We also find evidence of diminishing tail asymmetries during every major financial crisis, except for the 2007–09 crisis, reflecting a risk-return trade-off for extreme returns. 
Keywords:  tail dependence, tail copulas, tail asymmetry, tail inequality, extreme values, multiple testing 
JEL:  C12 C53 C58 
Date:  2019 
URL:  http://d.repec.org/n?u=RePEc:zbw:kitwps:122&r=all 
By:  Bruce E. Hansen; Seojeong Lee 
Abstract:  We provide a complete asymptotic distribution theory for clustered data with a large number of independent groups, generalizing the classic laws of large numbers, uniform laws, central limit theory, and clustered covariance matrix estimation. Our theory allows for clustered observations with heterogeneous and unbounded cluster sizes. Our conditions cleanly nest the classical results for i.n.i.d. observations, in the sense that our conditions specialize to the classical conditions under independent sampling. We use this theory to develop a full asymptotic distribution theory for estimation based on linear least squares, 2SLS, nonlinear MLE, and nonlinear GMM. 
Date:  2019–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1902.01497&r=all 
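The clustered covariance matrix estimation the theory covers can be sketched for OLS with the standard "sandwich" form, summing outer products of within-cluster scores; no finite-sample correction is applied, and the data-generating process is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
G, ng = 50, 20                          # 50 independent clusters of size 20
cluster = np.repeat(np.arange(G), ng)
# Errors with a common within-cluster component
u = rng.standard_normal(G)[cluster] + rng.standard_normal(G * ng)
X = np.column_stack([np.ones(G * ng), rng.standard_normal((G * ng, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + u

beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta

# Clustered sandwich covariance: the "meat" sums outer products of
# cluster-level scores, allowing arbitrary within-cluster dependence
XtX_inv = np.linalg.inv(X.T @ X)
meat = np.zeros((3, 3))
for g in range(G):
    idx = cluster == g
    sg = X[idx].T @ e[idx]              # score of cluster g
    meat += np.outer(sg, sg)
V_cluster = XtX_inv @ meat @ XtX_inv
se = np.sqrt(np.diag(V_cluster))
```

Only independence across clusters is used; nothing is assumed about the dependence of the 20 observations within each cluster, which is exactly the setting of the theory above.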
By:  Zervopoulos, Panagiotis; Emrouznejad, Ali; Sklavos, Sokratis 
Abstract:  The validity of data envelopment analysis (DEA) efficiency estimators depends on the robustness of the production frontier to measurement errors, specification errors and the dimension of the input-output space. It has been proven that DEA estimators within the interval (0, 1] are overestimated when finite samples are used, while asymptotically this bias reduces to zero. The nonparametric literature dealing with bias correction of efficiencies refers solely to estimators that do not exceed one. We prove that efficiency estimators both lower and higher than one are biased. A Bayesian DEA method is developed to correct the bias of efficiency estimators. This is a two-stage procedure of super-efficiency DEA followed by a Bayesian approach relying on consistent efficiency estimators. The method is applicable to ‘small’ and ‘medium’ samples. The new Bayesian DEA method is applied to two data sets of 50 and 100 E.U. banks. The mean square error, root mean square error and mean absolute error of the new method decrease as the sample size increases. 
Keywords:  Data envelopment analysis, Super-efficiency, Bayesian methods, Statistical inference, Banking 
JEL:  C11 C18 C44 M11 
Date:  2019–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:91886&r=all 
By:  Mathias Kloss; Thomas Kirschstein; Steffen Liebscher; Martin Petrick 
Abstract:  Sources of bias in empirical studies can be separated into those coming from the modelling domain (e.g., multicollinearity) and those coming from outliers. We propose a two-step approach to counter both issues: first, decontaminating the data with a multivariate outlier detection procedure, and second, consistently estimating the parameters of the production function. We apply this approach to a panel of German field crop data. The results show that the decontamination procedure detects multivariate outliers. In general, multivariate outlier control delivers more reasonable results, with higher precision in the estimation of some parameters, and appears to mitigate the effects of multicollinearity. 
Date:  2019–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1902.00678&r=all 
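The two-step logic can be sketched with a simple Mahalanobis-distance trimming rule in the joint regressor-outcome space followed by least squares on the retained rows; this stands in for, and is much cruder than, the multivariate outlier detection procedure the authors use, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
X = rng.standard_normal((n, 2))
y = 1.0 + X @ np.array([0.5, 0.7]) + 0.1 * rng.standard_normal(n)
out = rng.choice(n, 15, replace=False)         # contaminate 5% of rows
X[out] += 8.0
y[out] += 10.0

# Step 1: flag multivariate outliers in the joint (X, y) space via
# Mahalanobis distances around a (robust) median location
Z = np.column_stack([X, y])
mu = np.median(Z, axis=0)
Sinv = np.linalg.inv(np.cov(Z, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', Z - mu, Sinv, Z - mu)
keep = d2 < np.quantile(d2, 0.90)              # trim the 10% most outlying

# Step 2: estimate the regression on the decontaminated sample
A = np.column_stack([np.ones(keep.sum()), X[keep]])
b_clean = np.linalg.lstsq(A, y[keep], rcond=None)[0]
```

With gross contamination, least squares on the full sample would be badly biased; trimming first restores the slopes.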
By:  Yagi, Daisuke; Chen, Yining; Johnson, Andrew L.; Kuosmanen, Timo 
Abstract:  In this paper we examine a novel way of imposing shape constraints on a local polynomial kernel estimator. The proposed approach is referred to as Shape Constrained Kernel-weighted Least Squares (SCKLS). We prove uniform consistency of the SCKLS estimator with monotonicity and convexity/concavity constraints and establish its convergence rate. In addition, we propose a test to validate whether the shape constraints are correctly specified. The competitiveness of SCKLS is shown in a comprehensive simulation study. Finally, we analyze Chilean manufacturing data using the SCKLS estimator and quantify production in the plastics and wood industries. The results show that exporting firms have significantly higher productivity. 
Keywords:  Local Polynomials; Kernel Estimation; Multivariate Convex Regression; Nonparametric regression; Shape Constraints 
JEL:  C1 
Date:  2018–01–23 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:86556&r=all 
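The flavor of shape-constrained least squares can be conveyed with univariate isotonic regression via the pool-adjacent-violators algorithm, a much simpler cousin of SCKLS (no kernel weighting, no multivariate convexity constraints); the data are illustrative.

```python
import numpy as np

def pava(y):
    """Pool Adjacent Violators: least-squares fit of y under a
    nondecreasing (monotone) shape constraint."""
    blocks = []                          # stack of [mean, count] blocks
    for v in np.asarray(y, dtype=float):
        blocks.append([v, 1])
        # Merge blocks while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    return np.concatenate([np.full(int(w), v) for v, w in blocks])

rng = np.random.default_rng(11)
xgrid = np.linspace(0, 1, 100)
truth = xgrid ** 2                       # monotone target function
noisy = truth + 0.1 * rng.standard_normal(100)
fit = pava(noisy)
```

Because the fit is a projection onto the convex set of monotone vectors that contains the truth, imposing the (correct) shape constraint can never increase the distance to the true function.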
By:  Huang, Wenxin (Antai College of Economics and Management, Shanghai Jiao Tong University); Jin, Sainan (School of Economics, Singapore Management University); Su, Liangjun (School of Economics, Singapore Management University) 
Abstract:  We consider a panel cointegration model with latent group structures that allows for heterogeneous long-run relationships across groups. We extend Su, Shi, and Phillips’ (2016) classifier-Lasso (C-Lasso) method to nonstationary panels and allow for the presence of endogeneity in both the stationary and nonstationary regressors in the model. In addition, we allow the dimension of the stationary regressors to diverge with the sample size. We show that we can identify the individuals’ group membership and estimate the group-specific long-run cointegrated relationships simultaneously. We demonstrate the desirable property of uniform classification consistency and the oracle properties of both the C-Lasso estimators and their post-Lasso versions. The special case of dynamic penalized least squares is also studied. Simulations show superb finite sample performance in both classification and estimation. In an empirical application, we study potentially heterogeneous behavior in testing the validity of the long-run purchasing power parity (PPP) hypothesis in the post-Bretton Woods period 1975–2014, covering 99 countries. We identify two groups in the period 1975–1998 and three in the period 1999–2014. The results confirm that at least some countries support the long-run PPP hypothesis in the post-Bretton Woods period. 
Keywords:  Classifier Lasso; Dynamic OLS; Heterogeneity; Latent group structure; Nonstationarity; Penalized least squares; Panel cointegration; Purchasing power parity 
JEL:  C13 C33 C51 F31 
Date:  2018–11–20 
URL:  http://d.repec.org/n?u=RePEc:ris:smuesw:2019_003&r=all 
By:  Millimet, Daniel L. (Southern Methodist University); Li, Hao (Nanjing Audit University); Roychowdhury, Punarjit (Indian Institute of Management) 
Abstract:  The economic mobility of individuals and households is of fundamental interest. While many measures of economic mobility exist, reliance on transition matrices remains pervasive due to simplicity and ease of interpretation. However, estimation of transition matrices is complicated by the well-acknowledged problem of measurement error in self-reported and even administrative data. Existing methods of addressing measurement error are complex, rely on numerous strong assumptions, and often require data from more than two periods. In this paper, we investigate what can be learned about economic mobility, as measured via transition matrices, while formally accounting for measurement error in a reasonably transparent manner. To do so, we develop a nonparametric partial identification approach to bound transition probabilities under various assumptions on the measurement error and mobility processes. This approach is applied to panel data from the United States to explore short-run mobility before and after the Great Recession. 
Keywords:  partial identification, measurement error, mobility, transition matrices, poverty 
JEL:  C18 D31 I32 
Date:  2019–01 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp12085&r=all 
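The partial-identification spirit can be sketched with a crude worst-case bound: if at most a fraction q of second-period statuses in a group may be misreported, the estimated transition probability can be off by at most q in either direction. The authors' bounds are tighter and rest on richer assumptions; all numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
s1 = rng.integers(0, 2, n)                      # 0 = poor, 1 = non-poor
s2 = np.where(rng.random(n) < 0.8, s1, 1 - s1)  # persistent status

# Point estimate of the poor -> non-poor transition probability
p_hat = np.mean(s2[s1 == 0] == 1)

# Worst-case bounds when up to a fraction q of period-2 statuses in
# this group may be misreported: each misreport can shift the cell
# count by one, so the true probability lies within q of p_hat.
q = 0.05
lower = max(0.0, p_hat - q)
upper = min(1.0, p_hat + q)
```

Rather than a single (possibly badly biased) point estimate, the output is an interval that is honest about what the measurement error permits.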
By:  Grossmann, Volker (University of Fribourg); Osikominu, Aderonke (University of Hohenheim) 
Abstract:  In the absence of randomized controlled experiments, identification is often pursued via instrumental variable (IV) strategies, typically two-stage least squares estimations. According to Bayes' rule, however, under a low ex ante probability that a hypothesis is true (e.g., that an excluded instrument is partially correlated with an endogenous regressor), the interpretation of the estimation results may be fundamentally flawed. This paper argues that rigorous theoretical reasoning is key to designing credible identification strategies, first and foremost to finding candidates for valid instruments. We discuss prominent IV analyses from the macro-development literature to illustrate the potential benefit of structurally derived IV approaches. 
Keywords:  Bayes' Rule, economic development, identification, instrumental variable estimation, macroeconomic theory 
JEL:  C10 C36 O11 
Date:  2019–01 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp12080&r=all 
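The Bayes'-rule point can be made concrete with a back-of-the-envelope calculation: with a low prior probability that the identifying hypothesis is true, even a significant test result leaves only a modest posterior. All numbers are illustrative.

```python
# Posterior probability that a hypothesis is true given a significant
# test result, via Bayes' rule (all numbers purely illustrative)
prior = 0.10      # ex ante probability the hypothesis is true
power = 0.80      # P(significant | hypothesis true)
alpha = 0.05      # P(significant | hypothesis false), the test size

posterior = prior * power / (prior * power + (1 - prior) * alpha)
```

Here the posterior is 0.08 / (0.08 + 0.045) = 0.64: far from certainty despite the "significant" result, which is the sense in which a low prior can undermine the interpretation of IV estimates.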
By:  Schmidheiny, Kurt (University of Basel); Siegloch, Sebastian (University of Mannheim) 
Abstract:  We discuss important features and pitfalls of panel-data event study designs. We derive the following main results: First, event study designs and distributed-lag models are numerically identical, leading to the same parameter estimates after correct reparametrization. Second, binning of effect window endpoints allows identification of dynamic treatment effects even when no never-treated units are present. Third, classic dummy variable event study designs can be naturally generalized to models that account for multiple events of different sign and intensity of the treatment, which are particularly interesting for research in labor economics and public finance. 
Keywords:  event study, distributed-lag, applied microeconomics, credibility revolution 
JEL:  C23 C51 H00 J08 
Date:  2019–01 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp12079&r=all 
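The first result, numerical identity of event study and distributed-lag designs after reparametrization, can be verified directly in a stylized panel where all units are treated at the same date and the endpoint is binned: the two design matrices span the same column space, so fitted values coincide and the event-study coefficients are cumulative sums of the distributed-lag coefficients. Sizes and the outcome process are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
T, N, event = 6, 40, 3                  # all units treated at t = 3
t = np.tile(np.arange(T), N)
y = 0.2 * np.maximum(t - event + 1, 0) + rng.standard_normal(T * N)

# Event-study design: intercept plus effect dummies for event times
# 0, 1, and 2+ (endpoint binned)
D = np.column_stack([np.ones(T * N),
                     (t == event).astype(float),
                     (t == event + 1).astype(float),
                     (t >= event + 2).astype(float)])
# Distributed-lag design: intercept plus current and lagged
# treatment indicators
DL = np.column_stack([np.ones(T * N),
                      (t >= event).astype(float),
                      (t >= event + 1).astype(float),
                      (t >= event + 2).astype(float)])

b_es = np.linalg.lstsq(D, y, rcond=None)[0]
b_dl = np.linalg.lstsq(DL, y, rcond=None)[0]
yhat_es = D @ b_es
yhat_dl = DL @ b_dl
```

Since each indicator in DL is a sum of event-time dummies in D, the two designs are invertible linear transformations of each other: the fitted values agree to machine precision and cumulating the distributed-lag coefficients reproduces the event-study coefficients.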
By:  Francisco (F.) Blasques (VU Amsterdam, The Netherlands); Marc Nientker (VU Amsterdam, The Netherlands) 
Abstract:  This paper introduces a new solution method for Dynamic Stochastic General Equilibrium (DSGE) models that produces non-explosive paths. The proposed solution method is as fast as standard perturbation methods and can be easily implemented in existing software packages like Dynare, as it is obtained directly as a transformation of existing perturbation solutions proposed by Judd and Guu (1997) and Schmitt-Grohé and Uribe (2004), among others. The transformed perturbation method shares the same advantageous function approximation properties as standard higher order perturbation methods and, in contrast to those methods, generates stable sample paths that are stationary, geometrically ergodic and absolutely regular. Additionally, moments are shown to be bounded. The method is an alternative to the pruning method proposed in Kim et al. (2008). The advantages of our approach are that, unlike pruning, it does not sacrifice accuracy around the steady state by ignoring higher order effects, and it delivers a policy function. Moreover, the newly proposed solution is always more accurate globally than standard perturbation methods. We demonstrate the superior accuracy of our method in a range of examples. 
Keywords:  Higher-order perturbation approximation; non-explosive simulations; stochastic stability 
JEL:  C15 C63 E00 
Date:  2019–02–05 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20190012&r=all 
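For intuition on why naive higher-order simulations can explode while stabilized schemes do not, here is a stylized scalar second-order recursion compared with the Kim et al. (2008) pruning scheme that the abstract names as the alternative. All coefficients and the shock are illustrative, not taken from the paper:

```python
import numpy as np

rho, sigma, a = 0.9, 0.1, 2.0   # illustrative second-order policy coefficients
T = 500
rng = np.random.default_rng(1)
eps = rng.normal(size=T)
eps[0] = 20.0                   # one large shock pushes the state past the stable region

# Naive second-order simulation: x' = rho*x + 0.5*a*x^2 + sigma*eps
x = np.zeros(T + 1)
with np.errstate(over="ignore", invalid="ignore"):
    for t in range(T):
        x[t + 1] = rho * x[t] + 0.5 * a * x[t] ** 2 + sigma * eps[t]

# Pruned simulation: the quadratic term is evaluated on a separately
# simulated first-order path, which keeps the recursion stable.
xf = np.zeros(T + 1)            # first-order state
xs = np.zeros(T + 1)            # second-order correction
for t in range(T):
    xf[t + 1] = rho * xf[t] + sigma * eps[t]
    xs[t + 1] = rho * xs[t] + 0.5 * a * xf[t] ** 2
x_pruned = xf + xs

print(np.isfinite(x).all())         # False: the naive path diverges
print(np.isfinite(x_pruned).all())  # True: the pruned path stays bounded
```

The paper's transformed perturbation method differs from pruning (it retains higher-order accuracy near the steady state and delivers an actual policy function), but the instability it addresses is the one shown by the naive recursion above.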
By:  Eric Beutner; Alexander Heinemann; Stephan Smeekes 
Abstract:  In this paper we propose a general framework to analyze prediction in time series models and show how a wide class of popular time series models satisfies this framework. We postulate a set of high-level assumptions, and formally verify these assumptions for the aforementioned time series models. Our framework coincides with that of Beutner et al. (2019, arXiv:1710.00643) who establish the validity of conditional confidence intervals for predictions made in this framework. The current paper therefore complements the results in Beutner et al. (2019, arXiv:1710.00643) by providing practically relevant applications of their theory. 
Date:  2019–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1902.01622&r=all 
By:  Karlson Pfannschmidt; Pritha Gupta; Eyke Hüllermeier 
Abstract:  We study the problem of learning choice functions, which play an important role in various domains of application, most notably in the field of economics. Formally, a choice function is a mapping from sets to sets: Given a set of choice alternatives as input, a choice function identifies a subset of most preferred elements. Learning choice functions from suitable training data comes with a number of challenges. For example, the sets provided as input and the subsets produced as output can be of any size. Moreover, since the order in which alternatives are presented is irrelevant, a choice function should be symmetric. Perhaps most importantly, choice functions are naturally contextdependent, in the sense that the preference in favor of an alternative may depend on what other options are available. We formalize the problem of learning choice functions and present two general approaches based on two representations of contextdependent utility functions. Both approaches are instantiated by means of appropriate neural network architectures, and their performance is demonstrated on suitable benchmark tasks. 
Date:  2019–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1901.10860&r=all 
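The symmetry requirement described in this abstract (permuting the presented alternatives should permute the choices accordingly) can be met by scoring each alternative against an order-invariant aggregate of the whole set. A minimal numpy sketch of such a context-dependent utility, with arbitrary random weights standing in for the learned neural network parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def choice_function(X, W_item, W_ctx, threshold=0.0):
    """Score each alternative given the set context; return the chosen subset.

    X: (n_alternatives, n_features). The context is a symmetric
    (order-invariant) aggregate of all alternatives, so permuting the
    input rows permutes the output in the same way.
    """
    context = X.mean(axis=0)                   # symmetric set aggregate
    utilities = X @ W_item + context @ W_ctx   # context-dependent utility
    return utilities > threshold               # chosen subset, of any size

d = 3
W_item = rng.normal(size=d)
W_ctx = rng.normal(size=d)
X = rng.normal(size=(5, d))

chosen = choice_function(X, W_item, W_ctx)

# Symmetry check: permuting the alternatives permutes the choices.
perm = rng.permutation(5)
chosen_perm = choice_function(X[perm], W_item, W_ctx)
print(np.array_equal(chosen_perm, chosen[perm]))  # True
```

Because the output is a thresholded score per alternative, input sets and chosen subsets can be of any size, matching the challenges listed in the abstract.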
By:  Cohen, Jeffrey P. (University of Connecticut); Coughlin, Cletus C. (Federal Reserve Bank of St. Louis); Zabel, Jeffrey (Tufts University) 
Abstract:  In this study, we develop and apply a new methodology for obtaining accurate and equitable property value assessments. This methodology adds a time dimension to the Geographically Weighted Regressions (GWR) framework, which we call Time-Geographically Weighted Regressions (TGWR). That is, when generating assessed values, we consider sales that are close in time and space to the designated unit. We think this is an important improvement over GWR, since it increases the number of comparable sales that can be used to generate assessed values. Furthermore, units that sold at an earlier time but are spatially near the designated unit are likely to be closer in value than units sold at a similar time but farther away geographically. This is because location is such an important determinant of house value. We apply this new methodology to sales data for residential properties in 50 municipalities in Connecticut for 1994–2013 and 145 municipalities in Massachusetts for 1987–2012. This allows us to compare results over a long time period and across municipalities in two states. We find that TGWR performs better than OLS with fixed effects and leads to less regressive assessed values than OLS. In many cases, TGWR performs better than GWR that ignores the time dimension. In at least one specification, several suburban and rural towns meet the IAAO Coefficient of Dispersion cutoffs for acceptable accuracy. 
Keywords:  geographically weighted regression; assessment; property value; coefficient of dispersion; price-related differential 
JEL:  C14 H71 R31 R51 
Date:  2019–01–30 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2019005&r=all 
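The core of a GWR-style estimator is weighted least squares with kernel weights that decay in distance; TGWR, as described, lets the weights decay in both space and time. A hypothetical sketch with a Gaussian product kernel and made-up bandwidths (the paper's actual kernel and bandwidth choices may differ):

```python
import numpy as np

def tgwr_predict(X, y, coords, times, x0, coord0, t0, h_space=3.0, h_time=5.0):
    """Local WLS assessed value at a target property, weighting comparable
    sales by closeness in both space and time (Gaussian product kernel)."""
    d_space = np.linalg.norm(coords - coord0, axis=1)
    d_time = np.abs(times - t0)
    w = np.exp(-0.5 * (d_space / h_space) ** 2) * np.exp(-0.5 * (d_time / h_time) ** 2)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # local coefficients
    return x0 @ beta

rng = np.random.default_rng(3)
n = 300
coords = rng.uniform(0, 10, size=(n, 2))               # sale locations
times = rng.uniform(0, 20, size=n)                     # sale dates
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + hedonics
beta_true = np.array([10.0, 2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)      # synthetic sale prices

x0 = np.array([1.0, 0.5, 0.5])                         # target property's hedonics
pred = tgwr_predict(X, y, coords, times, x0=x0,
                    coord0=np.array([5.0, 5.0]), t0=10.0)
print(pred)
```

On this synthetic, globally linear data the local fit simply recovers the true hedonic price; the method's value in the paper comes from letting the coefficients vary over space while the time kernel enlarges the pool of comparable sales.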
By:  Leopoldo Catania (Aarhus University and CREATES); Tommaso Proietti (CEIS & DEF, University of Rome "Tor Vergata") 
Abstract:  The prediction of volatility is of primary importance for business applications in risk management, asset allocation and pricing of derivative instruments. This paper proposes a novel measurement model which takes into consideration the possibly time-varying interaction of realized volatility and asset returns, according to a bivariate model aiming at capturing the main stylised facts: (i) the long memory of the volatility process, (ii) the heavy-tailedness of the returns distribution, and (iii) the negative dependence of volatility and daily market returns. We assess the relevance of "volatility in volatility" and time-varying "leverage" effects in the out-of-sample forecasting performance of the model, and evaluate the density forecasts of the future level of market volatility. The empirical results illustrate that our specification can outperform the benchmark HAR-RV, in terms of both point and density forecasts. 
Keywords:  realized volatility, forecasting, leverage effect, volatility in volatility 
Date:  2019–02–06 
URL:  http://d.repec.org/n?u=RePEc:rtv:ceisrp:450&r=all 
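For reference, the HAR-RV benchmark mentioned in this abstract regresses next-day realized variance on daily, weekly (5-day) and monthly (22-day) averages of past realized variance. A sketch of the regressor construction on toy data (the series here is synthetic noise, so the estimated coefficients themselves are not meaningful):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 600
rv = np.exp(0.3 * rng.normal(size=T))   # toy daily realized variance series

def rolling_mean(x, w):
    """Trailing w-day averages: element i covers x[i:i+w]."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

# For each target day t (t = 22, ..., T-1), the regressors are the
# daily, weekly and monthly averages ending at t-1.
y = rv[22:]
rv_d = rv[21:-1]                        # previous day's RV
rv_w = rolling_mean(rv, 5)[17:-1]       # mean of rv[t-5:t]
rv_m = rolling_mean(rv, 22)[:-1]        # mean of rv[t-22:t]

X = np.column_stack([np.ones_like(y), rv_d, rv_w, rv_m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.shape)   # (4,): intercept plus daily, weekly, monthly loadings
```

The paper's bivariate model aims to beat this benchmark by additionally capturing volatility-in-volatility and time-varying leverage effects.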
By:  Murasawa, Yasutomo 
Abstract:  The consumption Euler equation implies that the output growth rate and the real interest rate are of the same order of integration; thus if the real interest rate is I(1), then so is the output growth rate with possible cointegration, and log output is I(2). This paper extends the multivariate Beveridge-Nelson decomposition to such a case, and develops a Bayesian method to obtain error bands. The paper applies the method to US data to estimate the natural rates (or their permanent components) and gaps of output, inflation, interest, and unemployment jointly, and finds that allowing for cointegration gives much bigger estimates of all gaps. 
Keywords:  Natural rate, Output gap, Trend-cycle decomposition, Trend inflation, Unit root, Vector error correction model (VECM) 
JEL:  C11 C32 C82 E32 
Date:  2019–02–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:91979&r=all 
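For intuition, the univariate I(1) Beveridge-Nelson decomposition defines the trend as the current level plus all expected future growth in excess of its mean; when the growth rate is an AR(1) with persistence phi, that correction is phi/(1-phi) times the current deviation from mean growth. A toy illustration of this textbook case (the paper's contribution is the harder multivariate I(2)/cointegrated extension, not sketched here):

```python
import numpy as np

rng = np.random.default_rng(4)
T, phi, mu = 400, 0.5, 0.3

# Simulate log output whose growth rate is a stationary AR(1).
dy = np.empty(T)
dy[0] = mu
for t in range(1, T):
    dy[t] = mu + phi * (dy[t - 1] - mu) + rng.normal(scale=0.5)
y = np.cumsum(dy)

# Beveridge-Nelson trend: tau_t = y_t + phi/(1-phi) * (dy_t - mu),
# i.e. the level plus expected future above-mean growth.
tau = y + phi / (1 - phi) * (dy - mu)
gap = y - tau   # stationary cycle component

print(np.std(gap))
```

Here the gap is stationary by construction; in the paper's I(2) setting the decomposition instead delivers joint natural-rate and gap estimates for output, inflation, interest and unemployment, with Bayesian error bands.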