nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒02‒11
28 papers chosen by
Sune Karlsson
Örebro universitet

  1. Consistent Inference for Predictive Regressions in Persistent VAR Economies By Torben G. Andersen; Rasmus T. Varneskov
  2. Option Panels in Pure-Jump Settings By Torben G. Andersen; Nicola Fusari; Viktor Todorov; Rasmus T. Varneskov
  3. Unified Inference for Nonlinear Factor Models from Panels with Fixed and Large Time Span By Torben G. Andersen; Nicola Fusari; Viktor Todorov; Rasmus T. Varneskov
  4. A Sieve-SMM Estimator for Dynamic Models By Jean-Jacques Forneron
  5. Determination of vector error correction models in high dimensions By Liang, Chong; Schienle, Melanie
  6. Persistence Heterogeneity Testing in Panels with Interactive Fixed Effects By Yunus Emre Ergemen; Carlos Velasco
  7. A Parametric Factor Model of the Term Structure of Mortality By Niels Haldrup; Carsten P. T. Rosenskjold
  8. On Factor Models with Random Missing: EM Estimation, Inference, and Cross Validation By Su, Liangjun; Miao, Ke; Jin, Sainan
  9. Testing for an omitted multiplicative long-term component in GARCH models By Conrad, Christian; Schienle, Melanie
  10. The Shifting Seasonal Mean Autoregressive Model and Seasonality in the Central England Monthly Temperature Series, 1772-2016 By Changli He; Jian Kang; Timo Teräsvirta; Shuhua Zhang
  11. Time-Varying Periodicity in Intraday Volatility By Torben G. Andersen; Martin Thyrsgaard; Viktor Todorov
  12. A Bootstrap Test for the Existence of Moments for GARCH Processes By Alexander Heinemann
  13. Inference for Local Distributions at High Sampling Frequencies: A Bootstrap Approach By Ulrich Hounyo; Rasmus T. Varneskov
  14. Detecting structural differences in tail dependence of financial time series By Bormann, Carsten; Schienle, Melanie
  15. Asymptotic Theory for Clustered Samples By Bruce E. Hansen; Seojeong Lee
  16. A Bayesian approach for correcting bias of data envelopment analysis estimators By Zervopoulos, Panagiotis; Emrouznejad, Ali; Sklavos, Sokratis
  17. Robust Productivity Analysis: An application to German FADN data By Mathias Kloss; Thomas Kirschstein; Steffen Liebscher; Martin Petrick
  18. Shape constrained kernel-weighted least squares: Estimating production functions for Chilean manufacturing industries By Yagi, Daisuke; Chen, Yining; Johnson, Andrew L.; Kuosmanen, Timo
  19. Identifying Latent Grouped Patterns in Cointegrated Panels By Huang, Wenxin; Jin, Sainan; Su, Liangjun
  20. Partial Identification of Economic Mobility: With an Application to the United States By Millimet, Daniel L.; Li, Hao; Roychowdhury, Punarjit
  21. Let the Data Speak? On the Importance of Theory-Based Instrumental Variable Estimations By Grossmann, Volker; Osikominu, Aderonke
  22. On Event Study Designs and Distributed-Lag Models: Equivalence, Generalization and Practical Implications By Schmidheiny, Kurt; Siegloch, Sebastian
  23. Transformed Perturbation Solutions for Dynamic Stochastic General Equilibrium Models By Francisco (F.) Blasques; Marc Nientker
  24. A General Framework for Prediction in Time Series Models By Eric Beutner; Alexander Heinemann; Stephan Smeekes
  25. Learning Choice Functions By Karlson Pfannschmidt; Pritha Gupta; Eyke H\"ullermeier
  26. Time-Geographically Weighted Regressions and Residential Property Value Assessment By Cohen, Jeffrey P.; Coughlin, Cletus C.; Zabel, Jeffrey
  27. Forecasting Volatility with Time-Varying Leverage and Volatility of Volatility Effects By Leopoldo Catania; Tommaso Proietti
  28. Bayesian multivariate Beveridge--Nelson decomposition of I(1) and I(2) series with cointegration By Murasawa, Yasutomo

  1. By: Torben G. Andersen (Northwestern University and CREATES); Rasmus T. Varneskov (Copenhagen Business School and CREATES)
    Abstract: This paper studies the properties of standard predictive regressions in model economies, characterized through persistent vector autoregressive dynamics for the state variables and the associated series of interest. In particular, we consider a setting where all, or a subset, of the variables may be fractionally integrated, and note that this induces a spurious regression problem. We then propose a new inference and testing procedure - the local spectrum (LCM) approach - for the joint significance of the regressors, which is robust against the variables having different integration orders. The LCM procedure is based on (semi-)parametric fractional-filtering and band spectrum regression using a suitably selected set of frequency ordinates. We establish the asymptotic properties and explain how they differ from and extend existing procedures. Using these new inference and testing techniques, we explore the implications of assuming VAR dynamics in predictive regressions for the realized return variation. Standard least squares predictive regressions indicate that popular financial and macroeconomic variables carry valuable information about return volatility. In contrast, we find no significant evidence using our robust LCM procedure, indicating that prior conclusions may be premature. In fact, if anything, our results suggest the reverse causality, i.e., rising volatility predates adverse innovations to key macroeconomic variables. Simulations are employed to illustrate the relevance of the theoretical arguments for finite-sample inference.
    Keywords: Endogeneity Bias, Fractional Integration, Frequency Domain Inference, Hypothesis Testing, Spurious Inference, Stochastic Volatility, VAR Models
    JEL: C13 C14 C32 C52 C53 G12
    Date: 2018–02–27
  2. By: Torben G. Andersen (Northwestern University and CREATES); Nicola Fusari (The Johns Hopkins University Carey Business School); Viktor Todorov (Northwestern University); Rasmus T. Varneskov (Northwestern University and CREATES)
    Abstract: We develop parametric inference procedures for large panels of noisy option data in the setting where the underlying process is of pure-jump type, i.e., evolves only through a sequence of jumps. The panel consists of options written on the underlying asset with a (different) set of strikes and maturities available across observation times. We consider the asymptotic setting in which the cross-sectional dimension of the panel increases to infinity while its time span remains fixed. The information set is further augmented with high-frequency data on the underlying asset. Given a parametric specification for the risk-neutral asset return dynamics, the option prices are nonlinear functions of a time-invariant parameter vector and a time-varying latent state vector (or factors). Furthermore, no-arbitrage restrictions impose a direct link between some of the quantities that may be identified from the return and option data. These include the so-called jump activity index as well as the time-varying jump intensity. We propose penalized least squares estimation in which we minimize the L_2 distance between observed and model-implied options and further penalize for the deviation of model-implied quantities from their model-free counterparts measured via the high-frequency returns. We derive the joint asymptotic distribution of the parameters, factor realizations and high-frequency measures, which is mixed Gaussian. The different components of the parameter and state vector can exhibit different rates of convergence depending on the relative informativeness of the high-frequency return data and the option panel.
    Keywords: Inference, Jump Activity, Large Data Sets, Nonlinear Factor Model, Options, Panel Data, Stable Convergence, Stochastic Jump Intensity
    JEL: C51 C52 G12
    Date: 2018–01–10
  3. By: Torben G. Andersen (Northwestern University and CREATES); Nicola Fusari (The Johns Hopkins University Carey Business School); Viktor Todorov (Northwestern University); Rasmus T. Varneskov (Northwestern University and CREATES)
    Abstract: We provide unifying inference theory for parametric nonlinear factor models based on a panel of noisy observations. The panel has a large cross-section and a time span that may be either small or large. Moreover, we incorporate an additional source of information provided by noisy observations on some known functions of the factor realizations. The estimation is carried out via penalized least squares, i.e., by minimizing the L_2 distance between observations from the panel and their model-implied counterparts, augmented by a penalty for the deviation of the extracted factors from the noisy signals for them. When the time dimension is fixed, the limit distribution of the parameter vector is mixed Gaussian with conditional variance depending on the path of the factor realizations. On the other hand, when the time span is large, the convergence rate is faster and the limit distribution is Gaussian with a constant variance. In this case, however, we incur an incidental parameter problem since, at each point in time, we need to recover the concurrent factor realizations. This leads to an asymptotic bias that is absent in the setting with a fixed time span. In either scenario, the limit distribution of the estimates for the factor realizations is mixed Gaussian, but is related to the limiting distribution of the parameter vector only in the scenario with a fixed time horizon. Although the limit behavior is very different for the small versus large time span, we develop a feasible inference theory that applies, without modification, in either case. Hence, the user need not take a stand on the relative size of the time dimension of the panel. Similarly, we propose a time-varying data-driven weighting of the penalty in the objective function, which enhances efficiency by adapting to the relative quality of the signal for the factor realizations.
    Keywords: Asymptotic Bias, Incidental Parameter Problem, Inference, Large Data Sets, Nonlinear Factor Model, Options, Panel Data, Stable Convergence, Stochastic Volatility
    JEL: C51 C52 G12
    Date: 2018–01–10
  4. By: Jean-Jacques Forneron
    Abstract: This paper proposes a Sieve Simulated Method of Moments (Sieve-SMM) estimator for the parameters and the distribution of the shocks in nonlinear dynamic models where the likelihood and the moments are not tractable. An important concern with SMM, which matches sample moments with simulated moments, is that a parametric distribution of the shocks is required, but economic quantities that depend on this distribution, such as welfare and asset prices, can be sensitive to misspecification. The Sieve-SMM estimator addresses this issue by flexibly approximating the distribution of the shocks with a Gaussian and tails mixture sieve. The asymptotic framework provides consistency, rate of convergence and asymptotic normality results, extending existing sieve estimation theory to a new framework with more general dynamics and latent variables. Monte-Carlo simulations illustrate the finite sample properties of the estimator. Two empirical applications highlight the importance of the distribution of the shocks for estimates and counterfactuals.
    Date: 2019–02
  5. By: Liang, Chong; Schienle, Melanie
    Abstract: We provide a shrinkage type methodology which allows for simultaneous model selection and estimation of vector error correction models (VECM) when the dimension is large and can increase with sample size. Model determination is treated as a joint selection problem of cointegrating rank and autoregressive lags under respective practically valid sparsity assumptions. We show consistency of the selection mechanism by the resulting Lasso-VECM estimator under very general assumptions on dimension, rank and error terms. Moreover, with computational complexity of a linear programming problem only, the procedure remains computationally tractable in high dimensions. We demonstrate the effectiveness of the proposed approach by a simulation study and an empirical application to recent CDS data after the financial crisis.
    Keywords: High-dimensional time series,VECM,Cointegration rank and lag selection,Lasso,Credit Default Swap
    JEL: C32 C52
    Date: 2019
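The L1-penalty idea in the abstract above — shrinking coefficients on irrelevant lags of a VECM to exactly zero — can be sketched in a much-simplified bivariate setting. This is not the authors' estimator (which additionally selects the cointegrating rank); the plain coordinate-descent Lasso and the simulated system below are illustrative only.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Coordinate-descent Lasso for (1/2n)||y - Xb||^2 + lam*||b||_1.
    Assumes the columns of X are standardized (mean 0, variance 1)."""
    n, p = X.shape
    b = np.zeros(p)
    r = y.copy()
    for _ in range(n_iter):
        for j in range(p):
            r = r + X[:, j] * b[j]                  # partial residual
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0)
            r = r - X[:, j] * b[j]
    return b

# Simulate a bivariate cointegrated system: y2 is a random walk and y1
# error-corrects toward it; no extra short-run lags are actually needed.
rng = np.random.default_rng(1)
T = 500
y2 = np.cumsum(rng.normal(size=T))
y1 = np.empty(T)
y1[0] = y2[0]
for t in range(1, T):
    y1[t] = y1[t - 1] - 0.5 * (y1[t - 1] - y2[t - 1]) + rng.normal()
Y = np.column_stack([y1, y2])

# Regressors for the Delta-y1 equation: lagged levels plus 4 lagged differences
p_lags = 4
dY = np.diff(Y, axis=0)
Xreg = np.column_stack(
    [Y[p_lags:-1]] + [dY[p_lags - j:-j] for j in range(1, p_lags + 1)]
)
target = dY[p_lags:, 0]

Xs = (Xreg - Xreg.mean(0)) / Xreg.std(0)            # standardize columns
b = lasso_cd(Xs, target - target.mean(), lam=0.1)
n_zero = int(np.sum(b == 0.0))                      # lags pruned by the penalty
```

The soft-thresholding step is what produces exact zeros, i.e., lag selection and estimation happen in one pass.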
  6. By: Yunus Emre Ergemen (Aarhus University and CREATES); Carlos Velasco (Universidad Carlos III de Madrid)
    Abstract: We consider large N,T panel data models with fixed effects, a common factor allowing for cross-section dependence, and persistent data and shocks, which are assumed fractionally integrated. In a basic setup, the main interest is on the fractional parameter of the idiosyncratic component, which is estimated in first differences after factor removal by projection on the cross-section average. The pooled conditional-sum-of-squares estimate is root-NT consistent but the normal asymptotic distribution might not be centered, requiring the time series dimension to grow faster than the cross-section size for correction. We develop tests of homogeneity of dynamics, including the degree of integration, that have non-trivial power under local departures from the null hypothesis of a non-negligible fraction of cross-section units. A simulation study shows that our estimates and test have good performance even in moderately small panels.
    Keywords: Fractional integration, panel data, factor models, long memory, homogeneity test
    JEL: C22 C23
    Date: 2018–03–12
  7. By: Niels Haldrup (Aarhus University and CREATES); Carsten P. T. Rosenskjold (Aarhus University and CREATES)
    Abstract: The prototypical Lee-Carter mortality model is characterized by a single common time factor that loads differently across age groups. In this paper we propose a factor model for the term structure of mortality where multiple factors are designed to influence the age groups differently via parametric loading functions. We identify four different factors: a factor common for all age groups, factors for infant and adult mortality, and a factor for the "accident hump" that primarily affects mortality of relatively young adults and late teenagers. Since the factors are identified via restrictions on the loading functions, the factors are not designed to be orthogonal but can be dependent and can possibly cointegrate when the factors have unit roots. We suggest two estimation procedures similar to the estimation of the dynamic Nelson-Siegel term structure model. First, a two-step nonlinear least squares procedure based on cross-section regressions together with a separate model to estimate the dynamics of the factors. Second, we suggest a fully specified model estimated by maximum likelihood via the Kalman filter recursions after the model is put on state space form. We demonstrate the methodology for US and French mortality data. We find that the model provides a good fit of the relevant factors and in a forecast comparison with a range of benchmark models it is found that, especially for longer horizons, variants of the parametric factor model have excellent forecast performance.
    Keywords: Mortality Forecasting, Term Structure of Mortality, Factor Modelling, Cointegration
    JEL: C1 C22 J10 J11 G22
    Date: 2018–01–12
  8. By: Su, Liangjun (School of Economics, Singapore Management University); Miao, Ke (School of Economics, Singapore Management University); Jin, Sainan (School of Economics, Singapore Management University)
    Abstract: We consider the estimation and inference in approximate factor models with random missing values. We show that with the low rank structure of the common component, we can estimate the factors and factor loadings consistently with the missing values replaced by zeros. We establish the asymptotic distributions of the resulting estimators and those based on the EM algorithm. We also propose a cross validation-based method to determine the number of factors in factor models with or without missing values and justify its consistency. Simulations demonstrate that our cross validation method is robust to fat tails in the error distribution and significantly outperforms some existing popular methods in terms of correct percentage in determining the number of factors. An application to the factor-augmented regression models shows that a proper treatment of the missing values can improve the out-of-sample forecast of some macroeconomic variables.
    Keywords: Cross-validation; Expectation-Maximization (EM) algorithm; Factor models; Matrix completion; Missing at random; Principal component analysis; Singular value decomposition
    JEL: C23 C33 C38
    Date: 2019–01–15
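The zero-imputation idea in the abstract above — estimate the low-rank common component with missing cells set to zero, then iterate EM-style by re-imputing from the current fit — can be sketched with a truncated SVD. This is a minimal illustration, not the authors' code; the simulation and tolerances are assumptions.

```python
import numpy as np

def em_factor_estimate(X, mask, r, n_iter=50):
    """EM-style estimation of an approximate factor model with missing data.

    X    : (T, N) data matrix, arbitrary values where mask is False
    mask : (T, N) boolean, True where X is observed
    r    : assumed number of factors
    Returns the rank-r common-component estimate.
    """
    # Initialize by replacing missing entries with zeros, then iterate:
    # impute with the current low-rank fit, re-apply a rank-r SVD.
    Z = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        C = U[:, :r] * s[:r] @ Vt[:r]        # rank-r common component
        Z = np.where(mask, X, C)             # E-step: re-impute missing cells
    return C

# Simulate a one-factor panel with roughly 20% of entries missing at random
rng = np.random.default_rng(0)
T, N = 200, 50
F = rng.normal(size=(T, 1))
Lam = rng.normal(size=(N, 1))
X_true = F @ Lam.T
X = X_true + 0.1 * rng.normal(size=(T, N))
mask = rng.random((T, N)) > 0.2

C_hat = em_factor_estimate(X, mask, r=1)
rel_err = np.linalg.norm(C_hat - X_true) / np.linalg.norm(X_true)
```

With a strong factor and modest noise, the common component is recovered accurately despite the missing cells.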
  9. By: Conrad, Christian; Schienle, Melanie
    Abstract: We consider the problem of testing for an omitted multiplicative long-term component in GARCH-type models. Under the alternative there is a two-component model with a short-term GARCH component that fluctuates around a smoothly time-varying long-term component which is driven by the dynamics of an explanatory variable. We suggest a Lagrange Multiplier statistic for testing the null hypothesis that the variable has no explanatory power. We derive the asymptotic theory for our test statistic and investigate its finite sample properties by Monte-Carlo simulation. Our test also covers the mixed-frequency case in which the returns are observed at a higher frequency than the explanatory variable. The usefulness of our procedure is illustrated by empirical applications to S&P 500 return data.
    Keywords: GARCH-MIDAS,LM test,Long-Term Volatility,Mixed-Frequency Data,Volatility Component Models
    JEL: C53 C58 E32 G12
    Date: 2019
  10. By: Changli He (Tianjin University of Finance and Economics); Jian Kang (Tianjin University of Finance and Economics); Timo Teräsvirta (CREATES and Aarhus University, C.A.S.E, Humboldt-Universität zu Berlin); Shuhua Zhang (Tianjin University of Finance and Economics)
    Abstract: In this paper we introduce an autoregressive model with seasonal dummy variables in which coefficients of seasonal dummies vary smoothly and deterministically over time. The error variance of the model is seasonally heteroskedastic and multiplicatively decomposed, the decomposition being similar to that in well-known ARCH and GARCH models. This variance is also allowed to be smoothly and deterministically time-varying. Under regularity conditions, consistency and asymptotic normality of the maximum likelihood estimators of parameters of this model are proved. A test of constancy of the seasonal coefficients is derived. The test is generalised to specifying the parametric structure of the model. A test of constancy over time of the heteroskedastic error variance is presented. The purpose of building this model is to use it for describing changing seasonality in the well-known monthly central England temperature series. More specifically, the idea is to find out in which way and by how much the monthly temperatures are varying over time during the period of more than 240 years, if they do. Misspecification tests are applied to the estimated model and the findings discussed.
    Keywords: global warming, nonlinear time series, changing seasonality, smooth transition, testing constancy
    JEL: C22 C51 C52 Q54
    Date: 2018–04–25
  11. By: Torben G. Andersen (Northwestern University, NBER, and CREATES); Martin Thyrsgaard (Aarhus University and CREATES); Viktor Todorov (Northwestern University)
    Abstract: We develop a nonparametric test for deciding whether return volatility exhibits time-varying intraday periodicity using a long time-series of high-frequency data. Our null hypothesis, commonly adopted in work on volatility modeling, is that volatility follows a stationary process combined with a constant time-of-day periodic component. We first construct time-of-day volatility estimates and studentize the high-frequency returns with these periodic components. If the intraday volatility periodicity is invariant over time, then the distribution of the studentized returns should be identical across the trading day. Consequently, the test is based on comparing the empirical characteristic function of the studentized returns across the trading day. The limit distribution of the test depends on the error in recovering volatility from discrete return data and the empirical process error associated with estimating volatility moments through their sample counterparts. Critical values are computed via easy-to-implement simulation. In an empirical application to S&P 500 index returns, we find strong evidence for variation in the intraday volatility pattern driven in part by the current level of volatility. When market volatility is elevated, the period preceding the market close constitutes a significantly higher fraction of the total daily integrated volatility than is the case during low market volatility regimes.
    Keywords: high-frequency data, periodicity, semimartingale, specification test, stochastic volatility
    JEL: C51 C52 G12
    Date: 2018–01–12
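The first step described in the abstract above — estimate time-of-day volatility components and studentize the high-frequency returns with them — can be sketched in a stylized simulation. Everything below (sampling frequency, the U-shaped pattern, parameter values) is illustrative, not taken from the paper.

```python
import numpy as np

# Simulate 250 days of 78 five-minute returns with a U-shaped intraday
# volatility pattern (higher near the open and the close)
rng = np.random.default_rng(7)
n_days, n_intra = 250, 78
u = np.linspace(0, 1, n_intra)
tod_vol = 1.0 + 0.8 * (u - 0.5) ** 2 * 4          # time-of-day volatility factor
r = tod_vol * rng.standard_normal((n_days, n_intra))

# Estimate the periodic component from squared returns and studentize
tod_hat = np.sqrt(np.mean(r ** 2, axis=0))
z = r / tod_hat

# If the periodic component is constant over time, the studentized returns
# should look identically distributed across the trading day; here we just
# check the variance at each time of day
var_by_tod = np.var(z, axis=0)
```

The paper's test compares the full empirical characteristic function of the studentized returns across the day rather than just second moments.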
  12. By: Alexander Heinemann
    Abstract: This paper studies the joint inference on conditional volatility parameters and the innovation moments by means of bootstrap to test for the existence of moments for GARCH(p,q) processes. We propose a residual bootstrap to mimic the joint distribution of the quasi-maximum likelihood estimators and the empirical moments of the residuals and also prove its validity. A bootstrap-based test for the existence of moments is proposed, which provides asymptotically correctly-sized tests without losing its consistency property. It is simple to implement and extends to other GARCH-type settings. A simulation study demonstrates the test's size and power properties in finite samples and an empirical application illustrates the testing approach.
    Date: 2019–02
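For a GARCH(1,1) with innovations z, the 2m-th unconditional moment exists iff E[(alpha*z^2 + beta)^m] < 1, and the abstract above bootstraps the joint distribution of parameter estimates and residual moments to test such conditions. The sketch below is a deliberately simplified version that treats the parameters as known and resamples only the residuals; the paper's residual bootstrap also mimics the QMLE sampling error.

```python
import numpy as np

def moment_condition(z, alpha, beta, m):
    """Sample analogue of E[(alpha*z^2 + beta)^m]; the 2m-th moment of a
    GARCH(1,1) process exists iff this expectation is below one."""
    return np.mean((alpha * z ** 2 + beta) ** m)

def bootstrap_pvalue(z, alpha, beta, m, B=999, seed=0):
    """Residual bootstrap for H0: E[(alpha*z^2+beta)^m] >= 1 (moment does
    not exist) against H1: the moment exists. Parameters held fixed."""
    rng = np.random.default_rng(seed)
    stat = moment_condition(z, alpha, beta, m)
    boot = np.empty(B)
    for i in range(B):
        zb = rng.choice(z, size=len(z), replace=True)
        boot[i] = moment_condition(zb, alpha, beta, m) - stat  # centered
    return float(np.mean(stat + boot >= 1.0))

# Gaussian innovations and parameters for which the 4th moment exists:
# E[(0.1 z^2 + 0.8)^2] = 0.03 + 0.16 + 0.64 = 0.83 < 1
rng = np.random.default_rng(5)
z = rng.standard_normal(5000)
p = bootstrap_pvalue(z, alpha=0.1, beta=0.8, m=2)
```

Here the null of a nonexistent fourth moment is clearly rejected, as it should be for these parameter values.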
  13. By: Ulrich Hounyo (University at Albany - State University of New York and CREATES); Rasmus T. Varneskov (Copenhagen Business School and CREATES)
    Abstract: We study inference for the local innovations of Itô semimartingales. Specifically, we construct a resampling procedure for the empirical CDF of high-frequency innovations that have been standardized using a nonparametric estimate of their stochastic scale (volatility) and truncated to rid the effect of "large" jumps. Our locally dependent wild bootstrap (LDWB) accommodates issues related to the stochastic scale and jumps and accounts for a special block-wise dependence structure induced by sampling errors. We show that the LDWB replicates first and second-order limit theory from the usual empirical process and the stochastic scale estimate, respectively, as well as an asymptotic bias. Moreover, we design the LDWB sufficiently general to establish asymptotic equivalence between it and a nonparametric local block bootstrap, also introduced here, up to second-order distribution theory. Finally, we introduce LDWB-aided Kolmogorov-Smirnov tests for local Gaussianity as well as local von-Mises statistics, with and without bootstrap inference, and establish their asymptotic validity using the second-order distribution theory. The finite sample performance of CLT and LDWB-aided local Gaussianity tests is assessed in a simulation study as well as two empirical applications. Whereas the CLT test is oversized, even in large samples, the sizes of the LDWB tests are accurate, even in small samples. The empirical analysis verifies this pattern, in addition to providing new insights about the distributional properties of equity indices, commodities, exchange rates and popular macro finance variables.
    Keywords: Bootstrap inference, High-frequency data, Itô semimartingales, Kolmogorov-Smirnov test, Stable processes, von-Mises statistics
    JEL: C12 C14 C15 G1
    Date: 2018–04–26
  14. By: Bormann, Carsten; Schienle, Melanie
    Abstract: An accurate assessment of tail inequalities and tail asymmetries of financial returns is key for risk management and portfolio allocation. We propose a new test procedure for detecting the full extent of such structural differences in the dependence of bivariate extreme returns. We decompose the testing problem into piecewise multiple comparisons of Cramér-von Mises distances of tail copulas. In this way, tail regions that cause differences in extreme dependence can be located and consequently be targeted by financial strategies. We derive the asymptotic properties of the test and provide a bootstrap approximation for finite samples. Moreover, we account for the multiplicity of the piecewise tail copula comparisons by adjusting individual p-values according to multiple testing techniques. Monte Carlo simulations demonstrate the test's superior finite-sample properties for common financial tail risk models, both in the i.i.d. and the sequentially dependent case. During the last 90 years in US stock markets, our test detects up to 20% more tail asymmetries than competing tests. This can be attributed to the presence of non-standard tail dependence structures. We also find evidence for diminishing tail asymmetries during every major financial crisis - except for the 2007-09 crisis - reflecting a risk-return trade-off for extreme returns.
    Keywords: tail dependence,tail copulas,tail asymmetry,tail inequality,extreme values,multiple testing
    JEL: C12 C53 C58
    Date: 2019
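The building block of the test in the abstract above is the empirical tail copula, whose differences across samples are aggregated into Cramér-von Mises distances. The sketch below computes a rank-based upper tail copula on a grid and a CvM-type distance between two bivariate samples; the tail fraction, grid, and simulated designs are illustrative, and the paper's bootstrap critical values and multiplicity adjustment are omitted.

```python
import numpy as np

def empirical_tail_copula(x, y, k, grid):
    """Rank-based estimate of the upper tail copula Lambda(u, v), using
    the k largest observations as the 'tail'."""
    n = len(x)
    rx = np.argsort(np.argsort(x)) + 1      # ranks 1..n
    ry = np.argsort(np.argsort(y)) + 1
    out = np.empty((len(grid), len(grid)))
    for a, u in enumerate(grid):
        for b, v in enumerate(grid):
            out[a, b] = np.mean((rx > n - k * u) & (ry > n - k * v)) * n / k
    return out

rng = np.random.default_rng(8)
n, k = 5000, 250
grid = np.linspace(0.1, 1.0, 10)

# Sample 1: independent tails; sample 2: strongly dependent tails
x1, y1 = rng.standard_normal(n), rng.standard_normal(n)
z = rng.standard_normal(n)
x2, y2 = z + 0.3 * rng.standard_normal(n), z + 0.3 * rng.standard_normal(n)

L1 = empirical_tail_copula(x1, y1, k, grid)
L2 = empirical_tail_copula(x2, y2, k, grid)
cvm = float(np.mean((L1 - L2) ** 2))        # Cramér-von Mises-type distance
```

The grid-cell-level differences are what allow the paper's procedure to locate which tail regions drive a rejection.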
  15. By: Bruce E. Hansen; Seojeong Lee
    Abstract: We provide a complete asymptotic distribution theory for clustered data with a large number of independent groups, generalizing the classic laws of large numbers, uniform laws, central limit theory, and clustered covariance matrix estimation. Our theory allows for clustered observations with heterogeneous and unbounded cluster sizes. Our conditions cleanly nest the classical results for i.n.i.d. observations, in the sense that our conditions specialize to the classical conditions under independent sampling. We use this theory to develop a full asymptotic distribution theory for estimation based on linear least-squares, 2SLS, nonlinear MLE, and nonlinear GMM.
    Date: 2019–02
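A concrete instance of the clustered covariance estimation covered by the theory in the abstract above is the familiar cluster-robust sandwich estimator for least squares, sketched below. The simulation design is illustrative, not from the paper.

```python
import numpy as np

def ols_cluster_se(X, y, groups):
    """OLS point estimates with a cluster-robust sandwich covariance.
    Scores are summed within each cluster before forming the middle term,
    which is what accommodates heterogeneous cluster sizes."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    u = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        sg = X[groups == g].T @ u[groups == g]   # cluster-level score
        meat += np.outer(sg, sg)
    V = bread @ meat @ bread
    return beta, np.sqrt(np.diag(V))

# 50 independent clusters of size 20 with a common within-cluster shock
rng = np.random.default_rng(2)
G, n_g = 50, 20
groups = np.repeat(np.arange(G), n_g)
cluster_shock = np.repeat(rng.normal(size=G), n_g)
x = rng.normal(size=G * n_g)
y = 1.0 + 2.0 * x + cluster_shock + rng.normal(size=G * n_g)
X = np.column_stack([np.ones(G * n_g), x])
beta, se = ols_cluster_se(X, y, groups)
```

Because only cluster-level scores enter the middle term, no assumption is made about the dependence of observations within a cluster.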
  16. By: Zervopoulos, Panagiotis; Emrouznejad, Ali; Sklavos, Sokratis
    Abstract: The validity of data envelopment analysis (DEA) efficiency estimators depends on the robustness of the production frontier to measurement errors, specification errors and the dimension of the input-output space. It has been proven that DEA estimators, within the interval (0, 1], are overestimated when finite samples are used while asymptotically this bias reduces to zero. The non-parametric literature dealing with bias correction of efficiencies solely refers to estimators that do not exceed one. We prove that efficiency estimators, both lower and higher than one, are biased. A Bayesian DEA method is developed to correct bias of efficiency estimators. This is a two-stage procedure of super-efficiency DEA followed by a Bayesian approach relying on consistent efficiency estimators. This method is applicable to ‘small’ and ‘medium’ samples. The new Bayesian DEA method is applied to two data sets of 50 and 100 E.U. banks. The mean square error, root mean square error and mean absolute error of the new method reduce as the sample size increases.
    Keywords: Data envelopment analysis, Super-efficiency, Bayesian methods, Statistical inference, Banking
    JEL: C11 C18 C44 M11
    Date: 2019–01
  17. By: Mathias Kloss; Thomas Kirschstein; Steffen Liebscher; Martin Petrick
    Abstract: Sources of bias in empirical studies can be separated into those coming from the modelling domain (e.g. multicollinearity) and those coming from outliers. We propose a two-step approach to counter both issues. First, by decontaminating data with a multivariate outlier detection procedure and second, by consistently estimating parameters of the production function. We apply this approach to a panel of German field crop data. Results show that the decontamination procedure detects multivariate outliers. In general, multivariate outlier control delivers more reasonable results with a higher precision in the estimation of some parameters and seems to mitigate the effects of multicollinearity.
    Date: 2019–02
  18. By: Yagi, Daisuke; Chen, Yining; Johnson, Andrew L.; Kuosmanen, Timo
    Abstract: In this paper we examine a novel way of imposing shape constraints on a local polynomial kernel estimator. The proposed approach is referred to as Shape Constrained Kernel-weighted Least Squares (SCKLS). We prove uniform consistency of the SCKLS estimator with monotonicity and convexity/concavity constraints and establish its convergence rate. In addition, we propose a test to validate whether shape constraints are correctly specified. The competitiveness of SCKLS is shown in a comprehensive simulation study. Finally, we analyze Chilean manufacturing data using the SCKLS estimator and quantify production in the plastics and wood industries. The results show that exporting firms have significantly higher productivity.
    Keywords: Local Polynomials; Kernel Estimation; Multivariate Convex Regression; Nonparametric regression; Shape Constraints
    JEL: C1
    Date: 2018–01–23
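SCKLS itself solves a kernel-weighted quadratic program in several dimensions; as a much-simplified one-dimensional analogue, least squares under a monotonicity constraint (the pool-adjacent-violators algorithm) conveys the flavor of shape-constrained estimation. Everything below is illustrative, not the authors' estimator.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least squares fit to y subject to the
    fitted values being non-decreasing (assumes x is already sorted)."""
    vals, wts = [], []
    for v in y:
        vals.append(float(v))
        wts.append(1.0)
        # merge adjacent blocks while monotonicity is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v2, w2 = vals.pop(), wts.pop()
            v1, w1 = vals.pop(), wts.pop()
            vals.append((w1 * v1 + w2 * v2) / (w1 + w2))
            wts.append(w1 + w2)
    return np.concatenate([np.full(int(w), v) for v, w in zip(vals, wts)])

# Noisy observations of a monotone "production function"
rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 10, size=300))
y = np.log1p(x) + rng.normal(scale=0.1, size=300)
y_fit = pava(y)

monotone = bool(np.all(np.diff(y_fit) >= 0))
mse = float(np.mean((y_fit - np.log1p(x)) ** 2))
```

The constrained fit is non-decreasing by construction, which is the one-dimensional counterpart of the monotonicity constraints SCKLS imposes on the kernel estimator.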
  19. By: Huang, Wenxin (Antai College of Economics and Management, Shanghai Jiao Tong University); Jin, Sainan (School of Economics, Singapore Management University); Su, Liangjun (School of Economics, Singapore Management University)
    Abstract: We consider a panel cointegration model with latent group structures that allows for heterogeneous long-run relationships across groups. We extend Su, Shi, and Phillips’ (2016) classifier-Lasso (C-Lasso) method to the nonstationary panels and allow for the presence of endogeneity in both the stationary and nonstationary regressors in the model. In addition, we allow the dimension of the stationary regressors to diverge with the sample size. We show that we can identify the individuals’ group membership and estimate the group-specific long-run cointegrated relationships simultaneously. We demonstrate the desirable property of uniform classification consistency and the oracle properties of both the C-Lasso estimators and their post-Lasso versions. The special case of dynamic penalized least squares is also studied. Simulations show superb finite sample performance in both classification and estimation. In an empirical application, we study the potential heterogeneous behavior in testing the validity of long-run purchasing power parity (PPP) hypothesis in the post-Bretton Woods period from 1975 to 2014, covering 99 countries. We identify two groups in the period 1975-1998 and three in the period 1999-2014. The results confirm that at least some countries favor the long-run PPP hypothesis in the post-Bretton Woods period.
    Keywords: Classifier Lasso; Dynamic OLS; Heterogeneity; Latent group structure; Nonstationarity; Penalized least squares; Panel cointegration; Purchasing power parity
    JEL: C13 C33 C51 F31
    Date: 2018–11–20
  20. By: Millimet, Daniel L. (Southern Methodist University); Li, Hao (Nanjing Audit University); Roychowdhury, Punarjit (Indian Institute of Management)
    Abstract: The economic mobility of individuals and households is of fundamental interest. While many measures of economic mobility exist, reliance on transition matrices remains pervasive due to simplicity and ease of interpretation. However, estimation of transition matrices is complicated by the well-acknowledged problem of measurement error in self-reported and even administrative data. Existing methods of addressing measurement error are complex, rely on numerous strong assumptions, and often require data from more than two periods. In this paper, we investigate what can be learned about economic mobility as measured via transition matrices while formally accounting for measurement error in a reasonably transparent manner. To do so, we develop a nonparametric partial identification approach to bound transition probabilities under various assumptions on the measurement error and mobility processes. This approach is applied to panel data from the United States to explore short-run mobility before and after the Great Recession.
    Keywords: partial identification, measurement error, mobility, transition matrices, poverty
    JEL: C18 D31 I32
    Date: 2019–01
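The bounding idea can be illustrated with a deliberately crude sketch (our own toy construction, not the authors' sharp bounds): if each period's status can be misclassified with probability at most eps, an observed transition probability can be shifted by at most the mass that misclassification in either period can move in or out.

```python
import numpy as np

def transition_bounds(P_obs, eps):
    """Worst-case bounds on true transition probabilities when each
    period's status may be misclassified with probability at most eps.
    Illustrative only: crude bounds, not the paper's partial
    identification region."""
    # A cell can gain or lose at most the misclassified mass from the
    # origin period and the destination period: up to 2 * eps in total.
    lower = np.clip(P_obs - 2 * eps, 0.0, 1.0)
    upper = np.clip(P_obs + 2 * eps, 0.0, 1.0)
    return lower, upper

# Observed 2-state (e.g. poor / non-poor) transition matrix (toy numbers)
P_obs = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
lo, hi = transition_bounds(P_obs, eps=0.05)
```

Tighter bounds require the kinds of assumptions on the error and mobility processes that the paper formalizes; the sketch only shows why the identified set shrinks as the assumed error rate falls.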
  21. By: Grossmann, Volker (University of Fribourg); Osikominu, Aderonke (University of Hohenheim)
    Abstract: In the absence of randomized controlled experiments, identification is often pursued via instrumental variable (IV) strategies, typically two-stage least squares estimation. According to Bayes' rule, however, under a low ex ante probability that a hypothesis is true (e.g. that an excluded instrument is partially correlated with an endogenous regressor), the interpretation of the estimation results may be fundamentally flawed. This paper argues that rigorous theoretical reasoning is key to designing credible identification strategies, first and foremost to finding candidates for valid instruments. We discuss prominent IV analyses from the macro-development literature to illustrate the potential benefit of structurally derived IV approaches.
    Keywords: Bayes' Rule, economic development, identification, instrumental variable estimation, macroeconomic theory
    JEL: C10 C36 O11
    Date: 2019–01
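The Bayes'-rule point can be made concrete with toy numbers (ours, not the paper's): the probability that a hypothesis is true given a significant test depends heavily on the ex ante probability that it is true.

```python
def posterior_valid(prior, power, alpha):
    """P(hypothesis true | test rejects), by Bayes' rule.
    prior: ex ante probability the hypothesis is true (e.g. that an
    excluded instrument is relevant for the right reasons);
    power: P(reject | true); alpha: P(reject | false)."""
    return prior * power / (prior * power + (1 - prior) * alpha)

# With a low prior, even a "significant" first stage leaves the
# posterior far below 1 (illustrative numbers only).
p = posterior_valid(prior=0.1, power=0.8, alpha=0.05)
```

Here the posterior is 0.64: a rejection raises the probability from 0.10, but far less than a naive reading of the test would suggest, which is the sense in which inference can be flawed under a low prior.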
  22. By: Schmidheiny, Kurt (University of Basel); Siegloch, Sebastian (University of Mannheim)
    Abstract: We discuss important features and pitfalls of panel-data event study designs. We derive the following main results: First, event study designs and distributed-lag models are numerically identical, leading to the same parameter estimates after correct reparametrization. Second, binning of effect window endpoints allows identification of dynamic treatment effects even when no never-treated units are present. Third, classic dummy variable event study designs can be naturally generalized to models that account for multiple events of different sign and intensity of the treatment, which are particularly interesting for research in labor economics and public finance.
    Keywords: event study, distributed-lag, applied microeconomics, credibility revolution
    JEL: C23 C51 H00 J08
    Date: 2019–01
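The numerical equivalence in the first result can be checked in a small simulation (our own toy setup, not the authors' code): with an absorbing treatment, the lag-j treatment indicator equals "j or more periods since the event", so event-time dummies and distributed lags span the same column space, and event-study coefficients are cumulative sums of distributed-lag coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, J = 50, 12, 3
event = rng.integers(3, 9, size=N)            # event date per unit

rows_lag, rows_ev, y = [], [], []
for i in range(N):
    for s in range(J, T):                     # keep all J lags in-sample
        rel = s - event[i]                    # periods since the event
        # distributed lags of the absorbing treatment: T_t, ..., T_{t-J}
        lags = [float(rel >= j) for j in range(J + 1)]
        # event-time dummies, with the endpoint binned at "J or more"
        evd = [float(rel == j) for j in range(J)] + [float(rel >= J)]
        rows_lag.append(lags)
        rows_ev.append(evd)
        y.append(1.0 + 0.5 * max(rel, 0) + rng.normal())

X_lag = np.column_stack([np.ones(len(y)), rows_lag])
X_ev = np.column_stack([np.ones(len(y)), rows_ev])
g = np.linalg.lstsq(X_lag, np.array(y), rcond=None)[0]   # distributed-lag
b = np.linalg.lstsq(X_ev, np.array(y), rcond=None)[0]    # event study
```

Both regressions produce identical fitted values, and the event-study coefficients equal the running sums of the distributed-lag coefficients, which is the reparametrization the abstract refers to.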
  23. By: Francisco (F.) Blasques (VU Amsterdam, The Netherlands); Marc Nientker (VU Amsterdam, The Netherlands)
    Abstract: This paper introduces a new solution method for Dynamic Stochastic General Equilibrium (DSGE) models that produces non-explosive paths. The proposed solution method is as fast as standard perturbation methods and can be easily implemented in existing software packages like Dynare as it is obtained directly as a transformation of existing perturbation solutions proposed by Judd and Guu (1997) and Schmitt-Grohe and Uribe (2004), among others. The transformed perturbation method shares the same advantageous function approximation properties as standard higher order perturbation methods and, in contrast to those methods, generates stable sample paths that are stationary, geometrically ergodic and absolutely regular. Additionally, moments are shown to be bounded. The method is an alternative to the pruning method as proposed in Kim et al. (2008). The advantages of our approach are that, unlike pruning, it does not need to sacrifice accuracy around the steady state by ignoring higher order effects and it delivers a policy function. Moreover, the newly proposed solution is always more accurate globally than standard perturbation methods. We demonstrate the superior accuracy of our method in a range of examples.
    Keywords: Higher-order perturbation approximation; non-explosive simulations; stochastic stability
    JEL: C15 C63 E00
    Date: 2019–02–05
  24. By: Eric Beutner; Alexander Heinemann; Stephan Smeekes
    Abstract: In this paper we propose a general framework to analyze prediction in time series models and show how a wide class of popular time series models satisfies this framework. We postulate a set of high-level assumptions, and formally verify these assumptions for the aforementioned time series models. Our framework coincides with that of Beutner et al. (2019, arXiv:1710.00643) who establish the validity of conditional confidence intervals for predictions made in this framework. The current paper therefore complements the results in Beutner et al. (2019, arXiv:1710.00643) by providing practically relevant applications of their theory.
    Date: 2019–02
  25. By: Karlson Pfannschmidt; Pritha Gupta; Eyke Hüllermeier
    Abstract: We study the problem of learning choice functions, which play an important role in various domains of application, most notably in the field of economics. Formally, a choice function is a mapping from sets to sets: Given a set of choice alternatives as input, a choice function identifies a subset of most preferred elements. Learning choice functions from suitable training data comes with a number of challenges. For example, the sets provided as input and the subsets produced as output can be of any size. Moreover, since the order in which alternatives are presented is irrelevant, a choice function should be symmetric. Perhaps most importantly, choice functions are naturally context-dependent, in the sense that the preference in favor of an alternative may depend on what other options are available. We formalize the problem of learning choice functions and present two general approaches based on two representations of context-dependent utility functions. Both approaches are instantiated by means of appropriate neural network architectures, and their performance is demonstrated on suitable benchmark tasks.
    Date: 2019–01
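The symmetry and context-dependence requirements can be sketched with a minimal toy choice function (our illustration, not the paper's neural architectures): score each alternative by a linear utility plus an interaction with a permutation-invariant pooling of the whole set, then select the alternatives scoring above the set average.

```python
import numpy as np

def choice_function(X, w, v):
    """Toy context-dependent choice function. Rows of X are alternatives;
    mean-pooling over the set makes the rule symmetric in the order in
    which alternatives are presented."""
    context = X.mean(axis=0)               # permutation-invariant pooling
    scores = X @ w + (X * context) @ v     # context-dependent utilities
    chosen = scores > scores.mean()        # pick above-average alternatives
    return set(map(tuple, X[chosen]))

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))                # a set of 5 alternatives
w, v = rng.normal(size=3), rng.normal(size=3)
# Reversing the presentation order leaves the chosen subset unchanged.
same = choice_function(X, w, v) == choice_function(X[::-1], w, v)
```

The context term makes the score of each alternative depend on what else is on offer, which is the property the abstract highlights; the paper replaces these hand-written components with learned networks.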
  26. By: Cohen, Jeffrey P. (University of Connecticut); Coughlin, Cletus C. (Federal Reserve Bank of St. Louis); Zabel, Jeffrey (Tufts University)
    Abstract: In this study, we develop and apply a new methodology for obtaining accurate and equitable property value assessments. This methodology adds a time dimension to the Geographically Weighted Regressions (GWR) framework, which we call Time-Geographically Weighted Regressions (TGWR). That is, when generating assessed values, we consider sales that are close in time and space to the designated unit. We view this as an important improvement over GWR, since it increases the number of comparable sales that can be used to generate assessed values. Furthermore, units that sold at an earlier time but are spatially near the designated unit are likely to be closer in value than units sold at a similar time but farther away geographically. This is because location is such an important determinant of house value. We apply this new methodology to sales data for residential properties in 50 municipalities in Connecticut for 1994-2013 and 145 municipalities in Massachusetts for 1987-2012. This allows us to compare results over a long time period and across municipalities in two states. We find that TGWR performs better than OLS with fixed effects and leads to less regressive assessed values than OLS. In many cases, TGWR performs better than GWR that ignores the time dimension. In at least one specification, several suburban and rural towns meet the IAAO Coefficient of Dispersion cutoffs for acceptable accuracy.
    Keywords: geographically weighted regression; assessment; property value; coefficient of dispersion; price-related differential
    JEL: C14 H71 R31 R51
    Date: 2019–01–30
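The core idea of weighting comparable sales by closeness in both space and time can be sketched as a kernel-weighted least squares fit at one target property (a minimal sketch of the idea; the Gaussian kernels and the bandwidths h_s, h_t are our illustrative choices, not the paper's specification).

```python
import numpy as np

def tgwr_predict(X, y, coords, times, x0, s0, t0, h_s, h_t):
    """Predict the value of a target unit with features x0 at location
    s0 and time t0. Sales close to the target in both space and time
    receive high weight in a local weighted least squares regression."""
    d_space = np.linalg.norm(coords - s0, axis=1)
    d_time = np.abs(times - t0)
    # product of a spatial and a temporal Gaussian kernel
    w = np.exp(-0.5 * (d_space / h_s) ** 2) * np.exp(-0.5 * (d_time / h_t) ** 2)
    Xc = np.column_stack([np.ones(len(y)), X])
    W = np.diag(w)
    beta = np.linalg.solve(Xc.T @ W @ Xc, Xc.T @ W @ y)
    return np.concatenate([[1.0], x0]) @ beta

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 1))                 # one hedonic characteristic
coords = rng.normal(size=(40, 2))            # sale locations
times = rng.uniform(0, 10, size=40)          # sale dates
y = 2.0 + 3.0 * X[:, 0]                      # noiseless toy prices
pred = tgwr_predict(X, y, coords, times,
                    x0=np.array([0.5]), s0=np.zeros(2), t0=5.0,
                    h_s=1.0, h_t=2.0)
```

Plain GWR corresponds to dropping the temporal kernel; the point of TGWR is that nearby-but-earlier sales keep substantial weight instead of being discarded.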
  27. By: Leopoldo Catania (Aarhus University and CREATES); Tommaso Proietti (CEIS & DEF, University of Rome "Tor Vergata")
    Abstract: The prediction of volatility is of primary importance for business applications in risk management, asset allocation and pricing of derivative instruments. This paper proposes a novel measurement model which takes into consideration the possibly time-varying interaction of realized volatility and asset returns, according to a bivariate model aiming at capturing the main stylised facts: (i) the long memory of the volatility process, (ii) the heavy-tailedness of the returns distribution, and (iii) the negative dependence of volatility and daily market returns. We assess the relevance of "volatility in volatility" and time-varying "leverage" effects in the out-of-sample forecasting performance of the model, and evaluate the density forecasts of the future level of market volatility. The empirical results illustrate that our specification can outperform the benchmark HAR-RV, both in terms of point and density forecasts.
    Keywords: realized volatility, forecasting, leverage effect, volatility in volatility
    Date: 2019–02–06
  28. By: Murasawa, Yasutomo
    Abstract: The consumption Euler equation implies that the output growth rate and the real interest rate are of the same order of integration; thus if the real interest rate is I(1), then so is the output growth rate with possible cointegration, and log output is I(2). This paper extends the multivariate Beveridge-Nelson decomposition to such a case, and develops a Bayesian method to obtain error bands. The paper applies the method to US data to estimate the natural rates (or their permanent components) and gaps of output, inflation, interest, and unemployment jointly, and finds that allowing for cointegration gives much bigger estimates of all gaps.
    Keywords: Natural rate, Output gap, Trend-cycle decomposition, Trend inflation, Unit root, Vector error correction model (VECM)
    JEL: C11 C32 C82 E32
    Date: 2019–02–05
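For reference, the decomposition being extended can be stated in a standard textbook form (written here under simplifying assumptions, such as no drift in the growth rate, rather than in the paper's exact notation). For I(1) y_t with drift mu, the Beveridge-Nelson permanent component is the long-horizon forecast net of the deterministic drift; when the growth rate is itself I(1), so that y_t is I(2), the constant drift is replaced by the stochastic growth trend g_t:

```latex
% I(1) case: permanent component = long-run forecast net of drift
\tau_t = \lim_{h \to \infty} \mathrm{E}_t\!\left[ y_{t+h} - h\mu \right]
% I(2) case: the growth trend g_t replaces the constant drift
g_t = \lim_{h \to \infty} \mathrm{E}_t\!\left[ \Delta y_{t+h} \right],
\qquad
\tau_t = \lim_{h \to \infty} \mathrm{E}_t\!\left[ y_{t+h} - h g_t \right]
```

The gap estimates reported in the paper are the deviations of the observed series from such permanent components, computed jointly across output, inflation, interest, and unemployment.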

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.