nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒10‒14
seventeen papers chosen by
Sune Karlsson
Örebro universitet

  1. Quasi Maximum Likelihood Estimation and Inference of Large Approximate Dynamic Factor Models via the EM algorithm By Matteo Barigozzi; Matteo Luciani
  2. Predictive, finite-sample model choice for time series under stationarity and non-stationarity By Kley, Tobias; Preuss, Philip; Fryzlewicz, Piotr
  3. Averaging estimation for instrumental variables quantile regression By Xin Liu
  4. Identification and Estimation of SVARMA models with Independent and Non-Gaussian Inputs By Bernd Funovits
  5. Conditional Sum of Squares Estimation of Multiple Frequency Long Memory Models By Beaumont, Paul; Smallwood, Aaron
  6. Global Robust Bayesian Analysis in Large Models By Paul Ho
  7. Benchmarking Global Optimizers By Antoine Arnoud; Fatih Guvenen; Tatjana Kleineberg
  8. Comparing Tests for Identification of Bubbles By Kristoffer Pons Bertelsen
  9. Comparing latent inequality with ordinal data By David M. Kaplan; Longhao Zhuo
  10. Robust Likelihood Ratio Tests for Incomplete Economic Models By Hiroaki Kaido; Yi Zhang
  11. Boosting High Dimensional Predictive Regressions with Time Varying Parameters By Kashif Yousuf; Serena Ng
  12. A theorem of Kalman and minimal state-space realization of Vector Autoregressive Models By Du Nguyen
  13. The Numerical Simulation of Quanto Option Prices Using Bayesian Statistical Methods By Lisha Lin; Yaqiong Li; Rui Gao; Jianhong Wu
  14. A 2-Dimensional Functional Central Limit Theorem for Non-stationary Dependent Random Fields By Michael C. Tseng
  15. Non-compliance in randomized control trials without exclusion restrictions By Masayuki Sawada
  16. Proximal Statistics: Asymptotic Normality By David Pacini
  17. Identifiability of Structural Singular Vector Autoregressive Models By Bernd Funovits; Alexander Braumann

  1. By: Matteo Barigozzi; Matteo Luciani
    Abstract: This paper studies Quasi Maximum Likelihood estimation of dynamic factor models for large panels of time series. Specifically, we consider the case in which the autocorrelation of the factors is explicitly accounted for and therefore the factor model has a state-space form. Estimation of the factors and their loadings is implemented by means of the Expectation Maximization algorithm, jointly with the Kalman smoother. We prove that, as both the dimension of the panel $n$ and the sample size $T$ diverge to infinity, the estimated loadings, factors, and common components are $\min(\sqrt n,\sqrt T)$-consistent and asymptotically normal. Although the model is estimated under the unrealistic constraint of independent idiosyncratic errors, this mis-specification does not affect consistency. Moreover, we give conditions under which the derived asymptotic distribution can still be used for inference even in the case of mis-specification. Our results are confirmed by a Monte Carlo simulation exercise in which we compare the performance of our estimators with Principal Components.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.03821&r=all
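The Principal Components benchmark against which the paper's EM estimator is compared can be illustrated in a few lines. The sketch below simulates a one-factor panel and recovers the loadings and factor from the leading eigenvector of the sample covariance; it is a minimal illustration with arbitrary parameter values, not the authors' QML/EM estimator, which additionally models factor dynamics through the Kalman smoother.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 50, 200
f = rng.standard_normal(T)            # single latent factor (i.i.d. for simplicity)
lam = rng.standard_normal(n)          # loadings
X = np.outer(lam, f) + 0.5 * rng.standard_normal((n, T))  # n x T panel

# Principal-components estimator: leading eigenvector of the sample covariance
S = X @ X.T / T
vals, vecs = np.linalg.eigh(S)
lam_hat = vecs[:, -1] * np.sqrt(n)    # normalized loading estimate (sign arbitrary)
f_hat = lam_hat @ X / n               # cross-sectional average recovers the factor

corr = abs(np.corrcoef(f_hat, f)[0, 1])
```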
  2. By: Kley, Tobias; Preuss, Philip; Fryzlewicz, Piotr
    Abstract: In statistical research there usually exists a choice between structurally simpler or more complex models. We argue that, even if a more complex, locally stationary time series model were true, then a simple, stationary time series model may be advantageous to work with under parameter uncertainty. We present a new model choice methodology, where one of two competing approaches is chosen based on its empirical, finite-sample performance with respect to prediction, in a manner that ensures interpretability. A rigorous, theoretical analysis of the procedure is provided. As an important side result we prove, for possibly diverging model order, that the localised Yule-Walker estimator is strongly, uniformly consistent under local stationarity. An R package, forecastSNSTS, is provided and used to apply the methodology to financial and meteorological data in empirical examples. We further provide an extensive simulation study and discuss when it is preferable to base forecasts on the more volatile time-varying estimates and when it is advantageous to forecast as if the data were from a stationary process, even though they might not be.
    Keywords: forecasting; Yule-Walker estimate; local stationarity; covariance stationarity
    JEL: C1
    Date: 2019–10–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:101748&r=all
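A key side result of the paper is uniform consistency of a localised Yule-Walker estimator under local stationarity. For context, the standard (global) Yule-Walker estimator solves the Toeplitz system built from sample autocovariances; the following is a minimal sketch on simulated AR(1) data, with all parameter values illustrative.

```python
import numpy as np

def yule_walker(x, p):
    """AR(p) coefficients from sample autocovariances (Toeplitz system)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    gamma = np.array([x[: n - k] @ x[k:] / n for k in range(p + 1)])
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, gamma[1:])

# simulate a stationary AR(1) with coefficient 0.6 (illustrative values)
rng = np.random.default_rng(4)
n, phi = 5000, 0.6
x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

phi_hat = yule_walker(x, p=1)[0]
```

The localised version in the paper applies the same system to a rolling window, letting the AR coefficients vary over time.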
  3. By: Xin Liu (Department of Economics, University of Missouri)
    Abstract: This paper proposes averaging estimation methods to improve the finite-sample efficiency of the instrumental variables quantile regression (IVQR) estimation. First, I apply Cheng, Liao, and Shi's (2019) averaging GMM framework to the IVQR model. I propose using the usual quantile regression moments for averaging to take advantage of cases when endogeneity is not too strong. I also propose using two-stage least squares slope moments to take advantage of cases when heterogeneity is not too strong. The empirical optimal weight formula of Cheng et al. (2019) helps optimize the bias-variance tradeoff, ensuring uniformly better (asymptotic) risk of the averaging estimator over the standard IVQR estimator under certain conditions. My implementation involves many computational considerations and builds on recent developments in the quantile literature. Second, I propose a bootstrap method that directly averages among IVQR, quantile regression, and two-stage least squares estimators. More specifically, I find the optimal weights in the bootstrap world and then apply the bootstrap-optimal weights to the original sample. The bootstrap method is simpler to compute and generally performs better in simulations, but it lacks the formal uniform dominance results of Cheng et al. (2019). Simulation results demonstrate that in the multiple-regressors/instruments case, both the GMM averaging and bootstrap estimators have uniformly smaller risk than the IVQR estimator across data-generating processes (DGPs) with various combinations of endogeneity and heterogeneity levels. In DGPs with a single endogenous regressor and instrument, where averaging estimation is known to have the least opportunity for improvement, the proposed averaging estimators outperform the IVQR estimator in some cases but not others.
    Keywords: model selection, model averaging
    JEL: C21 C26
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:1907&r=all
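The bias-variance tradeoff that the averaging weight optimizes can be seen in a stylized simulation: combine an unbiased but noisy estimator with a biased but precise one using the MSE-minimizing fixed weight. This is a toy illustration with made-up bias and variance values, not the GMM averaging framework of Cheng et al. (2019).

```python
import numpy as np

rng = np.random.default_rng(1)
theta, R = 1.0, 2000          # true parameter, Monte Carlo replications

# estimator A: unbiased but noisy; estimator B: biased but precise (made-up values)
a = theta + rng.normal(0.0, 1.0, R)
b = theta + 0.3 + rng.normal(0.0, 0.2, R)

# MSE-minimizing fixed weight for w*A + (1-w)*B, with A unbiased and A, B independent:
#   w* = (var_B + bias_B^2) / (var_A + var_B + bias_B^2)
bias_b, var_a, var_b = 0.3, 1.0, 0.04
w = (var_b + bias_b**2) / (var_a + var_b + bias_b**2)
avg = w * a + (1 - w) * b

def mse(x):
    return float(np.mean((x - theta) ** 2))
```

Here the averaged estimator attains a smaller MSE than either ingredient; the paper's contribution is making this kind of tradeoff operational when the biases and variances must themselves be estimated.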
  4. By: Bernd Funovits
    Abstract: This paper analyzes identifiability properties of structural vector autoregressive moving average (SVARMA) models driven by independent and non-Gaussian shocks. It is well known that SVARMA models driven by Gaussian errors are not identified without imposing further identifying restrictions on the parameters. Even in reduced form and assuming stability and invertibility, vector autoregressive moving average models are in general not identified without requiring certain parameter matrices to be non-singular. Independence and non-Gaussianity of the shocks are used to show that the shocks are identified up to permutation and scaling. In this way, typically imposed identifying restrictions are made testable. Furthermore, we introduce a maximum-likelihood estimator of the non-Gaussian SVARMA model which is consistent and asymptotically normally distributed.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.04087&r=all
  5. By: Beaumont, Paul; Smallwood, Aaron
    Abstract: We review the multiple frequency Gegenbauer autoregressive moving average model, which is able to reproduce a wide range of autocorrelation functions. Extending the result of Chung (1996a), we propose the asymptotic distributions for a conditional sum of squares estimator of the model parameters. The parameters that determine the cycle lengths are asymptotically independent, converging at rate T for finite cycles. This result does not hold generally, most notably for the differencing parameters associated with the cycle lengths. Remaining parameters are typically not independent and converge at the standard rate of T^(1/2). We present simulation results to explore small sample properties of the estimator, which strongly support most distributional results while also highlighting areas that merit additional exploration. We demonstrate the applicability of the theory and estimator with an application to IBM trading volume.
    Keywords: k-factor Gegenbauer processes, Asymptotic distributions, ARFIMA, Conditional sum of squares
    JEL: C22 C40 C5 C58 G1 G12
    Date: 2019–09–29
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:96314&r=all
  6. By: Paul Ho (Princeton University)
    Abstract: This paper develops tools for global prior sensitivity analysis in large Bayesian models. Without imposing parametric restrictions, the framework provides bounds for a wide range of posterior statistics given any prior that is close to the original in relative entropy. The methodology also reveals parts of the prior that are important for the posterior statistics of interest. To implement these calculations in large models, we develop a sequential Monte Carlo algorithm and use approximations to the likelihood and statistic of interest. We use the framework to study error bands for the impulse response of output to a monetary policy shock in the New Keynesian model of Smets and Wouters (2007). The error bands depend asymmetrically on the prior through features of the likelihood that are hard to detect without this formal prior sensitivity analysis.
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:red:sed019:390&r=all
  7. By: Antoine Arnoud; Fatih Guvenen; Tatjana Kleineberg
    Abstract: We benchmark seven global optimization algorithms by comparing their performance on challenging multidimensional test functions as well as a method of simulated moments estimation of a panel data model of earnings dynamics. Five of the algorithms are taken from the popular NLopt open-source library: (i) Controlled Random Search with local mutation (CRS), (ii) Improved Stochastic Ranking Evolution Strategy (ISRES), (iii) Multi-Level Single-Linkage (MLSL) algorithm, (iv) Stochastic Global Optimization (StoGo), and (v) Evolutionary Strategy with Cauchy distribution (ESCH). The other two algorithms are versions of TikTak, which is a multistart global optimization algorithm used in some recent economic applications. For completeness, we add three popular local algorithms to the comparison—the Nelder-Mead downhill simplex algorithm, the Derivative-Free Non-linear Least Squares (DFNLS) algorithm, and a popular variant of the Davidon-Fletcher-Powell (DFPMIN) algorithm. To give a detailed comparison of algorithms, we use a set of benchmarking tools recently developed in the applied mathematics literature. We find that the success rates of many optimizers vary dramatically with the characteristics of each problem and the computational budget that is available. Overall, TikTak is the strongest performer on both the math test functions and the economic application. The next-best performing optimizers are StoGo and CRS for the test functions and MLSL for the economic application.
    JEL: C13 C15 C51 C53 C61 C63 D52 J31
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:26340&r=all
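The local-versus-global distinction at the heart of the benchmarking exercise is easy to reproduce with SciPy analogues (neither the NLopt implementations nor TikTak from the paper): a local simplex search started away from the optimum gets trapped in a local minimum, while an evolutionary search over the whole box does not. The Rastrigin function below is a standard multimodal test function of the kind used in such benchmarks.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def rastrigin(x):
    """Classic multimodal test function; global minimum 0 at the origin."""
    x = np.asarray(x)
    return 10 * len(x) + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

bounds = [(-5.12, 5.12)] * 2

# a local algorithm started away from the origin converges to a nearby local minimum
local = minimize(rastrigin, x0=[3.2, 4.1], method="Nelder-Mead")

# a global (evolutionary) algorithm explores the whole box
glob = differential_evolution(rastrigin, bounds, seed=0)
```

As in the paper, the interesting comparisons arise when the function evaluation budget is constrained; here both algorithms run to their defaults.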
  8. By: Kristoffer Pons Bertelsen (Aarhus University and CREATES)
    Abstract: This paper compares the log periodic power law (LPPL) and the supremum augmented Dickey-Fuller (supremum ADF) procedures in terms of their bubble detection and time-stamping capabilities, in a thorough analysis based on simulated data. A generalized formulation of the LPPL procedure is derived and analysed, demonstrating performance improvements.
    Keywords: Rational bubbles, explosive processes, log periodic power law, critical points theory
    JEL: C01 C02 C12 C13 C22 C52 C53 C58 C61 G01
    Date: 2019–10–11
    URL: http://d.repec.org/n?u=RePEc:aah:create:2019-16&r=all
  9. By: David M. Kaplan (Department of Economics, University of Missouri); Longhao Zhuo (Department of Economics, University of Missouri)
    Abstract: Using health as an example, we consider comparing two latent distributions when only ordinal data are available. Distinct from the literature, we assume a continuous latent distribution but not a parametric model. Primarily, we contribute (partial) identification results: given two known ordinal distributions, what can be learned about the relationship between the two corresponding latent distributions? Secondarily, we discuss Bayesian and frequentist inference on the relevant ordinal relationships, which are combinations of moment inequalities. Simulations and empirical examples illustrate our contributions.
    Keywords: health; nonparametric inference; partial identification; partial ordering; shape restrictions
    JEL: C25 D30 I14
    Date: 2018–12–03
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:1909&r=all
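A basic building block for comparing ordinal distributions is first-order stochastic dominance of the observed category frequencies, which is necessary (though, as the paper's partial-identification results indicate, not by itself sufficient) for ordering the latent distributions. A minimal check, with hypothetical five-category health distributions:

```python
import numpy as np

# hypothetical ordinal distributions over 5 health categories (worst to best)
p = np.array([0.10, 0.20, 0.30, 0.25, 0.15])
q = np.array([0.20, 0.25, 0.30, 0.15, 0.10])

# first-order stochastic dominance of p over q: the CDF of p lies weakly below q's
Fp, Fq = np.cumsum(p), np.cumsum(q)
p_fosd_q = bool(np.all(Fp <= Fq + 1e-12))
```

Each CDF inequality is a moment inequality, which is why the paper's inference problem reduces to testing combinations of such inequalities.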
  10. By: Hiroaki Kaido; Yi Zhang
    Abstract: This study develops a framework for testing hypotheses on structural parameters in incomplete models. Such models make set-valued predictions and hence do not generally yield a unique likelihood function. The model structure, however, allows us to construct tests based on the least favorable pairs of likelihoods using the theory of Huber and Strassen (1973). We develop tests robust to model incompleteness that possess certain optimality properties. We also show that sharp identifying restrictions play a role in constructing such tests in a computationally tractable manner. A framework for analyzing the local asymptotic power of the tests is developed by embedding the least favorable pairs into a model that allows local approximations under the limits of experiments argument. Examples of the hypotheses we consider include those on the presence of strategic interaction effects in discrete games of complete information. Monte Carlo experiments demonstrate the robust performance of the proposed tests.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.04610&r=all
  11. By: Kashif Yousuf; Serena Ng
    Abstract: High dimensional predictive regressions are useful in wide range of applications. However, the theory is mainly developed assuming that the model is stationary with time invariant parameters. This is at odds with the prevalent evidence for parameter instability in economic time series, but theories for parameter instability are mainly developed for models with a small number of covariates. In this paper, we present two $L_2$ boosting algorithms for estimating high dimensional models in which the coefficients are modeled as functions evolving smoothly over time and the predictors are locally stationary. The first method uses componentwise local constant estimators as base learner, while the second relies on componentwise local linear estimators. We establish consistency of both methods, and address the practical issues of choosing the bandwidth for the base learners and the number of boosting iterations. In an extensive application to macroeconomic forecasting with many potential predictors, we find that the benefits to modeling time variation are substantial and they increase with the forecast horizon. Furthermore, the timing of the benefits suggests that the Great Moderation is associated with substantial instability in the conditional mean of various economic series.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.03109&r=all
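The componentwise L2 boosting idea can be sketched with time-invariant linear base learners; the paper's base learners are local constant and local linear estimators that let the coefficients evolve over time, so the version below is the standard stationary variant, run on made-up sparse data.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]           # sparse truth: only 3 active predictors
y = X @ beta + rng.standard_normal(n)

def l2_boost(X, y, steps=200, nu=0.1):
    """Componentwise L2 boosting with simple linear base learners."""
    n, p = X.shape
    coef = np.zeros(p)
    resid = y - y.mean()
    for _ in range(steps):
        # fit each single predictor to the current residual by OLS
        b = X.T @ resid / (X**2).sum(axis=0)
        # pick the component that reduces the residual sum of squares most
        sse = ((resid[:, None] - X * b) ** 2).sum(axis=0)
        j = int(np.argmin(sse))
        coef[j] += nu * b[j]          # shrunken update (step size nu)
        resid -= nu * b[j] * X[:, j]
    return coef

coef = l2_boost(X, y)
```

The small step size nu regularizes the fit; the number of boosting iterations plays the role of the tuning parameter whose choice the paper addresses.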
  12. By: Du Nguyen
    Abstract: We introduce a concept of $autoregressive$ (AR) state-space realization that could be applied to all transfer functions $\boldsymbol{T}(L)$ with $\boldsymbol{T}(0)$ invertible. We show that a theorem of Kalman implies each Vector Autoregressive model (with exogenous variables) has a minimal $AR$-state-space realization of form $\boldsymbol{y}_t = \sum_{i=1}^p\boldsymbol{H}\boldsymbol{F}^{i-1}\boldsymbol{G}\boldsymbol{x}_{t-i}+\boldsymbol{\epsilon}_t$ where $\boldsymbol{F}$ is a nilpotent Jordan matrix and $\boldsymbol{H}, \boldsymbol{G}$ satisfy certain rank conditions. The case $VARX(1)$ corresponds to reduced-rank regression. Similar to that case, for a fixed Jordan form $\boldsymbol{F}$, $\boldsymbol{H}$ could be estimated by least squares as a function of $\boldsymbol{G}$. The likelihood function is a determinant ratio generalizing the Rayleigh quotient. It is unchanged if $\boldsymbol{G}$ is replaced by $\boldsymbol{S}\boldsymbol{G}$ for an invertible matrix $\boldsymbol{S}$ commuting with $\boldsymbol{F}$. Using this invariance property, the search space for the maximum likelihood estimate could be constrained to equivalence classes of matrices satisfying a number of orthogonality relations, extending the results in reduced-rank analysis. Our results could be considered a multi-lag canonical correlation analysis. The method considered here provides a solution in the general case to the polynomial product regression model of Velu et al. We provide estimation examples. We also explore how the estimates vary with different Jordan matrix configurations and discuss methods to select a configuration. Our approach could provide an important dimensional reduction technique with potential applications in time series analysis and linear system identification. In the appendix, we link the reduced configuration space of $\boldsymbol{G}$ with a geometric object called a vector bundle.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.02546&r=all
  13. By: Lisha Lin; Yaqiong Li; Rui Gao; Jianhong Wu
    Abstract: In the paper, the pricing of Quanto options is studied, where the underlying foreign asset and the exchange rate are correlated with each other. Firstly, we adopt Bayesian methods to estimate unknown parameters entering the pricing formula of Quanto options, including the volatility of the stock, the volatility of the exchange rate, and the correlation. Secondly, we compute and predict prices of four different types of Quanto options based on Bayesian posterior prediction techniques and Monte Carlo methods. Finally, we provide numerical simulations to demonstrate the advantage of the Bayesian method used in this paper compared with some existing methods. This paper is a new application of Bayesian methods to the pricing of multi-asset options.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.04075&r=all
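Setting the Bayesian estimation step aside, the Monte Carlo pricing step for a (fixed-rate) quanto call reduces to simulating the foreign asset under the domestic measure with the usual quanto drift adjustment rf - rho*sS*sF. The sketch below uses arbitrary parameter values and checks the simulated price against the known closed form; it illustrates the pricing mechanics only, not the paper's posterior-prediction procedure.

```python
import numpy as np
from math import exp, log, sqrt
from statistics import NormalDist

# hypothetical inputs: domestic/foreign rates, stock and FX vols, correlation
rd, rf, sS, sF, rho = 0.02, 0.01, 0.25, 0.10, -0.3
S0, K, T = 100.0, 100.0, 1.0

# quanto drift adjustment: under the domestic measure the foreign asset
# drifts at rf - rho * sS * sF
mu = rf - rho * sS * sF

# Monte Carlo price of a quanto call (payoff converted at a fixed rate of 1)
rng = np.random.default_rng(3)
Z = rng.standard_normal(200_000)
ST = S0 * np.exp((mu - 0.5 * sS**2) * T + sS * sqrt(T) * Z)
mc_price = exp(-rd * T) * float(np.mean(np.maximum(ST - K, 0.0)))

# closed-form Black-Scholes check with the quanto-adjusted drift
d1 = (log(S0 / K) + (mu + 0.5 * sS**2) * T) / (sS * sqrt(T))
d2 = d1 - sS * sqrt(T)
N = NormalDist().cdf
cf_price = exp(-rd * T) * (S0 * exp(mu * T) * N(d1) - K * N(d2))
```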
  14. By: Michael C. Tseng
    Abstract: We obtain an elementary invariance principle for the multi-dimensional Brownian sheet where the underlying random fields are not necessarily independent or stationary. Possible applications include unit-root tests for spatial as well as panel data models.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.02577&r=all
  15. By: Masayuki Sawada
    Abstract: In the context of a randomized experiment with non-compliance, I identify treatment effects without exclusion restrictions. Instead of relying on specific experimental designs, I exploit a baseline survey, which is commonly available in randomized control trials. I show the identification of the average treatment effect on the treated (ATT) as well as the local average treatment effect (LATE) assuming that a baseline variable maintains rank orders similar to the control outcome. I then apply this strategy to a microcredit experiment with one-sided non-compliance to identify the ATT. In microcredit studies, a direct effect of the treatment assignment has been a threat to identification of the ATT based on an IV strategy. I find the IV estimate of log revenue for the ATT is 2.3 times larger than my preferred estimate of log revenue. An R package, ptse, is available for this analysis.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.03204&r=all
  16. By: David Pacini
    Abstract: This note considers the problem of constructing an asymptotically normal statistic for the value function of a convex stochastic minimization programme, which may have more than one minimizer. It introduces the proximal statistic using a recursive estimator of one of the minimizers. The use of this statistic is illustrated by extending an existing selection test for point-identifying parametric models to the set-identifying case.
    Keywords: Set Identification; Proximal algorithm
    Date: 2019–09–01
    URL: http://d.repec.org/n?u=RePEc:bri:uobdis:19/718&r=all
  17. By: Bernd Funovits; Alexander Braumann
    Abstract: We generalize well-known results on structural identifiability of vector autoregressive models (VAR) to the case where the innovation covariance matrix has reduced rank. Structural singular VAR models appear, for example, as solutions of rational expectation models where the number of shocks is usually smaller than the number of endogenous variables, and as an essential building block in dynamic factor models. We show that order conditions for identifiability are misleading in the singular case and provide a rank condition for identifiability of the noise parameters. Since the Yule-Walker equations may have multiple solutions, we analyze the effect of restrictions on the system parameters on over- and underidentification in detail and provide easily verifiable conditions.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.04096&r=all

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.