nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒12‒03
twelve papers chosen by
Sune Karlsson
Örebro universitet

  1. A Time-Varying Parameter Model for Local Explosions By Francisco (F.) Blasques; Siem Jan (S.J.) Koopman; Marc Nientker
  2. Reconsideration of a simple approach to quantile regression for panel data By Galina Besstremyannaya; Sergei Golovan
  3. Optimal Estimation with Complete Subsets of Instruments By Seojeong Lee; Youngki Shin
  4. Likelihood based inference for an Identifiable Fractional Vector Error Correction Model By Federico Carlini; Katarzyna (K.A.) Lasak
  5. Two-step estimation of models between latent classes and external variables By Bakk, Zsuzsa; Kuha, Jouni
  6. Generalized Dynamic Factor Models and Volatilities: Consistency, Rates, and Prediction Intervals By Matteo Barigozzi; Marc Hallin
  7. Inference in Bayesian Proxy-SVARs By Arias, Jonas E.; Rubio-Ramirez, Juan F.; Waggoner, Daniel F.
  8. Nonparametric Estimation of Additive Model with Errors-in-Variables By Hao Dong; Taisuke Otsu
  9. When Can We Determine the Direction of Omitted Variable Bias of OLS Estimators? By Deepankar Basu
  10. Model instability in predictive exchange rate regressions By Hauzenberger, Niko; Huber, Florian
  11. The Effects of Autocorrelation and Number of Repeated Measures on GLMM Robustness with Ordinal Data By Roser Bono; María J. Blanca; Rafael Alarcón; Jaume Arnau
  12. Hurst exponents and delampertized fractional Brownian motions By Matthieu Garcin

  1. By: Francisco (F.) Blasques (VU Amsterdam); Siem Jan (S.J.) Koopman (VU Amsterdam); Marc Nientker (VU Amsterdam)
    Abstract: Locally explosive behavior is observed in many economic and financial time series when bubbles are formed. We introduce a time-varying parameter model that is capable of describing this behavior in time series data. Our proposed model can be used to predict the emergence, existence and burst of bubbles. We adopt a flexible observation driven model specification that allows for different bubble shapes and behavior. We establish stationarity, ergodicity, and bounded moments of the data generated by our model. Furthermore, we obtain the consistency and asymptotic normality of the maximum likelihood estimator. Given the parameter estimates, our filter is capable of extracting the unobserved bubble process from observed data. We study finite-sample properties of our estimator through a Monte Carlo simulation study. Finally, we show that our model compares well with noncausal models in a financial application concerning the Bitcoin/US dollar exchange rate.
    Keywords: bubbles; observation driven models; noncausal models; stationary; ergodic; consistency; asymptotic normality; exchange rates
    JEL: C22 C58 G10
    Date: 2018–11–16
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20180088&r=ecm
  2. By: Galina Besstremyannaya (Centre for Economic and Financial Research at New Economic School); Sergei Golovan (New Economic School)
    Abstract: The note discusses a fallacy in the approach proposed by Ivan Canay (2011, The Econometrics Journal) for constructing a computationally simple two-step estimator in a quantile regression model with quantile-independent fixed effects. We formally prove that the estimator yields incorrect inference for the constant term, owing to a violation of the assumption of an additive expansion of the first-step estimator, which requires the independence of its terms. Our simulations show that Canay's confidence intervals for the constant term are wrong. Finally, we note that finding a sqrt(nT)-consistent within estimator, as required by Canay's procedure, may be problematic: we provide an example of a model for which we formally prove that no such estimator exists.
    Keywords: quantile regression, panel data, fixed effects, inference
    JEL: C21 C23
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:cfr:cefirw:w0248&r=ecm
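    The two-step construction at issue can be sketched as follows: a within (fixed-effects) first step, then a pooled quantile regression on the outcome net of the estimated fixed effects. This is a minimal numpy/scipy illustration on simulated data, not the authors' code; the simulated design, variable names, and the Nelder-Mead minimization of the check loss are our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, T = 200, 10
alpha = rng.normal(size=n)                  # quantile-independent fixed effects
x = rng.normal(size=(n, T))
y = alpha[:, None] + 2.0 * x + rng.normal(size=(n, T))

# Step 1: within (fixed-effects) OLS slope, then recover the fixed effects
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_within = (xd * yd).sum() / (xd ** 2).sum()
alpha_hat = (y - beta_within * x).mean(axis=1)

# Step 2: pooled quantile regression of y_it - alpha_hat_i on x_it
tau = 0.5
yt = (y - alpha_hat[:, None]).ravel()
xt = x.ravel()

def check_loss(b):
    """Koenker-Bassett check loss for intercept b[0] and slope b[1]."""
    u = yt - b[0] - b[1] * xt
    return np.sum(u * (tau - (u < 0)))

res = minimize(check_loss, x0=np.zeros(2), method="Nelder-Mead")
print(res.x)  # intercept and slope; the slope should be near 2
```

    The note's criticism concerns exactly the intercept component: its confidence intervals from this procedure are invalid, even though the point estimate of the slope behaves reasonably.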
  3. By: Seojeong Lee; Youngki Shin
    Abstract: In this paper we propose a two-stage least squares (2SLS) estimator whose first stage is based on the equal-weight average over a complete subset. We derive the approximate mean squared error (MSE), which depends on the size of the complete subset, and characterize the proposed estimator based on the approximate MSE. The size of the complete subset is chosen by minimizing the sample counterpart of the approximate MSE. We show that this method achieves asymptotic optimality. To deal with weak or irrelevant instruments, we generalize the approximate MSE to allow for a possibly growing set of irrelevant instruments, which provides useful guidance in weak-IV environments. Monte Carlo simulations show that the proposed estimator outperforms alternative methods when instruments are correlated with each other and endogeneity is high. As an empirical illustration, we estimate the logistic demand function in Berry, Levinsohn, and Pakes (1995).
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1811.08083&r=ecm
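    The complete-subset first stage can be sketched as follows: average the first-stage fitted values over all instrument subsets of a given size, then use that average as the instrument. A hedged numpy sketch on simulated data; the choice of subset size by minimizing the estimated approximate MSE is omitted, and all names and the design are illustrative, not the authors' code.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, K = 500, 4
Z = rng.normal(size=(n, K))
e = rng.normal(size=n)                       # structural error
x = Z @ np.array([0.6, 0.5, 0.4, 0.3]) + 0.8 * e + rng.normal(size=n)
y = 1.5 * x + e                              # x is endogenous (correlated with e)

def complete_subset_2sls(y, x, Z, k):
    """2SLS with first stage averaged equally over all size-k instrument subsets."""
    fits = [Z[:, list(s)] @ np.linalg.lstsq(Z[:, list(s)], x, rcond=None)[0]
            for s in combinations(range(Z.shape[1]), k)]
    xhat = np.mean(fits, axis=0)             # equal-weight complete-subset fit
    return (xhat @ y) / (xhat @ x)           # IV estimator using xhat as instrument

print(complete_subset_2sls(y, x, Z, 2))      # should be near the true slope 1.5
```

    Because the averaged fitted value is a function of Z alone, it remains a valid instrument; the averaging trades first-stage fit against overfitting bias, which is what the approximate-MSE criterion balances.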
  4. By: Federico Carlini (USI, Lugano); Katarzyna (K.A.) Lasak (University of Amsterdam)
    Abstract: We consider the Fractional Vector Error Correction model proposed in Avarucci (2007), which is characterized by a richer lag structure than the models proposed in Granger (1986) and Johansen (2008, 2009). In particular, we discuss the properties of the model of Avarucci (2007) (FECM) in comparison to the model of Johansen (2008, 2009) (FCVAR). Both models generate the same class of processes, but their properties differ. First, as opposed to the model of Johansen (2008, 2009), the model of Avarucci has a convenient nesting structure, which allows for testing the number of lags and the cointegration rank in exactly the same way as in the standard I(1) cointegration framework of Johansen (1995), and hence might be attractive for econometric practice. Second, we find that the model of Avarucci (2007) is almost free from identification problems, contrary to the models of Johansen (2008, 2009) and Johansen and Nielsen (2012), whose identification problems are discussed in Carlini and Santucci de Magistris (2017). However, due to a larger number of parameters, the estimation of the FECM model of Avarucci (2007) turns out to be more complicated. Therefore, we propose a 4-step estimation procedure for this model, based on the switching algorithm employed in Carlini and Mosconi (2014) together with the GLS procedure of Mosconi and Paruolo (2014). We check the performance of the proposed estimation procedure in finite samples by means of a Monte Carlo experiment and derive the asymptotic distribution of the estimators of all the parameters. The solution of the model was previously derived in Avarucci (2007), while testing for the rank has been discussed in Lasak and Velasco (for cointegration strength >0.5) and Avarucci and Velasco (for cointegration strength
    Keywords: Error correction model; Gaussian VAR model; Fractional Cointegration; Estimation algorithm; Maximum likelihood estimation; Switching Algorithm; Reduced Rank Regression.
    JEL: C13 C32
    Date: 2018–11–16
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20180085&r=ecm
  5. By: Bakk, Zsuzsa; Kuha, Jouni
    Abstract: We consider models which combine latent class measurement models for categorical latent variables with structural regression models for the relationships between the latent classes and observed explanatory and response variables. We propose a two-step method of estimating such models. In its first step the measurement model is estimated alone, and in the second step the parameters of this measurement model are held fixed while the structural model is estimated. Simulation studies and applied examples suggest that the two-step method is an attractive alternative to existing one-step and three-step methods. We derive estimated standard errors for the two-step estimates of the structural model which account for the uncertainty from both steps of the estimation, and show how the method can be implemented in existing software for latent variable modelling.
    JEL: C1
    Date: 2017–11–17
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:85161&r=ecm
  6. By: Matteo Barigozzi; Marc Hallin
    Abstract: Volatilities, in high-dimensional panels of economic time series with a dynamic factor structure on the levels or returns, typically also admit a dynamic factor decomposition. A two-stage dynamic factor model method recovering common and idiosyncratic volatility shocks was therefore proposed in Barigozzi and Hallin (2016). By exploiting this two-stage factor approach, we build one-step-ahead conditional prediction intervals for large n×T panels of returns. We provide consistency and consistency-rate results for the proposed estimators as both n and T tend to infinity. Finally, we apply our methodology to a panel of asset returns belonging to the S&P100 index in order to compute one-step-ahead conditional prediction intervals for the period 2006-2013. A comparison with the componentwise GARCH(1,1) benchmark (which does not take advantage of cross-sectional information) demonstrates the superiority of our approach, which is genuinely multivariate (and high-dimensional), nonparametric, and model-free.
    Keywords: Volatility, Dynamic Factor Models, Prediction intervals, GARCH
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/278905&r=ecm
  7. By: Arias, Jonas E. (Federal Reserve Bank of Philadelphia); Rubio-Ramirez, Juan F. (Federal Reserve Bank of Atlanta); Waggoner, Daniel F. (Federal Reserve Bank of Atlanta)
    Abstract: Motivated by the increasing use of external instruments to identify structural vector autoregressions (SVARs), we develop algorithms for exact finite sample inference in this class of time series models, commonly known as proxy SVARs. Our algorithms make independent draws from the normal-generalized-normal family of conjugate posterior distributions over the structural parameterization of a proxy-SVAR. Importantly, our techniques can handle the case of set identification, and hence they can be used to relax the additional exclusion restrictions unrelated to the external instruments that are often imposed to facilitate inference when more than one instrument is used to identify more than one equation, as in Mertens and Montiel-Olea (2018).
    Keywords: SVARs; External Instruments; Importance Sampler
    JEL: C15 C32
    Date: 2018–11–05
    URL: http://d.repec.org/n?u=RePEc:fip:fedpwp:18-25&r=ecm
  8. By: Hao Dong; Taisuke Otsu
    Abstract: In estimation of nonparametric additive models, conventional methods, such as backfitting and series approximation, cannot be applied when measurement errors are present in covariates. We propose an estimator for such models by extending Horowitz and Mammen's (2004) two-stage estimator to the errors-in-variables case. In the first stage, to adapt to the additive structure, we use a series method together with a ridge approach to deal with the ill-posedness brought by the mismeasurement. The uniform convergence rate for the first-stage estimator is derived. To establish the limiting distribution, we consider the second-stage estimator obtained by one-step backfitting with a deconvolution kernel based on the first-stage estimator.
    Keywords: Additive model, Measurement error, Deconvolution
    JEL: C14 C13
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:600&r=ecm
  9. By: Deepankar Basu (Department of Economics, University of Massachusetts - Amherst)
    Abstract: Omitted variable bias (OVB) of OLS estimators is a serious and ubiquitous problem in social science research. Often researchers use the direction of the bias in substantive arguments or to motivate estimation methods to deal with the bias. This paper offers a geometric interpretation of OVB that highlights the difficulty in ascertaining its sign in any realistic setting and cautions against the use of direction-of-bias arguments. This analysis has implications for comparison of OLS and IV estimators too.
    Keywords: omitted variable bias; ordinary least squares
    JEL: C20
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:ums:papers:2018-16&r=ecm
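    In the textbook two-regressor case the bias has a closed form: omitting z from y = b1*x + b2*z + u makes the OLS slope on x converge to b1 + b2*Cov(x,z)/Var(x). A small simulation (our own illustration, not from the paper) confirms the formula; the paper's geometric argument is that with several included regressors the sign of the bias is no longer determined by a single pairwise covariance.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
z = rng.normal(size=n)
x = 0.7 * z + rng.normal(size=n)           # Cov(x, z) > 0
y = 1.0 * x + 2.0 * z + rng.normal(size=n)

# Short regression omits z: the slope estimates b1 + b2*Cov(x,z)/Var(x)
b_short = np.cov(x, y)[0, 1] / np.var(x)
bias_formula = 2.0 * np.cov(x, z)[0, 1] / np.var(x)
print(b_short, 1.0 + bias_formula)         # the two agree closely; bias is upward here
```

    With b2 > 0 and Cov(x, z) > 0 the short-regression slope is biased upward; flipping either sign flips the bias, which is the kind of direction-of-bias reasoning the paper cautions against in richer settings.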
  10. By: Hauzenberger, Niko (WU Wirtschaftsuniversität Wien); Huber, Florian (University of Salzburg)
    Abstract: In this paper we aim to improve existing empirical exchange rate models by accounting for uncertainty with respect to the underlying structural representation. Within a flexible Bayesian non-linear time series framework, our modeling approach assumes that different regimes are characterized by commonly used structural exchange rate models, with their evolution driven by a Markov process. We assume a time-varying transition probability matrix, with transition probabilities depending on a measure of the monetary policy stance of the central banks in the home and foreign countries. We apply this model to a set of eight exchange rates against the US dollar. In a forecasting exercise, we show that model evidence varies over time, and that a modeling approach which takes this empirical evidence seriously yields improvements in the accuracy of density forecasts for most currency pairs considered.
    Keywords: Empirical exchange rate models; exchange rate fundamentals; Markov switching
    JEL: C30 E32 E52 F31
    Date: 2018–11–21
    URL: http://d.repec.org/n?u=RePEc:ris:sbgwpe:2018_008&r=ecm
  11. By: Roser Bono (University of Barcelona); María J. Blanca (Department of Psychobiology and Behavioral Science Methodology, University of Malaga); Rafael Alarcón (Department of Psychobiology and Behavioral Science Methodology, University of Malaga); Jaume Arnau (Department of Social Psychology and Quantitative Psychology, University of Barcelona)
    Abstract: Longitudinal studies involving ordinal responses are widely conducted in many fields of the education, health and social sciences. When units are observed over time, observations on the same subject may be autocorrelated, so the assumption of independence underlying generalized linear models is violated. Generalized linear mixed models (GLMMs) accommodate repeated measures data for which the usual assumption of independent observations is untenable, and also accommodate a non-normally distributed dependent variable (e.g. a multinomial distribution for ordinal data). Thus, GLMMs constitute a good technique for modelling correlated data and ordinal responses. In this study, for a split-plot design with two groups for the between-subjects factor and five response categories, we investigated empirical Type I error rates in GLMMs. To this end, we used a computer program developed by Wicklin to generate longitudinal ordinal data with SAS/IML. We manipulated the total sample size, the coefficient of variation of the group size, the number of repeated measures, and the values of the autocorrelation coefficient. For each combination, 5,000 replications were performed at a significance level of .05. The GLIMMIX procedure in SAS was used to fit the mixed-effects models for ordinal responses with a multinomial distribution and the Kenward-Roger degrees-of-freedom adjustment for small samples. The simulations showed that the test is robust for the group effect under all conditions analysed. For the time and interaction effects, however, robustness depends on the number of repeated measures and the autocorrelation values. The test tends to be liberal with high autocorrelation, different autocorrelation values in each group, and a large number of repeated measures. To sum up, GLMMs are a good analytical option for correlated ordinal outcomes with few repeated measures, low autocorrelation, and the same autocorrelation between groups. This research was supported by grant PSI2016-78737-P (AEI/FEDER, UE) from the Spanish Ministry of Economy, Industry and Competitiveness.
    Keywords: longitudinal studies, generalized linear mixed models, GLIMMIX, ordinal data, robustness
    JEL: C12 C15 C18
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:7309082&r=ecm
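    The data-generating step, ordinal responses obtained by discretizing an autocorrelated latent process, can be mimicked outside SAS/IML. The following is a numpy analogue (not Wicklin's program; the cutpoints and the stationary AR(1) parameterization are our own illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(4)
n, T, rho = 100, 5, 0.6                     # subjects, repeated measures, autocorrelation

# Stationary AR(1) latent process per subject (unit marginal variance)
eps = rng.normal(size=(n, T))
latent = np.empty((n, T))
latent[:, 0] = eps[:, 0]
for t in range(1, T):
    latent[:, t] = rho * latent[:, t - 1] + np.sqrt(1 - rho**2) * eps[:, t]

# Discretize into five ordered categories via fixed cutpoints
cuts = [-1.2, -0.4, 0.4, 1.2]
ordinal = np.digitize(latent, cuts) + 1     # categories 1..5
print(ordinal.shape, ordinal.min(), ordinal.max())
```

    Fitting a mixed-effects ordinal model to such data (the GLIMMIX step of the study) would then require a dedicated GLMM library; only the simulation design is sketched here.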
  12. By: Matthieu Garcin (Research Center - Léonard de Vinci Pôle Universitaire - De Vinci Research Center)
    Abstract: The inverse Lamperti transform of a fractional Brownian motion is a stationary process. We determine the empirical Hurst exponent of such a composite process with the help of a regression of the log absolute moments of its increments, at various scales, on the corresponding log scales. This perceived Hurst exponent underestimates the Hurst exponent of the underlying fractional Brownian motion. We thus encounter some time series having a perceived Hurst exponent lower than 1/2, but an underlying Hurst exponent higher than 1/2. This paves the way for short- and medium-term forecasting. Indeed, in such series, mean reversion predominates at high scales, whereas persistence is overriding at lower scales. We propose a way to characterize the Hurst horizon, namely a limit scale between these opposite behaviours. We show that the delampertized fractional Brownian motion, which mixes persistence and mean reversion, is relevant for financial time series, in particular for high-frequency foreign exchange rates. In our sample, the empirical Hurst horizon is always above 1 hour and 23 minutes.
    Keywords: fractional Brownian motion, Hurst exponent, Lamperti transform, Ornstein-Uhlenbeck process, foreign exchange rates
    Date: 2018–11–12
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01919754&r=ecm
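    The estimation device described, regressing log absolute moments of increments on log scales, is easy to illustrate. A minimal numpy sketch (our own, with illustrative scale choices; the paper applies it to delampertized fBm, while here plain Brownian motion with H = 1/2 serves as a sanity check):

```python
import numpy as np

rng = np.random.default_rng(3)
# Standard Brownian motion has Hurst exponent H = 1/2
path = np.cumsum(rng.normal(size=100_000))

def hurst(path, scales=(1, 2, 4, 8, 16, 32)):
    """Slope of log mean |increment| against log scale estimates H."""
    logm = [np.log(np.mean(np.abs(path[s:] - path[:-s]))) for s in scales]
    slope, _ = np.polyfit(np.log(scales), logm, 1)
    return slope

print(hurst(path))  # close to 0.5 for Brownian motion
```

    For a delampertized fBm the slope obtained this way is the "perceived" exponent, which the paper shows sits below the Hurst exponent of the underlying fBm; the scale at which behaviour switches is the Hurst horizon.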

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.