nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒03‒12
Twenty-one papers chosen by
Sune Karlsson
Örebro universitet

  1. Inference Without Smoothing for Large Panels with Cross-Sectional and Temporal Dependence By Javier Hidalgo; Marcia M Schafgans
  2. Model misspecification and bias for inverse probability weighting and doubly robust estimators By Waernbaum, Ingeborg; Pazzagli, Laura
  3. Asymptotic Theory for Clustered Samples By Bruce E. Hansen; Seojeong Jay Lee
  4. Using binary paradata to correct for measurement error in survey data analysis By Da Silva, Damião Nóbrega; Skinner, Chris J.; Kim, Jae Kwang
  5. Bayesian Estimation of the Complier Average Causal Effect By van Hasselt, Martijn; Ferland, Timothy; Bray, Jeremy; Aldridge, Arnie
  6. A Bootstrap Approach for Bandwidth Selection in Estimating Conditional Efficiency Measures By Luiza Badin; Cinzia Daraio; Léopold Simar
  7. Asymptotic Theory and Wild Bootstrap Inference with Clustered Errors By Antoine A. Djogbenou; James G. MacKinnon; Morten Ørregaard Nielsen
  8. Instrument-based estimation with binarized treatments: Issues and tests for the exclusion restriction By Eckhoff Andresen, Martin; Huber, Martin
  9. A novel approach to modelling the distribution of financial returns By Yuzhi Cai; Guodong Li
  10. Synthetic Control Estimation Beyond Case Studies: Does the Minimum Wage Reduce Employment? By David Powell
  11. Inference with Correlated Clusters By David Powell
  12. Forecasting Stock Returns: A Predictor-Constrained Approach By Davide Pettenuzzo; Zhiyuan Pan; Yudong Wang
  13. A Class of Generalized Dynamic Correlation Models By He, Zhongfang
  14. Testing for high-dimensional white noise using maximum cross correlations By Chang, Jinyuan; Yao, Qiwei; Zhou, Wen
  15. On the iterated estimation of dynamic discrete choice games By Federico Bugni; Jackson Bunting
  16. Nonparametric identification of unobserved technological heterogeneity in production By Laurens Cherchye; Thomas Demuynck; Bram De Rock; Marijn Verschelde
  17. Tests on asymmetry for ordered categorical variables By Klein, Ingo; Doll, Monika
  18. The threshold GARCH model: estimation and density forecasting for financial returns By Yuzhi Cai; Julian Stander
  19. "Multiple-lock Dynamic Equicorrelations with Realized Measures, Leverage and Endogeneity" By Yuta Kurose; Yasuhiro Omori
  20. Likelihood corrections for two-way models By Koen Jochmans; Taisuke Otsu
  21. Forecasting Inflation Uncertainty in the G7 Countries By Mawuli Segnon; Stelios Bekiros; Bernd Wilfling

  1. By: Javier Hidalgo; Marcia M Schafgans
    Abstract: This paper addresses inference in large panel data models in the presence of both cross-sectional and temporal dependence of unknown form. We are interested in making inferences without relying on the choice of any smoothing parameter, as is the case with the often-employed HAC estimator for the covariance matrix. To that end, we propose a cluster estimator for the asymptotic covariance of the estimators and a valid bootstrap which accommodates the nonparametric nature of both temporal and cross-sectional dependence. Our approach is based on the observation that the spectral representation of the fixed effect panel data model is such that the errors become approximately temporally uncorrelated. Our proposed bootstrap can be viewed as a wild bootstrap in the frequency domain. We present some Monte Carlo simulations to shed light on the small sample performance of our inferential procedure and illustrate our results using an empirical example.
    Keywords: Large panel data models, cross-sectional strong dependence, central limit theorems, clustering, discrete Fourier transform, nonparametric bootstrap algorithms
    JEL: C12 C13 C23
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:597&r=ecm
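    A heavily hedged Python caricature of the idea (not the paper's actual algorithm): after a discrete Fourier transform the residuals are approximately uncorrelated across frequencies, so a wild bootstrap can perturb each Fourier coefficient independently. The function name and the Rademacher multipliers are illustrative assumptions.

      import numpy as np

      def frequency_domain_wild_bootstrap(u, rng):
          """One wild-bootstrap draw of a residual series u in the frequency domain."""
          w = np.fft.rfft(u)                           # DFT of the residuals
          eta = rng.choice([-1.0, 1.0], size=w.shape)  # Rademacher multipliers
          return np.fft.irfft(w * eta, n=len(u))       # back to the time domain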
  2. By: Waernbaum, Ingeborg (IFAU and Department of statistics, Umeå University); Pazzagli, Laura (Division of Statistics, Department of Economics, University of Perugia)
    Abstract: In the causal inference literature a class of semi-parametric estimators is called robust if the estimator has desirable properties under the assumption that at least one of the working models is correctly specified. A standard example is a doubly robust estimator that specifies parametric models both for the propensity score and the outcome regression. When estimating a causal parameter in an observational study, the role of parametric models is often not to be true representations of the data generating process; instead, the motivation for their use is to facilitate the adjustment for confounding, for example by reducing the dimension of the covariate vector, making the assumption of at least one true model unlikely to hold. In this paper we propose a crude analytical approach to study the large sample bias of estimators when all models are assumed to be approximations of the true data generating process, i.e., all models are misspecified. We apply our approach to three prototypical estimators: two inverse probability weighting (IPW) estimators, using a misspecified propensity score model, and a doubly robust (DR) estimator, using misspecified models for the outcome regression and the propensity score. To compare the consequences of the model misspecifications for the estimators, we show conditions under which using normalized weights leads to a smaller bias than a simple IPW estimator. To analyze the question of when the use of two misspecified models is better than one, we derive necessary and sufficient conditions for when the DR estimator has a smaller bias than the simple IPW estimator and when it has a smaller bias than the IPW estimator with normalized weights. For most conditions in the comparisons, the covariance between the propensity score model error and the conditional outcomes plays an important role. The results are illustrated in a simulation study.
    Keywords: average causal effects; comparing biases; propensity score; robustness
    JEL: C14 C18 C52
    Date: 2017–12–09
    URL: http://d.repec.org/n?u=RePEc:hhs:ifauwp:2017_023&r=ecm
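    A minimal Python sketch (not taken from the paper) contrasting the three prototypical estimators compared above: simple IPW, IPW with normalized weights, and the doubly robust (AIPW) form. The fitted propensity scores e and outcome regressions m1, m0 are assumed given; all names are illustrative.

      import numpy as np

      def ipw_simple(y, t, e):
          """Simple (Horvitz-Thompson) IPW estimate of the average causal effect."""
          n = len(y)
          return np.sum(t * y / e) / n - np.sum((1 - t) * y / (1 - e)) / n

      def ipw_normalized(y, t, e):
          """IPW with weights normalized to sum to one within each treatment arm."""
          w1, w0 = t / e, (1 - t) / (1 - e)
          return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

      def dr_aipw(y, t, e, m1, m0):
          """Doubly robust (AIPW) estimate; m1, m0 are fitted outcome regressions."""
          aug1 = m1 + t * (y - m1) / e
          aug0 = m0 + (1 - t) * (y - m0) / (1 - e)
          return np.mean(aug1 - aug0)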
  3. By: Bruce E. Hansen (Department of Economics, University of Wisconsin-Madison); Seojeong Jay Lee (School of Economics, UNSW Business School, UNSW Sydney)
    Abstract: We provide a complete asymptotic distribution theory for clustered data with a large number of groups, generalizing the classic laws of large numbers, uniform laws, central limit theory, and clustered covariance matrix estimation. Our theory allows for clustered observations with heterogeneous and unbounded cluster sizes. Our conditions cleanly nest the classical results for i.n.i.d. observations, in the sense that they specialize to the classical conditions under independent sampling. We use this theory to develop a full asymptotic distribution theory for estimation based on linear least-squares, 2SLS, nonlinear MLE, and nonlinear GMM.
    Keywords: clustered data, law of large numbers, central limit theorem, clustered covariance matrix estimation
    JEL: C12 C13 C31 C33 C36
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:swe:wpaper:2017-18&r=ecm
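    A minimal sketch of the clustered covariance matrix estimator for OLS, the object whose asymptotic justification the paper generalizes; the CR0-type form below is an illustrative assumption.

      import numpy as np

      def ols_cluster_vcov(X, y, cluster_ids):
          """OLS coefficients with a CR0-type cluster-robust covariance matrix."""
          XtX_inv = np.linalg.inv(X.T @ X)
          beta = XtX_inv @ X.T @ y
          u = y - X @ beta
          k = X.shape[1]
          meat = np.zeros((k, k))
          for g in np.unique(cluster_ids):
              sel = cluster_ids == g
              s = X[sel].T @ u[sel]          # score summed within cluster g
              meat += np.outer(s, s)
          return beta, XtX_inv @ meat @ XtX_inv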
  4. By: Da Silva, Damião Nóbrega; Skinner, Chris J.; Kim, Jae Kwang
    Abstract: Paradata refers here to data at unit level on an observed auxiliary variable, not usually of direct scientific interest, which may be informative about the quality of the survey data for the unit. There is increasing interest among survey researchers in how to use such data. Its use to reduce bias from nonresponse has received more attention so far than its use to correct for measurement error. This paper considers the latter with a focus on binary paradata indicating the presence of measurement error. A motivating application concerns inference about a regression model, where earnings is a covariate measured with error and whether a respondent refers to pay records is the paradata variable. We specify a parametric model allowing for either normally or t-distributed measurement errors and discuss the assumptions required to identify the regression coefficients. We propose two estimation approaches which take account of complex survey designs: pseudo-maximum likelihood estimation and parametric fractional imputation. These approaches are assessed in a simulation study and are applied to a regression of a measure of deprivation given earnings and other covariates using British Household Panel Survey data. It is found that the proposed approach to correcting for measurement error reduces bias and improves on the precision of a simple approach based on accurate observations. We outline briefly possible extensions to uses of this approach at earlier stages in the survey process. Supplemental materials are available online.
    Keywords: auxiliary survey information; complex sampling; fractional imputation; pseudo maximum likelihood
    JEL: C1
    Date: 2016–08–18
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:64763&r=ecm
  5. By: van Hasselt, Martijn (University of North Carolina at Greensboro, Department of Economics); Ferland, Timothy (University of North Carolina at Greensboro, Department of Economics); Bray, Jeremy (University of North Carolina at Greensboro, Department of Economics); Aldridge, Arnie (RTI International)
    Abstract: We analyze two cases of randomized experiments with noncompliance. In the first case, individuals in the control group do not have access to the treatment and noncompliance only occurs in the treatment group. In the second case, which is common in clinical studies, individuals in the control group are given a placebo. In this case, noncompliance occurs in both the treatment and control group. We present a Bayesian method-of-moments approach for estimating the complier average causal effect (CACE). This procedure is valid under weak exclusion restrictions. This approach is contrasted with Gibbs sampling and data augmentation in a parametric model. The various procedures are evaluated in a simulation experiment and applied to clinical trial data from the COMBINE study.
    Keywords: Noncompliance; causal effects; method of moments; principal stratification
    JEL: C11 C14 C21 C26
    Date: 2017–12–21
    URL: http://d.repec.org/n?u=RePEc:ris:uncgec:2017_014&r=ecm
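    A minimal sketch of the classical method-of-moments (Wald) estimand underlying the CACE in the first case above, where noncompliance occurs only in the treatment group; the paper's Bayesian approach builds on moment conditions of this kind, and the names below are illustrative.

      import numpy as np

      def cace_wald(y, z, d):
          """CACE = ITT effect on the outcome / ITT effect on treatment take-up."""
          itt_y = y[z == 1].mean() - y[z == 0].mean()
          itt_d = d[z == 1].mean() - d[z == 0].mean()   # share of compliers
          return itt_y / itt_d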
  6. By: Luiza Badin (Department of Applied Mathematics, Bucharest University of Economic Studies and Gh. Mihoc-C. Iacob Institute); Cinzia Daraio (Department of Computer, Control and Management Engineering Antonio Ruberti (DIAG), University of Rome La Sapienza, Rome, Italy); Léopold Simar (Institut de Statistique, Biostatistique et de Sciences Actuarielles, Université Catholique de Louvain)
    Abstract: Conditional efficiency measures are needed when the production process does not depend only on the inputs and outputs, but may be influenced by external factors and/or environmental variables (Z). They are estimated by means of a nonparametric estimator of the conditional distribution function of the inputs and outputs, conditionally on values of Z. This requires smoothing procedures and smoothing parameters, the bandwidths. So far, Least Squares Cross Validation (LSCV) methods have been used, which have been proven to provide bandwidths with optimal rates for estimating conditional distributions. In efficiency analysis, the main interest is in the estimation of the conditional efficiency score, which typically depends on the boundary of the support of the distribution and not on the full conditional distribution. In this paper, we show that the bandwidth rate which is optimal for estimating conditional distributions may not be optimal for the estimation of the efficiency scores. We therefore propose a new approach based on the bootstrap which overcomes these difficulties. We analyze and compare, through Monte Carlo simulations, the performance of LSCV techniques and of our bootstrap approach in finite samples. As expected, our bootstrap approach shows generally better performance and is more robust across the various Monte Carlo scenarios analyzed. We provide in an Appendix the Matlab code performing our experiments.
    Keywords: Data Envelopment Analysis (DEA)/Free Disposal Hull (FDH); Conditional Efficiency ; Bandwidth ; Bootstrap ; Monte Carlo
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:aeg:report:2018-02&r=ecm
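    A minimal sketch of a conditional FDH output-efficiency score for scalar input and output: only units whose external variable Z lies within a bandwidth h of z0 enter the reference set. Choosing h is precisely the problem the paper's bootstrap addresses; the function and its localization rule are illustrative assumptions.

      import numpy as np

      def conditional_fdh_output(x0, y0, z0, X, Y, Z, h):
          """Scalar-input, scalar-output conditional FDH output-efficiency score.
          Scores >= 1; larger values mean the unit lies further below the local frontier."""
          mask = (np.abs(Z - z0) <= h) & (X <= x0)   # localized, dominating reference set
          if not mask.any():
              return np.nan
          return Y[mask].max() / y0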
  7. By: Antoine A. Djogbenou (Queen's University); James G. MacKinnon (Queen's University); Morten Ørregaard Nielsen (Queen's University)
    Abstract: We study asymptotic inference based on cluster-robust variance estimators for regression models with clustered errors, focusing on the wild cluster bootstrap and the ordinary wild bootstrap. We state conditions under which both asymptotic and bootstrap tests and confidence intervals will be asymptotically valid. These conditions put limits on the rates at which the cluster sizes can increase as the number of clusters tends to infinity. To include power in the analysis, we allow the data to be generated under sequences of local alternatives. Under a somewhat stronger set of conditions, we also derive formal Edgeworth expansions for the asymptotic and bootstrap test statistics. Simulation experiments illustrate the theoretical results, and the Edgeworth expansions explain the overrejection of the asymptotic test and shed light on the choice of auxiliary distribution for the wild bootstrap.
    Keywords: clustered data, cluster-robust variance estimator, CRVE, Edgeworth expansion, inference, wild bootstrap, wild cluster bootstrap
    JEL: C15 C21 C23
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1399&r=ecm
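    A minimal sketch of a restricted wild cluster bootstrap test of H0: beta_j = 0 in a linear model with clustered errors, the kind of procedure the paper analyzes; the Rademacher auxiliary draws and the CR0 variance form are illustrative choices.

      import numpy as np

      def wild_cluster_bootstrap_p(X, y, cluster_ids, j, B=999, seed=0):
          """Bootstrap p-value for H0: beta_j = 0 with cluster-robust t-statistics."""
          rng = np.random.default_rng(seed)
          groups = [np.flatnonzero(cluster_ids == g) for g in np.unique(cluster_ids)]
          XtX_inv = np.linalg.inv(X.T @ X)

          def t_stat(yb):
              b = XtX_inv @ X.T @ yb
              u = yb - X @ b
              meat = sum(np.outer(X[g].T @ u[g], X[g].T @ u[g]) for g in groups)
              V = XtX_inv @ meat @ XtX_inv   # CR0 cluster-robust covariance
              return b[j] / np.sqrt(V[j, j])

          # Residuals from the restricted model that imposes beta_j = 0.
          Xr = np.delete(X, j, axis=1)
          u_r = y - Xr @ np.linalg.lstsq(Xr, y, rcond=None)[0]
          y_r = y - u_r                      # restricted fitted values

          t0 = t_stat(y)
          t_boot = np.empty(B)
          for s in range(B):
              v = rng.choice([-1.0, 1.0], size=len(groups))  # one draw per cluster
              yb = y_r.copy()
              for vg, g in zip(v, groups):
                  yb[g] += vg * u_r[g]
              t_boot[s] = t_stat(yb)
          return np.mean(np.abs(t_boot) >= abs(t0))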
  8. By: Eckhoff Andresen, Martin; Huber, Martin
    Abstract: When estimating local average and marginal treatment effects using instrumental variables (IV), multivalued endogenous treatments are frequently binarized based on a specific threshold in treatment support. However, such binarization introduces a violation of the IV exclusion if (i) the IV affects the multivalued treatment within support areas below and/or above the threshold and (ii) such IV-induced changes in the multivalued treatment affect the outcome. We discuss assumptions that satisfy the IV exclusion restriction with the binarized treatment and permit identifying the average effect of (i) the binarized treatment and (ii) unit-level increases in the original multivalued treatment among specific compliers. We derive testable implications of these assumptions and propose tests, which we apply to the estimation of the returns to (binary) college graduation instrumented by college proximity.
    Keywords: Instrumental variable; LATE; binarized treatment; test; exclusion restriction; MTE
    JEL: C12 C21 C26
    Date: 2018–03–01
    URL: http://d.repec.org/n?u=RePEc:fri:fribow:fribow00492&r=ecm
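    A minimal sketch of the binarization the paper scrutinizes: a Wald/IV estimate that treats D* = 1[D >= cutoff] as the endogenous regressor with a binary instrument Z. Whether this identifies a meaningful effect is exactly what the proposed exclusion tests probe; names and the cutoff are illustrative.

      import numpy as np

      def wald_iv_binarized(y, d_multi, z, cutoff):
          """Wald/IV estimate using the binarized treatment 1[D >= cutoff]."""
          d_bin = (d_multi >= cutoff).astype(float)
          num = y[z == 1].mean() - y[z == 0].mean()
          den = d_bin[z == 1].mean() - d_bin[z == 0].mean()
          return num / den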
  9. By: Yuzhi Cai (School of Management, Swansea University); Guodong Li (University of Hong Kong)
    Abstract: We develop a novel quantile function threshold GARCH model for studying the distribution function, rather than the volatility function, of financial returns that follow a threshold GARCH model. We propose a Bayesian method that performs estimation and forecasting simultaneously, which allows us to handle multiple thresholds easily and ensures that the forecasts take account of the variation in the model parameters. We apply the method to simulated data and Nasdaq returns. We show that our model is robust to model specification errors and outperforms some commonly used threshold GARCH models.
    Keywords: Density forecasts, financial returns, quantile function, threshold GARCH
    JEL: C10 C51 C53
    Date: 2018–02–27
    URL: http://d.repec.org/n?u=RePEc:swn:wpaper:2018-22&r=ecm
  10. By: David Powell
    Abstract: Panel data are often used in empirical work to account for additive fixed time and unit effects. More recently, the synthetic control estimator relaxes the assumption of additive fixed effects for case studies, using pre-treatment outcomes to create a weighted average of other units which best approximates the treated unit. The synthetic control estimator is currently limited to case studies in which the treatment variable can be represented by a single indicator variable. Applying this estimator more generally, such as to applications with multiple treatment variables or a continuous treatment variable, is problematic. This paper generalizes the case study synthetic control estimator to permit estimation of the effect of multiple treatment variables, which can be discrete or continuous. The estimator jointly estimates the impact of the treatment variables and creates a synthetic control for each unit. Additive fixed effect models are a special case of this estimator. Because the number of units in panel data and synthetic control applications is often small, I discuss an inference procedure for fixed N. The estimation technique generates correlations across clusters, so the inference procedure also accounts for this dependence. Simulations show that the estimator works well even when additive fixed effect models do not. I estimate the impact of the minimum wage on the employment rate of teenagers. I estimate an elasticity of -0.44, substantially larger than estimates generated using additive fixed effect models, and reject the null hypothesis that there is no effect.
    Keywords: synthetic control estimation, finite inference, minimum wage, teen employment, panel data, interactive fixed effects, correlated clusters
    JEL: C33 J23 J31
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:ran:wpaper:wr-1142&r=ecm
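    A minimal sketch of the classic case-study synthetic control step that the paper generalizes: nonnegative weights summing to one are chosen so the donors' pre-treatment outcomes best match the treated unit's. The simple least-squares criterion is an illustrative assumption.

      import numpy as np
      from scipy.optimize import minimize

      def synth_weights(Y0_pre, y1_pre):
          """Y0_pre: T_pre x J donor outcomes; y1_pre: length-T_pre treated outcomes."""
          J = Y0_pre.shape[1]
          obj = lambda w: np.sum((y1_pre - Y0_pre @ w) ** 2)
          cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
          res = minimize(obj, np.full(J, 1.0 / J), bounds=[(0.0, 1.0)] * J,
                         constraints=cons, method='SLSQP')
          return res.x   # nonnegative weights summing to one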
  11. By: David Powell
    Abstract: This paper introduces a method which permits valid inference given a finite number of heterogeneous, correlated clusters. Many inference methods assume clusters are asymptotically independent or model dependence across clusters as a function of a distance metric. With panel data, these restrictions are unnecessary. This paper relies on a test statistic using the mean of the cluster-specific scores normalized by the variance and simulating the distribution of this statistic. To account for cross-cluster dependence, the relationship between each cluster is estimated, permitting the independent component of each cluster to be isolated. The method is simple to implement, can be employed for linear and nonlinear estimators, places no restrictions on the strength of the correlations across clusters, and does not require prior knowledge of which clusters are correlated or even the existence of independent clusters. In simulations, the procedure rejects at the appropriate rate even in the presence of highly-correlated clusters.
    Keywords: Finite Inference, Correlated Clusters, Fixed Effects, Panel Data, Hypothesis
    JEL: C12 C21 C23 C33
    Date: 2017–06
    URL: http://d.repec.org/n?u=RePEc:ran:wpaper:wr-1137-1&r=ecm
  12. By: Davide Pettenuzzo (Brandeis University); Zhiyuan Pan (Southwestern University of Finance and Economics, Institute of Chinese Financial Studies); Yudong Wang (School of Economics and Management, Nanjing University of Science and Technology)
    Abstract: We develop a novel method to impose constraints on univariate predictive regressions of stock returns. Unlike previous approaches in the literature, we implement our constraints directly on the predictor, setting it at zero whenever its value falls below the variable's past 12-month high. Empirically, we find that relative to standard unconstrained predictive regressions, our approach leads to significantly larger forecasting gains, both in statistical and economic terms. We also show how a simple equal-weighted combination of the constrained forecasts leads to further improvements in forecast accuracy, with predictions that are more precise than those obtained using either the Campbell and Thompson (2008) or the Pettenuzzo, Timmermann, and Valkanov (2014) methods. Subsample analysis and a large battery of robustness checks confirm that these findings are robust to the presence of model instabilities and structural breaks.
    Keywords: Equity premium; Predictive regressions; Predictor constraints; 24-month high and low; Model combinations
    JEL: C11 C22 G11 G12
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:brd:wpaper:116r&r=ecm
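    A minimal sketch of the constraint described above: the predictor is set to zero whenever its value falls below its own past 12-month high. The exact windowing convention is an illustrative assumption.

      import numpy as np

      def constrain_predictor(x, window=12):
          """Set x_t to zero when it is below the maximum of the previous `window` values."""
          x = np.asarray(x, dtype=float)
          out = x.copy()
          for t in range(len(x)):
              past = x[max(0, t - window):t]
              if past.size and x[t] < past.max():
                  out[t] = 0.0
          return out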
  13. By: He, Zhongfang
    Abstract: This paper proposes a class of parametric correlation models that apply a two-layer autoregressive-moving-average structure to the dynamics of correlation matrices. The proposed model contains the Dynamic Conditional Correlation model of Engle (2002) and the Varying Correlation model of Tse and Tsui (2002) as special cases and offers greater flexibility in a parsimonious way. Performance of the proposed model is illustrated in a simulation exercise and an application to the U.S. stock indices.
    Keywords: ARMA, Bayes, MCMC, multivariate GARCH, time series
    JEL: C01 C11 C13 C58
    Date: 2018–02–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:84820&r=ecm
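    A minimal sketch of the DCC(1,1) recursion of Engle (2002), the special case nested by the paper's two-layer ARMA correlation structure; E holds standardized residuals and S is their unconditional correlation matrix.

      import numpy as np

      def dcc_correlations(E, a, b):
          """E: T x N standardized residuals; returns the T conditional correlation matrices."""
          T, N = E.shape
          S = np.corrcoef(E, rowvar=False)   # unconditional correlation target
          Q = S.copy()
          R = np.empty((T, N, N))
          for t in range(T):
              d = 1.0 / np.sqrt(np.diag(Q))
              R[t] = Q * np.outer(d, d)      # rescale Q into a correlation matrix
              e = E[t]
              Q = (1 - a - b) * S + a * np.outer(e, e) + b * Q
          return R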
  14. By: Chang, Jinyuan; Yao, Qiwei; Zhou, Wen
    Abstract: We propose a new omnibus test for vector white noise using the maximum absolute autocorrelations and cross-correlations of the component series. Based on an approximation by the L∞-norm of a normal random vector, the critical value of the test can be evaluated by bootstrapping from a multivariate normal distribution. In contrast to the conventional white noise test, the new method is proved to be valid for testing the departure from white noise that is not independent and identically distributed. We illustrate the accuracy and the power of the proposed test by simulation, which also shows that the new test outperforms several commonly used methods including, for example, the Lagrange multiplier test and the multivariate Box–Pierce portmanteau tests, especially when the dimension of the time series is high in relation to the sample size. The numerical results also indicate that the performance of the new test can be further enhanced when it is applied to pre-transformed data obtained via the time series principal component analysis proposed by Chang, Guo and Yao (arXiv:1410.2323). The proposed procedures have been implemented in an R package.
    Keywords: autocorrelation; normal approximation; parametric bootstrap; portmanteau test; time series principal component analysis; vector white noise
    JEL: C1
    Date: 2017–02–18
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:68531&r=ecm
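    A minimal sketch of the test statistic: the maximum absolute auto- and cross-correlation of the component series over lags 1 to K. The paper's critical value comes from a multivariate-normal bootstrap approximation to this maximum, which is not reproduced here.

      import numpy as np

      def max_cross_correlation(X, K):
          """X: T x p multivariate series; max |corr| between components over lags 1..K."""
          T, p = X.shape
          Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each component
          stat = 0.0
          for k in range(1, K + 1):
              C = Z[k:].T @ Z[:-k] / T               # lag-k (cross-)correlation matrix
              stat = max(stat, np.abs(C).max())
          return stat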
  15. By: Federico Bugni; Jackson Bunting
    Abstract: We study the asymptotic properties of a class of estimators of the structural parameters in dynamic discrete choice games. We consider K-stage policy iteration (PI) estimators, where K denotes the number of policy iterations employed in the estimation. This class nests several estimators proposed in the literature. By considering a "maximum likelihood" criterion function, our estimator becomes the K-ML estimator in Aguirregabiria and Mira (2007). By considering a "minimum distance" criterion function, it defines a new K-MD estimator, which is an iterative version of the estimators in Pesendorfer and Schmidt-Dengler (2008). First, we establish that the K-ML estimator is consistent and asymptotically normal for any K. This complements findings in Aguirregabiria and Mira (2007). Furthermore, we show that the asymptotic variance of the K-ML estimator can exhibit arbitrary patterns as a function of K. Second, we establish that the K-MD estimator is consistent and asymptotically normal for any K. For a specific weight matrix, the K-MD estimator has the same asymptotic distribution as the K-ML estimator. Our main result provides an optimal sequence of weight matrices for the K-MD estimator and shows that the optimally weighted K-MD estimator has an asymptotic distribution that is invariant to K. This new result is especially unexpected given the findings in Aguirregabiria and Mira (2007) for K-ML estimators. Our main result implies two new and important corollaries about the optimal 1-MD estimator (derived by Pesendorfer and Schmidt-Dengler (2008)). First, the optimal 1-MD estimator is optimal in the class of K-MD estimators for all K. In other words, additional policy iterations do not provide asymptotic efficiency gains relative to the optimal 1-MD estimator. Second, the optimal 1-MD estimator is asymptotically at least as efficient as any K-ML estimator for all K.
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1802.06665&r=ecm
  16. By: Laurens Cherchye (Department of Economics, University of Leuven. E. Sabbelaan 53, 8500 Kortrijk, Belgium); Thomas Demuynck (ECARES, Université Libre de Bruxelles. avenue F. D. Roosevelt 50, CP 114, 1050 Brussels, Belgium); Bram De Rock (ECARES, Université Libre de Bruxelles. avenue F. D. Roosevelt 50, CP 114, 1050 Brussels, Belgium); Marijn Verschelde (Department of Economics and Quantitative Methods, IÉSEG School of Management, LEM (UMR-CNRS 9221), and Department of Economics, KU Leuven)
    Abstract: We propose a novel nonparametric method for the structural identification of unobserved technological heterogeneity in production. We assume cost minimization as the firms' behavioral objective, and we model unobserved heterogeneity as an unobserved productivity factor on which firms condition the input demand of the observed inputs. Our model of unobserved technological differences can equivalently be represented in terms of unobserved "latent capital" that guarantees data consistency with our behavioral assumption, and we argue that this avoids a simultaneity bias in a natural way. Our empirical application to Belgian manufacturing data shows that our method allows for drawing strong and robust conclusions, despite its nonparametric orientation. For example, our results pinpoint a clear link between international exposure and technological heterogeneity and show that, in the considered sectors, primary inputs are substituted for materials rather than for technology.
    Keywords: production behavior, unobserved heterogeneity, cost minimization, nonparametric identification, simultaneity bias, latent capital, manufacturing
    JEL: C14 D21 D22 D24
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:nbb:reswpp:201802-335&r=ecm
  17. By: Klein, Ingo; Doll, Monika
    Abstract: Skewness is a well-established statistical concept for continuous and, to a lesser extent, for discrete quantitative statistical variables. However, for ordered categorical variables almost no literature concerning skewness exists, although this type of variable is common in the behavioral, educational, and social sciences. Suitable measures of skewness for ordered categorical variables have to be invariant with respect to the group of strictly increasing, continuous transformations. Therefore, they have to depend on the corresponding maximal invariants. Based on these maximal invariants, we propose a new class of skewness functionals, show that members of this class preserve a suitable ordering of skewness, and derive the asymptotic distribution of the corresponding skewness statistic. Finally, we show the good power behavior of the corresponding skewness tests and illustrate these tests using real data examples.
    Keywords: Ordered Categorical Variables, Skewness Analysis, Skewness Ordering, Maximal Invariants
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:zbw:iwqwdp:032018&r=ecm
  18. By: Yuzhi Cai (School of Management, Swansea University); Julian Stander (Plymouth University)
    Abstract: This paper develops a novel density forecasting method for financial time series following a threshold GARCH model that does not require the estimation of the model itself. Instead, Bayesian inference is performed about an induced multiple threshold one-step ahead value-at-risk process at a single quantile level. This is achieved by a quasi-likelihood approach that uses quantile information. We describe simulation studies that provide insight into our method and illustrate it using empirical work on market returns. The results show that our forecasting method outperforms some benchmark models for density forecasting of financial returns.
    Keywords: Density forecasting, multiple thresholds, one-step ahead value-at-risk (VaR), quantile regression, quasi-likelihood.
    JEL: C1 C5
    Date: 2018–02–27
    URL: http://d.repec.org/n?u=RePEc:swn:wpaper:2018-23&r=ecm
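    A minimal sketch of a two-regime threshold GARCH(1,1) volatility recursion of the kind whose density forecasts the paper studies; the GJR-type parameterization and Gaussian innovations are illustrative assumptions.

      import numpy as np

      def tgarch_simulate(T, omega, alpha, alpha_neg, beta, rng):
          """sigma2_t = omega + (alpha + alpha_neg*1[r_{t-1}<0]) * r_{t-1}**2 + beta*sigma2_{t-1}."""
          r = np.zeros(T)
          s2 = omega / (1.0 - alpha - beta)   # rough starting value (needs alpha + beta < 1)
          for t in range(1, T):
              s2 = omega + (alpha + alpha_neg * (r[t-1] < 0.0)) * r[t-1]**2 + beta * s2
              r[t] = np.sqrt(s2) * rng.standard_normal()
          return r

      # Example: returns = tgarch_simulate(1000, 0.05, 0.05, 0.08, 0.9, np.random.default_rng(1))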
  19. By: Yuta Kurose (Osaka University); Yasuhiro Omori (Faculty of Economics, The University of Tokyo)
    Abstract: The single equicorrelation structure among several daily asset returns is promising and attractive to reduce the number of parameters in multivariate stochastic volatility models. However, such an assumption may not be realistic as the number of assets increases, for example, in portfolio optimization. As a solution to this oversimplification, the multiple-block equicorrelation structure is proposed for high dimensional financial time series, where we assume common correlations within a group of asset returns, but allow different correlations for different groups. The realized volatilities and realized correlations are also jointly modelled to obtain stable and accurate estimates of parameters, latent variables and leverage effects. Using a state space representation, we describe an efficient estimation method based on Markov chain Monte Carlo simulation. Illustrative examples are given using simulated data, and empirical studies using U.S. daily stock returns data show that our proposed model outperforms other competing models in portfolio performance.
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2018cf1075&r=ecm
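    A minimal sketch of a multiple-block equicorrelation matrix as assumed in the paper: a common correlation within each group of assets and a possibly different correlation for each pair of groups; the construction below is illustrative.

      import numpy as np

      def block_equicorr(sizes, within, between):
          """sizes: block sizes; within[i]: correlation inside block i;
          between[i][j]: correlation between blocks i and j (i != j)."""
          idx = np.repeat(np.arange(len(sizes)), sizes)
          n = idx.size
          R = np.empty((n, n))
          for a in range(n):
              for c in range(n):
                  i, j = idx[a], idx[c]
                  R[a, c] = 1.0 if a == c else (within[i] if i == j else between[i][j])
          return R

      # Example: two blocks of sizes 2 and 3.
      R = block_equicorr([2, 3], within=[0.8, 0.6],
                         between=[[None, 0.3], [0.3, None]])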
  20. By: Koen Jochmans; Taisuke Otsu
    Abstract: The use of two-way fixed-effect models is widespread. The presence of incidental parameter bias, however, invalidates statistical inference based on the likelihood. In this paper we consider modifications to the (profile) likelihood that yield asymptotically unbiased estimators as well as likelihood-ratio and score tests with correct size. The modifications are widely applicable and easy to implement. Our examples illustrate that the modifications can lead to dramatic improvements relative to the maximum likelihood method both in terms of point estimation and inference.
    Keywords: asymptotic bias, bias correction, fixed effects, information bias, modified profile likelihood, panel data, MCMC, penalization, rectangular-array asymptotics
    JEL: C12 C14
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:598&r=ecm
  21. By: Mawuli Segnon; Stelios Bekiros; Bernd Wilfling
    Abstract: There is substantial evidence that inflation rates are characterized by long memory and nonlinearities. In this paper, we introduce a long-memory Smooth Transition AutoRegressive Fractionally Integrated Moving Average-Markov Switching Multifractal specification [STARFIMA(p, d, q)-MSM(k)] for modeling and forecasting inflation uncertainty. We first provide the statistical properties of the process and investigate the finite-sample properties of the maximum likelihood estimators through simulation. Second, we evaluate the out-of-sample forecast performance of the model in forecasting inflation uncertainty in the G7 countries. Our empirical analysis demonstrates the superiority of the new model over the alternative STARFIMA(p, d, q)-GARCH-type models in forecasting inflation uncertainty.
    Keywords: Inflation uncertainty, Smooth transition, Multifractal processes, GARCH processes
    JEL: C22 E31
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:cqe:wpaper:7118&r=ecm

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.