nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒11‒26
eighteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Sparse Bayesian vector autoregressions in huge dimensions By Gregor Kastner; Florian Huber
  2. Efficient Estimation of Linear Panel Data Models with Sample Selection and Fixed Effects By Chirok Han; Goeun Lee
  3. Principal Components and Regularized Estimation of Factor Models By Jushan Bai; Serena Ng
  4. Semiparametric Estimation of Structural Functions in Nonseparable Triangular Models By Victor Chernozhukov; Iván Fernández-Val; Whitney Newey; Sami Stouli; Francis Vella
  5. Optimal response and covariate-adaptive biased-coin designs for clinical trials with continuous multivariate or longitudinal responses By Atkinson, Anthony C.; Biswas, Atanu
  6. Posterior Means and Precisions of the Coefficients in Linear Models with Highly Collinear Regressors By M Hashem Pesaran; Ron P Smith
  7. Inference on distribution functions under measurement error By Karun Adusumilli; Taisuke Otsu; Yoon-Jae Whang
  8. Relative error accurate statistic based on nonparametric likelihood By Lorenzo Camponovo; Taisuke Otsu
  9. Common Factors, Trends, and Cycles in Large Datasets By Matteo Barigozzi; Matteo Luciani
  10. Forecasting Mortality: Some Recent Developments By Taku Yamamoto; Hiroaki Chigira
  11. A new multivariate nonlinear time series model for portfolio risk measurement: the threshold copula-based TAR approach By Wong, Shiu Fung; Tong, Howell; Siu, Tak Kuen; Lu, Zudi
  12. A Generalized Factor Model with Local Factors By Simon Freyaldenhoven
  13. Power in High-Dimensional Testing Problems By Anders Bredahl Kock; David Preinerstorfer
  14. A New Approach Toward Detecting Structural Breaks in Vector Autoregressive Models By Florian Huber; Gregor Kastner; Martin Feldkircher
  15. On group comparisons with logistic regression models By Kuha, Jouni; Mills, Colin
  16. Creaming and the depletion of resources: A Bayesian data analysis By Lillestøl, Jostein; Sinding-Larsen, Richard
  17. Spurious Principal Components By Franses, Ph.H.B.F.; Janssens, E.
  18. Modelling Occasionally Binding Constraints Using Regime-Switching By Andrew Binning; Junior Maih

  1. By: Gregor Kastner; Florian Huber
    Abstract: We develop a Bayesian vector autoregressive (VAR) model that is capable of handling vast dimensional information sets. Three features are introduced to permit reliable estimation of the model. First, we assume that the reduced-form errors in the VAR feature a factor stochastic volatility structure, allowing for conditional equation-by-equation estimation. Second, we apply a Dirichlet-Laplace prior to the VAR coefficients to cure the curse of dimensionality. Finally, since posterior inference requires simulation from the joint posterior distribution, we utilize recent innovations for sampling efficiently from high-dimensional multivariate Gaussian distributions, which improve upon existing algorithms by large margins. In the empirical exercise we apply the model to US data and evaluate its forecasting capabilities. (A schematic sketch of such a sampler follows this entry.)
    Date: 2017–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1704.03239&r=ecm
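The abstract does not name the Gaussian sampler. One standard algorithm for exactly this task, and a plausible candidate here (an assumption, not confirmed by the abstract), is the fast sampler of Bhattacharya, Chakraborty and Mallick (2016) for posteriors of the form N(A^{-1} Phi' alpha, A^{-1}) with A = Phi'Phi + D^{-1}, which arise under shrinkage priors such as the Dirichlet-Laplace. A minimal sketch:

```python
import numpy as np

def fast_gaussian_sample(Phi, alpha, d, rng):
    """Draw beta ~ N(A^{-1} Phi' alpha, A^{-1}) with A = Phi'Phi + diag(1/d),
    in O(n^2 p) time instead of the O(p^3) Cholesky of A
    (Bhattacharya, Chakraborty and Mallick, 2016, Biometrika)."""
    n, p = Phi.shape
    u = np.sqrt(d) * rng.normal(size=p)       # u ~ N(0, D), D = diag(d)
    v = Phi @ u + rng.normal(size=n)          # v ~ N(Phi u, I_n)
    M = (Phi * d) @ Phi.T + np.eye(n)         # Phi D Phi' + I_n, only n x n
    w = np.linalg.solve(M, alpha - v)
    return u + d * (Phi.T @ w)

rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 2000))             # n = 50 obs, p = 2000 coefficients
beta = fast_gaussian_sample(Phi, rng.normal(size=50), np.full(2000, 0.1), rng)
```

When p greatly exceeds n, as in huge-dimensional VARs, the n-by-n solve makes each equation-by-equation draw cheap.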
  2. By: Chirok Han (Department of Economics, Korea University, Seoul, Republic of Korea); Goeun Lee (Department of Economics, Korea University, Seoul, Republic of Korea)
    Abstract: For linear panel data models with endogenous selectivity, the popular pooled ordinary least squares estimator with bias correction and its minimum distance variant can suffer from severe efficiency loss in the presence of large random effects. To resolve this problem, we algebraically derive an efficient estimator based on the moment restrictions used by pooled ordinary least squares and make the estimator feasible under the conventional error-component assumption. The efficient estimation involves heavy computation, so we propose a convenient suboptimal estimator based on a novel common weighting transformation. We also consider partial and full aggregation of information in pairwise differences, where unobserved fixed effects are completely eliminated. Efficient estimation based on pairwise differences is discussed, and a computationally affordable method of estimating nuisance higher-order moments is proposed. Analytic standard errors are provided for all considered estimators. Simulations suggest that the convenient suboptimal estimator and the fully aggregated pairwise-differencing estimator perform remarkably well. The methods are applied to estimating an earnings equation for married women using the Korean Labor and Income Panel Study data.
    Keywords: Fixed effects, selection bias correction, efficiency, correlated random effects, pairwise differencing
    JEL: C23
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:iek:wpaper:1707&r=ecm
  3. By: Jushan Bai; Serena Ng
    Abstract: It is known that the common factors in a large panel of data can be consistently estimated by the method of principal components, and principal components can be constructed by iterative least squares regressions. Replacing least squares with ridge regressions turns out to have the effect of shrinking the singular values of the common component and possibly reducing its rank. The method is used in the machine learning literature to recover low-rank matrices. We study the procedure from the perspective of estimating a minimum-rank approximate factor model. We show that the constrained factor estimates are biased but can be more efficient in terms of mean-squared errors. Rank consideration suggests a data-dependent penalty for selecting the number of factors. The new criterion is more conservative in cases where the nominal number of factors is inflated by the presence of weak factors or large measurement noise. The framework is extended to incorporate a priori linear constraints on the loadings. We provide asymptotic results that can be used to test economic hypotheses. (A sketch of the singular-value shrinkage idea follows this entry.)
    Date: 2017–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1708.08137&r=ecm
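A closed-form way to see the shrinkage effect mentioned in the abstract: in the low-rank matrix recovery literature, soft-thresholding the singular values (the proximal operator of the nuclear norm) shrinks the common component and can reduce its rank. This sketch illustrates that operator only; the ridge iteration studied in the paper induces a related but not identical shrinkage.

```python
import numpy as np

def soft_threshold_svd(X, tau):
    """Shrink the singular values of X by tau and drop those that hit zero;
    the surviving rank is an implied number of factors."""
    U, s, Vt = np.linalg.svd(np.asarray(X, dtype=float), full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    k = int((s_shrunk > 0).sum())
    common = (U[:, :k] * s_shrunk[:k]) @ Vt[:k]   # shrunken common component
    return common, k
```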
  4. By: Victor Chernozhukov; Iván Fernández-Val; Whitney Newey; Sami Stouli; Francis Vella
    Abstract: This paper introduces two classes of semiparametric triangular systems with nonadditively separable unobserved heterogeneity. They are based on distribution and quantile regression modeling of the reduced-form conditional distributions of the endogenous variables. We show that these models are flexible and identify the average, distribution and quantile structural functions using a control function approach that does not require a large support condition. We propose a computationally attractive three-stage procedure to estimate the structural functions where the first two stages consist of quantile or distribution regressions. We provide asymptotic theory and uniform inference methods for each stage. In particular, we derive functional central limit theorems and bootstrap functional central limit theorems for the distribution regression estimators of the structural functions. We illustrate the implementation and applicability of our methods with numerical simulations and an empirical application to demand analysis. (A stripped-down two-stage sketch of the control function approach follows this entry.)
    Keywords: Structural functions, nonseparable models, control function, quantile and distribution regression, semiparametric estimation, uniform inference.
    Date: 2017–11–08
    URL: http://d.repec.org/n?u=RePEc:bri:uobdis:17/690&r=ecm
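A stripped-down illustration of the control function idea behind the paper's first two stages, on simulated data with parameters chosen purely for brevity: the conditional cdf of the endogenous regressor is estimated by distribution regression (probits over a grid of thresholds), its value at each observation serves as the control variable, and the outcome is then fit by quantile regression on the regressor and the control. The paper's third stage, which integrates over the control variable to recover structural functions, is omitted.

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=n)                        # instrument
v = rng.uniform(size=n)                       # first-stage heterogeneity
x = z + norm.ppf(v)                           # endogenous regressor
y = x + 0.8 * (v - 0.5) + 0.6 * rng.normal(size=n)

# stage 1: distribution regression of x on z over a threshold grid,
# giving the control variable V_i = F_hat(x_i | z_i)
Z = sm.add_constant(z)
grid = np.quantile(x, np.linspace(0.05, 0.95, 19))
cdf = np.column_stack(
    [sm.Probit((x <= c).astype(float), Z).fit(disp=0).predict(Z)
     for c in grid])
vhat = np.array([np.interp(x[i], grid, cdf[i]) for i in range(n)])

# stage 2: quantile regression of y on x and the control variable
W = sm.add_constant(np.column_stack([x, vhat]))
print(QuantReg(y, W).fit(q=0.5).params)       # slope on x is approx 1
```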
  5. By: Atkinson, Anthony C.; Biswas, Atanu
    Abstract: Adaptive randomization of the sequential construction of optimum experimental designs is used to derive biased-coin designs for longitudinal clinical trials with continuous responses. The designs, coming from a very general rule, target pre-specified allocation proportions for the ranked treatment effects. Many of the properties of the designs are similar to those of well-understood designs for univariate responses. A numerical study illustrates this similarity in a comparison of four designs for longitudinal trials. Designs for multivariate responses can likewise be found, requiring only the appropriate information matrix. Some new results in the theory of optimum experimental design for multivariate responses are presented. (A sketch of the classical biased-coin rule that these designs generalize follows this entry.)
    Keywords: biased-coin design; covariate balance; effective number of observations; ethical allocation; equivalence theorem; multivariate DA-optimality; multivariate loss; power skewed allocation
    JEL: C1
    Date: 2017–09–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:66761&r=ecm
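The paper's designs are far more general (covariate balance, target allocation proportions, multivariate and longitudinal responses), but the basic mechanism they extend is Efron's (1971) biased coin, sketched here for two treatments:

```python
import numpy as np

def efron_biased_coin(n, p=2/3, rng=None):
    """Assign each new patient to the currently under-represented arm with
    probability p (> 1/2), and with probability 1/2 when balanced."""
    rng = rng or np.random.default_rng()
    counts = [0, 0]
    alloc = []
    for _ in range(n):
        if counts[0] == counts[1]:
            prob0 = 0.5
        else:
            prob0 = p if counts[0] < counts[1] else 1 - p
        arm = 0 if rng.uniform() < prob0 else 1
        counts[arm] += 1
        alloc.append(arm)
    return alloc
```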
  6. By: M Hashem Pesaran (University of Southern California; Trinity College, Cambridge); Ron P Smith (Birkbeck, University of London)
    Abstract: When there is exact collinearity between regressors, their individual coefficients are not identified, but given an informative prior their Bayesian posterior means are well defined. The case of high but not exact collinearity is more complicated, but similar results follow. Just as exact collinearity causes non-identification of the parameters, high collinearity can be viewed as weak identification of the parameters, which we represent, in line with the weak instrument literature, by the correlation matrix being of full rank for a finite sample size T but converging to a rank-deficient matrix as T goes to infinity. This paper examines the asymptotic behaviour of the posterior mean and precision of the parameters of a linear regression model for both exactly and highly collinear regressors. We show that in both cases the posterior mean remains sensitive to the choice of prior means no matter how large the sample, and that the precision rises at a slower rate than the sample size. In the highly collinear case, the posterior means converge to normally distributed random variables whose mean and variance depend on the priors for the coefficients and precision. The distribution degenerates to fixed points for either exact collinearity or strong identification. The analysis also suggests a diagnostic statistic for the highly collinear case, which is illustrated with an empirical example. (A small numerical illustration follows this entry.)
    Keywords: Bayesian identification, multicollinear regressions, weakly identified regression coefficients, highly collinear regressors.
    JEL: C11 C18
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:bbk:bbkcam:1707&r=ecm
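A small numerical illustration of the paper's main point, with invented numbers: under near-exact collinearity only the sum of the two coefficients is well identified, so the posterior mean tracks the prior mean in the collinear direction even for very large T.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000
x1 = rng.normal(size=T)
x2 = x1 + 1e-4 * rng.normal(size=T)          # almost exactly collinear
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(size=T)             # true coefficients (1, 1)

def posterior_mean(mu0, tau2=1.0, sigma2=1.0):
    """Posterior mean under a N(mu0, tau2 * I) prior, with sigma2 known."""
    A = np.eye(2) / tau2 + X.T @ X / sigma2
    b = mu0 / tau2 + X.T @ y / sigma2
    return np.linalg.solve(A, b)

print(posterior_mean(np.array([0.0, 0.0])))  # approx (1, 1)
print(posterior_mean(np.array([2.0, 0.0])))  # approx (2, 0): prior-driven split
# The sum of the two estimates is approx 2 in both cases; how it is split
# between the coefficients is determined by the prior, not the data.
```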
  7. By: Karun Adusumilli; Taisuke Otsu; Yoon-Jae Whang
    Abstract: This paper is concerned with inference on the cumulative distribution function (cdf) F of X* in the classical measurement error model X = X* + ε. We show validity of asymptotic and bootstrap approximations for the distribution of the sup-norm deviation between the deconvolution cdf estimator of Hall and Lahiri (2008) and F. We allow the density of the measurement error ε to be ordinary or super smooth, or to be estimated by repeated measurements. Our approximation results are applicable to various contexts, such as confidence bands for F and its quantiles, and to various cdf-based tests, such as goodness-of-fit tests for parametric models of densities, two-sample homogeneity tests, and tests for stochastic dominance. Simulation and real data examples illustrate the satisfactory performance of the proposed methods.
    Keywords: Measurement error, Confidence band, Stochastic dominance
    JEL: C12 C14
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:594&r=ecm
  8. By: Lorenzo Camponovo; Taisuke Otsu
    Abstract: This paper develops a new test statistic for parameters defined by moment conditions that exhibits desirable relative error properties for the approximation of tail area probabilities. Our statistic, called the tilted exponential tilting (TET) statistic, is constructed by estimating a certain cumulant generating function under exponential tilting weights. We show that the asymptotic p-value of the TET statistic can provide an accurate approximation to the p-value of an infeasible saddlepoint statistic, which is asymptotically chi-squared distributed with a relative error of order n^{-1} in both normal and large deviation regions. Numerical results illustrate the accuracy of the proposed TET statistic. Our results cover both just- and over-identified moment condition models. (A sketch of the underlying exponential tilting weights follows this entry.)
    Keywords: Nonparametric likelihood, Saddlepoint, Moment condition model
    JEL: C12 C14
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:593&r=ecm
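The TET construction itself is involved, but the exponential tilting weights on which it builds are easy to compute: for moment conditions E[g(X, theta)] = 0, minimise the empirical criterion below over the tilting parameter. This sketch shows only the weights; the cumulant generating function estimation of the TET statistic is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def exponential_tilting_weights(g):
    """g: (n, m) array of moment evaluations g(x_i, theta).
    Minimising mean(exp(g @ lam)) is convex; its first-order condition
    makes the weights w_i proportional to exp(lam' g_i) satisfy
    sum_i w_i g_i = 0 (the exponential tilting dual problem)."""
    res = minimize(lambda lam: np.mean(np.exp(g @ lam)),
                   np.zeros(g.shape[1]), method="BFGS")
    w = np.exp(g @ res.x)
    return w / w.sum()

# example: tilt draws from N(0.3, 1) so that the weighted mean is zero
x = np.random.default_rng(0).normal(0.3, 1.0, size=500)
w = exponential_tilting_weights(x.reshape(-1, 1))
print(w @ x)   # approx 0
```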
  9. By: Matteo Barigozzi; Matteo Luciani
    Abstract: This paper considers a non-stationary dynamic factor model for large datasets to disentangle long-run from short-run co-movements. We first propose a new Quasi Maximum Likelihood estimator of the model based on the Kalman Smoother and the Expectation Maximisation algorithm, and discuss its asymptotic properties. Then, we show how to separate trends and cycles in the factors by means of an eigenanalysis of the estimated non-stationary factors. Finally, we apply our methodology to a panel of US quarterly macroeconomic indicators to estimate aggregate real output, or Gross Domestic Output, and the output gap. (A sketch using off-the-shelf EM estimation of a dynamic factor model follows this entry.)
    Keywords: EM Algorithm ; Gross Domestic Output ; Kalman Smoother ; Non-stationary Approximate Dynamic Factor Model ; Output Gap ; Quasi Maximum Likelihood ; Trend-Cycle Decomposition
    JEL: C32 C38 E00
    Date: 2017–11–13
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2017-111&r=ecm
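The same computational ingredients, quasi-maximum likelihood via the Kalman smoother and the EM algorithm, are available in statsmodels' DynamicFactorMQ (statsmodels >= 0.12; the API names below follow its documentation but should be treated as part of the sketch). The toy panel here is stationary; the paper's non-stationary treatment and the trend/cycle eigenanalysis of the factors are not implemented.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.dynamic_factor_mq import DynamicFactorMQ

# toy panel: N = 10 series driven by one AR(1) common factor
rng = np.random.default_rng(0)
T, N = 200, 10
f = np.zeros(T)
for t in range(1, T):
    f[t] = 0.8 * f[t - 1] + rng.normal()
X = pd.DataFrame(np.outer(f, rng.normal(size=N)) + rng.normal(size=(T, N)))

model = DynamicFactorMQ(X, factors=1, factor_orders=1)
res = model.fit(disp=False)            # EM iterations with Kalman smoothing
f_hat = res.factors.smoothed           # smoothed factor estimates
```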
  10. By: Taku Yamamoto (Hitotsubashi University); Hiroaki Chigira (Tohoku University)
    Abstract: Forecasting mortality has been a vital issue in demography and actuarial science. It also has profound implications for pension planning and long-term national economic forecasts. In the present paper we examine various forecasting methods for mortality in the framework of cointegrated time series analysis. The Lee-Carter (LC) method has been regarded as the benchmark for forecasting mortality, but while it performs well for long-term forecasts, its accuracy is known to be particularly poor for short-term forecasts. Recently, a new method called the multivariate time series variance component (MTV) method has been proposed which explicitly satisfies the cointegration restrictions of the series and overcomes these weak points of the LC method. In the present paper we propose two new methods. The first is the modified MTV (mMTV) method, which modifies the MTV method to obtain a more accurate forecast of its trend component. The second is the all-component Lee-Carter (LCA) method, which generalizes the LC method by using all principal components in order to improve its short-term forecasts; it may be noted, however, that the LCA method does not satisfy the cointegration restrictions. We analytically compare the forecast accuracy of the proposed methods with that of the Lee-Carter and MTV methods in the framework of cointegrated time series, and compare them further in a Monte Carlo experiment and in an empirical application to forecasting mortality for Japanese males. The mMTV method is generally the most accurate in both the Monte Carlo experiment and the Japanese data. The MTV method works almost as well, although, since its drift estimator is inefficient, it is slightly less accurate than the mMTV method on some occasions. The forecast accuracy of the LCA method is reasonably high and can match the mMTV method on occasion, but it is generally inferior to both the MTV and mMTV methods. As expected, the LC method is the worst among the methods examined in the present study. The mMTV method is recommended for practical use. (A sketch of the Lee-Carter benchmark follows this entry.)
    Keywords: Time Series Models; Forecasting Methods; Cointegrated Process; Mortality
    JEL: C01 C32 C53
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:5808110&r=ecm
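For reference, the Lee-Carter benchmark that all the methods in the paper modify can be fit in a few lines: centre the log mortality surface, take a rank-1 SVD, and forecast the time index as a random walk with drift. The MTV and mMTV refinements are not reproduced here.

```python
import numpy as np

def lee_carter(log_m):
    """Lee-Carter (1992): log m_{x,t} = a_x + b_x * k_t + e_{x,t},
    fit by SVD with the usual normalisation sum(b) = 1, sum(k) = 0.
    log_m is an (ages x years) matrix of log central death rates."""
    a = log_m.mean(axis=1)                       # average age profile
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()                  # age response
    k = s[0] * Vt[0] * U[:, 0].sum()             # mortality time index
    return a, b, k

def forecast_k(k, h):
    """Random walk with drift: the standard forecast of the LC index."""
    drift = (k[-1] - k[0]) / (len(k) - 1)
    return k[-1] + drift * np.arange(1, h + 1)
```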
  11. By: Wong, Shiu Fung; Tong, Howell; Siu, Tak Kuen; Lu, Zudi
    Abstract: We propose a threshold copula-based nonlinear time series model for evaluating quantitative risk measures for financial portfolios, with a structure flexible enough to incorporate nonlinearities both in the univariate (component) time series and in their dependence structure. We allow different dependence structures for asset returns over different market regimes, which are manifested in their price levels. We estimate the model parameters by a two-stage maximum likelihood method. Real financial data and appropriate statistical tests are used to illustrate the efficacy of the proposed model, and simulation results for the sampling distribution of the parameter estimates are given. Empirical results suggest that the proposed model leads to a significant improvement in the accuracy of value-at-risk forecasts at the portfolio level. (A compressed two-stage illustration follows this entry.)
    Keywords: quantitative risk measures; copulas; multivariate nonlinear time series; threshold principle
    JEL: C10 C32 C51 G32
    Date: 2017–03
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:78515&r=ecm
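A compressed illustration of the two-stage idea on toy data (all numbers invented): margins are first transformed to uniforms, then a Gaussian-copula dependence parameter is estimated separately in regimes defined by a threshold variable. The paper's TAR margins and its copula families are richer than this sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000
r = rng.normal(size=(n, 2))
r[:, 1] += 0.5 * r[:, 0]                       # toy dependent returns

# stage 1: marginal transforms to (0, 1) via empirical cdfs
u = stats.rankdata(r, axis=0) / (n + 1)

# stage 2: regime-specific dependence, split by a lagged threshold variable
indicator = np.r_[0.0, r[:-1].mean(axis=1)]
for regime in (indicator <= 0.0, indicator > 0.0):
    z = stats.norm.ppf(u[regime])              # normal scores
    print(np.corrcoef(z.T)[0, 1])              # Gaussian-copula correlation
```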
  12. By: Simon Freyaldenhoven
    Abstract: I extend the theory on factor models by incorporating "local" factors into the model. Local factors affect a decreasing fraction of the observed variables. This implies a continuum of eigenvalues of the covariance matrix, as is commonly observed in applications. I derive conditions under which local factors will be estimated consistently using the common Principal Component Estimator. I further propose a novel class of estimators for the number of factors. Unlike estimators that have been proposed in the past, my estimators use information in the eigenvectors as well as in the eigenvalues. Monte Carlo evidence suggests significant finite sample gains over existing estimators. Empirically, I find evidence of local factors in a large panel of US macroeconomic indicators. (A sketch of a standard eigenvalue-based alternative follows this entry.)
    JEL: C38 C52
    Date: 2017–11–16
    URL: http://d.repec.org/n?u=RePEc:jmp:jm2017:pfr361&r=ecm
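The proposed estimators, which also exploit eigenvector information, are not spelled out in the abstract. As a point of comparison, the standard eigenvalue-ratio estimator of Ahn and Horenstein (2013), which uses eigenvalues only, is a few lines:

```python
import numpy as np

def eigenvalue_ratio_factors(X, kmax=10):
    """Ahn-Horenstein (2013): choose the number of factors k that maximises
    the ratio of adjacent eigenvalues of the scaled second-moment matrix of
    the (T x N) panel X.  Requires kmax < min(T, N)."""
    T, N = X.shape
    ev = np.sort(np.linalg.eigvalsh(X @ X.T / (T * N)))[::-1]
    ratios = ev[:kmax] / ev[1:kmax + 1]
    return int(np.argmax(ratios)) + 1
```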
  13. By: Anders Bredahl Kock; David Preinerstorfer
    Abstract: Fan et al. (2015) recently introduced a remarkable method for increasing the asymptotic power of tests in high-dimensional testing problems. If applicable to a given test, their power enhancement principle leads to an improved test that has the same asymptotic size, uniformly non-inferior asymptotic power, and is consistent against a strictly broader range of alternatives than the initially given test. We study under which conditions this method can be applied and show the following: in asymptotic regimes where the dimensionality of the parameter space is fixed as sample size increases, there often exist tests that cannot be further improved with the power enhancement principle. When the dimensionality of the parameter space can increase with sample size, however, there typically is a range of "slowly" diverging rates for which every test with asymptotic size smaller than one can be improved with the power enhancement principle. While the latter statement does not in general extend to all rates at which the dimensionality increases with sample size, we give sufficient conditions under which it does. (A sketch of the power enhancement construction follows this entry.)
    Keywords: high-dimensional testing problems; power enhancement principle; power enhancement component; asymptotic enhanceability; marginal LAN
    JEL: C12
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/260442&r=ecm
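A sketch of the Fan, Liao and Yao (2015) construction the paper studies, for testing H0: E[X] = 0. The independent-coordinates setting, the standardisation, and the screening threshold below are conventional textbook choices made for brevity, not necessarily those of Fan et al. or of this paper.

```python
import numpy as np
from scipy import stats

def power_enhanced_test(X, alpha=0.05):
    """J1: a standardised sum-of-squares statistic, approximately N(0,1)
    under H0 for independent coordinates.  J0: a screening component that
    is zero with probability tending to one under H0 but diverges under
    sparse alternatives, so adding it enhances power without inflating
    asymptotic size.  Requires moderately large p (the threshold needs
    log(log(p)) > 0)."""
    n, p = X.shape
    z = np.sqrt(n) * X.mean(axis=0) / X.std(axis=0, ddof=1)
    delta = np.sqrt(2.0 * np.log(p) * np.log(np.log(p)))  # > max|z| under H0
    J0 = np.sqrt(p) * np.sum(z**2 * (np.abs(z) > delta))
    J1 = (np.sum(z**2) - p) / np.sqrt(2.0 * p)
    return (J0 + J1) > stats.norm.ppf(1 - alpha)
```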
  14. By: Florian Huber; Gregor Kastner; Martin Feldkircher
    Abstract: Incorporating structural changes into time series models is crucial during turbulent economic periods. In this paper, we propose a flexible means of estimating vector autoregressions with time-varying parameters (TVP-VARs) by introducing a threshold process that is driven by the absolute size of parameter changes. This enables us to detect whether a given regression coefficient is constant or time-varying. When applied to a medium-scale macroeconomic US dataset our model yields precise density and turning point predictions, especially during economic downturns, and provides new insights on the changing effects of increases in short-term interest rates over time.
    Date: 2016–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1607.04532&r=ecm
  15. By: Kuha, Jouni; Mills, Colin
    Abstract: It is widely believed that regression models for binary responses are problematic if we want to compare estimated coefficients from models for different groups or with different explanatory variables. This concern has two forms. The first arises if the binary model is treated as an estimate of a model for an unobserved continuous response, and the second when models are compared between groups which have different distributions of other causes of the binary response. We argue that these concerns are usually misplaced. The first of them is only relevant if the unobserved continuous response is really the subject of substantive interest. If it is, the problem should be addressed through better measurement of this response. The second concern refers to a situation which is unavoidable but unproblematic, in that causal effects and descriptive associations are inherently group-dependent and can be compared as long as they are correctly estimated.
    JEL: C1
    Date: 2017–08–25
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:84163&r=ecm
  16. By: Lillestøl, Jostein (Dept. of Business and Management Science, Norwegian School of Economics); Sinding-Larsen, Richard (Dept. of Geoscience and Petroleum, Norwegian University of Science and Technology)
    Abstract: This paper considers sampling in proportion to size from a partly unknown distribution. The applied context is the exploration for undiscovered resources, like oil accumulations in different deposits, where the most promising deposits are likely to be drilled first, based on some geologic size indicators (“creaming”). A Log-normal size model with exponentially decaying creaming factor turns out to have nice analytical features in this context, and fits the available data well, as demonstrated in Lillestøl and Sinding-Larsen (2017). This paper is a Bayesian follow-up, which provides posterior parameter densities and predictive densities of future discoveries, in the case of uninformative prior distributions. The theory is applied to the prediction of remaining petroleum accumulations to be found on the mature part of the Norwegian Continental Shelf. (A toy simulation of size-proportional sampling follows this entry.)
    Keywords: Log-normal distribution; sampling proportional to size; resource prediction
    JEL: C00 C10 C11 C13
    Date: 2017–11–16
    URL: http://d.repec.org/n?u=RePEc:hhs:nhhfms:2017_016&r=ecm
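A toy simulation of the sampling scheme (all parameter values invented; the paper's parametrisation of the decaying creaming factor differs): deposits are drawn in proportion to size raised to an exponentially decaying power, so early discoveries are systematically large.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
sizes = rng.lognormal(mean=4.0, sigma=1.2, size=N)    # deposit sizes

order, pool = [], list(range(N))
for t in range(N):
    beta = np.exp(-0.02 * t)                  # decaying creaming factor
    w = sizes[pool] ** beta                   # selection weight: size^beta
    order.append(pool.pop(rng.choice(len(pool), p=w / w.sum())))

# the first 20 discoveries are on average several times the last 20
print(sizes[order[:20]].mean(), sizes[order[-20:]].mean())
```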
  17. By: Franses, Ph.H.B.F.; Janssens, E.
    Abstract: Principal Component Regression (PCR) is often used to forecast macroeconomic variables when there are many predictors. In this letter, we argue that it makes sense to pre-whiten the predictors before including them in a PCR. With simulation experiments, we show that without such pre-whitening, spurious principal components can appear, and that these can become spuriously significant in a PCR. With an illustration involving annual inflation rates for five African countries, we show that non-spurious principal components can be genuinely relevant in empirical forecasting models. (A sketch of the pre-whitening step follows this entry.)
    Keywords: Principal Component Regression, Pre-whitening, Spurious Regressions
    JEL: C52
    Date: 2017–11–01
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:102704&r=ecm
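The pre-whitening step the letter recommends is simple to implement: replace each predictor by its AR(1) residuals before extracting principal components, so that common stochastic trends cannot masquerade as factors. A minimal sketch (AR(1) is one natural choice of whitening filter):

```python
import numpy as np

def prewhiten_ar1(X):
    """Return AR(1) residuals of each column of the (T x N) predictor
    matrix X; principal components are then extracted from the result
    rather than from X itself."""
    T, N = X.shape
    out = np.empty((T - 1, N))
    for j in range(N):
        x = X[:, j]
        rho, c = np.polyfit(x[:-1], x[1:], 1)   # x_t = c + rho x_{t-1} + e_t
        out[:, j] = x[1:] - (c + rho * x[:-1])
    return out
```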
  18. By: Andrew Binning (Norges Bank (Central Bank of Norway)); Junior Maih (Norges Bank (Central Bank of Norway) and BI Norwegian Business School)
    Abstract: Occasionally binding constraints are part of the economic landscape: for instance, recent experience with the global financial crisis has highlighted the gravity of the lower bound constraint on interest rates, and mortgagors are subject to more stringent borrowing conditions when credit growth has been excessive or there is a downturn in the economy. In this paper we take four common examples of occasionally binding constraints in economics and demonstrate how to use regime-switching to incorporate them into DSGE models. In particular, we investigate the zero lower bound constraint on interest rates, occasionally binding collateral constraints, downward nominal wage rigidities, and irreversible investment. We compare our approach against some well-known methods for solving occasionally binding constraints. We demonstrate the versatility of our regime-switching approach by combining multiple occasionally binding constraints in a model solved using higher-order perturbation methods, a feat that is difficult to achieve using alternative methodologies.
    Keywords: Occasionally Binding Constraints, DSGE models, ZLB, Collateral Constraints
    Date: 2017–11–14
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2017_23&r=ecm

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.