nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒02‒19
twenty-one papers chosen by
Sune Karlsson
Örebro universitet

  1. DSGE-based Priors for BVARs & Quasi-Bayesian DSGE Estimation By Filippeli, Thomai; Harrison, Richard; Theodoridis, Konstantinos
  2. Robust and bias-corrected estimation of the probability of extreme failure sets By Christophe Dutang; Yuri Goegebeur; Armelle Guillou
  3. A Robust Sequential Procedure for Estimating the Number of Structural Changes in Persistence By Mohitosh Kejriwal
  4. Time Varying Heteroskedastic Realized GARCH models for tracking measurement error bias in volatility forecasting By Gerlach, Richard; Naimoli, Antonio; Storti, Giuseppe
  5. Probability Based Independence Sampler for Bayesian Quantitative Learning in Graphical Log-Linear Marginal Models By Ioannis Ntzoufras; Claudia Tarantola; Monia Lupparelli
  6. Functional GARCH models: the quasi-likelihood approach and its applications By Cerovecki, Clément; Francq, Christian; Hormann, Siegfried; Zakoian, Jean-Michel
  7. Consistent non-Gaussian pseudo maximum likelihood estimators By Gabriele Fiorentini; Enrique Sentana
  8. Asymptotics of Cholesky GARCH models and time-varying conditional betas By Darolles, Serges; Francq, Christian; Laurent, Sébastien
  9. Parametric estimation of hidden Markov models by least squares type estimation and deconvolution By Christophe Chesneau; Salima El Kolei; Fabien Navarro
  10. Identification and Estimation of Dynamic Causal Effects in Macroeconomics Using External Instruments By James H. Stock; Mark W. Watson
  11. Improving approximate Bayesian computation via quasi Monte Carlo By Alexander Buchholz; Nicolas CHOPIN
  12. Dynamic panel probit: finite-sample performance of alternative random-effects estimators By Riccardo Lucchetti; Claudia Pigini
  13. Sparse covariance matrix estimation in high-dimensional deconvolution By Denis Belomestny; Mathias Trabs; Alexandre Tsybakov
  14. Robust machine learning by median-of-means : theory and practice By Guillaume Lecué; Mathieu Lerasle
  15. Encompassing tests for evaluating multi-step system forecasts invariant to linear transformations By Håvard Hungnes
  16. Simultaneous Dimension Reduction and Clustering via the NMF-EM Algorithm By Léna CAREL; Pierre ALQUIER
  17. Learning from MOM’s principles : Le Cam’s approach By Guillaume Lecué; Mathieu Lerasle
  18. Structured Matrix Estimation and Completion By Olga Klopp; Yu Lu; Alexandre B. Tsybakov; Harrison H. Zhou
  19. Towards the study of least squares Estimators with convex penalty By Pierre C. Bellec; Guillaume Lecué; Alexandre Tsybakov
  20. Structured Matrix Estimation and Completion By Olga Klopp; Yu Lu; Alexandre Tsybakov; Harrison H. Zhou
  21. Structural vector autoregression with time varying transition probabilities: identifying uncertainty shocks via changes in volatility By Wenjuan Chen; Aleksei Netsunajev

  1. By: Filippeli, Thomai (Queen Mary University); Harrison, Richard (Bank of England); Theodoridis, Konstantinos (Cardiff Business School)
    Abstract: We present a new method for estimating Bayesian vector autoregression (VAR) models using priors from a dynamic stochastic general equilibrium (DSGE) model. We use the DSGE model priors to determine the moments of an independent Normal-Wishart prior for the VAR parameters. Two hyper-parameters control the tightness of the DSGE-implied priors on the autoregressive coefficients and the residual covariance matrix respectively. Determining these hyper-parameters by selecting the values that maximize the marginal likelihood of the Bayesian VAR provides a method for isolating subsets of DSGE parameter priors that are at odds with the data. We illustrate the ability of our approach to correctly detect incorrect DSGE priors for the variance of structural shocks using a Monte Carlo experiment. We also demonstrate how posterior estimates of the DSGE parameter vector can be recovered from the BVAR posterior estimates: a new 'quasi-Bayesian' DSGE estimation. An empirical application on US data reveals economically meaningful differences in posterior parameter estimates when comparing our quasi-Bayesian estimator with Bayesian maximum likelihood. Our method also indicates that the DSGE prior implications for the residual covariance matrix are at odds with the data.
    Keywords: BVAR, SVAR, DSGE, DSGE-VAR, Gibbs Sampling, Marginal Likelihood Evaluation, Predictive Likelihood Evaluation, Quasi-Bayesian DSGE Estimation
    JEL: C11 C13 C32 C52
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2018/5&r=ecm
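    Example: A toy scalar analogue of the hyper-parameter selection step (illustrative only; the paper works with a full Normal-Wishart BVAR prior, not this one-parameter model). With a conjugate Normal prior the marginal likelihood is available in closed form, so the prior tightness can be chosen by maximising it over a grid; an optimum at a loose prior signals that the model-implied prior mean is at odds with the data.
        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(0)
        n, sigma2, theta_true = 200, 1.0, 0.8
        y = theta_true + rng.normal(scale=np.sqrt(sigma2), size=n)
        prior_mean = 0.0                     # stand-in for a model-implied prior mean
        ones = np.ones(n)

        def log_marglik(tau2):
            # marginally y ~ N(prior_mean * 1, sigma2 * I + tau2 * 11')
            cov = sigma2 * np.eye(n) + tau2 * np.outer(ones, ones)
            return multivariate_normal(prior_mean * ones, cov).logpdf(y)

        grid = np.linspace(0.01, 5.0, 100)   # candidate prior variances (tightness)
        tau2_hat = grid[np.argmax([log_marglik(t) for t in grid])]
        print(tau2_hat)                      # large value: a tight prior conflicts with the data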
  2. By: Christophe Dutang (LMM - Laboratoire Manceau de Mathématiques - UM - Le Mans Université); Yuri Goegebeur (IMADA - Department of Mathematics and Computer Science [Odense] - University of Southern Denmark); Armelle Guillou (IRMA - Institut de Recherche Mathématique Avancée - Université de Strasbourg - CNRS - Centre National de la Recherche Scientifique)
    Abstract: In multivariate extreme value statistics, the estimation of probabilities of extreme failure sets is an important problem, with practical relevance for applications in several scientific disciplines. Some estimators have been introduced in the literature, but the typical bias issues that arise in applications of extreme value methods and the non-robustness of such methods with respect to outliers have so far not been addressed. We introduce a bias-corrected and robust estimator for small tail probabilities. The estimator is obtained from a second-order model that is fitted to suitably transformed bivariate observations by means of the minimum density power divergence technique. The asymptotic properties are derived under some mild regularity conditions and the finite sample performance is evaluated through an extensive simulation study. We illustrate the practical applicability of the method on a dataset from the actuarial context.
    Keywords: tail quantile process ,failure set,bias-correction,tail dependence,robustness
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-01616187&r=ecm
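    Example: A univariate illustration of the minimum density power divergence (MDPD) technique the estimator is built on (an exponential model here, purely for exposition; the paper fits a second-order bivariate tail model). The MDPD criterion downweights observations with small model density, which is what delivers robustness to outliers.
        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(1)
        x = rng.exponential(scale=1.0, size=500)   # true rate lambda = 1
        x[:10] = 50.0                              # inject a few gross outliers
        alpha = 0.5                                # robustness tuning parameter

        def dpd_objective(lam):
            # H_n(lam) = int f^(1+alpha) - (1 + 1/alpha) * mean(f(x)^alpha),
            # where int f^(1+alpha) = lam^alpha / (1 + alpha) for Exp(lam)
            f_alpha = (lam * np.exp(-lam * x)) ** alpha
            return lam ** alpha / (1 + alpha) - (1 + 1 / alpha) * f_alpha.mean()

        mdpd = minimize_scalar(dpd_objective, bounds=(1e-3, 10), method="bounded").x
        mle = 1.0 / x.mean()                       # non-robust benchmark
        print(mdpd, mle)                           # MDPD stays near 1; the MLE is dragged down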
  3. By: Mohitosh Kejriwal
    Abstract: This paper proposes a new procedure for estimating the number of structural changes in the persistence of a univariate time series. In contrast to the extant literature that primarily assumes (regime-wise) stationarity, our framework also allows the underlying stochastic process to switch between stationary [I(0)] and unit root [I(1)] regimes. We develop a sequential testing approach based on the simultaneous application of two Wald-type tests for structural change, one of which assumes the process is I(0)-stable under the null hypothesis while the other assumes the stable I(1) model as the null hypothesis. This feature allows the procedure to maintain correct asymptotic size regardless of whether the regimes are I(0) or I(1). We also propose a novel procedure for distinguishing processes with pure level and/or trend shifts from those that are also subject to concurrent shifts in persistence. The large sample properties of the recommended procedures are derived and the relevant asymptotic critical values tabulated. Our Monte Carlo experiments demonstrate that the advocated approach compares favorably relative to the commonly employed approach based on I(0) sequential testing, especially when the data contain an I(1) segment.
    Keywords: multiple structural changes, unit root, stationary, sequential procedure, Wald tests
    JEL: C22
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:pur:prukra:1303&r=ecm
  4. By: Gerlach, Richard; Naimoli, Antonio; Storti, Giuseppe
    Abstract: This paper proposes generalisations of the Realized GARCH model of Hansen et al. (2012) in three different directions. First, the noise term in the measurement equation is allowed to be heteroskedastic, with a time-varying variance specified as a function of an estimator of the integrated quarticity of intra-daily returns. Second, in order to account for attenuation bias effects, the volatility dynamics are allowed to depend on the accuracy of the realized measure. This is achieved by letting the response coefficient of the lagged realized measure depend on the time-varying variance of the volatility measurement error, thus giving more weight to lagged volatilities when they are more accurately measured. Finally, a further extension is proposed by introducing an additional explanatory variable into the measurement equation, aiming to quantify the bias due to the effect of jumps and measurement errors.
    Keywords: Realized Volatility, Realized GARCH, Measurement Error, Realized Quarticity
    JEL: C22 C53 C58
    Date: 2018–01–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:83893&r=ecm
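    Example: A stylized sketch of the attenuation mechanism (the functional form of gamma_t below is an assumption made for illustration, not the authors' specification): the response to the lagged realized measure shrinks when its measurement-error variance, proxied here by realized quarticity, is large.
        import numpy as np

        def filter_logh(x, rq, omega, beta, gamma, kappa):
            """Log-linear GARCH-type filter with an accuracy-dependent
            response coefficient on the lagged realized measure x."""
            logh = np.empty(len(x))
            logh[0] = np.log(x[0])
            for t in range(1, len(x)):
                noise_var = kappa * rq[t - 1] / x[t - 1] ** 2  # proxy for meas. error variance
                gamma_t = gamma / (1.0 + noise_var)            # attenuation adjustment
                logh[t] = omega + beta * logh[t - 1] + gamma_t * np.log(x[t - 1])
            return np.exp(logh)

        rng = np.random.default_rng(2)
        x = np.exp(rng.normal(-9.0, 0.5, 250))       # toy realized variance series
        rq = x ** 2 * rng.uniform(0.5, 2.0, 250)     # toy realized quarticity
        h = filter_logh(x, rq, omega=-0.9, beta=0.7, gamma=0.2, kappa=1.0)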
  5. By: Ioannis Ntzoufras (Department of Statistics, Athens University of Economics and Business); Claudia Tarantola (Department of Economics and Management, University of Pavia); Monia Lupparelli (Department of Statistical Sciences, University of Bologna)
    Abstract: Bayesian methods for graphical log-linear marginal models have not been developed to the same extent as traditional frequentist approaches. In this work, we introduce a novel Bayesian approach for quantitative learning in such models. These models belong to curved exponential families that are difficult to handle from a Bayesian perspective. Furthermore, the likelihood cannot be analytically expressed as a function of the marginal log-linear interactions, but only in terms of cell counts or probabilities. Posterior distributions cannot be directly obtained, and MCMC methods are needed. Finally, a well-defined model requires parameter values that lead to compatible marginal probabilities. Hence, any MCMC scheme should account for this important restriction. We construct a fully automatic and efficient MCMC strategy for quantitative learning in graphical log-linear marginal models that handles these problems. While the prior is expressed in terms of the marginal log-linear interactions, we build an MCMC algorithm that employs a proposal on the probability parameter space. The corresponding proposal on the marginal log-linear interactions is obtained via parameter transformations. This strategy ensures that the chain moves within the desired target space, and at each step we work directly with well-defined probability distributions. Moreover, we can exploit a conditionally conjugate setup to build an efficient proposal on the probability parameters. The proposed methodology is illustrated by a simulation study and a real dataset.
    Keywords: Graphical Models, Marginal Log-Linear Parameterisation, Markov Chain Monte Carlo Computation.
    JEL: E12 E21 E22 E24 E31 C32
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0149&r=ecm
  6. By: Cerovecki, Clément; Francq, Christian; Hormann, Siegfried; Zakoian, Jean-Michel
    Abstract: The increasing availability of high frequency data has initiated many new research areas in statistics. Functional data analysis (FDA) is one such innovative approach towards modelling time series data. In FDA, densely observed data are transformed into curves and then each (random) curve is considered as one data object. A natural, but still relatively unexplored, context for FDA methods is related to financial data, where high-frequency trading currently takes a significant proportion of trading volumes. Recently, articles on functional versions of the famous ARCH and GARCH models have appeared. Due to their technical complexity, existing estimators of the underlying functional parameters are moment-based, an approach known to be relatively inefficient in this context. In this paper, we promote an alternative quasi-likelihood approach, for which we derive consistency and asymptotic normality results. We support the relevance of our approach by simulations and illustrate its use by forecasting realised volatility of the S&P100 Index.
    Keywords: Functional time series; High-frequency volatility models; Intraday returns; Functional QMLE; Stationarity of functional GARCH
    JEL: C13 C32 C58
    Date: 2018–01–18
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:83990&r=ecm
  7. By: Gabriele Fiorentini (University of Florence); Enrique Sentana (CEMFI, Madrid)
    Abstract: We characterise the mean and variance parameters that distributionally misspecified maximum likelihood estimators can consistently estimate in multivariate conditionally heteroskedastic dynamic regression models. We also provide simple closed-form consistent estimators for the rest. The inclusion of means and the explicit coverage of multivariate models make our procedures useful not only for GARCH models but also in many empirically relevant macro and finance applications involving VARs and multivariate regressions. We study the statistical properties of our proposed consistent estimators, as well as their efficiency relative to Gaussian pseudo maximum likelihood procedures. Finally, we provide finite sample results through Monte Carlo simulations.
    Keywords: Consistency, Efficiency, Misspecification
    JEL: C13 C22 C32 C51
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:fir:econom:wp2018_01&r=ecm
  8. By: Darolles, Serges; Francq, Christian; Laurent, Sébastien
    Abstract: This paper proposes a new model with time-varying slope coefficients. Our model, called CHAR, is a Cholesky-GARCH model based on the Cholesky decomposition of the conditional variance matrix introduced by Pourahmadi (1999) in the context of longitudinal data. We derive stationarity and invertibility conditions and prove consistency and asymptotic normality of the full and equation-by-equation QML estimators of this model. We then show that this class of models is useful for estimating conditional betas and compare it to the approach proposed by Engle (2016). Finally, we use real data in a portfolio and risk management exercise. We find that the CHAR model outperforms a model with constant betas as well as the dynamic conditional beta model of Engle (2016).
    Keywords: Multivariate-GARCH; conditional betas; covariance
    JEL: C5 C58
    Date: 2018–01–18
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:83988&r=ecm
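    Example: A minimal numerical check of why the Cholesky decomposition delivers conditional betas (toy numbers, not the CHAR model itself): writing the conditional covariance matrix as H = L D L' with L unit lower triangular, the below-diagonal entry of L is exactly the regression coefficient of the asset on the market.
        import numpy as np

        H = np.array([[0.040, 0.024],   # conditional var of market, covariance
                      [0.024, 0.090]])  # covariance, conditional var of asset
        C = np.linalg.cholesky(H)       # standard Cholesky, H = C C'
        L = C / np.diag(C)              # rescale columns to unit diagonal, so H = L D L'
        print(L[1, 0], H[0, 1] / H[0, 0])   # both 0.6: the conditional beta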
  9. By: Christophe Chesneau (Université de Caen; LMNO); Salima El Kolei (CREST;ENSAI); Fabien Navarro (CREST; ENSAI)
    Abstract: In this paper, we study a specific hidden Markov chain defined by the equation Y_i = X_i + e_i, i = 1, ..., n+1, where (X_i)_{i≥1} is a real-valued stationary Markov chain and (e_i)_{i≥1} is a noise process independent of (X_i)_{i≥1}. We develop a new parametric approach obtained by minimizing a particular contrast that takes advantage of the regressive structure of the problem and is based on a deconvolution strategy. We provide theoretical guarantees on the performance of the resulting estimator; its consistency and asymptotic normality are established.
    Keywords: Contrast function; deconvolution; least squares estimation; parametric inference; stochastic volatility
    Date: 2017–09–30
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-66&r=ecm
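    Example: A simulation showing the attenuation bias that motivates the deconvolution step (this moment-based correction is a simplified stand-in for the authors' contrast estimator, not their method): with Y_i = X_i + e_i and a latent AR(1) chain, regressing Y_{i+1} on Y_i is biased towards zero, and subtracting the known noise variance from the denominator removes the bias.
        import numpy as np

        rng = np.random.default_rng(3)
        n, a, sig_u, sig_e = 5000, 0.7, 1.0, 0.5
        x = np.empty(n + 1)
        x[0] = rng.normal(scale=sig_u / np.sqrt(1 - a ** 2))  # stationary start
        for i in range(n):
            x[i + 1] = a * x[i] + rng.normal(scale=sig_u)     # latent AR(1) chain
        y = x + rng.normal(scale=sig_e, size=n + 1)           # noisy observations

        c = np.cov(y[1:], y[:-1])[0, 1]
        naive = c / np.var(y[:-1])                  # attenuated towards zero
        corrected = c / (np.var(y[:-1]) - sig_e ** 2)
        print(naive, corrected)                     # corrected is close to a = 0.7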
  10. By: James H. Stock; Mark W. Watson
    Abstract: An exciting development in empirical macroeconometrics is the increasing use of external sources of as-if randomness to identify the dynamic causal effects of macroeconomic shocks. This approach – the use of external instruments – is the time series counterpart of the highly successful strategy in microeconometrics of using external as-if randomness to provide instruments that identify causal effects. This lecture exposits this approach and provides conditions on instruments and control variables under which external instrument methods produce valid inference on dynamic causal effects, that is, structural impulse response functions. These conditions can help guide the search for valid instruments in applications. We consider two methods: a one-step instrumental variables regression and a two-step method that entails estimation of a vector autoregression. Under a restrictive instrument validity condition, the one-step method is valid even if the vector autoregression is not invertible, so comparing the two estimates provides a test of invertibility. Under a less restrictive condition, where multiple lagged endogenous variables are needed as control variables in the one-step method, the conditions for validity of the two methods are the same.
    JEL: C36 E17
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:24216&r=ecm
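    Example: A minimal sketch of the one-step method in a toy model (a hypothetical data-generating process, with no control variables, which the lecture shows are needed under the weaker validity condition): each impulse response coefficient is an IV estimate of the response of y at horizon h to the instrumented variable x.
        import numpy as np

        rng = np.random.default_rng(4)
        T, H = 500, 8
        eps = rng.normal(size=T)                   # structural shock of interest
        z = eps + 0.5 * rng.normal(size=T)         # external instrument: relevant, exogenous
        x = eps + rng.normal(size=T)               # observed variable carrying the shock
        y = np.convolve(eps, 0.9 ** np.arange(H + 1))[:T] + rng.normal(size=T)

        irf = []
        for h in range(H + 1):
            yh, xh, zh = y[h:], x[:T - h], z[:T - h]
            # one-step IV estimate: cov(z, y_{t+h}) / cov(z, x_t)
            irf.append(np.cov(zh, yh)[0, 1] / np.cov(zh, xh)[0, 1])
        print(np.round(irf, 2))                    # roughly the true 0.9**h pattern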
  11. By: Alexander Buchholz (CREST-ENSAE); Nicolas CHOPIN (CREST-ENSAE)
    Abstract: ABC (approximate Bayesian computation) is a general approach for dealing with models with an intractable likelihood. In this work, we derive ABC algorithms based on QMC (quasi-Monte Carlo) sequences. We show that the resulting ABC estimates have a lower variance than their Monte Carlo counterparts. We also develop QMC variants of sequential ABC algorithms, which progressively adapt the proposal distribution and the acceptance threshold. We illustrate our QMC approach through several examples taken from the ABC literature.
    Keywords: Approximate Bayesian computation, Likelihood-free inference, Quasi Monte Carlo, Randomized Quasi Monte Carlo, Adaptive importance sampling
    Date: 2017–10–17
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-37&r=ecm
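    Example: A minimal rejection-ABC sampler driven by a scrambled Sobol sequence (a toy normal location model; the paper also develops sequential variants). The only change relative to plain ABC is that the uniforms feeding the prior and the simulator come from a QMC sequence.
        import numpy as np
        from scipy.stats import norm, qmc

        y_obs = 1.3                                # observed summary statistic
        n_sim, eps = 2048, 0.05

        u = qmc.Sobol(d=2, scramble=True, seed=5).random(n_sim)  # QMC uniforms
        theta = norm.ppf(u[:, 0], loc=0.0, scale=2.0)  # prior N(0, 4) via inverse CDF
        y_sim = theta + norm.ppf(u[:, 1])              # simulate model: y = theta + N(0,1)

        accepted = theta[np.abs(y_sim - y_obs) < eps]  # ABC rejection step
        print(accepted.size, accepted.mean())          # approximate posterior draws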
  12. By: Riccardo Lucchetti (Di.S.E.S. - Universita' Politecnica delle Marche); Claudia Pigini (Di.S.E.S. - Universita' Politecnica delle Marche)
    Abstract: Estimation of random-effects dynamic probit models for panel data entails the so-called "initial conditions problem". We argue that the relative finite-sample performance of the two main competing solutions is driven by the magnitude of the individual unobserved heterogeneity and/or of the state dependence in the data. We investigate our conjecture by means of a comprehensive Monte Carlo experiment and offer useful indications for the practitioner.
    Keywords: Dynamic panel probit; panel data; Monte Carlo study
    JEL: C23 C25
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:anc:wpaper:426&r=ecm
  13. By: Denis Belomestny (Duisburg-Essen University, Faculty of Mathematics, National Research University Higher School of Economics); Mathias Trabs (Universität Hamburg; Faculty of Mathematics); Alexandre Tsybakov (CREST;ENSAE)
    Abstract: We study the estimation of the covariance matrix Σ of a p-dimensional normal random vector based on n independent observations corrupted by additive noise. Only a general nonparametric assumption is imposed on the distribution of the noise, without any sparsity constraint on its covariance matrix. In this high-dimensional semiparametric deconvolution problem, we propose spectral thresholding estimators that are adaptive to the sparsity of Σ. We establish an oracle inequality for these estimators under model misspecification and derive non-asymptotic minimax convergence rates that are shown to be logarithmic in log p/n. We also discuss the estimation of low-rank matrices based on indirect observations as well as the generalization to elliptical distributions. The finite sample performance of the thresholding estimators is illustrated in a numerical example.
    Classification: Primary 62H12; secondary 62F12, 62G05
    Keywords: Thresholding, minimax convergence rates, Fourier methods, severely ill-posed inverse problem
    Date: 2017–10–30
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-25&r=ecm
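    Example: A simplified stand-in for the idea (entrywise soft-thresholding with an assumed-known noise variance; the paper's spectral thresholding estimators handle unknown noise via Fourier methods): de-noise the diagonal of the sample covariance of the corrupted data, then threshold the off-diagonal entries at the usual sqrt(log p / n) level.
        import numpy as np

        rng = np.random.default_rng(6)
        p, n, tau2 = 30, 400, 0.09                           # tau2: known noise variance
        Sigma = np.eye(p); Sigma[0, 1] = Sigma[1, 0] = 0.5   # sparse truth
        X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
        Y = X + rng.normal(scale=np.sqrt(tau2), size=(n, p)) # corrupted observations

        S = np.cov(Y, rowvar=False) - tau2 * np.eye(p)       # remove noise from diagonal
        lam = 2 * np.sqrt(np.log(p) / n)                     # threshold level
        off = ~np.eye(p, dtype=bool)
        S[off] = np.sign(S[off]) * np.maximum(np.abs(S[off]) - lam, 0.0)
        print(np.count_nonzero(S[off]) // 2)                 # ideally 1: only the true entry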
  14. By: Guillaume Lecué (CREST; CNRS; Université Paris Saclay); Mathieu Lerasle (CNRS,département de mathématiques d’Orsay)
    Abstract: We introduce new estimators for robust machine learning based on median-of-means (MOM) estimators of the mean of real-valued random variables. These estimators achieve optimal rates of convergence under minimal assumptions on the dataset. The dataset may also have been corrupted by outliers on which no assumption is granted. We also analyze these new estimators with standard tools from robust statistics. In particular, we revisit the concept of breakdown point. We modify the original definition by studying the number of outliers that a dataset can contain without deteriorating the estimation properties of a given estimator. This new notion of breakdown number, which takes into account the statistical performance of the estimators, is non-asymptotic in nature and adapted for machine learning purposes. We prove that the breakdown number of our estimator is of the order of (number of observations) × (rate of convergence). For instance, the breakdown number of our estimators for the problem of estimating a d-dimensional vector with noise variance σ² is σ²d, and it becomes σ²s log(ed/s) when this vector has only s non-zero components. Beyond this breakdown point, we prove that the rate of convergence achieved by our estimator is (number of outliers)/(number of observations). Besides these theoretical guarantees, the major improvement brought by these new estimators is that they are easily computable in practice. In fact, basically any algorithm used to approximate the standard Empirical Risk Minimizer (or its regularized versions) has a robust version approximating our estimators. On top of being robust to outliers, the "MOM versions" of the algorithms are even faster than the original ones, less demanding in memory resources in some situations, and well adapted for distributed datasets, which makes them particularly attractive for large dataset analysis. As a proof of concept, we study many algorithms for the classical LASSO estimator. It turns out that the original algorithm can be improved a lot in practice by randomizing the blocks on which "local means" are computed at each step of the descent algorithm. A byproduct of this modification is that our algorithms come with a measure of depth of data that can be used to detect outliers, which is another major issue in machine learning.
    Date: 2017–11–01
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-32&r=ecm
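    Example: The basic median-of-means estimator behind the whole construction (a minimal sketch): split the sample into blocks, average within blocks, and report the median of the block means. With more blocks than twice the number of outliers, a majority of blocks stay clean and the median is unaffected.
        import numpy as np

        def median_of_means(x, n_blocks, seed=0):
            x = np.random.default_rng(seed).permutation(x)  # randomize block assignment
            blocks = np.array_split(x, n_blocks)
            return np.median([b.mean() for b in blocks])

        rng = np.random.default_rng(8)
        x = rng.normal(loc=1.0, size=1000)
        x[:20] = 1e6                                      # 20 wild outliers
        print(x.mean(), median_of_means(x, n_blocks=50))  # mean explodes, MOM stays near 1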
  15. By: Håvard Hungnes (Statistics Norway)
    Abstract: The paper suggests two encompassing tests for evaluating multi-step system forecasts invariant to linear transformations. An invariant measure of forecast accuracy is necessary, as the conclusions can otherwise depend on how the forecasts are reported (e.g., in levels or in growth rates). Therefore, a measure based on the predictive likelihood of the forecasts for all variables at all horizons is used. Both tests are based on a generalization of the encompassing test for univariate forecasts in which potential heteroskedasticity and autocorrelation in the forecasts are taken into account. The tests are used to evaluate quarterly multi-step system forecasts made by Statistics Norway.
    Keywords: Macroeconomic forecasts; Econometric models; Forecast performance; Forecast evaluation; Forecast comparison
    JEL: C32 C53
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:ssb:dispap:871&r=ecm
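    Example: The classical univariate forecast-encompassing regression that the paper generalizes to multi-step systems (a sketch with synthetic forecasts; HAC standard errors stand in for the heteroskedasticity and autocorrelation corrections): forecast 1 encompasses forecast 2 if lambda = 0 in y - f1 = lambda (f2 - f1) + error.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(9)
        T = 200
        y = rng.normal(size=T)                     # realizations
        f1 = y + rng.normal(scale=0.5, size=T)     # forecast 1
        f2 = y + rng.normal(scale=0.8, size=T)     # forecast 2 (noisier)

        X = sm.add_constant(f2 - f1)
        res = sm.OLS(y - f1, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
        print(res.tvalues[1])                      # t-test of lambda = 0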
  16. By: Léna CAREL (CREST-ENSAE); Pierre ALQUIER (CREST-ENSAE)
    Abstract: Mixture models are among the most popular tools for model-based clustering. However, when the dimension and the number of clusters are large, the estimation as well as the interpretation of the clusters become challenging. We propose a reduced-dimension mixture model, where the K components' parameters are combinations of words from a small dictionary - say H words with H ≪ K.
    Date: 2017–09–11
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-38&r=ecm
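    Example: A two-step stand-in for the NMF-EM idea (the paper estimates both parts simultaneously within an EM algorithm; the separate NMF and k-means steps from scikit-learn are used here only to illustrate the dimension-reduction-then-clustering structure).
        import numpy as np
        from sklearn.decomposition import NMF
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(10)
        V = rng.poisson(3.0, size=(300, 40)).astype(float)  # 300 observations, 40 words

        H_dim, K = 5, 12                            # H << K latent dictionary words
        W = NMF(n_components=H_dim, init="nndsvda", random_state=0).fit_transform(V)
        labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(W)
        print(np.bincount(labels))                  # cluster sizes in the reduced space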
  17. By: Guillaume Lecué (CREST; CNRS; Université Paris Saclay); Mathieu Lerasle (Laboratoire de Mathématiques d’Orsay; Université Paris Sud; CNRS; Université Paris Saclay)
    Abstract: We obtain estimation error rates for estimators obtained by aggregation of regularized median-of-means tests, following a construction of Le Cam. The results hold with exponentially large probability under only weak moment assumptions on the data. Any norm may be used for regularization; when it has some sparsity-inducing power, we recover sparse rates of convergence. The procedure is robust since a large part of the data may be corrupted; these outliers have nothing to do with the oracle we want to reconstruct. Our general risk bound is of order max(minimax rate in the i.i.d. setup, (number of outliers)/(number of observations)). In particular, the number of outliers may be as large as (number of data) × (minimax rate) without affecting this rate. The remaining data do not have to be identically distributed but should only have equivalent L1 and L2 moments. For example, the minimax rate s log(ed/s)/N for recovery of an s-sparse vector in R^d is achieved with exponentially large probability by a median-of-means version of the LASSO when the noise has q0 moments for some q0 > 2, the entries of the design matrix have C0 log(ed) moments, and the dataset can be corrupted by up to C1 s log(ed/s) outliers.
    Classification: 62G35; 62G08
    Keywords: robust statistics; statistical learning; high dimensional statistics
    Date: 2017–07–01
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-28&r=ecm
  18. By: Olga Klopp (ESSEC Business School ; CREST); Yu Lu (Yale University); Alexandre B. Tsybakov (ENSAE, UMR CNRS 9194); Harrison H. Zhou (Yale University)
    Abstract: We study the problem of matrix estimation and matrix completion for matrices with a general clustering structure. We consider a unified model which includes as particular cases the Gaussian mixture model, the mixed membership model, the bi-clustering model and dictionary learning. For this general model we obtain the optimal convergence rates in a minimax sense for estimation of the signal matrix under the Frobenius norm and under the spectral norm. As a consequence of our general result we recover minimax optimal rates of convergence for the special models mentioned above.
    Keywords: matrix completion, matrix estimation, minimax optimality
    Date: 2017–09–15
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-43&r=ecm
  19. By: Pierre C. Bellec (Rutgers University); Guillaume Lecué (CREST;ENSAE); Alexandre Tsybakov (CREST;ENSAE)
    Abstract: Penalized least squares estimation is a popular technique in high-dimensional statistics. It includes such methods as the LASSO, the group LASSO, and nuclear norm penalized least squares. The existing theory of these methods is not fully satisfactory, since it allows one to prove oracle inequalities with fixed high probability only for estimators that depend on this probability level. Furthermore, the control of compatibility factors appearing in the oracle bounds is often not explicit. Some very recent developments suggest that the theory of oracle inequalities can be revised in an improved way. In this paper, we provide an overview of the ideas and tools leading to such an improved theory. We show that, along with overcoming the disadvantages mentioned above, the methodology extends to the Hilbertian framework and applies to a large class of convex penalties. This paper is partly expository; in particular, we provide adapted proofs of some results from other recent work.
    Date: 2017–07–07
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-23&r=ecm
  20. By: Olga Klopp (ESSEC;CREST); Yu Lu (Yale University); Alexandre Tsybakov (ENSAE;CNRS); Harrison H. Zhou
    Abstract: We study the problem of matrix estimation and matrix completion under a general framework. This framework includes several important models as special cases, such as the Gaussian mixture model, the mixed membership model, the bi-clustering model and dictionary learning. We consider the optimal convergence rates in a minimax sense for estimation of the signal matrix under the Frobenius norm and under the spectral norm. As a consequence of our general result we obtain minimax optimal rates of convergence for various special models.
    Classification: 62J99, 62H12, 60B20, 15A83
    Keywords: matrix completion, matrix estimation, minimax optimality
    Date: 2017–07–10
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-24&r=ecm
  21. By: Wenjuan Chen; Aleksei Netsunajev
    Keywords: structural vector autoregression; Markov switching; time varying transition probabilities; identification via heteroscedasticity; uncertainty shocks; unemployment dynamics
    JEL: C32 D80 E24
    Date: 2018–02–13
    URL: http://d.repec.org/n?u=RePEc:eea:boewps:wp2018-2&r=ecm

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.