nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒12‒29
35 papers chosen by
Sune Karlsson
Orebro University

  1. Inference on Co-integration Parameters in Heteroskedastic Vector Autoregressions By H. Peter Boswijk; Giuseppe Cavaliere; Anders Rahbek; A.M. Robert Taylor
  2. Robust estimation of the Pareto index: A Monte Carlo Analysis By Michał Brzeziński
  3. Oracle Inequalities for Convex Loss Functions with Non-Linear Targets By Mehmet Caner; Anders Bredahl Kock
  4. Individual and time effects in nonlinear panel models with large N,T By Iván Fernández-Val; Martin Weidner
  5. Nonparametric Estimation of Cumulative Incidence Functions for Competing Risks Data with Missing Cause of Failure By Georgios Effraimidis; Christian M. Dahl
  6. Covariate selection and model averaging in semiparametric estimation of treatment effects By Toru Kitagawa; Chris Muris
  7. Asymptotic Inference about Predictive Accuracy Using High Frequency Data By Jia Li; Andrew J. Patton
  8. Testing for Factor Loading Structural Change under Common Breaks By YAMAMOTO, Yohei; TANAKA, Shinya
  9. Endogenous Attrition in Panels By Laurent Davezies; Xavier d'Haultfoeuille
  10. A new Pearson-type QMLE for conditionally heteroskedastic models By Zhu, Ke; Li, Wai Keung
  11. Adaptive trend estimation in financial time series via multiscale change-point-induced basis recovery By Schröder, Anna Louise; Fryzlewicz, Piotr
  12. Bias Correction of Persistence Measures in Fractionally Integrated Models By Simone D. Grose; Gael M. Martin; Donald S. Poskitt
  13. Pivotal estimation via square-root lasso in nonparametric regression By Alexandre Belloni; Victor Chernozhukov; Lie Wang
  14. Psychology in econometric models: conceptual and methodological foundations By Thum, Anna-Elisabeth
  15. Uniform Consistency of Nonstationary Kernel-Weighted Sample Covariances for Nonparametric Regression By Degui Li; Peter C. B. Phillips; Jiti Gao
  16. Sharp Bounds on Heterogeneous Individual Treatment Responses By Lee, Jinhyun
  17. Dynamic Copula Models and High Frequency Data By Irving Arturo De Lira Salvatierra; Andrew J. Patton
  18. On the identification of fractionally cointegrated VAR models with the F(d) condition By Federico Carlini; Paolo Santucci de Magistris
  19. Regime Switching Stochastic Volatility with Skew, Fat Tails and Leverage using Returns and Realized Volatility Contemporaneously By Trojan, Sebastian
  20. Distributional vs. Quantile Regression By Roger Koenker; Samantha Leorato; Franco Peracchi
  21. Two-dimensional smoothing of mortality rates By Alexander Dokumentov; Rob J Hyndman
  22. Likelihood inference in non-linear term structure models: the importance of the lower bound By Andreasen, Martin; Meldrum, Andrew
  23. Symmetry and Separability in Two-Country Cointegrated VAR Models: Representation and Testing By Hans-Martin Krolzig; Reinhold Heinlein
  24. The Causal Effect of Deficiency at English on Female Immigrants' Labor Market Outcomes in the UK By Miranda, Alfonso; Zhu, Yu
  25. Endogenous Stratification in Randomized Experiments By Alberto Abadie; Matthew M. Chingos; Martin R. West
  26. Long Term Care and Longevity By Christian Gourieroux; Yang Lu
  27. Inference on Self-Exciting Jumps in Prices and Volatility using High Frequency Measures By Worapree Maneesoonthorn; Catherine S. Forbes; Gael M. Martin
  28. Time-Varying Systemic Risk: Evidence from a Dynamic Copula Model of CDS Spreads By Dong Hwan Oh; Andrew J. Patton
  29. A model for estimation of the demand for on-street parking By Madsen, Edith; Mulalic, Ismir; Pilegaard, Ninette
  30. Intergenerational Mobility and the Informative Content of Surnames By Guell, Maia; Mora, Jose V. Rodriguez; Telmer, Christopher I.
  31. Efficient Jacobian evaluations for estimating zero lower bound term structure models By Leo Krippner
  32. Inference Based on SVARs Identified with Sign and Zero Restrictions: Theory and Applications By Jonas E. Arias; Juan Rubio-Ramirez; Daniel F. Waggoner
  33. Estimating time-changes in noisy Lévy models By Adam D. Bull
  34. Block Sampling under Strong Dependence By Ting Zhang; Hwai-Chung Ho; Martin Wendler; Wei Biao Wu
  35. Monetary Policy and Exchange Rates: A Balanced Two-Country Cointegrated VAR Model Approach By Reinhold Heinlein; Hans-Martin Krolzig

  1. By: H. Peter Boswijk (University of Amsterdam, Amsterdam School of Economics, Tinbergen Institute); Giuseppe Cavaliere (Department of Statistics, University of Bologna); Anders Rahbek (Department of Statistics and Operations Research, Copenhagen University); A.M. Robert Taylor (University of Essex)
    Abstract: It is well established that the shocks driving many key macro-economic and financial variables display time-varying volatility. In this paper we consider estimation and hypothesis testing on the coefficients of the co-integrating relations and the adjustment coefficients in vector autoregressions driven by both conditional and unconditional heteroskedasticity of a quite general and unknown form in the shocks. We show that the conventional results in Johansen (1996) for the maximum likelihood estimators and associated likelihood ratio tests derived under homoskedasticity do not in general hold in the presence of heteroskedasticity. As a consequence, standard confidence intervals and hypothesis tests on these coefficients are potentially unreliable. Solutions to this inference problem based on Wald tests (using a "sandwich" estimator of the variance matrix) and on the use of the wild bootstrap are discussed. These do not require the practitioner to specify a parametric model for volatility, or to assume that the pattern of volatility is common to, or independent across, the vector of series under analysis. We formally establish the conditions under which these methods are asymptotically valid. A Monte Carlo simulation study demonstrates that significant improvements in finite sample size can be obtained by the bootstrap over the corresponding asymptotic tests in both heteroskedastic and homoskedastic environments. An application to the term structure of interest rates in the US illustrates the difference between standard and bootstrap inferences regarding hypotheses on the co-integrating vectors and adjustment coefficients.
    Keywords: Co-integration, adjustment coefficients, (un)conditional heteroskedasticity, heteroskedasticity-robust inference, wild bootstrap
    JEL: C30 C32
    Date: 2013–11–14
    URL: http://d.repec.org/n?u=RePEc:kud:kuiedp:1313&r=ecm
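The wild bootstrap at the heart of the paper's robust inference can be illustrated in a much simpler setting. The sketch below is a toy (all names are hypothetical, and the paper's procedure operates on cointegrated VARs, not a scalar regression): residuals are resampled with i.i.d. Rademacher weights, which preserves an unknown conditional-variance pattern without modelling it.

```python
# Toy wild bootstrap for a slope coefficient under heteroskedasticity.
# Illustrative only; not the paper's VAR procedure.
import random
random.seed(0)

def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x)/n, sum(y)/n
    sxy = sum((xi - mx)*(yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx)**2 for xi in x)
    return sxy / sxx

# Simulate data whose error variance grows with |x| (heteroskedastic).
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0*xi + abs(xi)*random.gauss(0, 1) for xi in x]

beta = ols_slope(x, y)
resid = [yi - beta*xi for xi, yi in zip(x, y)]

# Wild bootstrap: multiply each residual by a Rademacher weight (+1/-1),
# so the resampled errors inherit the original conditional variances.
boot = []
for _ in range(999):
    ystar = [beta*xi + ri*random.choice((-1.0, 1.0))
             for xi, ri in zip(x, resid)]
    boot.append(ols_slope(x, ystar))

boot.sort()
ci = (boot[24], boot[974])  # 95% bootstrap interval for the slope
print(round(beta, 3), round(ci[0], 3), round(ci[1], 3))
```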
  2. By: Michał Brzeziński (Faculty of Economic Sciences, University of Warsaw)
    Abstract: The Pareto distribution is often used in many areas of economics to model the right tail of heavy-tailed distributions. However, the standard method of estimating the shape parameter (the Pareto index) of this distribution – the maximum likelihood estimator (MLE) – is non-robust, in the sense that it is very sensitive to extreme observations, data contamination or model deviation. In recent years, a number of robust estimators for the Pareto index have been proposed, which correct the deficiency of the MLE. However, little is known about the performance of these estimators in the small-sample setting, which often occurs in practice. This paper investigates the small-sample properties of the most popular robust estimators for the Pareto index, including the optimal B-robust estimator (OBRE) (Victoria-Feser and Ronchetti, 1994, The Canadian Journal of Statistics 22: 247–258), the weighted maximum likelihood estimator (WMLE) (Dupuis and Victoria-Feser, 2006, Canadian Journal of Statistics 34: 639–658), the generalized median estimator (GME) (Brazauskas and Serfling, 2001a, Extremes 3: 231–249), the partial density component estimator (PDCE) (Vandewalle et al., 2007, Computational Statistics & Data Analysis 51: 6252–6268), and the probability integral transform statistic estimator (PITSE) (Finkelstein et al., 2006, North American Actuarial Journal 10: 1–10). Monte Carlo simulations show that the PITSE offers the desired compromise between ease of use and power to protect against outliers in the small-sample setting.
    Keywords: Pareto distribution, Pareto index, power-law distribution, robust estimation, Monte Carlo simulation, small-sample performance
    JEL: C46 C15
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2013-32&r=ecm
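The non-robustness of the MLE that motivates this paper is easy to demonstrate. A minimal sketch, assuming a known scale parameter `xm` (the robust estimators compared in the paper are substantially more involved): a single extreme contaminated observation visibly drags the estimated index.

```python
# MLE of the Pareto index with known scale, and its sensitivity to a
# single outlier. Illustrative toy; not one of the paper's robust methods.
import math, random
random.seed(1)

def pareto_mle(sample, xm):
    """MLE of the shape parameter (Pareto index) alpha, known scale xm."""
    return len(sample) / sum(math.log(v / xm) for v in sample)

# Draw from Pareto(alpha=2, xm=1) via inverse transform sampling.
alpha, xm = 2.0, 1.0
clean = [xm * random.random() ** (-1.0/alpha) for _ in range(500)]

a_clean = pareto_mle(clean, xm)
a_dirty = pareto_mle(clean + [1e6], xm)  # contaminate with one huge value

print(round(a_clean, 3), round(a_dirty, 3))
```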
  3. By: Mehmet Caner (North Carolina State University); Anders Bredahl Kock (Aarhus University and CREATES)
    Abstract: This paper considers penalized empirical loss minimization of convex loss functions with unknown non-linear target functions. Using the elastic net penalty we establish a finite sample oracle inequality which bounds the loss of our estimator from above with high probability. If the unknown target is linear, this inequality also provides an upper bound on the estimation error of the estimated parameter vector. These are new results and they generalize the econometrics and statistics literature. Next, we use the non-asymptotic results to show that the excess loss of our estimator is asymptotically of the same order as that of the oracle. If the target is linear, we give sufficient conditions for consistency of the estimated parameter vector. We then briefly discuss how a thresholded version of our estimator can be used to perform consistent variable selection. We give two examples of loss functions covered by our framework, show how penalized nonparametric series estimation is contained as a special case, and provide a finite sample upper bound on the mean square error of the elastic net series estimator.
    Keywords: Empirical loss minimization, Lasso, Elastic net, Oracle inequality, Convex loss function, Nonparametric estimation, Variable selection.
    JEL: C13 C21 C31
    Date: 2013–12–13
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-51&r=ecm
  4. By: Iván Fernández-Val (Institute for Fiscal Studies and Boston University); Martin Weidner (Institute for Fiscal Studies and UCL)
    Abstract: Fixed effects estimators of nonlinear panel data models can be severely biased because of the well-known incidental parameter problem. We develop analytical and jackknife bias corrections for nonlinear models with both individual and time effects. Under asymptotic sequences where the time-dimension (T) grows with the cross-sectional dimension (N), the time effects introduce additional incidental parameter bias. As the existing bias corrections apply to models with only individual effects, we derive the appropriate corrections for the case when both effects are present. The basis for the corrections are general asymptotic expansions of fixed effects estimators with incidental parameters in multiple dimensions. We apply the expansions to M-estimators with concave objective functions in parameters for panel models with additive individual and time effects. These estimators cover fixed effects estimators of the most popular limited dependent variable models such as logit, probit, ordered probit, Tobit and Poisson models. Our analysis therefore extends the use of large-T bias adjustments to an important class of models. We also develop bias corrections for functions of the data, parameters and individual and time effects including average partial effects. In this case, the incidental parameter bias can be asymptotically of second order, but the corrections still improve finite-sample properties.
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:60/13&r=ecm
  5. By: Georgios Effraimidis (University of Southern Denmark); Christian M. Dahl (University of Southern Denmark and CREATES)
    Abstract: In this paper, we develop a fully nonparametric approach for the estimation of the cumulative incidence function with Missing At Random right-censored competing risks data. We obtain results on the pointwise asymptotic normality as well as the uniform convergence rate of the proposed nonparametric estimator. A simulation study that serves two purposes is provided. First, it illustrates in detail how to implement our proposed nonparametric estimator. Second, it facilitates a comparison of the nonparametric estimator to a parametric counterpart based on the estimator of Lu and Liang (2008). The simulation results are generally very encouraging.
    Keywords: Cumulative incidence function; Inverse probability weighting; Kernel estimation; Local linear estimation; Martingale central limit theorem
    JEL: C14 C41
    Date: 2013–12–15
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-50&r=ecm
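Inverse probability weighting, listed in the paper's keywords, is the generic device for missing-at-random data. A toy sketch under strong simplifying assumptions (the observation probability is known, and we estimate a plain mean; the paper's competing-risks setting is far richer): weighting each observed outcome by the inverse of its observation probability removes the selection bias of the complete-case average.

```python
# Inverse probability weighting (IPW) for a mean under MAR missingness.
# Hypothetical toy: p(x) is known here, unlike in realistic applications.
import random
random.seed(2)

n = 20000
data = []
for _ in range(n):
    x = random.random()            # covariate
    y = x + random.gauss(0, 0.1)   # outcome, so E[Y] = 0.5
    p = 0.2 + 0.6*x                # P(observed | x): MAR given x
    observed = random.random() < p
    data.append((y, p, observed))

# Complete-case mean: biased, since high-x units are observed more often.
naive = (sum(y for y, p, o in data if o)
         / sum(1 for y, p, o in data if o))

# IPW (Horvitz-Thompson) mean: reweights observed units by 1/p.
ipw = sum(y/p for y, p, o in data if o) / n

print(round(naive, 3), round(ipw, 3))
```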
  6. By: Toru Kitagawa (Institute for Fiscal Studies and University College London); Chris Muris
    Abstract: In the practice of program evaluation, choosing the covariates and the functional form of the propensity score is an important step in estimating treatment effects. This paper proposes data-driven model selection and model averaging procedures that address this issue for the propensity score weighting estimation of the average treatment effect on the treated (ATT). Building on the focused information criterion (FIC), the proposed selection and averaging procedures aim to minimize the estimated mean squared error (MSE) of the ATT estimator in a local asymptotic framework. We formulate model averaging as a statistical decision problem in a limit experiment, and derive an averaging scheme that is Bayes optimal with respect to a given prior for the localisation parameters in the local asymptotic framework. In our Monte Carlo studies, the averaging estimator outperforms the post-covariate-selection estimator in terms of MSE, and shows a substantial reduction in MSE compared to conventional ATT estimators. We apply the procedures to evaluate the effect of the labour market program described in LaLonde (1986).
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:61/13&r=ecm
  7. By: Jia Li; Andrew J. Patton
    Abstract: This paper provides a general framework that enables many existing inference methods for predictive accuracy to be used in applications that involve forecasts of latent target variables. Such applications include the forecasting of volatility, correlation, beta, quadratic variation, jump variation, and other functionals of an underlying continuous-time process. We provide primitive conditions under which a "negligibility" result holds, and thus the asymptotic size of standard predictive accuracy tests, implemented using a high-frequency proxy for the latent variable, is controlled. An extensive simulation study verifies that the asymptotic results apply in a range of empirically relevant applications, and an empirical application to correlation forecasting is presented.
    Keywords: Forecast evaluation, realized variance, volatility, jumps, semimartingale
    JEL: C53 C22 C58 C52 C32
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:duk:dukeec:13-26&r=ecm
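The "high-frequency proxy" idea in this abstract is concrete: daily realized variance, the sum of squared intraday returns, consistently estimates the latent daily variance, so it can stand in for the unobservable target when scoring volatility forecasts. A toy simulation of that proxy property (constant true variance, no microstructure noise; not the paper's formal tests):

```python
# Realized variance as a proxy for latent daily variance.
# Illustrative sketch under idealized assumptions (no noise, no jumps).
import math, random
random.seed(3)

true_var = 0.04   # latent daily variance (sigma about 20%)
m = 390           # one-minute returns per trading day
days = 250

errs = []
for _ in range(days):
    r = [random.gauss(0, math.sqrt(true_var/m)) for _ in range(m)]
    rv = sum(ri*ri for ri in r)     # realized variance for the day
    errs.append(rv - true_var)

bias = sum(errs)/days               # proxy error averages out near zero
print(round(bias, 5))
```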
  8. By: YAMAMOTO, Yohei; TANAKA, Shinya
    Abstract: This paper proposes a new test for factor loading structural change in dynamic factor models. The proposed test is robust to the nonmonotonic power problem that occurs if the factor loadings exhibit structural changes at common dates over cross-sections. To illustrate the usefulness of our test, we first show that the leading test proposed by Breitung and Eickmeier (2011) exhibits nonmonotonic power, essentially because the breaks are considered as spurious factors with stable factor loadings. We use both local and non-local asymptotic frameworks to investigate the power of their test. The new test eliminates the effects of the spurious factors by maximizing the test statistic over possible numbers of the original factors. This approach is effective because the original factors are not identified under the alternative hypothesis. Monte Carlo simulations and an empirical example using U.S. Treasury yield curve data clearly illustrate the validity of the asymptotic power analysis and usefulness of the proposed test.
    Keywords: factor model, principal components, common breaks, spurious factors, local alternative asymptotics, fixed alternative asymptotics, nonmonotonic power, yield curve
    JEL: C12 C38
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:hit:econdp:2013-17&r=ecm
  9. By: Laurent Davezies (CREST); Xavier d'Haultfoeuille (CREST)
    Abstract: We consider endogenous attrition in panels where the probability of attrition may depend on current and past outcomes. We show that this probability is nonparametrically identified provided that instruments affecting the outcomes but not directly attrition, and whose distribution is identified, are available. We thus complement Hirano et al. (2001)'s framework, which does not rely on such instruments. Contrary to their approach, neither a refreshment sample nor an additive decomposition of the probability of attrition is needed. We also show that the exclusion restriction has testable implications. We propose an efficient estimator and a test of the exclusion restriction when the outcome and instruments are discrete. The continuous case, which shares some features with nonparametric instrumental variable additive models, is also investigated. Finally, we apply our results to the French labor force survey, and provide evidence that attrition is related to transitions in employment status.
    Keywords: Panel data, Endogenous attrition, Instrumental variables
    JEL: C14 C21 C25
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2013-17&r=ecm
  10. By: Zhu, Ke; Li, Wai Keung
    Abstract: This paper proposes a novel Pearson-type quasi maximum likelihood estimator (QMLE) of GARCH($p, q$) models. Unlike the existing Gaussian QMLE, Laplacian QMLE, generalized non-Gaussian QMLE, or LAD estimator, our Pearsonian QMLE (PQMLE) captures not only heavy-tailed but also skewed innovations. Under stationarity and weak moment conditions, the strong consistency and asymptotic normality of the PQMLE are obtained. With no further effort, the PQMLE can be applied to other conditionally heteroskedastic models. A simulation study is carried out to assess the performance of the PQMLE. Two applications to eight major stock indexes and four exchange rates further highlight the importance of our new method. Heavy-tailed and skewed innovations are often observed together in practice, and the PQMLE gives us a systematic way to capture this co-existing feature.
    Keywords: Asymmetric innovation; Conditionally heteroskedastic model; Exchange rates; GARCH model; Leptokurtic innovation; Non-Gaussian QMLE; Pearson's Type IV distribution; Pearsonian QMLE; Stock indexes.
    JEL: C1 C13 C58
    Date: 2013–12–18
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:52344&r=ecm
  11. By: Schröder, Anna Louise; Fryzlewicz, Piotr
    Abstract: Low-frequency financial returns can be modelled as centered around piecewise-constant trend functions which change at certain points in time. We propose a new stochastic time series framework which captures this feature. The main ingredient of our model is a hierarchically-ordered oscillatory basis of simple piecewise-constant functions. It differs from the Fourier-like bases traditionally used in time series analysis in that it is determined by change-points, and hence needs to be estimated from the data before it can be used. The resulting model enables easy simulation and provides interpretable decomposition of nonstationarity into short- and long-term components. The model permits consistent estimation of the multiscale change-point-induced basis via binary segmentation, which results in a variable-span moving-average estimator of the current trend, and allows for short-term forecasting of the average return.
    Keywords: Financial time series, Adaptive trend estimation, Change-point detection, Binary segmentation, Unbalanced Haar wavelets, Frequency-domain modelling
    JEL: C1 C13 C22 C51 C58 G17
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:52379&r=ecm
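The binary segmentation step named in this abstract admits a compact illustration. Below is a hypothetical sketch using a plain CUSUM contrast on a noiseless piecewise-constant series; the paper's actual construction builds an Unbalanced Haar basis and handles noisy returns, so none of the names here come from the paper.

```python
# Binary segmentation for change-points in a piecewise-constant mean.
# Plain CUSUM version; illustrative only.
import math

def cusum_split(x, lo, hi):
    """Best split point and CUSUM statistic on x[lo:hi]."""
    n = hi - lo
    total = sum(x[lo:hi])
    best_b, best_stat, left = lo, 0.0, 0.0
    for b in range(lo + 1, hi):
        left += x[b - 1]
        k = b - lo
        # Weighted difference of segment means at candidate split b.
        stat = abs(math.sqrt((n - k)/(n*k))*left
                   - math.sqrt(k/(n*(n - k)))*(total - left))
        if stat > best_stat:
            best_b, best_stat = b, stat
    return best_b, best_stat

def binary_segmentation(x, lo=0, hi=None, thresh=3.0):
    """Recursively split at the strongest CUSUM point above thresh."""
    if hi is None:
        hi = len(x)
    if hi - lo < 2:
        return []
    b, stat = cusum_split(x, lo, hi)
    if stat < thresh:
        return []
    return (binary_segmentation(x, lo, b, thresh) + [b]
            + binary_segmentation(x, b, hi, thresh))

# Noiseless piecewise-constant series with breaks at 30 and 60.
x = [0.0]*30 + [2.0]*30 + [-1.0]*30
print(binary_segmentation(x))
```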
  12. By: Simone D. Grose; Gael M. Martin; Donald S. Poskitt
    Abstract: This paper investigates the accuracy of bootstrap-based bias correction of persistence measures for long memory fractionally integrated processes. The bootstrap method is based on the semi-parametric sieve approach, with the dynamics in the long memory process captured by an autoregressive approximation. With a view to improving accuracy, the sieve method is also applied to data pre-filtered by a semi-parametric estimate of the long memory parameter. Both versions of the bootstrap technique are used to estimate the finite sample distributions of the sample autocorrelation coefficients and the impulse response coefficients and, in turn, to bias-adjust these statistics. The accuracy of the resultant estimators in the case of the autocorrelation coefficients is also compared with that yielded by analytical bias adjustment methods when available.
    Keywords: Long memory, ARFIMA, sieve bootstrap, bootstrap-based bias correction, sample autocorrelation function, impulse response function.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2013-29&r=ecm
  13. By: Alexandre Belloni; Victor Chernozhukov (Institute for Fiscal Studies and MIT); Lie Wang
    Abstract: We propose a self-tuning √Lasso method that simultaneously resolves three important practical problems in high-dimensional regression analysis, namely it handles the unknown scale, heteroscedasticity, and (drastic) non-Gaussianity of the noise. In addition, our analysis allows for badly behaved designs, for example perfectly collinear regressors, and generates sharp bounds even in extreme cases, such as the infinite variance case and the noiseless case, in contrast to Lasso. We establish various non-asymptotic bounds for √Lasso, including a prediction norm rate and a sharp sparsity bound. Our analysis is based on new impact factors that are tailored to establish prediction rates. In order to cover heteroscedastic non-Gaussian noise, we rely on moderate deviation theory for self-normalized sums to achieve Gaussian-like results under weak conditions. Moreover, we derive bounds on the performance of ordinary least squares (OLS) applied to the model selected by √Lasso, accounting for possible misspecification of the selected model. Under mild conditions, the rate of convergence of OLS post-√Lasso is no worse than that of √Lasso even with a misspecified selected model, and possibly better otherwise. As an application, we consider the use of √Lasso and post-√Lasso as estimators of nuisance parameters in a generic semi-parametric problem (nonlinear instrumental/moment condition or Z-estimation problem).
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:62/13&r=ecm
  14. By: Thum, Anna-Elisabeth
    Abstract: Personality, ability, trust, motivation and beliefs determine outcomes in life, in particular those of an economic nature such as finding a job or earnings. A problem with this type of determinant is that it is not objectively quantifiable and has no intrinsic scale, unlike age, years of education or wages. Often we think of these concepts as complex, and several items are needed to capture them. In the measurement sense, we have a more or less noisy set of measures which indirectly express and measure a concept of interest. This way of conceptualizing is used in latent variable modelling. In this article I examine to what extent the economic and econometric literature can contribute to a framework for using latent variables in economic models. As a semiparametric identification strategy for models with endogenous latent factors, I propose to use existing work on identification in the presence of endogenous variables, and I examine which additional assumptions are necessary to apply this strategy to models with latent variables. I discuss several estimation strategies and implement a Bayesian Markov chain Monte Carlo (MCMC) algorithm.
    Keywords: latent variable modelling, identification with endogenous regressors, Markov chain Monte Carlo
    JEL: C11 C14 C38 J24
    Date: 2013–12–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:52293&r=ecm
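The MCMC algorithm mentioned at the end of this abstract can be illustrated with the simplest possible sampler. A random-walk Metropolis sketch targeting a standard normal posterior (the paper's posterior involves latent factors; this toy shows only the accept/reject mechanics, and every name in it is hypothetical):

```python
# Random-walk Metropolis sampler for a standard normal target.
# Illustrative sketch of MCMC mechanics only.
import math, random
random.seed(4)

def log_post(theta):
    # Log target density up to an additive constant: N(0, 1).
    return -0.5*theta*theta

theta, chain = 0.0, []
for _ in range(50000):
    prop = theta + random.gauss(0, 1.0)    # random-walk proposal
    # Accept with probability min(1, target(prop)/target(theta)).
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

burned = chain[5000:]                       # discard burn-in
mean = sum(burned)/len(burned)
var = sum((t - mean)**2 for t in burned)/len(burned)
print(round(mean, 2), round(var, 2))
```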
  15. By: Degui Li; Peter C. B. Phillips; Jiti Gao
    Abstract: We obtain uniform consistency results for kernel-weighted sample covariances in a nonstationary multiple regression framework that allows for both fixed design and random design coefficient variation. In the fixed design case these nonparametric sample covariances have different uniform convergence rates depending on direction, a result that differs fundamentally from the random design and stationary cases. The uniform convergence rates derived are faster than the corresponding rates in the stationary case and confirm the existence of uniform super-consistency. The modelling framework and convergence rates allow for endogeneity and thus broaden the practical econometric import of these results. As a specific application, we establish uniform consistency of nonparametric kernel estimators of the coefficient functions in nonlinear cointegration models with time varying coefficients and provide sharp convergence rates in that case. For the fixed design models, in particular, there are two uniform convergence rates that apply in two different directions, both rates exceeding the usual rate in the stationary case.
    Keywords: Cointegration; Functional coefficients; Kernel degeneracy; Nonparametric kernel smoothing; Random coordinate rotation; Super-consistency; Uniform convergence rates; Time varying coefficients.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2013-27&r=ecm
  16. By: Lee, Jinhyun
    Abstract: This paper discusses how to identify individual-specific causal effects of an ordered discrete endogenous variable. The counterfactual heterogeneous causal information is recovered by identifying the partial differences of a structural relation. The proposed refutable nonparametric local restrictions exploit the fact that the pattern of endogeneity may vary across the level of the unobserved variable. The restrictions adopted in this paper impose a sense of order on an unordered binary endogenous variable. This allows for a unified structural approach to studying various treatment effects when self-selection on unobservables is present. The usefulness of the identification results is illustrated using data on Vietnam-era veterans. The empirical findings reveal that when other observable characteristics are identical, military service had positive impacts for individuals with low (unobservable) earnings potential, while it had negative impacts for those with high earnings potential. This heterogeneity would not be detected by average effects, which would underestimate the actual effects because different signs would cancel out. This partial identification result can be used to test homogeneity in response. When homogeneity is rejected, many parameters based on averages may deliver misleading information.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:edn:sirdps:506&r=ecm
  17. By: Irving Arturo De Lira Salvatierra; Andrew J. Patton
    Abstract: This paper proposes a new class of dynamic copula models for daily asset returns that exploits information from high frequency (intra-daily) data. We augment the generalized autoregressive score (GAS) model of Creal, et al. (2012) with high frequency measures such as realized correlation to obtain a "GRAS" model. We find that the inclusion of realized measures significantly improves the in-sample fit of dynamic copula models across a range of U.S. equity returns. Moreover, we find that out-of-sample density forecasts from our GRAS models are superior to those from simpler models. Finally, we consider a simple portfolio choice problem to illustrate the economic gains from exploiting high frequency data for modeling dynamic dependence.
    Keywords: Realized correlation, realized volatility, dependence, forecasting, tail risk
    JEL: C32 C51 C58
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:duk:dukeec:13-28&r=ecm
  18. By: Federico Carlini (Aarhus University and CREATES); Paolo Santucci de Magistris (Aarhus University and CREATES)
    Abstract: This paper discusses identification problems in the fractionally cointegrated system of Johansen (2008) and Johansen and Nielsen (2012). The identification problem arises when the lag structure is over-specified, such that there exist several equivalent reparametrizations of the model associated with different fractional integration and cointegration parameters. The properties of these multiple non-identified sub-models are studied, and a necessary and sufficient condition for the identification of the fractional parameters of the system is provided. The condition is named F(d). The assessment of the F(d) condition in the empirical analysis is relevant for the determination of the fractional parameters as well as the lag structure.
    Keywords: Fractional Cointegration; Cofractional Models; Identification; Lag
    JEL: C19 C32
    Date: 2013–11–12
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-44&r=ecm
  19. By: Trojan, Sebastian
    Abstract: A very general stochastic volatility (SV) model specification with leverage, heavy tails, skew and switching regimes is proposed, using realized volatility (RV) as an auxiliary time series to improve inference on the latent volatility. Asymmetry in the observation error is modeled by the Generalized Hyperbolic skew Student-t distribution, whose heavy and light tails enable modeling of substantial skewness. The information content of the range and of implied volatility, using the VIX index, is also investigated. Up to four regimes are identified from S&P 500 index data using RV as an additional time series. The resulting number of regimes and their dynamics differ depending on the auxiliary volatility proxy, and are investigated in-sample for the financial crash period 2008/09. An out-of-sample study comparing the predictive ability of various model variants for a calm and a volatile period yields insights about the gains in forecasting performance that can be expected from incorporating different volatility proxies into the model. Results indicate that including RV pays off mostly in more volatile market conditions, whereas in calmer environments SV specifications using no auxiliary series appear to be the models of choice. Results for the VIX as a measure of implied volatility point in a similar direction. The range as a volatility proxy provides a superior in-sample fit, but its predictive performance is found to be weak.
    Keywords: Stochastic volatility, realized volatility, non-Gaussian and nonlinear state space model, Generalized Hyperbolic skew Student-t distribution, mixing distribution, regime switching, Markov chain Monte Carlo, particle filter
    JEL: C11 C15 C32 C58
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:usg:econwp:2013:41&r=ecm
  20. By: Roger Koenker (University of Illinois at Urbana-Champaign); Samantha Leorato (University of Rome "Tor Vergata"); Franco Peracchi (University of Rome "Tor Vergata" and EIEF)
    Abstract: Given a scalar random variable Y and a random vector X defined on the same probability space, the conditional distribution of Y given X can be represented by either the conditional distribution function or the conditional quantile function. To these equivalent representations correspond two alternative approaches to estimation. One approach, distributional regression (DR), is based on direct estimation of the conditional distribution function; the other approach, quantile regression (QR), is instead based on direct estimation of the conditional quantile function. Indirect estimates of the conditional quantile function and the conditional distribution function may then be obtained by inverting the direct estimates obtained from either approach. Despite the growing attention to the DR approach, and the vast literature on the QR approach, the link between the two approaches has not been explored in detail. The aim of this paper is to fill in this gap by providing a better understanding of the relative performance of the two approaches, both asymptotically and in finite samples, under the linear location model and certain types of heteroskedastic location-scale models.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:eie:wpaper:1329&r=ecm
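The equivalence the abstract describes — estimate the conditional distribution function and invert it, or estimate the conditional quantile directly — can be illustrated with a crude local (windowing) estimator under an assumed linear location model. Nothing below reflects the paper's actual estimators; it only shows the two routes arriving at the same object.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50000
x = rng.uniform(0, 2, n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)   # a linear location model

# Condition on a narrow window around x0: a crude local estimator
x0 = 1.0
y0 = y[np.abs(x - x0) < 0.05]

# "DR route": estimate F(c | x0) directly, then invert to recover a quantile
c = 3.0                                  # the true conditional median at x0
F_hat = np.mean(y0 <= c)                 # conditional distribution function at c
q_from_F = np.quantile(y0, F_hat)        # inverting should give back roughly c

# "QR route": estimate the conditional median directly
q_direct = np.quantile(y0, 0.5)
```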
  21. By: Alexander Dokumentov; Rob J Hyndman
    Abstract: We propose three new practical methods of smoothing mortality rates (the procedure known in demography as graduation) over two dimensions: age and time. The first method uses bivariate thin plate splines. The second uses a similar procedure but with lasso-type regularization. The third method also uses bivariate lasso-type regularization, but allows for both period and cohort effects. Thus the mortality rates are modelled as the sum of four components: a smooth bivariate function of age and time, smooth one-dimensional cohort effects, smooth one-dimensional period effects and random errors. Cross validation is used to compare these new methods of graduation with existing approaches.
    Keywords: Mortality rates, nonparametric smoothing, graduation, cohort effects, period effects.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2013-26&r=ecm
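As an illustration of the first ingredient — bivariate thin plate spline smoothing of a mortality surface over age and time — the sketch below fits a penalized thin plate spline to synthetic log-mortality data via SciPy's `RBFInterpolator`. The surface, noise level, coordinate scaling and smoothing constant are all assumptions for illustration; this is not the authors' procedure.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
ages = np.arange(50, 90)
years = np.arange(1990, 2010)
A, Y = np.meshgrid(ages, years, indexing="ij")

# Synthetic log-mortality surface: rises in age, drifts down over time, plus noise
true_log_m = -9.0 + 0.09 * A - 0.01 * (Y - 1990)
obs = true_log_m + 0.1 * rng.standard_normal(A.shape)

# Thin plate spline smoother with a roughness penalty (crude coordinate scaling)
pts = np.column_stack([A.ravel() / 10.0, (Y - 1990).ravel() / 10.0])
smoother = RBFInterpolator(pts, obs.ravel(),
                           kernel="thin_plate_spline", smoothing=5.0)
fitted = smoother(pts).reshape(A.shape)
```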
  22. By: Andreasen, Martin (Aarhus University); Meldrum, Andrew (Bank of England)
    Abstract: This paper shows how to use adaptive particle filtering and Markov chain Monte Carlo methods to estimate quadratic term structure models (QTSMs) by likelihood inference. The procedure is applied to a quadratic model for the United States during the recent financial crisis. We find that this model provides a better statistical description of the data than a Gaussian affine term structure model. In addition, QTSMs account perfectly for the lower bound whereas Gaussian affine models frequently imply forecast distributions with negative interest rates. Such predictions appear during the recent financial crisis but also prior to the crisis.
    Keywords: Adaptive particle filtering; Bayesian inference; Higher-order moments; PMCMC; Quadratic term structure models
    JEL: C01 C58 G12
    Date: 2013–12–20
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0481&r=ecm
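A minimal bootstrap particle filter — the generic building block behind likelihood inference in such nonlinear state space models, though not the adaptive scheme of the paper — can be sketched for a toy model with a quadratic measurement equation. All parameter values and the model itself are illustrative assumptions.

```python
import numpy as np

def bootstrap_pf_loglik(y, n_particles=500, phi=0.9, sig_x=0.3, sig_y=0.4, seed=0):
    """Log-likelihood under x_t = phi*x_{t-1} + N(0, sig_x^2),
    y_t = x_t^2 + N(0, sig_y^2), via a bootstrap particle filter."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0, sig_x / np.sqrt(1 - phi**2), n_particles)  # stationary init
    ll = 0.0
    for yt in y:
        x = phi * x + sig_x * rng.standard_normal(n_particles)   # propagate
        w = np.exp(-0.5 * ((yt - x**2) / sig_y) ** 2) / (sig_y * np.sqrt(2 * np.pi))
        ll += np.log(np.mean(w) + 1e-300)                        # likelihood increment
        x = rng.choice(x, size=n_particles, p=w / w.sum())       # multinomial resampling
    return ll

# Simulate data from the toy model, then evaluate the filter on it
rng = np.random.default_rng(1)
T, phi, sig_x, sig_y = 200, 0.9, 0.3, 0.4
xs = np.empty(T); xs[0] = 0.0
for t in range(1, T):
    xs[t] = phi * xs[t - 1] + sig_x * rng.standard_normal()
ys = xs**2 + sig_y * rng.standard_normal(T)
ll = bootstrap_pf_loglik(ys)
```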
  23. By: Hans-Martin Krolzig; Reinhold Heinlein
    Abstract: Introducing the approach of Masanao Aoki (1981) to time series econometrics, we show that the dynamics of symmetric linear, possibly cointegrated, two-country VAR models can be separated into two autonomous subsystems: the country averages and the country differences, where the latter includes the exchange rate. The symmetric two-country cointegrated VAR model is synchronized, i.e. the two countries are driven by the same common trends, if and only if the country-differences subsystem is stable. It is shown that separability carries over even under mild asymmetries such as differences in the size of the countries' economies. The possibility of a recursive structural VECM representation under symmetry is evaluated. The derived conditions for symmetry and separability are easily testable and are applied to nine-dimensional quarterly cointegrated VAR models for five different country pairs in the post-Bretton-Woods era. We find evidence for the symmetry of the cointegration space, which is of practical importance as it allows for the identification of the cointegration vectors in much smaller systems, and for the exchange rate equation in general.
    Keywords: Multi-country modelling; Cointegration; Common trends; Structural VAR; Synchronization; Exchange rate; International Economics
    JEL: C32 C51 F41
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:ukc:ukcedp:1323&r=ecm
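The separation result is easy to verify numerically: for a symmetric two-country coefficient matrix, transforming to country averages and differences block-diagonalizes the dynamics. A small sketch with arbitrary illustrative coefficient values:

```python
import numpy as np

# Symmetric two-country VAR(1): each country responds to itself via A11
# and to the other country via A12 (values are arbitrary illustrations).
A11 = np.array([[0.5, 0.1], [0.0, 0.6]])
A12 = np.array([[0.2, 0.0], [0.1, 0.1]])
A = np.block([[A11, A12], [A12, A11]])

I2 = np.eye(2)
# Aoki transform: country averages (x1 + x2)/2 and differences (x1 - x2)/2
S = 0.5 * np.block([[I2, I2], [I2, -I2]])
Sinv = np.block([[I2, I2], [I2, -I2]])   # S @ Sinv = identity

B = S @ A @ Sinv
# B is block-diagonal: averages evolve via A11 + A12, differences via A11 - A12
```

The differences block A11 - A12 is exactly the subsystem whose stability governs synchronization in the abstract's sense.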
  24. By: Miranda, Alfonso (CIDE, Mexico City); Zhu, Yu (University of Kent)
    Abstract: We investigate the extent to which deficiency in English, as measured by English as Additional Language (EAL) status, contributes to the immigrant-native wage gap for female employees in the UK, controlling for covariates. To deal with the endogeneity of EAL and the substantial problem of self-selection into employment, we suggest a 3-step estimator (TSE). The properties of this estimator are investigated in a Monte Carlo simulation study, and we show evidence that the TSE is consistent and asymptotically normal. We find a large and statistically significant causal effect of EAL on the immigrant-native wage gap for women.
    Keywords: English as Additional Language (EAL), immigrant-native wage gap, endogenous treatment, sample selection
    JEL: J15 J31 J61 C21
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp7841&r=ecm
  25. By: Alberto Abadie; Matthew M. Chingos; Martin R. West
    Abstract: Researchers and policy makers are often interested in estimating how treatments or policy interventions affect the outcomes of those most in need of help. This concern has motivated the increasingly common practice of disaggregating experimental data by groups constructed on the basis of an index of baseline characteristics that predicts the values that individual outcomes would take on in the absence of the treatment. This article shows that substantial biases may arise in practice if the index is estimated, as is often the case, by regressing the outcome variable on baseline characteristics for the full sample of experimental controls. We analyze the behavior of leave-one-out and repeated split sample estimators and show that in realistic scenarios they have substantially lower biases than the full sample estimator. We use data from the National JTPA Study and the Tennessee STAR experiment to demonstrate the performance of alternative estimators and the magnitude of their biases.
    JEL: C01 C21 C9
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:19742&r=ecm
  26. By: Christian Gourieroux (Crest, University of Toronto); Yang Lu (SCOR and Crest)
    Abstract: The increase in expected lifetime, that is, the longevity phenomenon, is accompanied by an increase in the number of seniors with a severe disability. Because of the significant costs of long term care facilities, it is important to analyze the time spent in long term care, as well as the probability of entering this state during one's lifetime, and how these evolve with longevity. Our paper considers such questions when lifetime data are available, but long term care data are either unavailable, too aggregated, or unreliable, as is usually the case. We specify a joint structural model of long term care and mortality, and explain why the parameters of such models are identifiable from lifetime data alone. The methodology is applied to the mortality data of French males, first with a deterministic trend and then with a dynamic factor process. Prediction formulas are then provided and illustrated using the same data. We show in particular that the expected cost of long term care is increasing more slowly than the residual life expectancy at age 50.
    Keywords: Longevity, Long term care (LTC), Semi-competing risks, Unobserved heterogeneity, Dynamic frailty, Affine process, Partial Observability, Identification, Markov chain Monte-Carlo
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2013-16&r=ecm
  27. By: Worapree Maneesoonthorn; Catherine S. Forbes; Gael M. Martin
    Abstract: This paper investigates the dynamic behaviour of jumps in financial prices and volatility. The proposed model is based on a standard jump diffusion process for price and volatility augmented by a bivariate Hawkes process for the two jump components. The latter process specifies a joint dynamic structure for the price and volatility jump intensities, with the intensity of a volatility jump also directly affected by a jump in the price. The impact of certain aspects of the model on the higher-order conditional moments for returns is investigated. In particular, the differential effects of the jump intensities and the random process for latent volatility itself, are measured and documented. A state space representation of the model is constructed using both financial returns and non-parametric measures of integrated volatility and price jumps as the observable quantities. Bayesian inference, based on a Markov chain Monte Carlo algorithm, is used to obtain a posterior distribution for the relevant model parameters and latent variables, and to analyze various hypotheses about the dynamics in, and the relationship between, the jump intensities. An extensive empirical investigation using data based on the S&P500 market index over a period ending in early-2013 is conducted. Substantial empirical support for dynamic jump intensities is documented, with predictive accuracy enhanced by the inclusion of this type of specification. In addition, movements in the intensity parameter for volatility jumps are found to track key market events closely over this period.
    Keywords: Dynamic price and volatility jumps; Stochastic volatility; Hawkes process; Nonlinear state space model; Bayesian Markov chain Monte Carlo; Global financial crisis
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2013-28&r=ecm
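The self-exciting intensities at the heart of a Hawkes specification can be illustrated with a univariate version simulated by Ogata's thinning algorithm. The exponential kernel and all parameter values below are assumptions for illustration; the paper's model is bivariate and embedded in a jump diffusion.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Simulate a univariate Hawkes process with intensity
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))
    by Ogata's thinning algorithm."""
    rng = np.random.default_rng(seed)
    events = []
    t = 0.0
    while True:
        # The intensity decays between events, so its current value bounds it
        # until the next accepted event.
        lam_bar = mu + (alpha * np.exp(-beta * (t - np.array(events))).sum()
                        if events else 0.0)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(events)
        lam_t = mu + (alpha * np.exp(-beta * (t - np.array(events))).sum()
                      if events else 0.0)
        if rng.uniform() * lam_bar <= lam_t:   # thinning: accept with prob lam_t/lam_bar
            events.append(t)

# alpha/beta < 1 keeps the process stationary; baseline rate mu alone would
# give about mu*T events, self-excitation pushes the count above that.
ev = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, T=500.0)
```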
  28. By: Dong Hwan Oh; Andrew J. Patton
    Abstract: This paper proposes a new class of copula-based dynamic models for high-dimensional conditional distributions, facilitating the estimation of a wide variety of measures of systemic risk. Our proposed models draw on successful ideas from the literature on modeling high-dimensional covariance matrices and on recent work on models for general time-varying distributions. Our use of copula-based models enables the estimation of the joint model in stages, greatly reducing the computational burden. We use the proposed new models to study a collection of daily credit default swap (CDS) spreads on 100 U.S. firms over the period 2006 to 2012. We find that while the probability of distress for individual firms has fallen greatly since the financial crisis of 2008-09, the joint probability of distress (a measure of systemic risk) is substantially higher now than in the pre-crisis period.
    Keywords: correlation, tail risk, financial crises, DCC
    JEL: C32 C58 G01
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:duk:dukeec:13-30&r=ecm
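The basic mechanism — dependence pushing joint distress probabilities far above the independence benchmark — can be shown with a toy static Gaussian copula; the paper's models are dynamic and far richer. The marginal probabilities, correlation and dimension below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Marginal distress probabilities for three hypothetical firms
p = np.array([0.05, 0.05, 0.05])
z = norm.ppf(p)                    # thresholds on the copula's Gaussian scale

rho = 0.5                          # assumed equicorrelation
d = len(p)
corr = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)

# Joint probability that all three are in distress simultaneously
joint_copula = multivariate_normal(mean=np.zeros(d), cov=corr).cdf(z)
joint_indep = p.prod()             # the independence benchmark
```

Even moderate correlation makes the joint tail event orders of magnitude more likely than under independence, which is why the dependence structure matters for systemic risk measures.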
  29. By: Madsen, Edith; Mulalic, Ismir; Pilegaard, Ninette
    Abstract: This paper presents a stylized econometric model for the demand for on-street parking with focus on estimation of the elasticity of demand with respect to the full cost of parking. The full cost of parking consists of a parking fee and the cost of searching for a vacant parking space (cruising). The cost of cruising is usually unobserved. Ignoring this issue implies a downward bias of the elasticity of demand with respect to the total cost of parking since the cost of cruising depends on the number of cars parked. We also demonstrate that, even when the cost of cruising is unobserved, the demand elasticity can be identified by extending the econometric model to include the spatial interaction between the parking facilities. We illustrate the model with on-street parking data from Copenhagen and find indications of a somewhat greater parking demand elasticity than is usually reported in the literature.
    Keywords: on-street parking, demand estimation.
    JEL: C51 L91 R41
    Date: 2013–12–16
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:52301&r=ecm
  30. By: Guell, Maia; Mora, Jose V. Rodriguez; Telmer, Christopher I.
    Abstract: We propose a new methodology for measuring intergenerational mobility in economic wellbeing. Our method is based on the joint distribution of surnames and economic outcomes. It circumvents the need for intergenerational panel data, a long-standing stumbling block for understanding mobility. A single cross-sectional dataset is sufficient. Our main idea is simple. If 'inheritance' is important for economic outcomes, then rare surnames should predict economic outcomes in the cross-section. This is because rare surnames are indicative of familial linkages. Of course, if the number of rare surnames is small, this won't work. But rare surnames are abundant, owing to the highly skewed surname distributions found in most Western societies. We develop a model that articulates this idea and shows that the more important inheritance is, the more informative surnames will be. This result is robust to a variety of different assumptions about fertility and mating. We apply our method using the 2001 census from Catalonia, a large region of Spain. We use educational attainment as a proxy for overall economic well-being. Our main finding is that mobility has decreased across the different generations of the 20th century. A complementary analysis based on sibling correlations confirms our results and provides a robustness check on our method. Our model and our data allow us to examine one possible explanation for the observed decrease in mobility. We find that the degree of assortative mating has increased over time. Overall, we argue that our method has promise because it can tap the vast mines of census data that are available in a heretofore unexploited manner.
    Keywords: Surnames, intergenerational mobility, cross-sectional data analysis, population genetics, assortative mating, siblings
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:edn:sirdps:499&r=ecm
  31. By: Leo Krippner
    Abstract: Faster extended Kalman filter estimations of zero lower bound models of the term structure are possible if the analytic properties of the Jacobian matrix for the measurement equation are exploited. I show that such results are straightforward to incorporate, at least in Monte-Carlo-based implementations, and that this will facilitate fast and robust estimations of zero lower bound term structure models with the iterated extended Kalman filter.
    Keywords: Black framework, zero lower bound, shadow short rate, term structure model
    JEL: C18 E43 G12
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2013-77&r=ecm
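A generic extended Kalman filter measurement update with a user-supplied analytic Jacobian — the ingredient whose closed-form availability the abstract exploits — looks roughly as follows. The toy measurement function and all numbers are illustrative assumptions, not the paper's model.

```python
import numpy as np

def ekf_update(x_pred, P_pred, y, h, H_jac, R):
    """One EKF measurement update, with the analytic Jacobian of the
    measurement function supplied instead of a numerical approximation."""
    H = H_jac(x_pred)                       # Jacobian evaluated at the prediction
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_post = x_pred + K @ (y - h(x_pred))   # state update on the innovation
    P_post = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_post, P_post

# Toy measurement that is quadratic in the state, echoing the flavour of QTSMs
h = lambda x: np.array([x[0] ** 2 + x[1]])
H_jac = lambda x: np.array([[2.0 * x[0], 1.0]])

x_pred = np.array([1.0, 0.0])
P_pred = 0.5 * np.eye(2)
R = np.array([[0.1]])
x_post, P_post = ekf_update(x_pred, P_pred, np.array([1.3]), h, H_jac, R)
```

The iterated EKF simply repeats this update, re-evaluating `h` and `H_jac` at each new posterior mean, which is where a cheap analytic Jacobian pays off.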
  32. By: Jonas E. Arias; Juan Rubio-Ramirez; Daniel F. Waggoner
    Abstract: Are optimism shocks an important source of business cycle fluctuations? Are deficit-financed tax cuts better than deficit-financed spending to increase output? These questions have been previously studied using SVARs identified with sign and zero restrictions and the answers have been positive and definite in both cases. While the identification of SVARs with sign and zero restrictions is theoretically attractive because it allows the researcher to remain agnostic with respect to the responses of the key variables of interest, we show that current implementation algorithms do not respect the agnosticism of the theory. These algorithms impose additional sign restrictions on variables that are seemingly unrestricted that bias the results and produce misleading confidence intervals. We provide an alternative and efficient algorithm that does not introduce any additional sign restriction, hence preserving the agnosticism of the theory. Without the additional restrictions, it is hard to support the claim that either optimism shocks are an important source of business cycle fluctuations or deficit-financed tax cuts work best at improving output. Our algorithm is not only correct but also faster than current ones.
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:fda:fdaddt:2013-24&r=ecm
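The standard machinery in this literature draws candidate rotation matrices from the Haar measure via the QR decomposition of a Gaussian matrix, then accepts those satisfying the sign restrictions. A minimal sketch of that generic step (the covariance matrix and the single restriction are illustrative; this is not the authors' corrected algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

def draw_orthogonal(n):
    """Haar-distributed orthogonal matrix via QR of a Gaussian matrix,
    with the usual sign normalization on the diagonal of R."""
    X = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(X)
    return Q * np.sign(np.diag(R))

# Accept rotations whose implied impact matrix satisfies a sign restriction,
# here: variable 0 responds positively to shock 0 on impact.
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])   # illustrative reduced-form covariance
L = np.linalg.cholesky(Sigma)
accepted = []
while len(accepted) < 100:
    A0 = L @ draw_orthogonal(2)
    if A0[0, 0] > 0:                         # the sign restriction
        accepted.append(A0)
```

Every accepted impact matrix reproduces the reduced-form covariance, since A0 A0' = L Q Q' L' = Sigma for any orthogonal Q.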
  33. By: Adam D. Bull
    Abstract: In quantitative finance, we often wish to recover the volatility of asset prices given by a noisy Itô semimartingale. Existing estimates, however, lose accuracy when the jumps are of infinite variation, as is suggested by empirical evidence. In this paper, we show that when the efficient prices are given by an unknown time-changed Lévy process, the rate of time change, which plays the role of the volatility, can be estimated well under arbitrary jump activity. We further show that our estimate remains valid for the volatility in the general semimartingale model, obtaining convergence rates as good as any previously implied in the literature.
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1312.5911&r=ecm
  34. By: Ting Zhang; Hwai-Chung Ho; Martin Wendler; Wei Biao Wu
    Abstract: The paper considers the block sampling method for long-range dependent processes. Our theory generalizes earlier ones by Hall, Jing and Lahiri (1998) on functionals of Gaussian processes and Nordman and Lahiri (2005) on linear processes. In particular, we allow nonlinear transforms of linear processes. Under suitable conditions on physical dependence measures, we prove the validity of the block sampling method. The problem of estimating the self-similar index is also studied.
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1312.5807&r=ecm
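A simple relative of the block-based toolkit for long-range dependence is the variance-of-block-means plot, in which the scaling exponent of block-mean variances identifies the self-similarity index H. The sketch below applies it to iid noise (H = 1/2) purely for illustration; it is not the paper's block sampling procedure.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(2 ** 16)   # iid noise: self-similarity index H = 1/2

def block_mean_var(x, b):
    """Variance of non-overlapping block means with block length b."""
    m = len(x) // b
    return x[: m * b].reshape(m, b).mean(axis=1).var()

# For a self-similar series, Var(block mean) scales like b**(2H - 2);
# regressing log variance on log block length therefore estimates H.
bs = 2 ** np.arange(4, 10)
v = np.array([block_mean_var(x, b) for b in bs])
slope = np.polyfit(np.log(bs), np.log(v), 1)[0]
H_hat = 1.0 + slope / 2.0
```

For a long-range dependent series the same regression would return H > 1/2, with the block length playing the role of the tuning parameter, as in block sampling.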
  35. By: Reinhold Heinlein; Hans-Martin Krolzig
    Abstract: We study the exchange rate effects of monetary policy in a balanced macroeconometric two-country model for the US and UK. In contrast to the empirical literature on the 'delayed overshooting puzzle', which consistently treats the domestic and foreign countries unequally in the modelling process, we consider the full model feedback, allowing for a thorough analysis of the system dynamics. The consequent problem of model dimensionality is tackled in this paper by invoking the approach of Aoki (1981) commonly used in economic theory. Assuming country symmetry in the long run makes it possible to decouple the two-country macro dynamics into country averages and country differences, so that the cointegration analysis can be applied to much smaller systems. Secondly, the econometric modelling follows a general-to-specific strategy, combining a graph-theoretic approach for the contemporaneous effects with automatic general-to-specific model selection. The resulting parsimonious structural vector equilibrium correction model ensures highly significant impulse responses, revealing a delayed overshooting of the exchange rate in the case of a Bank of England monetary shock but suggesting an instantaneous response to a Fed shock. Altogether, the response is more pronounced in the former case.
    Keywords: Two-country model; Cointegration; Structural VAR; Gets Model Selection; Monetary Policy; Exchange Rates
    JEL: C22 C32 C50
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:ukc:ukcedp:1321&r=ecm

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.