nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒12‒01
sixteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Quantile-based inference and estimation of heavy-tailed distributions By Yves Dominicy
  2. Robust estimation of nonstationary, fractionally integrated, autoregressive, stochastic volatility By Jensen, Mark J.
  3. Meta-analytic cointegrating rank tests for dependent panels By Deniz Dilan Karaman Örsal; Antonia Arsova
  4. Joint inference on market and estimation risks in dynamic portfolios By Francq, Christian; Zakoian, Jean-Michel
  5. Nonparametric Estimation in case of Endogenous Selection By Christoph Breunig; Enno Mammen; Anna Simoni
  6. "On Effects of Jump and Noise in High-Frequency Financial Econometrics" By Naoto Kunitomo; Daisuke Kurisu
  7. Uncovering the evolution of non-stationary stochastic variables: the example of asset volume-price fluctuations By Paulo Rocha; Frank Raischel; João P. Boto; Pedro G. Lind
  8. Correlated Defaults of UK Banks: Dynamics and Asymmetries By Mario Cerrato; John Crosby; Minjoo Kim; Yang Zhao
  9. Sieve-based inference for infinite-variance linear processes By Giuseppe Cavaliere; Iliyan Georgiev; A.M. Robert Taylor
  10. Estimating Discrete-Continuous Choice Models: The Endogenous Grid Method with Taste Shocks By Fedor Iskhakov; Thomas Høgholm Jørgensen; John Rust; Bertel Schjerning
  11. Forecasting With High Dimensional Panel VARs By Gary Koop; Dimitris Korobilis
  12. Regime shift model by three types of distribution considering a heavy tail and dependence By Jungwoo Kim; Joocheol Kim
  13. A new approach to multi-step forecasting using dynamic stochastic general equilibrium models By Kapetanios, George; Price, Simon; Theodoridis, Konstantinos
  14. Supply Function Competition and Exporters: Nonparametric Identification and Estimation of Productivity Distributions and Marginal Costs By Quang Vuong; Ayse Pehlivan
  15. The Multivariate DCC-GARCH Model with Interdependence among Markets in Conditional Variances’ Equations By Marcin Fałdziński; Michał Bernard Pietrzak
  16. Going Beyond LATE: Bounding Average Treatment Effects of Job Corps Training By Chen, Xuan; Flores, Carlos A.; Flores-Lagunes, Alfonso

  1. By: Yves Dominicy
    Abstract: This thesis is divided into four chapters. The first two chapters introduce a parametric quantile-based estimation method for univariate heavy-tailed distributions and elliptical distributions, respectively. For estimating the tail index without imposing a parametric form on the entire distribution function, but only on the tail behaviour, chapter three proposes a multivariate Hill estimator for elliptical distributions. The first three chapters assume an independent and identically distributed setting; as a first step towards a dependent setting, the last chapter proves the asymptotic normality of marginal sample quantiles for stationary processes under the S-mixing condition.
    The first chapter introduces a quantile- and simulation-based estimation method, which we call the Method of Simulated Quantiles, or simply MSQ. Since it is based on quantiles, it is a moment-free approach; and since it is based on simulations, we do not need closed-form expressions of any function that represents the probability law of the process. It is thus useful when the probability density function has no closed form and/or moments do not exist. The method is based on a vector of functions of quantiles: the principle consists in matching functions of theoretical quantiles, which depend on the parameters of the assumed probability law, with those of empirical quantiles, which depend on the data. Since the theoretical functions of quantiles may not have a closed-form expression, we rely on simulations. (A minimal code sketch of this matching principle follows this entry.)
    The second chapter deals with the estimation of the parameters of elliptical distributions by means of a multivariate extension of MSQ, proposing inference for vast-dimensional elliptical distributions. Estimation is based on quantiles, which always exist regardless of the thickness of the tails, and testing is based on the geometry of the elliptical family. The multivariate extension of MSQ faces the difficulty of constructing a function of quantiles that is informative about the covariation parameters; we show that the interquartile range of a projection of pairwise random variables onto the 45-degree line is very informative about the covariation.
    The third chapter constructs a multivariate tail-index estimator. In the univariate case, the most popular estimator for the tail exponent is the Hill estimator introduced by Bruce Hill in 1975. This chapter proposes an estimator of the tail index in a multivariate context, more precisely for regularly varying elliptical distributions. Since, for univariate random variables, our estimator boils down to the Hill estimator, we name it after Bruce Hill. Our estimator is based on the distance between an elliptical probability contour and the exceedance observations.
    Finally, the fourth chapter investigates the asymptotic behaviour of the marginal sample quantiles for p-dimensional stationary processes and obtains the asymptotic normality of the empirical quantile vector. We assume that the processes are S-mixing, a recently introduced and widely applicable notion of dependence. A remarkable property of S-mixing is that it does not require any higher-order moment assumptions to be verified; since we are interested in quantiles and in processes that are possibly heavy-tailed, this is of particular interest.
    Keywords: Finance -- Econometric models; Distribution (Probability theory); Estimation theory; Tail index; Quantiles; Simulation; Elliptical distributions; Heavy-tailed distributions; Estimation
    Date: 2014–04–18
    URL: http://d.repec.org/n?u=RePEc:ulb:ulbeco:2013/209311&r=ecm
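    To make the matching principle of MSQ concrete, here is a minimal Python sketch that estimates the location and scale of a heavy-tailed sample by matching empirical quantiles to quantiles simulated under an assumed law. The Student-t family with fixed degrees of freedom, the three quantile levels, and the unweighted squared-distance objective are illustrative assumptions, not the thesis's estimator.

```python
# Minimal sketch of the Method of Simulated Quantiles (MSQ) idea:
# match empirical quantiles to quantiles simulated under the assumed law.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = 2.0 + 1.5 * rng.standard_t(df=3, size=2000)  # toy heavy-tailed sample

probs = np.array([0.25, 0.5, 0.75])                 # quantile levels to match
q_emp = np.quantile(data, probs)                    # empirical quantiles
base = rng.standard_t(df=3, size=20000)             # common random numbers,
                                                    # drawn once so the
                                                    # objective is smooth

def msq_objective(theta):
    """Squared distance between empirical and simulated quantiles
    under a location-scale Student-t with fixed df (an assumption)."""
    loc, scale = theta
    q_sim = np.quantile(loc + scale * base, probs)
    return np.sum((q_emp - q_sim) ** 2)

res = minimize(msq_objective, x0=[0.0, 1.0], method="Nelder-Mead")
print("estimated (location, scale):", res.x)
```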
  2. By: Jensen, Mark J. (Federal Reserve Bank of Atlanta)
    Abstract: Empirical volatility studies have discovered nonstationary, long-memory dynamics in the volatility of the stock market and foreign exchange rates. This highly persistent, infinite variance—but still mean reverting—behavior is commonly found with nonparametric estimates of the fractional differencing parameter d for financial volatility. In this paper, a fully parametric Bayesian estimator, robust to nonstationarity, is designed for the fractionally integrated, autoregressive, stochastic volatility (SV-FIAR) model. Joint estimates of the autoregressive and fractional differencing parameters of volatility are found via a Bayesian, Markov chain Monte Carlo (MCMC) sampler. Like Jensen (2004), this MCMC algorithm relies on the wavelet representation of the log-squared return series. Unlike the Fourier transform, where a time series must be a stationary process to have a spectral density function, wavelets can represent both stationary and nonstationary processes. As long as the wavelet has a sufficient number of vanishing moments, this paper's MCMC sampler will be robust to nonstationary volatility and capable of generating the posterior distribution of the autoregressive and long-memory parameters of the SV-FIAR model regardless of the value of d. Using simulated and empirical stock market return data, we find that our Bayesian estimator produces reliable point estimates of the autoregressive and fractional differencing parameters, with reasonable Bayesian confidence intervals, for either stationary or nonstationary SV-FIAR models. (A minimal sketch of the wavelet-transform step follows this entry.)
    Keywords: Bayes; infinite variance; long-memory; Markov chain Monte Carlo; mean-reverting; wavelets
    JEL: C11 C14 C22
    Date: 2015–11–01
    URL: http://d.repec.org/n?u=RePEc:fip:fedawp:2015-12&r=ecm
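    The sampler's starting point is the wavelet transform of log-squared returns, which handles nonstationarity where the Fourier transform cannot. Below is a minimal sketch of just that transform step, using PyWavelets with a db4 wavelet (four vanishing moments) on a placeholder return series; the paper's MCMC sampler built on these coefficients is not reproduced.

```python
# Minimal sketch: discrete wavelet decomposition of log-squared returns.
import numpy as np
import pywt

rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_normal(1024)          # placeholder return series
log_sq = np.log(returns ** 2 + 1e-12)               # log-squared returns

# db4 has four vanishing moments, helping the transform absorb the
# slowly varying (possibly nonstationary) component of volatility.
coeffs = pywt.wavedec(log_sq, wavelet="db4", level=5)
for j, c in enumerate(coeffs):
    print(f"band {j}: {len(c)} coefficients, variance = {c.var():.3f}")
```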
  3. By: Deniz Dilan Karaman Örsal (Leuphana University Lueneburg, Germany); Antonia Arsova (Leuphana University Lueneburg, Germany)
    Abstract: This paper proposes two new panel cointegrating rank tests which are robust to cross-sectional dependence. The dependence in the data generating process is modeled using unobserved common factors. The new tests are based on a meta-analytic approach, in which the p-values of the individual likelihood-ratio (LR) type test statistics computed from defactored data are combined to develop the panel statistics. A simulation study shows that the tests have reasonable size and power properties in finite samples. (A minimal sketch of the p-value combination step follows this entry.)
    Keywords: Panel cointegration; p-value; common factors; rank test; cross-sectional dependence
    JEL: C12 C15 C33
    Date: 2015–11
    URL: http://d.repec.org/n?u=RePEc:lue:wpaper:349&r=ecm
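    The combination step of a meta-analytic panel test can be illustrated with a Fisher-type statistic: under independence across units (which the defactoring step is meant to deliver), minus twice the sum of log p-values is chi-squared with 2N degrees of freedom. The p-values below are hypothetical, and the paper's LR-type statistics are not reproduced.

```python
# Minimal sketch of a Fisher-type p-value combination for a panel test.
import numpy as np
from scipy import stats

p_values = np.array([0.12, 0.03, 0.40, 0.07, 0.22])  # hypothetical unit-level
                                                     # cointegration p-values
fisher_stat = -2.0 * np.sum(np.log(p_values))
# Under cross-sectional independence the statistic is chi2 with 2N df.
panel_p = stats.chi2.sf(fisher_stat, df=2 * len(p_values))
print(f"Fisher statistic = {fisher_stat:.2f}, panel p-value = {panel_p:.3f}")
```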
  4. By: Francq, Christian; Zakoian, Jean-Michel
    Abstract: We study the estimation risk induced by univariate and multivariate methods for evaluating the conditional Value-at-Risk (VaR) of a portfolio of assets. The composition of the portfolio can be time-varying and the individual returns are assumed to follow a general multivariate dynamic model. Under sphericity of the innovations distribution, we introduce in the multivariate framework a concept of VaR parameter, and we establish the asymptotic distribution of its estimator. A multivariate Filtered Historical Simulation method, which does not rely on sphericity, is also studied. We derive asymptotic confidence intervals for the conditional VaR, which allow one to quantify simultaneously the market and estimation risks. The particular case of minimal variance and minimal VaR portfolios is considered. Potential usefulness, feasibility and drawbacks of the different approaches are illustrated via Monte Carlo experiments and an empirical study based on stock returns. (A minimal sketch of Filtered Historical Simulation follows this entry.)
    Keywords: Confidence Intervals for VaR; DCC GARCH model; Estimation risk; Filtered Historical Simulation; Optimal Dynamic Portfolio
    JEL: C13 C22 C58
    Date: 2015–11
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:68100&r=ecm
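    For orientation, here is a minimal univariate sketch of Filtered Historical Simulation: filter returns through a volatility model, keep the standardized residuals, and scale an empirical quantile of those residuals by the volatility forecast. The GARCH(1,1) filter with fixed parameter values is an assumption (in practice the parameters are estimated, which is precisely the estimation risk the paper quantifies), and the paper's multivariate version is not reproduced.

```python
# Minimal sketch of one-day Filtered Historical Simulation (FHS) VaR.
import numpy as np

rng = np.random.default_rng(2)
r = 0.01 * rng.standard_normal(1500)                 # placeholder returns
omega, alpha, beta = 1e-6, 0.08, 0.90                # assumed GARCH(1,1) params

sigma2 = np.empty_like(r)                            # filter conditional
sigma2[0] = r.var()                                  # variances recursively
for t in range(1, len(r)):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
z = r / np.sqrt(sigma2)                              # standardized residuals

# FHS: volatility forecast times an empirical quantile of the residuals,
# with no distributional assumption on the innovations.
sigma2_next = omega + alpha * r[-1] ** 2 + beta * sigma2[-1]
var_99 = -np.sqrt(sigma2_next) * np.quantile(z, 0.01)
print(f"one-day 99% FHS VaR: {var_99:.4%}")
```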
  5. By: Christoph Breunig; Enno Mammen; Anna Simoni
    Abstract: This paper addresses the problem of estimating a nonparametric regression function from selectively observed data when selection is endogenous. Our approach relies on independence between covariates and selection conditionally on potential outcomes. Endogeneity of regressors is also allowed for. In both cases, consistent two-step estimation procedures are proposed and their rates of convergence are derived. The pointwise asymptotic distribution of the estimators is also established. In addition, we propose a nonparametric specification test to check the validity of our independence assumption. Finite-sample properties are illustrated in a Monte Carlo simulation study and an empirical application.
    Keywords: Endogenous selection, instrumental variable, sieve minimum distance, regression estimation, convergence rate, asymptotic normality, hypothesis testing, inverse problem
    JEL: C14 C26
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2015-050&r=ecm
  6. By: Naoto Kunitomo (Faculty of Economics, The University of Tokyo); Daisuke Kurisu (Graduate School of Economics, The University of Tokyo)
    Abstract: Several new statistical procedures for high-frequency financial data analysis have been developed for estimating risk quantities and testing for the presence of jumps in the underlying continuous-time financial processes. Although the role of micro-market noise is important in high-frequency financial data, there are some basic questions on the effects of the presence of noise and jumps in the underlying stochastic processes. When jumps and (micro-market) noise can be present at the same time, it is not obvious whether the existing statistical methods are reliable for applications in actual data analysis. As an illustration, we investigate the misspecification effects of jumps and noise on some basic statistics and on the testing procedures for jumps proposed by Ait-Sahalia and Jacod (2009, Annals of Statistics). We find that their first test is asymptotically robust, in the small-noise asymptotic sense, against possible misspecification, while their second test is quite sensitive to the presence of noise. (A minimal sketch of the test statistic follows this entry.)
    Date: 2015–11
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2015cf996&r=ecm
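    The first Ait-Sahalia and Jacod test is built on the ratio of realized power variations at two sampling frequencies: with power p = 4 and frequency ratio k = 2, the ratio converges to k^(p/2-1) = 2 along a continuous path and to 1 when jumps dominate. The sketch below computes only this raw ratio on simulated paths; the standardization into a formal test and the noise corrections studied in the paper are omitted.

```python
# Minimal sketch of the Ait-Sahalia--Jacod power-variation ratio.
import numpy as np

def power_variation(x, p):
    """Sum of p-th absolute powers of the increments of a path."""
    return np.sum(np.abs(np.diff(x)) ** p)

rng = np.random.default_rng(3)
n = 23400                                            # one day at 1-second ticks
continuous = 1e-4 * np.cumsum(rng.standard_normal(n))
with_jump = continuous.copy()
with_jump[n // 2:] += 0.01                           # add a single jump

for name, path in [("continuous", continuous), ("with jump", with_jump)]:
    b_fine = power_variation(path, p=4)              # sampled every delta
    b_coarse = power_variation(path[::2], p=4)       # sampled every 2*delta
    ratio = b_coarse / b_fine                        # ~2 no jumps, ~1 jumps
    print(f"{name}: ratio = {ratio:.2f}")
```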
  7. By: Paulo Rocha; Frank Raischel; João P. Boto; Pedro G. Lind
    Abstract: We present a framework for describing the evolution of stochastic observables having a non-stationary distribution of values. The framework is applied to empirical volume-prices of assets traded at the New York Stock Exchange. Using the Kullback-Leibler divergence, we select the best model among four two-parameter models commonly used in financial data analysis. For our data sets, we conclude that the inverse Gamma distribution is a good model, particularly for the distribution tail of the largest volume-price fluctuations. Extracting the time series of the corresponding parameter values, we show that they evolve in time as stochastic variables themselves. For the particular case of the parameter controlling the volume-price distribution tail, we are able to extract an Ornstein-Uhlenbeck equation which describes the fluctuations of the largest volume-prices observed in the data. Finally, we discuss how to bridge from the stochastic evolution of the distribution parameters to the stochastic evolution of the (non-stationary) observable, and put our conclusions into perspective for other applications in geophysics and biology. (A minimal sketch of the divergence-based model comparison follows this entry.)
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1510.07280&r=ecm
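    The model-comparison step can be illustrated by computing the Kullback-Leibler divergence between a histogram estimate of the data density and each fitted candidate family. The simulated data, the candidate families, and the binning below are assumptions for illustration; the paper applies the comparison to NYSE volume-price data.

```python
# Minimal sketch: rank candidate families by the KL divergence from a
# histogram density estimate of the data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = stats.invgamma.rvs(a=3.0, scale=2.0, size=5000, random_state=rng)

edges = np.linspace(data.min(), data.max(), 80)
hist, _ = np.histogram(data, bins=edges, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
dx = np.diff(edges)

candidates = {"inverse Gamma": stats.invgamma,
              "lognormal": stats.lognorm,
              "Gamma": stats.gamma}
for name, family in candidates.items():
    params = family.fit(data)                        # maximum-likelihood fit
    q = family.pdf(mids, *params)
    ok = (hist > 0) & (q > 0)                        # avoid log of zero
    kl = np.sum(hist[ok] * np.log(hist[ok] / q[ok]) * dx[ok])
    print(f"{name:14s} KL divergence = {kl:.4f}")
```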
  8. By: Mario Cerrato; John Crosby; Minjoo Kim; Yang Zhao
    Abstract: We document asymmetric and time-varying features of dependence between the credit risks of global systemically important banks (G-SIBs) in the UK banking industry using a CDS dataset. We model the dependence of CDS spreads using a dynamic asymmetric copula. Comparing our model with traditional copula models, we find that the latter usually underestimate the probability of joint (or conditional) default of the UK G-SIBs. Furthermore, we show through extensive regression analysis that dynamics and asymmetries between CDS spreads are closely associated with the probabilities of joint (or conditional) default. In particular, our regression analysis has a policy implication: copula correlations and tail-dependence coefficients can serve as leading indicators of systemic credit events. (A minimal copula sketch of joint default follows this entry.)
    Keywords: Calibrated marginal default probability, probability of joint default, probability of conditional default, GAS-based GHST copula
    JEL: C32 G32
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:gla:glaewp:2015_24&r=ecm
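    The link from marginal default probabilities to joint and conditional default probabilities runs through the copula. The sketch below uses a static Gaussian copula with assumed inputs purely as a stand-in; the paper's dynamic GAS-based GHST copula is what actually delivers the asymmetries and dynamics.

```python
# Minimal sketch: joint and conditional default probabilities under a
# Gaussian copula (a stand-in for the paper's GAS-based GHST copula).
import numpy as np
from scipy import stats

pd_a, pd_b = 0.05, 0.04                              # marginal default probs
rho = 0.6                                            # assumed copula correlation

u = stats.norm.ppf([pd_a, pd_b])                     # map PDs to normal scores
cov = np.array([[1.0, rho], [rho, 1.0]])
joint_pd = stats.multivariate_normal(mean=[0, 0], cov=cov).cdf(u)
cond_pd = joint_pd / pd_b                            # P(A defaults | B defaults)
print(f"joint PD = {joint_pd:.4f}, conditional PD = {cond_pd:.4f}")
```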
  9. By: Giuseppe Cavaliere (Università di Bologna); Iliyan Georgiev (Universidade Nova de Lisboa); A.M. Robert Taylor (University of Essex)
    Abstract: We extend the available asymptotic theory for autoregressive sieve estimators to cover the case of stationary and invertible linear processes driven by independent identically distributed (i.i.d.) infinite variance (IV) innovations. We show that the ordinary least squares sieve estimates, together with estimates of the impulse responses derived from them, obtained from an autoregression whose order is an increasing function of the sample size, are consistent and exhibit asymptotic properties analogous to those which obtain for a finite-order autoregressive process driven by i.i.d. IV errors. As these limit distributions cannot be directly employed for inference, because they either may not exist or, where they do, depend on unknown parameters, a second contribution of the paper is to investigate the usefulness of bootstrap methods in this setting. Focusing on three sieve bootstraps, the wild and permutation bootstraps and a hybrid of the two, we show that, in contrast to the case of finite-variance innovations, the wild bootstrap requires an infeasible correction to be consistent, whereas the other two schemes are consistent (the hybrid for symmetrically distributed innovations) under general conditions. (A minimal sketch of the permutation-bootstrap step follows this entry.)
    Keywords: Bootstrap, Sieve autoregression, Infinite variance, Time Series
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:bot:quadip:wpaper:129&r=ecm
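    The permutation-bootstrap step mentioned above is easy to sketch: fit a long autoregression by OLS, permute its residuals (resampling without replacement, so the bootstrap errors carry exactly the empirical residual distribution), and rebuild the sample recursively. The heavy-tailed toy process and the sieve-order rule below are assumptions; the paper's asymptotic analysis is its actual contribution.

```python
# Minimal sketch of a permutation sieve bootstrap for an autoregression.
import numpy as np

rng = np.random.default_rng(5)
n, phi = 500, 0.6
x = np.zeros(n)
for t in range(1, n):                                # toy AR(1) driven by
    x[t] = phi * x[t - 1] + rng.standard_t(df=1.5)   # infinite-variance errors

p = int(n ** (1 / 3))                                # assumed sieve order rule
Y = x[p:]
X = np.column_stack([x[p - j:n - j] for j in range(1, p + 1)])
beta = np.linalg.lstsq(X, Y, rcond=None)[0]          # OLS sieve estimates
resid = Y - X @ beta

resid_star = rng.permutation(resid)                  # permute, don't redraw
x_star = np.zeros(n)
for t in range(p, n):                                # rebuild bootstrap path
    x_star[t] = beta @ x_star[t - p:t][::-1] + resid_star[t - p]
print("lag-1 sieve coefficient:", round(float(beta[0]), 3))
```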
  10. By: Fedor Iskhakov (ARC Centre of Excellence in Population Ageing Research, University of New South Wales); Thomas Høgholm Jørgensen (Department of Economics, University of Copenhagen); John Rust (Department of Economics, Georgetown University); Bertel Schjerning (Department of Economics, University of Copenhagen)
    Abstract: We present a fast and accurate computational method for solving and estimating a class of dynamic programming models with discrete and continuous choice variables. The solution method we develop for structural estimation extends the endogenous gridpoint method (EGM) to discrete-continuous (DC) problems. Discrete choices can lead to kinks in the value functions and discontinuities in the optimal policy rules, greatly complicating the solution of the model. We show how these problems are ameliorated in the presence of additive choice-specific IID extreme value taste shocks. We present Monte Carlo experiments that demonstrate the reliability and efficiency of the DC-EGM and the associated Maximum Likelihood estimator for structural estimation of a life cycle model of consumption with discrete retirement decisions. (A minimal sketch of the taste-shock smoothing follows this entry.)
    Keywords: Structural estimation, lifecycle model, discrete and continuous choice, retirement choice, endogenous gridpoint method, nested fixed point algorithm, extreme value taste shocks, smoothed max function
    JEL: C13 C63 D91
    Date: 2015–11–27
    URL: http://d.repec.org/n?u=RePEc:kud:kuiedp:1519&r=ecm
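    The role of the additive extreme value taste shocks is that they turn the kinked max operator over discrete choices into a smooth closed-form "logsum" with logit choice probabilities, which is what makes the value functions tractable for DC-EGM. A minimal sketch with placeholder choice-specific values follows; as the shock scale shrinks, the logsum approaches the plain maximum.

```python
# Minimal sketch of the "smoothed max" induced by extreme value shocks.
import numpy as np

def logsum(v, sigma):
    """E[max_d (v_d + sigma * eps_d)] for IID type-I extreme value eps."""
    m = v.max()                                      # subtract max for stability
    return m + sigma * np.log(np.sum(np.exp((v - m) / sigma)))

def choice_probs(v, sigma):
    """Logit probabilities implied by the same taste shocks."""
    e = np.exp((v - v.max()) / sigma)
    return e / e.sum()

v = np.array([1.00, 0.95])                           # e.g. work vs. retire value
for sigma in [0.5, 0.1, 0.01]:
    print(f"sigma={sigma}: logsum={logsum(v, sigma):.4f}, "
          f"probs={np.round(choice_probs(v, sigma), 3)}")
```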
  11. By: Gary Koop; Dimitris Korobilis
    Abstract: In this paper, we develop econometric methods for estimating large Bayesian time-varying parameter panel vector autoregressions (TVP-PVARs) and use these methods to forecast inflation for euro area countries. Large TVP-PVARs contain huge numbers of parameters, which can lead to over-parameterization and computational concerns. To overcome these concerns, we use hierarchical priors which reduce the dimension of the parameter vector and allow for dynamic model averaging or selection over TVP-PVARs of different dimension and different priors. We use forgetting factor methods which greatly reduce the computational burden. Our empirical application shows substantial forecast improvements over plausible alternatives. (A minimal sketch of the forgetting-factor recursion follows this entry.)
    Keywords: Panel VAR, inflation forecasting, Bayesian, time-varying parameter model
    Date: 2015–11
    URL: http://d.repec.org/n?u=RePEc:gla:glaewp:2015_25&r=ecm
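    The forgetting-factor device can be shown in a single time-varying-parameter regression: in the Kalman prediction step the state covariance is simply inflated by 1/lambda, so no state-error covariance needs to be estimated or simulated. The dimensions, forgetting factor, and observation variance below are placeholders; the hierarchical priors and model averaging of the paper are not reproduced.

```python
# Minimal sketch of forgetting-factor Kalman updating for a TVP regression.
import numpy as np

rng = np.random.default_rng(6)
T, k = 200, 3
X = rng.standard_normal((T, k))                      # regressors
true_beta = np.cumsum(0.05 * rng.standard_normal((T, k)), axis=0)
y = np.sum(X * true_beta, axis=1) + 0.5 * rng.standard_normal(T)

lam, h = 0.99, 0.25                                  # forgetting factor, obs var
beta = np.zeros(k)
P = np.eye(k)
for t in range(T):
    P = P / lam                                      # predict: inflate, don't
    x = X[t]                                         # add a state covariance
    f = x @ P @ x + h                                # forecast-error variance
    K = P @ x / f                                    # Kalman gain
    beta = beta + K * (y[t] - x @ beta)              # update coefficients
    P = P - np.outer(K, x @ P)                       # update covariance
print("final filtered coefficients:", beta.round(2))
print("true final coefficients:   ", true_beta[-1].round(2))
```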
  12. By: Jungwoo Kim (Yonsei University); Joocheol Kim (Yonsei University)
    Abstract: I adopt a regime-shift model to investigate shifts in the distribution of each regime in time-series data. Unlike previous studies, I apply three types of distribution in the regime-shift model, namely the normal, GEV and stable distributions, which allows me to consider a heavy-tailed regime in the model. On theoretical grounds and from empirical results, I find that the regime-shift model with the stable distribution is the most appropriate. I also find that the tail index of the innovations and the dependence measure move together, implying that dependence among consecutive observations may lead to extreme events, and vice versa.
    Keywords: regime shift model, tail index, dependence measure, extreme event
    Date: 2015–11
    URL: http://d.repec.org/n?u=RePEc:yon:wpaper:2015rwp-86&r=ecm
  13. By: Kapetanios, George (Bank of England); Price, Simon (Bank of England); Theodoridis, Konstantinos (Bank of England)
    Abstract: DSGE models are of interest because they offer structural interpretations, but they are also increasingly used for forecasting. Estimation often proceeds by methods which involve building the likelihood from one-step-ahead (h=1) prediction errors. In principle, however, this can be done using longer horizons, h>1. Using the well-known model of Smets and Wouters (2007), for h=1 classical ML parameter estimates are similar to those originally reported. As h increases, some estimated parameters change, but not to an economically significant degree. Forecast performance is often improved, in several cases significantly. (A minimal sketch of multi-step prediction-error estimation follows this entry.)
    Keywords: DSGE models; multi-step prediction errors; forecasting
    Date: 2015–11–20
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0567&r=ecm
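    The mechanics of building an estimation criterion from h-step prediction errors can be shown on a toy AR(1), whose h-step forecast of y(t+h) is phi^h y(t); minimizing the sum of squared h-step errors then gives horizon-specific parameter estimates. This is only an illustration of the principle, not the paper's DSGE application.

```python
# Minimal sketch: estimate an AR(1) from h-step-ahead prediction errors.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
n, phi_true = 400, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

def h_step_sse(phi, h):
    """Sum of squared h-step prediction errors under parameter phi."""
    e = y[h:] - phi ** h * y[:-h]                    # h-step forecast: phi^h * y
    return np.sum(e ** 2)

for h in [1, 2, 4, 8]:
    res = minimize_scalar(lambda p: h_step_sse(p, h),
                          bounds=(-0.99, 0.99), method="bounded")
    print(f"h = {h}: estimated phi = {res.x:.3f}")
```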
  14. By: Quang Vuong (New York University); Ayse Pehlivan (Bilkent University)
    Abstract: In this paper we develop a structural model in which exporters compete in supply functions, and we study the nonparametric identification and estimation of productivity distributions and marginal costs in this framework using disaggregated bilateral trade data. Our model reconciles the existence of multiple sellers, multiple prices, and variable markups observed in the data, and also incorporates features such as strategic pricing and incomplete information. Our identification and estimation methodology draws on techniques from empirical auctions, and it contributes to that literature by showing that the underlying structure is identified nonparametrically even when only transaction points, rather than entire bid/supply schedules, are observed; the existing methodology in the empirical auction literature depends heavily on observing the entire schedule. Moreover, in view of recent studies in international trade showing the sensitivity of gains-from-trade estimates to the parametrization of productivity distributions, maintaining a flexible structure for productivity distributions is very important. We apply our model to the German market for manufacturing imports in 1990 using disaggregated bilateral trade data consisting only of trade values and traded quantities. We recover the destination-source specific productivity distributions and destination-source specific marginal cost functions nonparametrically. Our empirical results do not support the distributional assumptions commonly made in the international trade literature, such as Fréchet and Pareto. In particular, we find that the productivity distributions are not unimodal; low productivities are more likely to occur, as expected, but there is not a single mode. Our results provide important insights about cross-country and cross-destination differences in productivity distributions, trade costs and markups.
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:red:sed015:1414&r=ecm
  15. By: Marcin Fałdziński (Nicolaus Copernicus University, Poland); Michał Bernard Pietrzak (Nicolaus Copernicus University, Poland)
    Abstract: The article investigates interdependence across capital markets, which is of particular importance during crisis periods because of the likelihood of a crisis spilling over into the real economy. The research objective of the article is to identify this interdependence in volatility. We therefore first propose a modification of the DCC-GARCH model designed to test for interdependence in conditional variances. The resulting DCC-GARCH-In model is then used to study interdependence in the volatility of selected stock market indices. The results confirm the presence of interdependence among the selected markets. (A minimal sketch of the underlying DCC recursion follows this entry.)
    Keywords: DCC-GARCH model, interdependence, conditional variance
    JEL: C32
    Date: 2015–11
    URL: http://d.repec.org/n?u=RePEc:pes:wpaper:2015:no164&r=ecm
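    For reference, the standard DCC correlation recursion that the proposed model modifies is sketched below on placeholder standardized residuals with assumed parameter values; the paper's added interdependence terms in the conditional-variance equations are not reproduced.

```python
# Minimal sketch of the standard DCC quasi-correlation recursion.
import numpy as np

rng = np.random.default_rng(8)
T, k = 500, 2
z = rng.standard_normal((T, k))                      # standardized residuals
a, b = 0.05, 0.90                                    # assumed DCC parameters
Q_bar = np.corrcoef(z, rowvar=False)                 # unconditional correlation

Q = Q_bar.copy()
for t in range(1, T):
    # Lagged standardized shocks drive the quasi-correlation dynamics.
    Q = (1 - a - b) * Q_bar + a * np.outer(z[t - 1], z[t - 1]) + b * Q
d = 1.0 / np.sqrt(np.diag(Q))
R = Q * np.outer(d, d)                               # rescale to a correlation
print("final conditional correlation matrix:\n", R.round(3))
```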
  16. By: Chen, Xuan (Renmin University of China); Flores, Carlos A. (California Polytechnic State University); Flores-Lagunes, Alfonso (Syracuse University)
    Abstract: We derive nonparametric sharp bounds on average treatment effects with an instrumental variable (IV) and use them to evaluate the effectiveness of the Job Corps (JC) training program for disadvantaged youth. We concentrate on the population average treatment effect (ATE) and the average treatment effect on the treated (ATT), which are parameters not point identified with an IV under heterogeneous treatment effects. The main assumptions employed to bound the ATE and ATT are monotonicity in the treatment of the average outcomes of specified subpopulations, and mean dominance assumptions across the potential outcomes of these subpopulations. Importantly, the direction of the mean dominance assumptions can be informed from data, and some of our bounds do not require an outcome with bounded support. We employ these bounds to assess the effectiveness of the JC program using data from a randomized social experiment with non-compliance (a common feature of social experiments). Our empirical results indicate that the effect of JC on eligible applicants (the target population) four years after randomization is to increase weekly earnings and employment by at least $24.61 and 4.3 percentage points, respectively, and to decrease yearly dependence on public welfare benefits by at least $84.29. Furthermore, the effect of JC on participants (the treated population) is to increase weekly earnings by between $28.67 and $43.47, increase employment by between 4.9 and 9.3 percentage points, and decrease public benefits received by between $108.72 and $140.29. Our results also point to positive average effects of JC on the labor market outcomes of those individuals who decide not to enroll in JC regardless of their treatment assignment (the so-called never takers), suggesting that these individuals would indeed benefit from participating in JC. (A minimal sketch of the first identification step behind such bounds follows this entry.)
    Keywords: training programs, program evaluation, average treatment effects, bounds, instrumental variables
    JEL: J30 C13 C21
    Date: 2015–11
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp9511&r=ecm
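    A first step behind such IV bounds is easy to illustrate: with a binary randomized instrument Z and treatment D, instrument monotonicity identifies the population shares of always-takers, never-takers and compliers from the two first-stage probabilities. The simulated data below are placeholders; the paper's mean-dominance assumptions, which combine these shares with stratum outcome means to bound the ATE and ATT, are not reproduced.

```python
# Minimal sketch: principal-strata shares under instrument monotonicity.
import numpy as np

rng = np.random.default_rng(9)
n = 10_000
z = rng.integers(0, 2, size=n)                       # randomized instrument
stratum = rng.choice(["at", "nt", "co"], size=n, p=[0.2, 0.3, 0.5])
d = np.where(stratum == "at", 1,                     # always-takers take D=1,
             np.where(stratum == "nt", 0, z))        # never-takers D=0,
                                                     # compliers follow Z
p1 = d[z == 1].mean()                                # P(D=1 | Z=1)
p0 = d[z == 0].mean()                                # P(D=1 | Z=0)
print(f"always-takers: {p0:.3f}")
print(f"never-takers:  {1 - p1:.3f}")
print(f"compliers:     {p1 - p0:.3f}")
```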

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.