nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒01‒26
twelve papers chosen by
Sune Karlsson
Örebro universitet

  1. "Bayesian Modeling of Dynamic Extreme Values: Extension of Generalized Extreme Value Distributions with Latent Stochastic Processes " By Jouchi Nakajima; Tsuyoshi Kunihama; Yasuhiro Omori
  2. Instrument-free Identification and Estimation of Differentiated Products Models By David Byrne; Susumu Imai; Vasilis Sarafidis; Masayuki Hirukawa
  3. Pricing Kernel Modeling By Denis Belomestny; Shujie Ma; Wolfgang Karl Härdle
  4. Multiperiod optimal hedging ratios: Methodological aspects and application to wheat markets By Gianluca, Stefani; Tiberti, Marco
  5. Nonparametric Estimates for Conditional Quantiles of Time Series By Jürgen Franke; Peter Mwita; Weining Wang
  6. Robust Inference of Risks of Large Portfolios By Jianqing Fan; Fang Han; Han Liu; Byron Vickers
  7. Efficient propensity score regression estimators of multi-valued treatment effects for the treated By Ying-Ying Lee
  8. Weak diffusion limits of dynamic conditional correlation models By Christian M. Hafner; Sebastien Laurent; Francesco Violante
  9. Indirect Inference Estimation of Nonlinear Dynamic General Equilibrium Models: With an Application to Asset Pricing under Skewness Risk By Francisco RUGE-MURCIA
  10. Combined Density Nowcasting in an uncertain economic environment By Knut Are Aastveit; Francesco Ravazzolo; Herman K. van Dijk
  11. On Estimation of Gravity Equation: A Cluster Analysis By Bozena Bobkova
  12. Classification over bipartite graphs through projection By STANKOVA, Marija; MARTENS, David; PROVOST, Foster

  1. By: Jouchi Nakajima (Bank of Japan); Tsuyoshi Kunihama (Department of Statistical Science, Duke University); Yasuhiro Omori (Faculty of Economics, The University of Tokyo)
    Abstract: This paper develops Bayesian inference for extreme value models with a flexible time-dependent latent structure. The generalized extreme value distribution is utilized to incorporate state variables that follow an autoregressive moving average (ARMA) process with Gumbel-distributed innovations. The time-dependent extreme value distribution is combined with heavy-tailed error terms. An efficient Markov chain Monte Carlo algorithm is proposed, using a state space representation with a mixture of normal distributions approximating the Gumbel distribution. The methodology is illustrated using extreme-value data on stock returns and electricity demand. Estimation results show the usefulness of the proposed model and provide evidence that the latent autoregressive process and heavy-tailed errors play an important role in describing the monthly series of minimum stock returns and maximum electricity demand.
    Date: 2015–01
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2015cf952&r=ecm
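    A minimal Python sketch of the modelling idea, not taken from the paper: the GEV location parameter follows a latent AR(1) state, a simplification of the paper's ARMA state with Gumbel innovations, and no MCMC is run; all parameter values are illustrative.

      import numpy as np
      from scipy.stats import genextreme

      rng = np.random.default_rng(0)
      T, phi, sigma_eta = 200, 0.9, 0.1
      mu = np.zeros(T)                  # latent AR(1) location state
      for t in range(1, T):
          mu[t] = phi * mu[t - 1] + sigma_eta * rng.normal()

      # scipy's shape c corresponds to -xi in the usual GEV notation
      xi, scale = 0.1, 0.5
      y = genextreme.rvs(c=-xi, loc=mu, scale=scale, random_state=rng)

      # A full analysis would sample (phi, sigma_eta, xi, scale, mu) by MCMC;
      # here we only check that the simulated extremes track the latent state.
      print(np.corrcoef(mu, y)[0, 1])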
  2. By: David Byrne (University of Melbourne); Susumu Imai (UTS and Queen's University); Vasilis Sarafidis (Monash University); Masayuki Hirukawa (Setsunan University)
    Abstract: We propose a new methodology for estimating the demand and cost functions of differentiated products models when demand and cost data are available. The method deals with the endogeneity of prices to demand shocks and the endogeneity of outputs to cost shocks, but does not require instruments for identification. We establish nonparametric identification, consistency and asymptotic normality of our estimator. Using Monte Carlo experiments, we show that our method works well in contexts where instruments are correlated with demand and cost shocks, and where commonly used instrumental variables estimators are biased and numerically unstable.
    Keywords: Instrument-free, Differentiated goods oligopoly, BLP, parametric identification, nonparametric identification, sieve
    JEL: C13 C14 L13 L41
    Date: 2015–01
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1336&r=ecm
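    A minimal Python sketch of the problem the paper addresses, not of its estimator: in a simultaneous system, price is correlated with the demand shock, so OLS on the demand equation is biased; the instrument-free construction itself is not reproduced here, and all numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000
      xi = rng.normal(0, 1, n)               # demand shock
      cost = rng.normal(1, 0.2, n)           # cost shifter
      price = 2.0 + 0.5 * xi + cost          # price responds to the demand shock
      q = 10.0 - 1.0 * price + xi            # true demand slope is -1

      X = np.column_stack([np.ones(n), price])
      beta, *_ = np.linalg.lstsq(X, q, rcond=None)
      print(beta[1])                         # OLS slope is biased away from -1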
  3. By: Denis Belomestny; Shujie Ma; Wolfgang Karl Härdle
    Abstract: We propose a new method to estimate the empirical pricing kernel from option data. The pricing kernel is estimated nonparametrically as the ratio of a risk-neutral density estimator to a subjective density estimator. The risk-neutral density is approximated by a weighted kernel density estimator with varying unknown weights for different observations, while the subjective density is approximated by a kernel density estimator with equal weights. We represent the European call option price function as the second-order integral of the risk-neutral density, so that the unknown weights are obtained through one-step penalized least squares estimation with the Kullback-Leibler divergence as the penalty function. Asymptotic results for the resulting estimators are established. The performance of the proposed method is illustrated through simulation and real-data application studies.
    Keywords: Empirical Pricing Kernel; Kernel; Kernel Density Estimation; Nonparametric Fitting; Kullback-Leibler Divergence
    JEL: C00 C14 G12
    Date: 2015–01
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2015-001&r=ecm
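    A minimal Python sketch of the density-ratio idea, assuming illustrative exponential-tilting weights: the paper instead obtains the risk-neutral weights from penalized least squares on observed option prices.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(2)
      returns = rng.normal(0.05, 0.2, size=500)   # "subjective" historical sample

      p_hat = gaussian_kde(returns)               # equal-weight KDE
      w = np.exp(-2.0 * returns)                  # illustrative risk-neutral tilt
      q_hat = gaussian_kde(returns, weights=w / w.sum())

      grid = np.linspace(-0.5, 0.6, 200)
      kernel = q_hat(grid) / p_hat(grid)          # empirical pricing kernel q/p
      print(kernel[:5])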
  4. By: Gianluca, Stefani; Tiberti, Marco
    Abstract: This work deals with methodological and empirical issues related to OLS estimators of multiperiod optimal hedging ratios. We propose an analytical formula for the multiperiod minimum variance hedging ratio, starting from the triangular representation of a cointegrated system DGP. Since estimating the hedge ratio by matching the frequency of the data with the hedging horizon leads to a sample size reduction problem, we carry out a Monte Carlo study to investigate the pattern and hedging efficiency of OLS hedging ratios based on overlapping vs. non-overlapping observations, exploring a range of hedging horizons and sample sizes. Finally, we apply our approach to real data for a cross hedge related to soft wheat.
    Keywords: Risk and Uncertainty
    Date: 2014–08
    URL: http://d.repec.org/n?u=RePEc:ags:eaae14:182787&r=ecm
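    A minimal Python sketch of the overlapping vs. non-overlapping comparison, with a simulated spot-futures pair standing in for the paper's cointegrated DGP; parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)
      T, h = 600, 10                               # sample size, hedging horizon
      futures = np.cumsum(rng.normal(0, 1, T))
      spot = 0.8 * futures + rng.normal(0, 1, T)   # illustrative spot-futures link

      ds = spot[h:] - spot[:-h]                    # overlapping h-period changes
      df = futures[h:] - futures[:-h]
      C = np.cov(ds, df)
      beta_overlap = C[0, 1] / C[1, 1]             # OLS hedge ratio

      ds_nov = np.diff(spot[::h])                  # non-overlapping changes
      df_nov = np.diff(futures[::h])
      C = np.cov(ds_nov, df_nov)
      beta_nonoverlap = C[0, 1] / C[1, 1]          # far fewer observations here

      print(beta_overlap, beta_nonoverlap)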
  5. By: Jürgen Franke; Peter Mwita; Weining Wang
    Abstract: We consider the problem of estimating the conditional quantile of a time series {Y_t} at time t given covariates X_t, where X_t can be either exogenous variables or lagged values of Y_t. The conditional quantile is estimated by inverting a kernel estimate of the conditional distribution function, and we prove its asymptotic normality and uniform strong consistency. The performance of the estimate for light- and heavy-tailed distributions of the innovations is evaluated by a simulation study. Finally, the technique is applied to estimate the Value-at-Risk (VaR) of stocks in the DAX, and its performance is compared with existing standard methods using backtesting.
    Keywords: conditional quantile, kernel estimate, quantile autoregression, time series, uniform consistency, value-at-risk
    JEL: C00 C14 C50 C58
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2014-012&r=ecm
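    A minimal Python sketch of the estimator's core step, inverting a kernel estimate of the conditional distribution function, on simulated AR(1) data with t-distributed innovations; the bandwidth is illustrative and the paper's asymptotic analysis is of course not reproduced.

      import numpy as np

      def cond_quantile(x0, X, Y, alpha=0.95, hx=0.5):
          w = np.exp(-0.5 * ((X - x0) / hx) ** 2)   # Gaussian kernel weights
          w = w / w.sum()
          order = np.argsort(Y)
          F = np.cumsum(w[order])                   # kernel conditional CDF at x0
          idx = min(np.searchsorted(F, alpha), len(Y) - 1)
          return Y[order][idx]                      # invert the CDF at level alpha

      rng = np.random.default_rng(4)
      y = np.zeros(500)
      for t in range(1, 500):
          y[t] = 0.5 * y[t - 1] + rng.standard_t(df=4)   # heavy-tailed innovations
      X, Y = y[:-1], y[1:]                               # lagged covariate
      print(cond_quantile(0.0, X, Y, alpha=0.95))        # one-step 95% quantile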
  6. By: Jianqing Fan; Fang Han; Han Liu; Byron Vickers
    Abstract: We propose a bootstrap-based robust high-confidence level upper bound (Robust H-CLUB) for assessing the risks of large portfolios. The proposed approach exploits rank-based and quantile-based estimators, and can be viewed as a robust extension of the H-CLUB method (Fan et al., 2015). Such an extension allows us to handle possibly misspecified models and heavy-tailed data. Under mixing conditions, we analyze the proposed approach and demonstrate its advantage over the H-CLUB. We further provide thorough numerical results to back up the developed theory. We also apply the proposed method to analyze a stock market dataset.
    Date: 2015–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1501.02382&r=ecm
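    A minimal Python sketch of the bootstrap upper-bound idea with a quantile-based (IQR) scale estimator; this is only a stylized stand-in for the Robust H-CLUB construction, and the i.i.d. resampling below ignores the serial dependence the paper handles under mixing conditions.

      import numpy as np

      rng = np.random.default_rng(5)
      R = rng.standard_t(df=3, size=(500, 20))   # heavy-tailed returns, 20 assets
      w = np.full(20, 1 / 20)                    # equal-weight portfolio
      port = R @ w

      def robust_scale(x):
          q75, q25 = np.percentile(x, [75, 25])
          return (q75 - q25) / 1.349             # IQR-based estimate of sigma

      boot = [robust_scale(rng.choice(port, size=port.size, replace=True))
              for _ in range(2000)]
      upper = np.percentile(boot, 95)            # one-sided 95% upper bound
      print(robust_scale(port), upper)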
  7. By: Ying-Ying Lee
    Abstract: We study the role of propensity scores in estimating treatment effects for the treated with a multi-valued treatment. We assume that assignment to one of the multiple treatments is random given observed characteristics. Valid causal comparisons for the subpopulation treated at a particular treatment level are based on two propensity scores, one for the treated level and one for the counterfactual level. In contrast to the binary treatment case, these two propensity scores do not add up to one. This is the key feature that allows us to distinguish the different roles of the propensity scores and to provide new insight into well-known paradoxes in the binary treatment effect and missing data literature. We formally show that knowledge of the propensity score for the treated level decreases the semiparametric efficiency bound, regardless of knowledge of the propensity score for the counterfactual level. We propose efficient kernel regression estimators that project on a nonparametrically estimated propensity score for the counterfactual level and the true propensity score for the treated level. A surprising result is implied for the binary treatment effect for the treated: when the propensity scores are known, using one estimated propensity score is not efficient. Our efficient estimator regresses on a normalized propensity score that utilizes the information contained in both the nonparametrically estimated and the true propensity scores.
    Keywords: propensity score, multi-valued treatment, semiparametric efficiency bound, unconfoundedness, generated regressor
    JEL: C01 C14 C21
    Date: 2015–01–09
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:738&r=ecm
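    A minimal Python sketch of the two-propensity-score logic for a three-valued treatment, using the simple weighting estimator rather than the paper's efficient projection estimator; the data-generating process and the logistic propensity model are illustrative.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(6)
      n = 2000
      X = rng.normal(size=(n, 2))
      logits = np.column_stack([np.zeros(n), X @ [1.0, -0.5], X @ [-0.5, 1.0]])
      probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
      D = np.array([rng.choice(3, p=p) for p in probs])
      Y = X.sum(1) + 1.0 * (D == 1) + rng.normal(size=n)   # level-1 effect is 1

      m = LogisticRegression(max_iter=1000).fit(X, D)
      P = m.predict_proba(X)                     # columns: p_0(X), p_1(X), p_2(X)
      j, k = 1, 0                                # treated level vs. counterfactual
      w = P[D == k, j] / P[D == k, k]            # note: p_j + p_k need not be 1
      att = Y[D == j].mean() - np.average(Y[D == k], weights=w)
      print(att)                                 # effect on the treated, near 1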
  8. By: Christian M. Hafner (Université catholique de Louvain, ISBA & CORE); Sebastien Laurent (Aix-Marseille University (Aix-Marseille School of Economics)); Francesco Violante (Aarhus University and CREATES)
    Abstract: The properties of dynamic conditional correlation (DCC) models are still not entirely understood. This paper fills one of the gaps by deriving weak diffusion limits of a modified version of the classical DCC model. The limiting system of stochastic differential equations is characterized by a diffusion matrix of reduced rank. The degeneracy is due to perfect collinearity between the innovations of the volatility and correlation dynamics. For the special case of constant conditional correlations, a non-degenerate diffusion limit can be obtained. Alternative sets of conditions are considered for the rate of convergence of the parameters, obtaining time-varying but deterministic variances and/or correlations. A Monte Carlo experiment confirms that the quasi approximate maximum likelihood (QAML) method to estimate the diffusion parameters is inconsistent for any fixed frequency, but that it may provide reasonable approximations for sufficiently large frequencies and sample sizes.
    Keywords: cDCC, Weak diffusion limits, QAML, CCC, GARCH diffusion
    JEL: C13 C22 C51
    Date: 2015–01–14
    URL: http://d.repec.org/n?u=RePEc:aah:create:2015-03&r=ecm
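    A minimal Python sketch of what a weak diffusion limit of a volatility model looks like: an Euler simulation of the classical (Nelson-style) GARCH(1,1) diffusion limit. The paper's object is the DCC system, whose reduced-rank diffusion matrix is not reproduced here; parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(7)
      T, dt = 5.0, 1 / 2520                      # five years at ~10 steps a day
      n = int(T / dt)
      theta, omega, alpha = 2.0, 0.04, 0.3
      sig2 = np.empty(n); sig2[0] = omega
      x = np.zeros(n)
      for t in range(1, n):
          dW1, dW2 = rng.normal(0, np.sqrt(dt), 2)   # independent Brownian shocks
          sig2[t] = max(sig2[t-1] + theta * (omega - sig2[t-1]) * dt
                        + alpha * np.sqrt(2) * sig2[t-1] * dW1, 1e-10)
          x[t] = x[t-1] + np.sqrt(sig2[t-1]) * dW2   # log-price process
      print(sig2.mean())                             # close to omega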
  9. By: Francisco RUGE-MURCIA
    Abstract: This paper proposes a nonlinear impulse-response matching procedure explicitly designed to estimate nonlinear dynamic models, and illustrates its applicability by estimating a macro-finance model of asset pricing under skewness risk. As the auxiliary model, a new class of nonlinear vector autoregressions (NVAR) based on Mittnik (1990) is proposed.
    Keywords: nonlinear vector autoregression, nonlinear impulse responses, skewness risk
    JEL: C51 C58
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:mtl:montec:15-2014&r=ecm
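    A minimal Python sketch of impulse-response matching: choose the structural parameter whose simulated data reproduce the impulse responses of an auxiliary model fitted to the observed data. Here both models are a simple AR(1) rather than the paper's NVAR, and common random numbers keep the objective smooth.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def simulate(rho, T=2000, seed=0):
          r = np.random.default_rng(seed)
          y = np.zeros(T)
          for t in range(1, T):
              y[t] = rho * y[t - 1] + r.normal()
          return y

      def ar1_irf(y, H=10):
          rho_hat = np.polyfit(y[:-1], y[1:], 1)[0]   # auxiliary OLS fit
          return rho_hat ** np.arange(H)              # response to a unit shock

      irf_data = ar1_irf(simulate(0.7, seed=42))      # "observed" data, rho = 0.7
      loss = lambda rho: np.sum((ar1_irf(simulate(rho, seed=1)) - irf_data) ** 2)
      res = minimize_scalar(loss, bounds=(0.0, 0.99), method="bounded")
      print(res.x)                                    # close to 0.7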
  10. By: Knut Are Aastveit (Norges Bank (Central Bank of Norway)); Francesco Ravazzolo (Norges Bank (Central Bank of Norway) and BI Norwegian Business School); Herman K. van Dijk (Erasmus University Rotterdam, VU University Amsterdam and Tinbergen Institute)
    Abstract: We introduce a Combined Density Nowcasting (CDN) approach to Dynamic Factor Models (DFM) that coherently accounts for time-varying uncertainty about several model and data features in order to provide more accurate and complete density nowcasts. The combination weights are latent random variables that depend on past nowcasting performance and other learning mechanisms. The combined density scheme is incorporated in a Bayesian Sequential Monte Carlo method which re-balances the set of nowcasted densities in each period using updated information on the time-varying weights. Experiments with simulated data show that CDN works particularly well in a situation of early data releases with relatively large data uncertainty and model incompleteness. Empirical results, based on US real-time data on 120 leading indicators, indicate that CDN gives more accurate density nowcasts of US GDP growth than a model selection strategy and other combination strategies throughout the quarter, with relatively large gains in the first two months of the quarter. CDN also provides informative signals on model incompleteness during recent recessions. Focusing on the tails, CDN delivers probabilities of negative growth that provide good signals for calling recessions and the end of economic slumps in real time.
    Keywords: Density forecast combination, Survey forecast, Bayesian filtering, Sequential Monte Carlo, Nowcasting, Real-time data
    JEL: C11 C13 C32 C53 E37
    Date: 2014–12–04
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2014_17&r=ecm
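    A minimal Python sketch of density combination with performance-based weights: two Gaussian nowcast densities are pooled each period with weights driven by exponentially discounted past log scores. The paper instead treats the weights as latent random variables updated by Sequential Monte Carlo; everything below is illustrative.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(8)
      T, lam = 100, 0.9                          # sample size, score discount
      y = rng.normal(0.5, 1.0, T)                # realized growth (simulated)
      preds = [(0.5, 1.0), (0.0, 2.0)]           # (mean, sd) of two models
      score = np.zeros(2)
      for t in range(T):
          w = np.exp(score - score.max())
          w = w / w.sum()                        # combination weights at time t
          logs = [norm.logpdf(y[t], m, s) for m, s in preds]
          score = lam * score + np.array(logs)   # discounted log-score recursion
      print(w)                                   # weight tilts to the better model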
  11. By: Bozena Bobkova (Institute of Economic Studies, Faculty of Social Sciences, Charles University in Prague, Smetanovo nábreží 6, 111 01 Prague 1, Czech Republic)
    Abstract: This article questions slope homogeneity in the gravity equation and proposes a partially heterogeneous framework for its estimation using panel data. We suggest employing K-means clustering to group countries according to the gravity equation variables; the gravity model is then estimated on each of these homogeneous groups. We apply this procedure to German trade data and confirm slope heterogeneity in the model. When we estimate the model on each cluster separately, the estimated coefficients and their standard errors vary substantially. Moreover, we show that the pooled estimation technique severely under- or overestimates the effects of given variables.
    Keywords: gravity model, K-means clustering, Germany, slope heterogeneity
    JEL: C23 C45 F10 F14
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:fau:wpaper:wp2014_37&r=ecm
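    A minimal Python sketch of the two-step procedure, K-means on the gravity covariates followed by a log-linear OLS within each cluster, on simulated data whose distance elasticity differs across covariate-defined groups; variable names and values are illustrative.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(9)
      n = 300
      log_gdp = rng.normal(10, 1, n)
      log_dist = rng.normal(7, 0.5, n)
      group = (log_gdp > 10).astype(int)         # heterogeneity tied to covariates
      beta_dist = np.where(group == 0, -0.5, -1.5)
      log_trade = log_gdp + beta_dist * log_dist + rng.normal(0, 0.3, n)

      X = np.column_stack([log_gdp, log_dist])
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      for c in range(2):
          idx = labels == c
          Z = np.column_stack([np.ones(idx.sum()), log_gdp[idx], log_dist[idx]])
          coef, *_ = np.linalg.lstsq(Z, log_trade[idx], rcond=None)
          print(c, coef)                         # distance slopes differ by cluster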
  12. By: STANKOVA, Marija; MARTENS, David; PROVOST, Foster
    Abstract: Many real-world large datasets correspond to bipartite graph data settings; think, for example, of users rating movies or people visiting locations. Although some work exists on such bigraphs, no general network-oriented methodology has been proposed yet to perform node classification. In this paper we propose a three-stage classification framework that effectively deals with the typically very large size of such datasets. First, a weighting of the top nodes is defined. Second, the bigraph is projected into a unipartite (homogeneous) graph among the bottom nodes, where the weights of the edges are a function of the weights of the top nodes in the bigraph. Finally, relational learners/classifiers are applied to the resulting weighted unigraph. This general framework allows us to explore the design space by applying different choices at the three stages, introducing new alternatives and mixing-and-matching to create new techniques. We present an empirical study of the predictive and run-time performance of different combinations of functions in the three stages over a large collection of bipartite datasets. There are clear differences in predictive performance between different design choices. Based on these results, we propose several specific combinations that show good accuracy and also allow for easy and fast scaling to big datasets. A comparison with a linear SVM on the adjacency matrix of the bigraph shows the superiority of the network-oriented approach.
    Keywords: Bipartite graphs, Two-mode networks, Affiliation networks, Node classification, Big data
    Date: 2015–01
    URL: http://d.repec.org/n?u=RePEc:ant:wpaper:2015001&r=ecm
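    A minimal Python sketch of the three stages on a toy bigraph: (1) weight the top nodes (inverse degree here, one of many possible choices), (2) project onto the bottom nodes with edge weights summing the shared top-node weights, (3) classify unlabeled bottom nodes by a weighted vote; the adjacency matrix and labels are illustrative.

      import numpy as np

      rng = np.random.default_rng(10)
      n_bottom, n_top = 8, 20
      A = (rng.random((n_bottom, n_top)) < 0.3).astype(float)   # bottom x top

      top_w = 1.0 / np.maximum(A.sum(axis=0), 1.0)   # stage 1: top-node weights
      W = (A * top_w) @ A.T                          # stage 2: weighted unigraph
      np.fill_diagonal(W, 0.0)

      labels = np.array([1, 0, 1, -1, -1, 0, 1, -1])   # -1 marks unlabeled nodes
      known = labels >= 0
      for i in np.where(labels == -1)[0]:            # stage 3: weighted vote
          score = W[i, known] @ labels[known] / max(W[i, known].sum(), 1e-12)
          print(i, int(score > 0.5))                 # predicted class for node i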

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.