nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒06‒18
fourteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Validity of Wild Bootstrap Inference with Clustered Errors By Antoine Djogbenou; James G. MacKinnon; Morten Ørregaard Nielsen
  2. Inference with Correlated Clusters By Powell, David
  3. Composite Quasi-Maximum Likelihood Estimation of Dynamic Panels with Group-Specific Heterogeneity and Spatially Dependent Errors By Chu, Ba
  4. Pseudo-Maximum Likelihood and Lie Groups of Linear Transformations By Gouriéroux, Christian; Monfort, Alain; Zakoian, Jean-Michel
  5. Additive Nonparametric Instrumental Regressions: A Guide to Implementation By Samuele Centorrino; Frédérique Fève; Jean-Pierre Florens
  6. Comparing Cross-Country Estimates of Lorenz Curves Using a Dirichlet Distribution Across Estimators and Datasets By Andrew C. Chang; Phillip Li; Shawn M. Martin
  7. On periodic ergodicity of a general periodic mixed Poisson autoregression By Aknouche, Abdelhakim; Bentarzi, Wissam; Demouche, Nacer
  8. Threshold cointegration and adaptive shrinkage By Huber, Florian; Zörner, Thomas
  9. Realized Stochastic Volatility with General Asymmetry and Long Memory By Asai, M.; Chang, C-L.; McAleer, M.J.
  10. Analysis of order book flows using a nonparametric estimation of the branching ratio matrix By Massil Achab; Emmanuel Bacry; Jean-François Muzy; Marcello Rambaldi
  11. Stock Trading Using PE ratio: A Dynamic Bayesian Network Modeling on Behavioral Finance and Fundamental Investment By Haizhen Wang; Ratthachat Chatpatanasiri; Pairote Sattayatham
  12. Discrete Choice with Presentation Effects By Breitmoser, Yves
  13. Estimation of the Two-Tiered Stochastic Frontier Model with the Scaling Property By Christopher F. Parmeter
  14. A general inversion theorem for cointegration By Massimo Franchi; Paolo Paruolo

  1. By: Antoine Djogbenou (Queen's University); James G. MacKinnon (Queen's University); Morten Ørregaard Nielsen (Queen's University)
    Abstract: We study asymptotic inference based on cluster-robust variance estimators for regression models with clustered errors, focusing on the wild cluster bootstrap and the ordinary wild bootstrap. We state conditions under which both asymptotic and bootstrap tests and confidence intervals will be asymptotically valid. These conditions put limits on the rates at which the cluster sizes can increase as the number of clusters tends to infinity. To include power in the analysis, we allow the data to be generated under sequences of local alternatives. Simulation experiments illustrate the theoretical results and show that all methods can work poorly in certain cases.
    Keywords: clustered data, cluster-robust variance estimator, CRVE, inference, wild bootstrap, wild cluster bootstrap
    JEL: C15 C21 C23
    Date: 2017–06
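The wild cluster bootstrap that the abstract studies is straightforward to sketch. Below is a minimal, illustrative implementation of a restricted (null-imposed) wild cluster bootstrap test with Rademacher weights and a CRV1-style cluster-robust t-statistic; the function name and all numerical choices are this sketch's own, not the authors' code.

```python
import numpy as np

def wcr_bootstrap_pvalue(y, X, cluster, k, B=999, seed=0):
    """Restricted wild cluster (Rademacher) bootstrap p-value for H0: beta_k = 0.

    Illustrative sketch only: OLS with a CRV1-style cluster-robust variance,
    null-imposed residuals, and a symmetric bootstrap p-value.
    """
    rng = np.random.default_rng(seed)
    groups = [np.flatnonzero(cluster == g) for g in np.unique(cluster)]
    XtX_inv = np.linalg.inv(X.T @ X)

    def t_stat(yy):
        # OLS coefficient and cluster-robust t-statistic for coordinate k
        b = XtX_inv @ (X.T @ yy)
        u = yy - X @ b
        meat = sum(np.outer(X[g].T @ u[g], X[g].T @ u[g]) for g in groups)
        V = XtX_inv @ meat @ XtX_inv
        return b[k] / np.sqrt(V[k, k])

    # restricted estimate: drop regressor k to impose beta_k = 0
    Xr = np.delete(X, k, axis=1)
    br = np.linalg.lstsq(Xr, y, rcond=None)[0]
    fit_r, u_r = Xr @ br, y - Xr @ br

    t_obs = t_stat(y)
    t_boot = np.empty(B)
    for i in range(B):
        u_star = u_r.copy()
        for v, g in zip(rng.choice([-1.0, 1.0], size=len(groups)), groups):
            u_star[g] *= v  # one Rademacher draw per cluster, shared within it
        t_boot[i] = t_stat(fit_r + u_star)
    return np.mean(np.abs(t_boot) >= abs(t_obs))
```

With few or very unbalanced clusters, a bootstrap p-value of this kind is generally preferred to the asymptotic one; the paper's conditions delimit when either is asymptotically valid.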
  2. By: Powell, David
    Abstract: This paper introduces a method that permits valid inference given a finite number of heterogeneous, correlated clusters. Many inference methods assume that clusters are asymptotically independent or model dependence across clusters as a function of a distance metric. With panel data, these restrictions are unnecessary. The method uses a test statistic based on the mean of the cluster-specific scores, normalized by the variance, and simulates the distribution of this statistic. To account for cross-cluster dependence, the relationship between each pair of clusters is estimated, permitting the independent component of each cluster to be isolated. The method is simple to implement, can be employed for linear and nonlinear estimators, places no restrictions on the strength of the correlations across clusters, and does not require prior knowledge of which clusters are correlated or even the existence of independent clusters. In simulations, the procedure rejects at the appropriate rate even in the presence of highly correlated clusters.
    Keywords: Finite Inference, Correlated Clusters, Fixed Effects, Panel Data, Hypothesis Testing, Score Statistic
    JEL: C12 C21 C23 C33
    Date: 2017–05
  3. By: Chu, Ba
    Abstract: This paper proposes a new method to estimate dynamic panel data models with spatially dependent errors that allows for known or unknown group-specific patterns of slope heterogeneity. Analysis of this model is conducted in the framework of composite quasi-likelihood (CL) maximization. The proposed CL estimator is robust against some misspecification of the unobserved individual/group-specific fixed effects. Because our CL method is based on regressions involving common-group stochastic trends, no endogeneity problem arises; unlike existing methods, the proposed estimator therefore requires neither instrumental variables nor bias correction/reduction. Clustering and estimation of the parameters of interest involve a large-scale non-convex mixed-integer programming problem, which is solved via a new efficient approach based on DC (Difference-of-Convex functions) programming and the DCA (DC algorithm). Assuming that the number of time periods and the size of the spatial domain grow simultaneously, asymptotic theory is derived for both stationary and nonstationary covariates. An extensive Monte Carlo simulation examines the finite-sample performance of the proposed estimator. Our method is then applied to study the long-run relationship between saving and investment rates. The empirical findings reconcile various approaches to capital mobility in the literature: there is substantial capital mobility in some countries, while no conclusion about capital mobility can be drawn in others. Applied economists can easily implement the method using the companion software to this paper.
    Keywords: Large dynamic panels, spatial data, group-specific heterogeneity, clustering, asymptotics, large-scale non-convex mixed-integer program, difference of convex (d.c.) functions, DCA, Variable Neighborhood Search (VNS)
    JEL: C31 C33 C38 C55
    Date: 2017
  4. By: Gouriéroux, Christian; Monfort, Alain; Zakoian, Jean-Michel
    Abstract: Newey and Steigerwald (1997) considered a univariate conditionally heteroscedastic model with independent and identically distributed errors. They showed that the parameters characterizing the serial dependence are consistently estimated by any pseudo-maximum likelihood approach, whenever two additional parameters, one for location and one for scale, are appropriately introduced in the model. Our paper extends their result to a more general multivariate framework. We show the consistency of any pseudo-maximum likelihood method for multivariate models based on Lie groups of (linear, affine) transformations when these groups commute, or at least satisfy a property of closure under commutation. We explain how to introduce appropriately the additional parameters that capture all the bias due to the misspecification of the error distribution. We also derive the asymptotic distribution of the PML estimators.
    Keywords: Pseudo Maximum Likelihood, Lie Group, Transformation Model, GARCH Model, Infinitesimal Generator, Rotation, Computer Vision, Machine Learning, Volatility Matrices.
    JEL: C1 C13 C51
    Date: 2017–06–09
  5. By: Samuele Centorrino; Frédérique Fève; Jean-Pierre Florens
    Abstract: We present a review of the implementation of regularization methods for the estimation of additive nonparametric regression models with instrumental variables. We consider various versions of Tikhonov, Landweber-Fridman and Sieve (Petrov-Galerkin) regularization. We review data-driven techniques for the sequential choice of the smoothing and regularization parameters. Through Monte Carlo simulations, we discuss the finite-sample properties of each regularization method for different smoothness properties of the regression function. Finally, we present an application to the estimation of the Engel curve for food in a sample of rural households in Pakistan, using a partially linear specification that embeds other exogenous covariates.
    Date: 2017
  6. By: Andrew C. Chang; Phillip Li; Shawn M. Martin
    Abstract: Chotikapanich and Griffiths (2002) introduced the Dirichlet distribution to the estimation of Lorenz curves. This distribution naturally accommodates the proportional nature of income share data and the dependence structure between the shares. Chotikapanich and Griffiths (2002) fit a family of five Lorenz curves to one year of Swedish and Brazilian income share data using unconstrained maximum likelihood and unconstrained non-linear least squares. We attempt to replicate the authors' results and extend their analyses using both constrained estimation techniques and five additional years of data. We successfully replicate a majority of the authors' results and find that some of their main qualitative conclusions also hold using our constrained estimators and additional data.
    Keywords: Constrained Estimation, Dirichlet Distribution, Gini Coefficient, Income Distribution, Lorenz Curve, Replication, Share Data
    JEL: C24 C51 C87 D31
    Date: 2017–06
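For readers unfamiliar with the objects involved, the following sketch computes the Gini coefficient implied by a set of income shares via the Lorenz curve. This is only the elementary share-to-Lorenz-to-Gini mapping; the paper's estimators instead fit parametric Lorenz curves by (constrained) maximum likelihood under a Dirichlet likelihood, which this sketch does not attempt.

```python
import numpy as np

def gini_from_shares(income_shares):
    """Gini coefficient from income shares of equal-sized population groups,
    via the trapezoid rule on the implied Lorenz curve. Illustrative only."""
    s = np.sort(np.asarray(income_shares, dtype=float))  # ascending order
    s = s / s.sum()                                      # shares sum to one
    L = np.concatenate([[0.0], np.cumsum(s)])            # Lorenz ordinates
    p = np.linspace(0.0, 1.0, len(L))                    # population shares
    area = np.sum((L[1:] + L[:-1]) * np.diff(p)) / 2.0   # area under Lorenz curve
    return 1.0 - 2.0 * area
```

Equal shares give a Gini of zero; with n groups and all income in one group, the maximum attainable Gini is 1 - 1/n.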
  7. By: Aknouche, Abdelhakim; Bentarzi, Wissam; Demouche, Nacer
    Abstract: We propose a general class of non-linear mixed Poisson autoregressions whose form and parameters are periodic over time. Under a periodic contraction condition on the form of the conditional mean, we show the existence of a unique nonanticipative solution to the model, which is strictly periodically stationary, periodically ergodic and periodically weakly dependent, and which in the pure Poisson case has finite higher-order moments. Applications to some well-known integer-valued time series models are considered.
    Keywords: Periodic mixed Poisson autoregression, periodic INGARCH models, non-linear INGARCH models, weak dependence, strict periodic stationarity, periodic ergodicity, periodic contraction condition.
    JEL: C10 C19 C51 C62
    Date: 2017–02–01
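A simple member of this model class is a periodic Poisson INGARCH(1,1), simulated below; the parameterization and the stated contraction condition (the product over seasons of alpha_s + beta_s below one) refer to this special case only and are this sketch's assumptions.

```python
import numpy as np

def simulate_periodic_pingarch(omega, alpha, beta, T, seed=0):
    """Simulate a periodic Poisson INGARCH(1,1):
        y_t | F_{t-1} ~ Poisson(lam_t),
        lam_t = omega_s + alpha_s * y_{t-1} + beta_s * lam_{t-1},  s = t mod S.
    Toy member of the periodic mixed Poisson autoregression class; here the
    contraction condition is prod_s (alpha_s + beta_s) < 1."""
    rng = np.random.default_rng(seed)
    S = len(omega)
    y = np.zeros(T, dtype=int)
    lam = np.zeros(T)
    lam_prev, y_prev = omega[0], 0
    for t in range(T):
        s = t % S
        lam[t] = omega[s] + alpha[s] * y_prev + beta[s] * lam_prev
        y[t] = rng.poisson(lam[t])
        lam_prev, y_prev = lam[t], y[t]
    return y, lam
```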
  8. By: Huber, Florian; Zörner, Thomas
    Abstract: This paper considers Bayesian estimation of the threshold vector error correction model (TVECM) in moderate to large dimensions. Using the lagged cointegrating error as a threshold variable gives rise to additional difficulties that are typically solved by relying on large-sample approximations. Markov chain Monte Carlo methods allow us to circumvent these issues without resorting to computationally prohibitive estimation strategies such as a grid search. To handle the proliferation of parameters, we use novel global-local shrinkage priors in the spirit of Griffin and Brown (2010). We illustrate the merits of our approach in an application to five exchange rates vis-à-vis the US dollar and assess whether a given currency is over- or undervalued. Moreover, we perform a forecasting comparison to investigate whether it pays off to adopt a non-linear modeling approach relative to a set of simpler benchmark models.
    Keywords: non-linear modeling, shrinkage priors, multivariate cointegration, exchange rate modeling
    Date: 2017–06
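As a toy illustration of the model class (not the paper's Bayesian estimator), the following simulates a bivariate "band" TVECM in which adjustment toward equilibrium occurs only when the lagged cointegrating error leaves a threshold band; all parameter values are arbitrary.

```python
import numpy as np

def simulate_tvecm(T, beta=1.0, gamma=0.5, rho_out=-0.4, rho_in=0.0, seed=0):
    """Simulate a bivariate threshold VECM where the error-correction term
    z_{t-1} = y_{t-1} - beta * x_{t-1} triggers adjustment only when it
    exceeds the threshold gamma in absolute value ('band' TVECM).
    Illustrative data-generating process only."""
    rng = np.random.default_rng(seed)
    x = np.cumsum(rng.normal(size=T))          # random-walk common trend
    y = np.zeros(T)
    y[0] = beta * x[0]
    for t in range(1, T):
        z = y[t - 1] - beta * x[t - 1]         # lagged cointegrating error
        rho = rho_out if abs(z) > gamma else rho_in
        y[t] = y[t - 1] + rho * z + beta * (x[t] - x[t - 1]) + 0.1 * rng.normal()
    return y, x
```

Inside the band the error-correction term behaves like a random walk; outside it, mean reversion kicks in, so the cointegrating error stays bounded in probability.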
  9. By: Asai, M.; Chang, C-L.; McAleer, M.J.
    Abstract: The paper develops a novel realized stochastic volatility model of asset returns and realized volatility that incorporates general asymmetry and long memory (hereafter the RSV-GALM model). The contribution of the paper ties in with Robert Basmann’s seminal work in terms of the estimation of highly non-linear model specifications (“Causality tests and observationally equivalent representations of econometric models”, Journal of Econometrics, 1988), especially for specifying causal effects from returns to future volatility. This paper discusses asymptotic results of a Whittle likelihood estimator for the RSV-GALM model and a test for general asymmetry, and analyses the finite sample properties. The paper also develops an approach to obtain volatility estimates and out-of-sample forecasts. Using high frequency data for three US financial assets, the new model is estimated and evaluated. The paper compares the forecasting performance of the new model with a realized conditional volatility model.
    Keywords: Stochastic Volatility, Realized Measure, Long Memory, Asymmetry, Whittle likelihood, Asymptotic Distribution
    JEL: C13 C22
    Date: 2017–04–01
  10. By: Massil Achab; Emmanuel Bacry; Jean-François Muzy; Marcello Rambaldi
    Abstract: We introduce a new nonparametric method that allows for a direct, fast and efficient estimation of the matrix of kernel norms of a multivariate Hawkes process, also called the branching ratio matrix. We demonstrate the capabilities of this method by applying it to high-frequency order book data from the EUREX exchange. We show that it is able to uncover (or recover) various relationships between all the first-level order book events associated with an asset when mapped to a 12-dimensional process. We then scale up the model to account for events on two assets simultaneously and discuss the joint high-frequency dynamics.
    Date: 2017–06
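Once a branching ratio matrix has been estimated, it is typically used through the first-order moment equations of the Hawkes process. The sketch below, with assumed (not estimated) inputs, checks stability via the spectral radius and recovers the stationary mean intensities.

```python
import numpy as np

def hawkes_mean_intensities(K, mu):
    """Given the branching ratio matrix K (K[i, j] = integral of kernel
    phi_ij, i.e. the mean number of type-i events directly triggered by one
    type-j event) and baseline rates mu, return the stationary mean
    intensities Lambda = (I - K)^{-1} mu, valid when the spectral radius of
    K is below one. Illustrative use of an already-estimated matrix."""
    K = np.asarray(K, dtype=float)
    mu = np.asarray(mu, dtype=float)
    radius = max(abs(np.linalg.eigvals(K)))
    if radius >= 1:
        raise ValueError(f"unstable Hawkes process: spectral radius {radius:.3f} >= 1")
    return np.linalg.solve(np.eye(len(mu)) - K, mu)
```

The returned intensities satisfy the fixed point Lambda = mu + K @ Lambda, i.e. baseline events plus all triggered generations.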
  11. By: Haizhen Wang; Ratthachat Chatpatanasiri; Pairote Sattayatham
    Abstract: In daily investment decisions in securities markets, the price-earnings (PE) ratio is one of the most widely used firm-valuation tools among investment experts. Unfortunately, recent academic developments in financial econometrics and machine learning rarely examine this tool. In practice, fundamental PE ratios are often estimated only from subjective expert opinions. The purpose of this research is to formalize the process of fundamental PE estimation by employing dynamic Bayesian network (DBN) methodology. The estimated PE ratio from our model can be used either as information support for an expert making investment decisions, or as an automatic trading system, as illustrated in our experiments. Forward-backward inference and EM parameter-estimation algorithms are derived for the proposed DBN structure. Unlike existing work in the literature, the economic interpretation of our DBN model is well justified by behavioral-finance evidence on volatility. A simple but practical trading strategy is developed based on the results of Bayesian inference. Extensive experiments show that our trading strategy, equipped with the inferred PE ratios, consistently outperforms standard investment benchmarks.
    Date: 2017–05
  12. By: Breitmoser, Yves (Humboldt University Berlin)
    Abstract: Experimenters have to make theoretically irrelevant decisions concerning user interfaces and the ordering or labeling of options. Such presentation decisions affect behavior and cause results to appear contradictory across experiments, obstructing utility estimation and policy recommendations. The present paper derives a model of choice that allows analysts to control for both presentation effects and stochastic errors in econometric analyses. I test the model in a comprehensive re-analysis of dictator game experiments. Controlling for presentation effects, preference estimates are consistent across experiments and predictive out-of-sample, highlighting both the fundamental role of presentation in choice and, notwithstanding this role, the possibility of reliable estimation and prediction.
    Keywords: discrete choice; presentation effects; utility estimation; counterfactual predictions; laboratory experiment;
    JEL: C10 C90
    Date: 2017–06–06
  13. By: Christopher F. Parmeter (University of Miami)
    Abstract: The two-tiered stochastic frontier model has enjoyed success across a range of application domains where it is believed that incomplete information on both sides of the market leads to surplus that buyers and sellers can extract. Currently, this model is hindered by the fact that estimation relies on very restrictive distributional assumptions on the behavior of incomplete information on both sides of the market. This reliance on specific parametric distributional assumptions can be eschewed, however, if the scaling property is invoked. The scaling property has been well studied in the stochastic frontier literature but has not yet been used in the two-tier frontier setting. This paper invokes the scaling property to estimate the two-tiered stochastic frontier model without these parametric distributional assumptions.
    Keywords: Incomplete Information, Nonlinear Least Squares, Heteroskedasticity
    Publication Status: Under Review
    JEL: C13
    Date: 2017–05–22
  14. By: Massimo Franchi ("Sapienza" University of Rome); Paolo Paruolo (European Commission, Joint Research Centre)
    Abstract: A generalization of the Granger and Johansen Representation Theorems valid for any (possibly fractional) order of integration is presented. It is based on an inversion theorem that characterizes the order of the pole and the coefficients of the Laurent series representation of the inverse of a matrix function around a singular point. Explicit expressions for the matrix coefficients of the (polynomial) cointegrating relations, of the common trends and of the triangular representations are provided, starting either from the Moving Average or the Autoregressive form. This unifies the different approaches in the literature and extends them to an arbitrary order of integration.
    Keywords: Cointegration, Common Trends, Triangular representation, Local Smith form, Moving Average representation, Autoregressive representation.
    JEL: C12 C33 C55
    Date: 2017–06
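For orientation, the classical I(1) case that the inversion theorem generalizes is the Granger-Johansen representation: for a VECM with Pi = alpha beta' of reduced rank and short-run matrices Gamma_i, the solution is

```latex
% Classical I(1) Granger--Johansen representation (the special case
% generalized by the paper's inversion theorem). VECM:
% \Delta y_t = \alpha\beta' y_{t-1} + \sum_{i=1}^{k-1}\Gamma_i\,\Delta y_{t-i} + \varepsilon_t
\begin{align*}
  y_t &= C \sum_{s=1}^{t} \varepsilon_s \;+\; C^{*}(L)\,\varepsilon_t \;+\; A,\\
  C   &= \beta_{\perp}\bigl(\alpha_{\perp}'\,\Gamma\,\beta_{\perp}\bigr)^{-1}\alpha_{\perp}',
  \qquad \Gamma = I_p - \sum_{i=1}^{k-1}\Gamma_i ,
\end{align*}
```

where beta' C = 0 delivers the cointegrating relations and the cumulated-shock term gives the common trends; the paper characterizes the analogous Laurent coefficients at any order of integration.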

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.