
New Economics Papers on Econometrics 
By:  Babii, Andrii 
Abstract:  This paper provides novel methods for inference in a very general class of ill-posed models in econometrics, encompassing nonparametric instrumental regression, various functional regressions, and deconvolution. I focus on uniform confidence sets for the parameter of interest estimated with Tikhonov regularization, as in Darolles, Fan, Florens, and Renault (2011). I first show that it is not possible to develop inferential methods based directly on a uniform central limit theorem. To circumvent this difficulty, I develop two approaches that lead to valid confidence sets. I characterize expected diameters and coverage properties uniformly over a large class of models (i.e., the constructed confidence sets are honest). Finally, using Monte Carlo simulations and an application to Engel curves, I illustrate that the proposed confidence sets have reasonable width and coverage properties at sample sizes common in applications. 
Keywords:  nonparametric instrumental regression, functional linear regression, density deconvolution, honest uniform confidence sets, non-asymptotic inference, ill-posed models, Tikhonov regularization 
JEL:  C14 
Date:  2017–05 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:31687&r=ecm 
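As a rough illustration of the Tikhonov regularization underlying this paper, the following minimal sketch shows how the ridge-type penalty stabilizes an ill-posed inversion that a naive inverse cannot handle. The toy discretized operator and all numerical values are invented for illustration, not taken from the paper:

```python
import numpy as np

def tikhonov_solve(K, y, alpha):
    """Tikhonov-regularized solution of the ill-posed system K x = y:
    x_alpha = (K'K + alpha * I)^{-1} K' y."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)

# Toy ill-posed problem: an operator with rapidly decaying singular values.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.5 ** np.arange(n)                  # decaying spectrum -> ill-posedness
K = U @ np.diag(s) @ V.T
x_true = V[:, :5] @ np.arange(1.0, 6.0)  # truth lies in well-determined directions
y = K @ x_true + 1e-6 * rng.standard_normal(n)

x_hat = tikhonov_solve(K, y, alpha=1e-6)  # stable, regularized inverse
x_naive = V @ ((U.T @ y) / s)             # unregularized inverse: noise explodes
```

The naive inverse amplifies the tiny noise by the reciprocals of the smallest singular values, while the penalty `alpha` caps that amplification at the cost of a small bias.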
By:  Morais, Joanna; ThomasAgnan, Christine; Simioni, Michel 
Abstract:  When the aim is to model market shares as a function of explanatory variables, the marketing literature proposes regression models that can be described as attraction models. They are generally derived from an aggregated version of the multinomial logit model widely used in econometrics for discrete choice modeling. But aggregated multinomial logit models (MNL) and the so-called market-share models, or generalized multiplicative competitive interaction models (GMCI), have some limitations: in their simpler versions they do not specify brand-specific and cross-effect parameters. Introducing all possible cross effects is not possible in the MNL and would imply a very large number of parameters in the GMCI. In this paper, we consider two alternative models: the Dirichlet covariate model (DIR) and the compositional model (CODA). DIR allows the introduction of brand-specific parameters, and CODA additionally accommodates cross-effect parameters. We show that these two models can be written in a fashion similar to the MNL and GMCI models, called the attraction form. As market-share models are usually interpreted in terms of elasticities, we also use this notion to interpret the DIR and CODA models. We compare the main properties of the models in order to explain why the CODA and DIR models can outperform traditional market-share models. The benefits of highlighting these relationships are, on the one hand, to propose new models to the marketing literature and, on the other hand, to improve the interpretation of the CODA and DIR models using the elasticities of the econometrics literature. Finally, an application to the automobile market is presented in which we model brands' market shares as a function of media investments, controlling for the brands' average prices and a scrapping-incentive dummy variable. We compare the goodness-of-fit of the various models in terms of quality measures adapted to shares. 
Keywords:  Multinomial logit; Market-share models; Compositional data analysis; Dirichlet regression. 
JEL:  C10 C25 C35 C46 D12 M31 
Date:  2017–05 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:31699&r=ecm 
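The attraction form of the aggregated MNL market-share model discussed above, together with the own elasticity it implies, can be sketched as follows. The brands, attraction specification, and all coefficient values are hypothetical:

```python
import numpy as np

def mnl_market_shares(x, a, b):
    """Aggregated-MNL attraction form: attraction A_i = exp(a_i + b * x_i),
    market share s_i = A_i / sum_j A_j."""
    A = np.exp(a + b * x)
    return A / A.sum()

def mnl_own_elasticity(x, a, b, i):
    """Own elasticity of share i w.r.t. x_i implied by the MNL attraction
    form: e_ii = b * x_i * (1 - s_i)."""
    s = mnl_market_shares(x, a, b)
    return b * x[i] * (1.0 - s[i])

# hypothetical example: three brands, x = log media investment
x = np.array([1.0, 2.0, 3.0])
a = np.array([0.2, 0.0, -0.1])
shares = mnl_market_shares(x, a, b=0.5)
print(shares)
```

By construction the shares are positive and add up to one, which is exactly the constraint that makes cross effects in these models restrictive.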
By:  Babii, Andrii; Florens, JeanPierre 
Abstract:  We develop a uniform asymptotic expansion for the empirical distribution function of residuals in nonparametric IV regression. This expansion opens the door to the construction of a broad range of residual-based specification tests in nonparametric IV models. Building on this result, we develop a test for the separability of unobservables in econometric models with endogeneity. The test is based on verifying the independence condition between the residuals of the NPIV estimator and the instrument, and it can distinguish between nonseparable and separable specifications under endogeneity. 
Keywords:  separability test, distribution of residuals, nonparametric instrumental regression, Sobolev scales 
JEL:  C12 C14 
Date:  2017–05 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:31686&r=ecm 
By:  Iacone, Fabrizio; Leybourne, Stephen J; Taylor, A M Robert 
Abstract:  We develop a test, based on the Lagrange multiplier [LM] testing principle, for the value of the long memory parameter of a univariate time series that is composed of a fractionally integrated shock around a potentially broken deterministic trend. Our proposed test is constructed from data which are detrended allowing for a trend break whose (unknown) location is estimated by a standard residual sum of squares estimator. We demonstrate that the resulting LM-type statistic has a standard limiting null chi-squared distribution with one degree of freedom, and attains the same asymptotic local power function as an infeasible LM test based on the true shocks. Our proposed test therefore attains the same asymptotic local optimality properties as an oracle LM test in both the trend break and no trend break environments. Moreover, and unlike conventional unit root and stationarity tests, this asymptotic local power function does not change between the break and no break cases, so there is no loss in asymptotic local power from allowing for a trend break at an unknown point in the sample, even when no break is present. We also report results from a Monte Carlo study of the finite-sample behaviour of our proposed test. 
Keywords:  Fractional integration; trend break; Lagrange multiplier test; asymptotically locally most powerful test 
Date:  2017–05 
URL:  http://d.repec.org/n?u=RePEc:esy:uefcwp:19654&r=ecm 
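The residual-sum-of-squares break-date estimator used in the detrending step can be sketched as follows. The broken-trend specification and all simulation values below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def estimate_break_date(y, trim=0.15):
    """Estimate a trend-break location by minimizing the residual sum of
    squares from an OLS fit of y on [1, t, (t - tau) * 1{t > tau}],
    searching tau over a trimmed range of the sample."""
    T = len(y)
    t = np.arange(1.0, T + 1)
    best_tau, best_rss = None, np.inf
    for tau in range(int(trim * T), int((1 - trim) * T)):
        broken = np.where(t > tau, t - tau, 0.0)   # post-break trend regressor
        X = np.column_stack([np.ones(T), t, broken])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        if rss < best_rss:
            best_rss, best_tau = rss, tau
    return best_tau

# simulated series with a slope change at t = 100 (illustrative values)
rng = np.random.default_rng(4)
T = 200
t = np.arange(1.0, T + 1)
y = 0.5 + 0.02 * t + 0.1 * np.where(t > 100, t - 100, 0.0) \
    + 0.3 * rng.standard_normal(T)
tau_hat = estimate_break_date(y)
print(tau_hat)
```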
By:  Hsu, YuChin; Huber, Martin; Lai, Tsung Chih 
Abstract:  Using a sequential conditional independence assumption, this paper discusses fully nonparametric estimation of natural direct and indirect causal effects in causal mediation analysis based on inverse probability weighting. We propose estimators of the average indirect effect of a binary treatment, which operates through intermediate variables (or mediators) on the causal path between the treatment and the outcome, as well as of the unmediated direct effect. In a first step, treatment propensity scores given the mediator and observed covariates, or given covariates alone, are estimated by nonparametric series logit estimation. In a second step, they are used to reweight observations in order to estimate the effects of interest. We establish root-n consistency and asymptotic normality of this approach, as well as of a weighted version thereof. The latter allows evaluating effects for specific subgroups, such as the treated, for which we derive the asymptotic properties under estimated propensity scores. We also provide a simulation study and an application to an information intervention about male circumcision. 
Keywords:  causal mechanisms; direct effects; indirect effects; causal channels; mediation analysis; causal pathways; series logit estimation; nonparametric estimation; inverse probability weighting; propensity score 
JEL:  C21 
Date:  2017–05–01 
URL:  http://d.repec.org/n?u=RePEc:fri:fribow:fribow00482&r=ecm 
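The inverse-probability-weighting idea can be sketched as below, using the standard IPW mediation expressions. For brevity the propensity scores are the true ones from an invented data-generating process (the paper instead estimates them by nonparametric series logit and allows covariates):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
# hypothetical DGP: binary treatment D, binary mediator M, outcome Y
D = rng.binomial(1, 0.5, n)              # Pr(D=1 | X) = 0.5 (no confounders here)
M = rng.binomial(1, 0.3 + 0.4 * D, n)    # mediator responds to treatment
Y = D + M + rng.standard_normal(n)       # direct effect 1, indirect effect 0.4

pX = 0.5                                 # Pr(D=1 | X), known in this toy setup
pMX = np.where(M == 1, 0.7, 0.3)         # Pr(D=1 | M, X), implied by the DGP

# IPW estimators of the potential-outcome means (true scores plugged in)
EY1M1 = np.mean(Y * D / pX)                            # E[Y(1, M(1))]
EY0M0 = np.mean(Y * (1 - D) / (1 - pX))                # E[Y(0, M(0))]
EY1M0 = np.mean(Y * D * (1 - pMX) / (pMX * (1 - pX)))  # E[Y(1, M(0))]

direct = EY1M0 - EY0M0    # natural direct effect (truth: 1.0)
indirect = EY1M1 - EY1M0  # natural indirect effect (truth: 0.4)
print(direct, indirect)
```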
By:  Ferman, Bruno 
Abstract:  We analyze the properties of matching estimators when the number of treated observations is fixed while the number of control observations is large. We show that, under standard assumptions, the nearest neighbor matching estimator for the average treatment effect on the treated is asymptotically unbiased, even though it is not consistent. We also provide a test, based on the theory of randomization tests under approximate symmetry developed in Canay et al. (2014), that is asymptotically valid when the number of control observations goes to infinity. This is important because the large-sample inferential techniques developed in Abadie and Imbens (2006) are not valid in this setting. 
Keywords:  matching estimator, treatment effect, hypothesis testing, randomization inference 
JEL:  C12 C13 C21 
Date:  2017–05–04 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:78940&r=ecm 
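A minimal sketch of the nearest-neighbor matching estimator of the ATT in the paper's asymptotic framework (few treated units, many controls); the data-generating process is invented for illustration:

```python
import numpy as np

def nn_matching_att(y, d, x, m=1):
    """Nearest-neighbor matching estimator of the average treatment effect on
    the treated: each treated unit is compared with its m closest controls
    in covariate distance."""
    y, d, x = (np.asarray(a, dtype=float) for a in (y, d, x))
    treated = np.flatnonzero(d == 1)
    controls = np.flatnonzero(d == 0)
    gaps = []
    for i in treated:
        nearest = controls[np.argsort(np.abs(x[controls] - x[i]))[:m]]
        gaps.append(y[i] - y[nearest].mean())
    return float(np.mean(gaps))

# 5 treated units, 5000 controls; true ATT = 2 (illustrative values)
rng = np.random.default_rng(5)
n_c, n_t, tau = 5000, 5, 2.0
x = np.concatenate([rng.uniform(0, 1, n_c), rng.uniform(0, 1, n_t)])
d = np.concatenate([np.zeros(n_c), np.ones(n_t)])
y = x + tau * d + 0.1 * rng.standard_normal(n_c + n_t)
att_hat = nn_matching_att(y, d, x)
print(att_hat)
```

With many controls the match discrepancy vanishes, but with only five treated units the estimator keeps a non-vanishing variance, which is exactly why the paper turns to randomization inference.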
By:  John Aston; Florent Autin; Gerda Claeskens; JeanMarc Freyermuth; Christophe Pouet 
Abstract:  We present a novel method for detecting certain structural characteristics of multidimensional functions. We consider the multidimensional Gaussian white noise model with an anisotropic estimand. Using the relation between the Sobol decomposition and the geometry of multidimensional wavelet bases, we can build test statistics for any of the Sobol functional components. We assess the asymptotic minimax optimality of these test statistics and show that they are optimal in the presence of anisotropy with respect to the newly determined minimax rates of separation. An appropriate combination of these test statistics allows testing for general structural characteristics such as the atomic dimension or the presence of particular variables. Numerical experiments show the potential of our method for studying spatio-temporal processes. 
Keywords:  Adaptation, Anisotropy, Atomic dimension, Besov spaces, Gaussian noise model, Hyperbolic wavelets, Hypothesis testing, Minimax rate, Sobol decomposition, Structural modeling 
Date:  2017–05 
URL:  http://d.repec.org/n?u=RePEc:ete:kbiper:582277&r=ecm 
By:  Stavros J. Sioutis 
Abstract:  We assess the accuracy of least squares calibration using option premiums and of particle filtering of price data for recovering model parameters. Derivative models based on exponential Lévy processes are calibrated using regularized weighted least squares with respect to the minimal entropy martingale measure. Sequential importance resampling is used for the Bayesian inference problem of time-series parameter estimation, with the proposal distribution determined by an extended Kalman filter. The algorithms converge to their respective global optima using a highly parallelizable statistical optimization approach based on a grid of initial positions. Each of these methods should produce the same parameters; we investigate this assertion. 
Date:  2017–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1705.04780&r=ecm 
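Sequential importance resampling can be sketched as below for a simple local-level model. Note that this sketch uses the plain bootstrap proposal (the state transition itself), not the extended-Kalman-filter proposal the paper uses, and the model and values are invented:

```python
import numpy as np

def sir_filter(y, n_particles=2000, sig_x=0.1, sig_y=0.5, seed=0):
    """Sequential importance resampling (bootstrap particle filter) for the
    local-level model x_t = x_{t-1} + sig_x * e_t, y_t = x_t + sig_y * u_t."""
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles)
    filtered = []
    for yt in y:
        # propagate particles through the state equation (bootstrap proposal)
        particles = particles + sig_x * rng.standard_normal(n_particles)
        # weight by the Gaussian observation likelihood
        logw = -0.5 * ((yt - particles) / sig_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        filtered.append(float(np.sum(w * particles)))
        # multinomial resampling to avoid weight degeneracy
        particles = particles[rng.choice(n_particles, size=n_particles, p=w)]
    return np.array(filtered)

# simulate the model and filter the noisy observations
rng = np.random.default_rng(6)
T = 200
x_path = np.cumsum(0.1 * rng.standard_normal(T))
y = x_path + 0.5 * rng.standard_normal(T)
x_filt = sir_filter(y)
```

The filtered mean tracks the latent state far more closely than the raw observations do; an EKF-based proposal, as in the paper, places particles more efficiently in high-likelihood regions.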
By:  Hiroyuki Kasahara (Vancouver School of Economics, University of British Columbia); Katsumi Shimotsu (Faculty of Economics, The University of Tokyo) 
Abstract:  Testing the number of components in multivariate normal mixture models is a long-standing challenge. This paper develops a likelihood-based test of the null hypothesis of M0 components against the alternative hypothesis of M0 + 1 components. We derive a local quadratic approximation of the likelihood ratio statistic in terms of polynomials of the parameters. Based on this quadratic approximation, we propose an EM test of the null against the alternative and derive the asymptotic distribution of the proposed test statistic. Simulations show that the proposed test has good finite-sample size and power properties. 
Date:  2017–03 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2016cf1044&r=ecm 
By:  Morais, Joanna; ThomasAgnan, Christine; Simioni, Michel 
Abstract:  Regression models have been developed for the case where the dependent variable is a vector of shares. Some of them, from the marketing literature, are easy to interpret, but they are quite simple and can only be made more complex at the expense of a very large number of parameters to estimate. Other models, from the mathematical literature, are called compositional regression models and are based on simplicial geometry (a vector of shares is called a composition, shares are components, and a composition lies in the simplex). These are transformation models: they use a log-ratio transformation of the shares. They are very flexible in terms of explanatory variables and complexity (component-specific and cross-effect parameters), but their interpretation is not straightforward, because shares add up to one. This paper combines both literatures in order to obtain a well-performing market-share model that yields relevant and appropriate interpretations and can be used for decision making in practical cases. For example, we are interested in modeling the impact of media investments on automobile manufacturers' sales. In order to take the competition into account, we model the brands' market shares as a function of (relative) media investments. We furthermore focus on compositional models where some explanatory variables are also compositional. Two specifications are possible: in Model A, a unique coefficient is associated with each compositional explanatory variable, whereas in Model B a compositional explanatory variable is associated with component-specific and cross-effect coefficients. Model A and Model B are estimated for our application in the B segment of the French automobile market, from 2003 to 2015. 
In order to enhance the interpretability of these models, we present different types of impact-assessment measures (marginal effects, elasticities and odds ratios) and show that elasticities are particularly useful for isolating the impact of an explanatory variable on a particular share. We show that elasticities can be computed equivalently from the transformed model and from the model in the simplex, and that they are linked to directional C-derivatives of a simplex-valued function of a simplex variable. Direct and cross effects of media investments are computed for both models. Model B reveals interesting non-symmetric synergies between brands, and Renault appears to be the brand most elastic to its own media investments. In order to determine whether component-specific and cross-effect parameters are needed to improve the quality of the model (Model B) or whether a global parameter is sufficient (Model A), we compare the goodness-of-fit of the two models using (out-of-sample) quality measures adapted to share data. 
Keywords:  Elasticity, odds ratio, marginal effect, compositional model, compositional differential calculus, market shares, media investment impact 
JEL:  C10 C25 C35 C46 D12 M31 
Date:  2017–05 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:31701&r=ecm 
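The log-ratio transformation at the heart of compositional (CODA) models can be sketched as follows, using the additive log-ratio (ALR) with the last share as reference (one of several possible log-ratio coordinate systems):

```python
import numpy as np

def alr(shares):
    """Additive log-ratio transform with the last part as reference:
    z_i = log(s_i / s_D), i = 1, ..., D-1."""
    shares = np.asarray(shares, dtype=float)
    return np.log(shares[..., :-1] / shares[..., [-1]])

def alr_inverse(z):
    """Map ALR coordinates back to the simplex (shares adding up to one)."""
    expz = np.exp(z)
    denom = 1.0 + expz.sum(axis=-1, keepdims=True)
    return np.concatenate([expz / denom, 1.0 / denom], axis=-1)

s = np.array([0.5, 0.3, 0.2])  # a 3-part composition, e.g. market shares
z = alr(s)                     # 2 unconstrained log-ratio coordinates
print(z, alr_inverse(z))
```

The transform turns a constrained vector of shares into unconstrained coordinates on which a linear regression can be run; the inverse map guarantees that fitted values are again valid shares.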
By:  Ziegel, Johanna F.; Krueger, Fabian; Jordan, Alexander; Fasciati, Fernando 
Abstract:  Motivated by the Basel 3 regulations, recent studies have considered joint forecasts of Value-at-Risk and Expected Shortfall. A large family of scoring functions can be used to evaluate forecast performance in this context. However, little intuitive or empirical guidance is currently available, which renders the choice of scoring function awkward in practice. We therefore develop graphical checks (Murphy diagrams) of whether one forecast method dominates another under a relevant class of scoring functions, and propose an associated hypothesis test. We illustrate these tools with simulation examples and an empirical analysis of S&P 500 and DAX returns. 
Date:  2017–05–12 
URL:  http://d.repec.org/n?u=RePEc:awi:wpaper:0632&r=ecm 
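The Murphy-diagram idea, simplified here to the Value-at-Risk (quantile) component alone, can be sketched as below: compute the mean elementary quantile score (Ehm et al. 2016) across a grid of thresholds and check whether one forecast's curve lies below the other's everywhere. The forecasts and data are invented:

```python
import numpy as np

def elementary_quantile_score(x, y, theta, alpha):
    """Elementary scoring function for alpha-quantile forecasts x of y
    (Ehm et al. 2016): S_theta(x, y) = (1{y<x} - alpha)(1{theta<x} - 1{theta<y})."""
    return (((y < x).astype(float) - alpha)
            * ((theta < x).astype(float) - (theta < y).astype(float)))

def murphy_curve(x, y, thetas, alpha):
    """Mean elementary score at each threshold theta: the Murphy diagram."""
    return np.array([elementary_quantile_score(x, y, t, alpha).mean()
                     for t in thetas])

# hypothetical comparison: the correct 5% VaR of N(0,1) returns vs a
# too-optimistic forecast
rng = np.random.default_rng(2)
y = rng.standard_normal(100_000)
q_good = np.full_like(y, -1.645)   # true 5% quantile of N(0,1)
q_bad = np.full_like(y, -1.0)      # underestimates the risk
thetas = np.linspace(-3.0, 1.0, 41)
curve_good = murphy_curve(q_good, y, thetas, alpha=0.05)
curve_bad = murphy_curve(q_bad, y, thetas, alpha=0.05)
```

If one curve lies weakly below the other at every threshold, the corresponding forecast dominates under every consistent scoring function in the class; the paper extends this logic to joint (VaR, ES) forecasts.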
By:  Galeano San Miguel, Pedro; Ausín Olivera, María Concepción; Nguyen, Hoang 
Abstract:  Copula densities are widely used to model the dependence structure of financial time series. However, the number of parameters involved becomes explosive in high dimensions, which is why most models in the literature are static. Factor copula models have recently been proposed to tackle the curse of dimensionality by describing the behaviour of return series in terms of a few common latent factors. To account for asymmetric dependence in extreme events, we propose a class of dynamic one-factor copulas in which the factor loadings are modelled as generalized autoregressive score (GAS) processes. We perform Bayesian inference in different specifications of the proposed class of dynamic one-factor copula models. Conditional on the latent factor, the components of the return series become independent, which allows the algorithm to run in parallel and reduces the computational cost of obtaining the conditional posterior distributions of the model parameters. We illustrate our approach with the analysis of a simulated data set and of the returns of 150 companies listed in the S&P 500 index. 
Keywords:  Parallel estimation; Generalized hyperbolic skew Student-t copula; GAS model; Factor copula models; Bayesian inference 
Date:  2017–05 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:24552&r=ecm 
By:  JIN SEO CHO (Yonsei University); HALBERT WHITE (University of California, San Diego) 
Abstract:  The current paper examines the limit distribution of the quasi-maximum likelihood estimator obtained from a directionally differentiable quasi-likelihood function and represents its limit distribution as a functional of a Gaussian stochastic process indexed by direction. In this way, the standard analysis that assumes a differentiable quasi-likelihood function is treated as a special case of our analysis. We also examine and redefine the standard quasi-likelihood ratio, Wald, and Lagrange multiplier test statistics so that their null limit behaviors are regular under our model framework. 
Keywords:  directionally differentiable quasi-likelihood function, Gaussian stochastic process, quasi-likelihood ratio test, Wald test, and Lagrange multiplier test statistics. 
JEL:  C12 C13 C22 C32 
Date:  2017–04 
URL:  http://d.repec.org/n?u=RePEc:yon:wpaper:2017rwp103&r=ecm 
By:  JIN SEO CHO (Yonsei University); HALBERT WHITE (University of California, San Diego) 
Abstract:  We illustrate the analysis of directionally differentiable econometric models and provide technical details not included in Cho and White (2017). 
Keywords:  directionally differentiable quasi-likelihood function, Gaussian stochastic process, quasi-likelihood ratio test, Wald test, and Lagrange multiplier test statistics, stochastic frontier production function, GMM estimation, Box-Cox transform. 
JEL:  C12 C13 C22 C32 
Date:  2017–04 
URL:  http://d.repec.org/n?u=RePEc:yon:wpaper:2017rwp103a&r=ecm 
By:  Thomas Gueuning; Gerda Claeskens 
Abstract:  The focused information criterion for model selection is constructed to select the model that best estimates a particular quantity of interest, the focus, in terms of mean squared error. We extend this focused selection process to the high-dimensional regression setting, where the number of parameters may exceed the sample size. We distinguish two cases: (i) the considered submodel is low-dimensional, and (ii) it is high-dimensional. In the former case, we obtain an alternative expression of the low-dimensional focused information criterion that can be applied directly. In the latter case, we use a desparsified estimator that allows us to derive the mean squared error of the focus estimator. We illustrate the performance of the high-dimensional focused information criterion with a numerical study and a real dataset. 
Keywords:  Desparsified estimator, Focused information criterion, High-dimensional data, Variable selection 
Date:  2017–05 
URL:  http://d.repec.org/n?u=RePEc:ete:kbiper:582649&r=ecm 
By:  Johanna F. Ziegel; Fabian Krüger; Alexander Jordan; Fernando Fasciati 
Abstract:  Motivated by the Basel 3 regulations, recent studies have considered joint forecasts of Value-at-Risk and Expected Shortfall. A large family of scoring functions can be used to evaluate forecast performance in this context. However, little intuitive or empirical guidance is currently available, which renders the choice of scoring function awkward in practice. We therefore develop graphical checks (Murphy diagrams) of whether one forecast method dominates another under a relevant class of scoring functions, and propose an associated hypothesis test. We illustrate these tools with simulation examples and an empirical analysis of S&P 500 and DAX returns. 
Date:  2017–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1705.04537&r=ecm 
By:  Aurélien Poissonnier 
Abstract:  Structural gravity models of trade stem from agnostic models of bilateral trade flows. Although more theoretically sound, they are much more complex to estimate. This difficulty is due to the multilateral resistance terms, which account for the general equilibrium constraints of global trade and must be inferred from the rest of the model. In the present paper, I show that solving for these terms explicitly is a valid econometric approach for gravity models, including in panel data. I propose iterative solutions in Stata based on three different techniques and present an example of these solutions on real data. The results from this test confirm the necessity of accounting for the multilateral resistance terms in the estimation and raise some questions about the alternative solution using dummies. 
JEL:  C13 F14 
Date:  2016–12 
URL:  http://d.repec.org/n?u=RePEc:euf:dispap:040&r=ecm 
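Solving explicitly for the multilateral resistance terms amounts to a fixed-point iteration on the Anderson-van Wincoop system. A minimal sketch follows (the paper works in Stata; this is an illustrative translation, and the trade costs and outputs are invented):

```python
import numpy as np

def multilateral_resistance(t, Y, E, sigma=5.0, tol=1e-12, max_iter=5000):
    """Solve the structural-gravity multilateral resistance system by damped
    fixed-point iteration (one normalization is required; we set P_1 = 1):
      Pi_i^(1-s) = sum_j (t_ij / P_j)^(1-s) * E_j / Y_w
      P_j^(1-s)  = sum_i (t_ij / Pi_i)^(1-s) * Y_i / Y_w"""
    rho = 1.0 - sigma
    Yw = Y.sum()
    T = t ** rho
    P_r = np.ones(len(Y))                  # P_j^(1-s)
    for _ in range(max_iter):
        Pi_r = T @ (E / (Yw * P_r))        # outward resistance (power 1-s)
        P_new = T.T @ (Y / (Yw * Pi_r))    # inward resistance (power 1-s)
        P_new /= P_new[0]                  # normalization
        if np.max(np.abs(P_new - P_r)) < tol:
            P_r = P_new
            break
        P_r = 0.5 * P_r + 0.5 * P_new      # damped update for robustness
    Pi_r = T @ (E / (Yw * P_r))
    return Pi_r ** (1.0 / rho), P_r ** (1.0 / rho)

# small symmetric example: 3 countries, iceberg trade costs, balanced trade
t = np.array([[1.0, 1.5, 1.8],
              [1.5, 1.0, 1.6],
              [1.8, 1.6, 1.0]])
Y = np.array([1.0, 2.0, 3.0])
Pi, P = multilateral_resistance(t, Y, Y)
print(Pi, P)
```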
By:  Li, Zhiyong; Lambe, Brendan; Adegbite, Emmanuel 
Abstract:  In this paper, we introduce two low-frequency bid-ask spread estimators that use daily high and low transaction prices. The range of mid-prices is an increasing function of the sampling interval, while the bid-ask spread and the relationship between trading direction and the mid-price are not constrained by it and are therefore independent of it. Monte Carlo simulations and data analysis from the equity and foreign exchange markets demonstrate that these models significantly outperform the most widely used low-frequency estimators, such as those proposed in Corwin and Schultz (2012) and, most recently, in Abdi and Ranaldo (2017). We illustrate how our models can be applied to deduce historical market liquidity in the NYSE, UK, Hong Kong and Thai stock markets. Our estimator can also effectively act as a gauge for market volatility and as a measure of liquidity risk in asset pricing. 
Keywords:  High-low spread estimator; effective spread; transaction cost; market liquidity 
JEL:  C02 C13 C15 
Date:  2017–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:79102&r=ecm 
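The paper's own estimators are not reproduced here, but the Corwin and Schultz (2012) high-low benchmark it is compared against can be sketched as follows:

```python
import numpy as np

def corwin_schultz_spread(high, low):
    """Corwin and Schultz (2012) high-low spread estimator.
    beta: sum of squared one-day log ranges over a two-day window;
    gamma: squared two-day log range."""
    h, l = np.log(high), np.log(low)
    beta = (h[:-1] - l[:-1]) ** 2 + (h[1:] - l[1:]) ** 2
    gamma = (np.maximum(h[:-1], h[1:]) - np.minimum(l[:-1], l[1:])) ** 2
    k = 3.0 - 2.0 * np.sqrt(2.0)
    alpha = (np.sqrt(2.0 * beta) - np.sqrt(beta)) / k - np.sqrt(gamma / k)
    alpha = np.maximum(alpha, 0.0)   # common convention: truncate at zero
    s = 2.0 * (np.exp(alpha) - 1.0) / (1.0 + np.exp(alpha))
    return float(s.mean())

# sanity check on flat days with identical ranges: the algebra reduces
# alpha to the daily log range c, so the estimate is 2*(e^c - 1)/(e^c + 1)
est = corwin_schultz_spread(np.full(10, 101.0), np.full(10, 100.0))
print(est)
```

The intuition, shared by the new estimators in this paper, is that daily ranges reflect both variance and spread, while the two-day range reflects proportionally more variance, so combining them isolates the spread.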
By:  Chris J. Skinner; Jon Wakefield 
Abstract:  We give a brief overview of common sampling designs used in a survey setting, and introduce the principal inferential paradigms under which data from complex surveys may be analyzed. In particular, we distinguish between design-based, model-based and model-assisted approaches. Simple examples highlight the key differences between the approaches. We discuss the interplay between inferential approaches and targets of inference and the important issue of variance estimation. 
Keywords:  Design-based inference; model-assisted inference; model-based inference; weights; variance estimation. 
JEL:  C1 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:76991&r=ecm 
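The design-based paradigm can be illustrated with the Horvitz-Thompson estimator, which weights each sampled unit by the inverse of its inclusion probability; the finite population and the sampling design below are invented for illustration:

```python
import numpy as np

def horvitz_thompson_total(y_sample, pi_sample):
    """Design-based Horvitz-Thompson estimator of a population total: weight
    each sampled unit by the inverse of its inclusion probability."""
    return float(np.sum(np.asarray(y_sample) / np.asarray(pi_sample)))

# invented finite population with unequal, known inclusion probabilities
rng = np.random.default_rng(3)
N = 1000
y = rng.uniform(0.0, 1.0, N)
pi = rng.uniform(0.2, 0.8, N)
true_total = y.sum()

# repeat a Poisson sampling design to show design-unbiasedness
estimates = []
for _ in range(2000):
    sampled = rng.uniform(size=N) < pi
    estimates.append(horvitz_thompson_total(y[sampled], pi[sampled]))
print(np.mean(estimates), true_total)
```

Under the design-based view, `y` is fixed and all randomness comes from the sampling indicator, so averaging over repeated samples recovers the true total.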