
on Econometrics 
By:  Panchenko, Valentyn; Prokhorov, Artem 
Abstract:  We consider a general multivariate model where the univariate marginal distributions are known up to a common parameter vector and we are interested in estimating that vector without assuming anything about the joint distribution except for the marginals. If we assume independence between the marginals and maximize the resulting quasi-likelihood, we obtain a consistent but inefficient estimate. If we assume a parametric copula (other than independence), we obtain a full MLE, which is efficient but only under correct copula specification and badly biased if the copula is misspecified. Instead, we propose a sieve MLE, which improves over the QMLE but does not suffer the drawbacks of the full MLE. We model the unknown part of the joint distribution using the Bernstein-Kantorovich polynomial copula and assess the resulting improvement over the QMLE and over the misspecified FMLE in terms of relative efficiency and robustness. We derive the asymptotic distribution of the new estimator and show that it reaches the semiparametric efficiency bound. Simulations suggest that the sieve MLE can be almost as efficient as the FMLE relative to the QMLE provided there is enough dependence between the marginals. An application using insurance company loss and expense data demonstrates the empirical relevance of the estimator. 
Keywords:  sieve MLE; copula; semiparametric efficiency 
Date:  2016–03–18 
URL:  http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/14641&r=ecm 
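The Bernstein polynomial copula family mentioned in the abstract admits a simple closed form, C(u, v) = Σ_{i,j} θ_{ij} B_{i,m}(u) B_{j,m}(v), where B_{i,m} is the Bernstein basis polynomial of degree m. The sketch below is illustrative only, not the authors' estimator: the weight matrix `theta` is a hypothetical example taken from the independence copula on a grid, in which case the Bernstein copula reproduces C(u, v) = uv exactly.

```python
import math

def bernstein_basis(m, i, u):
    """Bernstein basis polynomial B_{i,m}(u) = C(m, i) * u^i * (1 - u)^(m - i)."""
    return math.comb(m, i) * u**i * (1 - u)**(m - i)

def bernstein_copula_cdf(u, v, theta):
    """Evaluate C(u, v) = sum_{i,j} theta[i][j] * B_{i,m}(u) * B_{j,m}(v)."""
    m = len(theta) - 1
    return sum(theta[i][j] * bernstein_basis(m, i, u) * bernstein_basis(m, j, v)
               for i in range(m + 1) for j in range(m + 1))

# Hypothetical weights: the independence copula evaluated on an (m+1) x (m+1) grid.
# In the sieve MLE these weights would instead be estimated from data.
m = 8
theta = [[(i / m) * (j / m) for j in range(m + 1)] for i in range(m + 1)]
```

With these independence weights, `bernstein_copula_cdf(0.3, 0.7, theta)` returns 0.21 = 0.3 × 0.7, and the boundary conditions C(u, 0) = 0 and C(1, 1) = 1 hold, as for any valid copula.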
By:  Koutchade, Philippe; Carpentier, Alain; Femenia, Fabienne 
Abstract:  Accounting for the effects of heterogeneity in microeconometric models has been a major concern in labor economics, empirical industrial organization and trade economics for at least two decades. The microeconometric agricultural production choice models found in the literature largely ignore the impacts of unobserved heterogeneity. This can partly be explained by the dimension of these models, which deal with a large set of choices, e.g., acreage choices, input demands and yield supplies. We propose a random parameter framework to account for unobserved heterogeneity in microeconometric agricultural production choice models. This approach allows accounting for unobserved farm and farmer heterogeneity in a fairly flexible way. We estimate a system of yield supply and acreage choice equations with a panel of French crop growers. Our results show that heterogeneity matters significantly in our empirical application and that ignoring the heterogeneity of farmers’ choice processes can have important impacts on simulation outcomes. Due to the dimension of the estimation problem and the functional form of the considered production choice model, the Simulated Maximum Likelihood approach usually considered in the applied econometrics literature in such contexts is empirically intractable. We show that specific versions of the Stochastic Expectation-Maximization algorithms proposed in the statistics literature are easily implemented for estimating random parameter agricultural production models. 
Keywords:  Heterogeneity, random parameter models, agricultural production choices, Agricultural and Food Policy 
JEL:  Q12 C13 C15 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:ags:iaae15:212015&r=ecm 
By:  Matias Heikkila; Yves Dominicy; Sirkku Pauliina Ilmonen 
Abstract:  Modeling and understanding multivariate extreme events is challenging, but of great importance in various applications, e.g. in biostatistics, climatology, and finance. The separating Hill estimator can be used in estimating the extreme value index of a heavy tailed multivariate elliptical distribution. We consider the asymptotic behavior of the separating Hill estimator under estimated location and scatter. The asymptotic properties of the separating Hill estimator are known under elliptical distribution with known location and scatter. However, the effect of estimation of the location and scatter has previously been examined only in a simulation study. We show, analytically, that the separating Hill estimator is consistent and asymptotically normal under estimated location and scatter, when certain mild conditions are met. 
Keywords:  extreme value theory; Hill estimator; multivariate analysis 
Date:  2015–12 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/221563&r=ecm 
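For context, the classical (univariate) Hill estimator that the separating variant builds on averages log-ratios of the k largest order statistics: γ̂ = (1/k) Σ_{i=1}^{k} log(X_{(n-i+1)} / X_{(n-k)}). A minimal sketch, using exact Pareto draws as a hypothetical test case (this is the textbook estimator, not the paper's separating version with estimated location and scatter):

```python
import math
import random

def hill_estimator(sample, k):
    """Hill estimator of the tail index from the k largest order statistics."""
    xs = sorted(sample, reverse=True)  # descending order statistics
    threshold = xs[k]                  # X_(n-k), the (k+1)-th largest observation
    return sum(math.log(xs[i] / threshold) for i in range(k)) / k

random.seed(42)
# Exact Pareto(alpha=2) draws via inverse transform X = U^(-1/2), U in (0, 1];
# the true extreme value (tail) index is 1/alpha = 0.5.
data = [(1.0 - random.random()) ** -0.5 for _ in range(100_000)]
gamma_hat = hill_estimator(data, k=1_000)
```

With k = 1,000 upper order statistics the estimate lands close to the true index 0.5; in practice the choice of k trades bias against variance.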
By:  Till Weigt; Bernd Wilfling 
Abstract:  This paper formally establishes a new forecast combination approach, which is based on VAR modeling of the forecast errors resulting from alternative forecast models. We apply our approach to volatility forecasting by combining several structural time series models with implied volatility. Using a multi-currency data set, we conduct in-sample and out-of-sample forecasting analyses in order (a) to demonstrate the statistical significance of our approach, and (b) to assess its forecasting superiority over alternative forecasting models and combinations. 
Keywords:  Forecast combination, volatility forecasting, realized volatility, implied volatility, exchange rates 
JEL:  C53 G17 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:cqe:wpaper:4616&r=ecm 
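The idea of modeling forecast errors themselves can be sketched in the simplest scalar case: fit an AR(1) to one model's forecast errors and use the predicted next error to adjust the next forecast. This univariate sketch is a stand-in, not the paper's multivariate VAR machinery, and the simulated error process is hypothetical.

```python
import random

def fit_ar1(errors):
    """OLS slope of e_t on e_{t-1} (no intercept): phi = sum e_t*e_{t-1} / sum e_{t-1}^2."""
    num = sum(errors[t] * errors[t - 1] for t in range(1, len(errors)))
    den = sum(errors[t - 1] ** 2 for t in range(1, len(errors)))
    return num / den

random.seed(1)
phi_true = 0.6
e = [random.gauss(0, 1)]                 # simulated persistent forecast errors
for _ in range(1_999):
    e.append(phi_true * e[-1] + random.gauss(0, 1))

phi_hat = fit_ar1(e)
# Predicted next-period error, which would be added back to the raw forecast:
next_error_forecast = phi_hat * e[-1]
```

If forecast errors are persistent (φ ≠ 0), this correction exploits information that a static combination of the underlying forecasts would discard.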
By:  Miranda, Karen; Martínez Ibáñez, Oscar; Manjón Antolín, Miquel Carlos 
Abstract:  Individual-specific effects and their spatial spillovers are generally not identified in linear panel data models. In this paper we present identification conditions under the assumption that the covariates are correlated with the individual-specific effects. We also derive appropriate GLS and IV estimators for the resulting correlated random effects spatial panel data model with strictly exogenous and predetermined explanatory variables, respectively. Lastly, we illustrate the proposed estimators using a Cobb-Douglas production function specification and US state-level data from Munnell (1990). As in previous studies, we find no evidence of public capital spillovers. However, public capital does play a role in the positive spatial contagion of the otherwise negative spillovers that states produce in and receive from their neighbours. 
Keywords:  correlated random effects, spatial panel data, panel data analysis 
JEL:  C23 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:urv:wpaper:2072/261028&r=ecm 
By:  Antti Tanskanen; Jani Lukkarinen; Kari Vatanen 
Abstract:  Factor models are commonly used in financial applications to analyze portfolio risk and to decompose it into loadings on risk factors. A linear factor model often depends on a small number of carefully chosen factors, and it has been assumed that an arbitrary selection of factors does not yield a feasible factor model. We develop a statistical factor model, the random factor model, in which factors are chosen at random based on the random projection method. Random selection of factors has the important consequence that the factors are almost orthogonal with respect to each other. The developed random factor model is expected to preserve covariance between time series. We derive probabilistic bounds for the accuracy of the random factor representation of time series, their cross-correlations and covariances. As an application of the random factor model, we analyze the reproduction of correlation coefficients in the well-diversified Russell 3000 equity index using the random factor model. Comparison with principal component analysis (PCA) shows that the random factor model requires significantly fewer factors to provide an equally accurate reproduction of correlation coefficients. This occurs despite the finding that PCA reproduces single equity return time series more faithfully than the random factor model. The accuracy of a random factor model is not very sensitive to which particular set of randomly chosen factors is used. A more general kind of universality of random factor models is also present: it does not much matter which particular method is used to construct the random factor model; the accuracy of the resulting factor model is almost identical. 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1604.05896&r=ecm 
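The core mechanism, random projection approximately preserving inner products and hence correlations, can be demonstrated in a few lines. This is a generic Johnson-Lindenstrauss-style sketch under assumptions of our own (synthetic demeaned series, Gaussian random factors), not the authors' model or data.

```python
import math
import random

def project(series, R):
    """Coordinates of a series on k random factors: <r_j, x> / sqrt(k)."""
    k = len(R)
    return [sum(r * x for r, x in zip(row, series)) / math.sqrt(k) for row in R]

def corr(x, y):
    """Sample Pearson correlation."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

random.seed(0)
T, k = 100, 5_000
common = [random.gauss(0, 1) for _ in range(T)]          # shared driver
x = [c + 0.5 * random.gauss(0, 1) for c in common]       # two correlated synthetic
y = [c + 0.5 * random.gauss(0, 1) for c in common]       # "return" series
x = [a - sum(x) / T for a in x]                          # demean so correlation
y = [b - sum(y) / T for b in y]                          # equals cosine similarity
R = [[random.gauss(0, 1) for _ in range(T)] for _ in range(k)]  # random factor loadings

rho_exact = corr(x, y)
rho_random = corr(project(x, R), project(y, R))
```

With k = 5,000 random factors the projected correlation `rho_random` matches `rho_exact` to within a few hundredths, consistent with the O(1/sqrt(k)) probabilistic bounds the abstract refers to.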
By:  Dimitrios P. Louzis (Bank of Greece) 
Abstract:  This article proposes methods for estimating a Bayesian vector autoregression (VAR) model with an informative steady state prior which also accounts for possible structural changes in the long-term trend of the macroeconomic variables. I show that, overall, the proposed time-varying steady state VAR model can lead to superior point and density macroeconomic forecasting compared to constant steady state VAR specifications. 
Keywords:  Steady states; time-varying parameters; macroeconomic forecasting 
JEL:  C32 
Date:  2016–03 
URL:  http://d.repec.org/n?u=RePEc:bog:wpaper:204&r=ecm 
By:  Coughlin, Cletus C. (Federal Reserve Bank of St. Louis); Novy, Dennis (University of Warwick, UK) 
Abstract:  Trade data are typically reported at the level of regions or countries and are therefore aggregates across space. In this paper, we investigate the sensitivity of standard gravity estimation to spatial aggregation. We build a model in which initially symmetric micro regions are combined to form aggregated macro regions. We then apply the model to the large literature on border effects in domestic and international trade. Our theory shows that larger countries are systematically associated with smaller border effects. The reason is that due to spatial frictions, aggregation across space increases the relative cost of trading within borders. The cost of trading across borders therefore appears relatively smaller. This mechanism leads to border effect heterogeneity and is independent of multilateral resistance effects in general equilibrium. Even if no border frictions exist at the micro level, gravity estimation on aggregate data can still produce large border effects. We test our theory on domestic and international trade flows at the level of U.S. states. Our results confirm the model’s predictions, with quantitatively large effects. 
Keywords:  Gravity; Geography; Borders; Trade Costs; Heterogeneity; Home Bias; Spatial Attenuation; Modifiable Areal Unit Problem (MAUP) 
JEL:  F10 F15 R12 
Date:  2016–04–01 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2016006&r=ecm 
By:  Audrey Laporte; Adrian Rohit Dass; Brian Ferguson 
Keywords:  rational addiction model, dynamic time series, dynamic panel 
JEL:  I12 C22 C23 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:cch:wpaper:160004&r=ecm 
By:  Li, Hui 
Abstract:  The strength of dependence between random variables is an important property that is useful in many areas. Various measures have been proposed, most of which detect divergence from independence. However, a true measure of dependence should also be able to characterize complete dependence, where one variable is a function of the other. Most previous measures are symmetric, which is shown to be insufficient to capture complete dependence. A new type of nonsymmetric dependence measure is presented that can unambiguously identify both independence and complete dependence. The original Rényi axioms for symmetric measures are reviewed and modified for nonsymmetric measures. 
Keywords:  Nonsymmetric dependence measure, complete dependence, ∗-product on copulas, Data Processing Inequality (DPI) 
JEL:  C02 
Date:  2016–02–26 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:69735&r=ecm 
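A concrete example of the class of measures the abstract describes (though not the paper's own proposal) is Chatterjee's rank coefficient ξ, a nonsymmetric measure that is near 0 under independence and tends to 1 exactly when Y is a function of X. A minimal implementation for continuous data without ties:

```python
import random

def chatterjee_xi(xs, ys):
    """Chatterjee's nonsymmetric rank coefficient (no-ties version):
    ~0 under independence, -> 1 when y is a function of x."""
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])   # reorder observations by x
    y_sorted = [ys[i] for i in order]
    rank_of = {v: r for r, v in enumerate(sorted(y_sorted), start=1)}
    r = [rank_of[v] for v in y_sorted]              # ranks of y in x-order
    return 1 - 3 * sum(abs(r[i + 1] - r[i]) for i in range(n - 1)) / (n * n - 1)

xs = [i / 1000 for i in range(1000)]
xi_functional = chatterjee_xi(xs, xs)     # y = x: complete dependence, xi near 1

random.seed(3)
ys = xs[:]
random.shuffle(ys)
xi_independent = chatterjee_xi(xs, ys)    # shuffled y: ~independent, xi near 0
```

The asymmetry is the point: ξ(X, Y) asks whether Y is determined by X, not the reverse, which is exactly what symmetric measures like Pearson or Spearman correlation cannot express.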
By:  Javier Hidalgo; Myung Hwan Seo 
Abstract:  We consider an omnibus test for the correct specification of the dynamics of a sequence in a lattice. As happens with causal models and d = 1, its asymptotic distribution is not pivotal and depends on the estimator of the unknown parameters of the model under the null hypothesis. The first main goal of the paper is to provide a transformation to obtain an asymptotic distribution that is free of nuisance parameters. Secondly, we propose a bootstrap analog of the transformation and show its validity. Thirdly, we discuss the results when the sequence comprises the errors of a parametric regression model. As a by-product, we also discuss the asymptotic normality of the least squares estimator of the parameters of the regression model under very mild conditions. Finally, we present a small Monte Carlo experiment to shed some light on the finite sample behavior of our test. 
JEL:  C21 C23 
Date:  2015–04 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:66104&r=ecm 
By:  David T. Frazier; Éric Renault 
Abstract:  The standard description of two-step extremum estimation amounts to plugging in a first-step estimator of nuisance parameters to simplify the optimization problem and then deducing a user-friendly, but potentially inefficient, estimator for the parameters of interest. In this paper, we consider a more general setting of two-step estimation where we do not necessarily have “nuisance parameters” but rather awkward occurrences of the parameters of interest. The efficiency problem associated with two-step estimators in this context is more difficult than with standard nuisance parameters: even if the true unknown value of the parameters were plugged in to alleviate the awkward occurrences of the parameters, the resulting second-step estimator may not be efficient. In addition, standard approaches to restoring efficiency for two-step procedures may not work due to a consistency issue. To alleviate this potential issue, we propose a new computationally simple two-step estimation procedure that relies on targeting and penalization to enforce consistency, with the second-step estimators maintaining asymptotic efficiency. We compare this new method with existing iterative methods in the framework of copula models and asset pricing models. Simulation results illustrate that this new method performs better than existing iterative procedures and is (nearly) computationally equivalent. 
Keywords:  Targeting, Penalization, Multivariate Time Series Models, Asset Pricing 
Date:  2016–04–08 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2016s16&r=ecm 
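The structure of a penalized second step can be illustrated with a stylized scalar example: minimize a second-step criterion plus a quadratic penalty that targets the first-step estimate. The criterion Q, the target θ̂₁ and the penalty weight λ below are hypothetical stand-ins chosen for a clean closed form, not the paper's actual objective.

```python
def penalized_second_step(Q, theta1, lam, grid):
    """Minimize Q(theta) + lam * (theta - theta1)^2 over a grid.
    The penalty 'targets' the consistent first-step estimate theta1,
    enforcing consistency of the second-step minimizer."""
    return min(grid, key=lambda th: Q(th) + lam * (th - theta1) ** 2)

# Stand-in second-step criterion with unpenalized minimizer at 2.0,
# and a hypothetical consistent first-step target at 1.0:
Q = lambda th: (th - 2.0) ** 2
theta1, lam = 1.0, 1.0
grid = [i / 1000 for i in range(4001)]   # theta in [0, 4], step 0.001
theta_hat = penalized_second_step(Q, theta1, lam, grid)
```

Here the closed-form minimizer of (θ − 2)² + (θ − 1)² is θ = 1.5: the penalty pulls the second-step estimate toward the first-step target, with λ governing how strongly consistency is enforced.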