
New Economics Papers on Econometrics 
By:  Anna Gloria Billé (Free University of Bozen-Bolzano, Faculty of Economics and Management); Samantha Leorato (University of Rome Tor Vergata, Department of Economics and Finance) 
Abstract:  In this paper we propose a partial MLE for a general spatial nonlinear probit model, i.e. the SARAR(1,1)-probit, defined through a SARAR(1,1) latent linear model. This model encompasses the SAE(1)-probit model, considered by Wang et al. (2013), and the more interesting SAR(1)-probit model. We perform a complete asymptotic analysis, and account for a possible finite-sum approximation of the covariance matrix (quasi-MLE) to speed up the computation. Moreover, we address the issue of the choice of the groups (couples, in our case) by proposing an algorithm based on a minimum KL-divergence problem. Finally, we provide appropriate definitions of marginal effects for this setting. Finite-sample properties of the estimator are studied through a simulation exercise and a real-data application. In our simulations, we consider both sparse and dense matrices for the specification of the true spatial models, as well as cases of model misspecification due to different assumed weighting matrices. 
Keywords:  spatial autoregressive-regressive probit model, nonlinear modeling, SARAR, partial maximum likelihood, quasi maximum likelihood, marginal effects 
JEL:  C13 C31 C35 C51 
Date:  2017–12 
URL:  http://d.repec.org/n?u=RePEc:bzn:wpaper:bemps44&r=ecm 
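As a minimal sketch of the latent spatial structure described above, the snippet below simulates a SAR(1)-probit outcome: a latent linear model with a spatially autocorrelated outcome, thresholded at zero. The weight matrix, parameter values, and line-graph neighbour structure are illustrative assumptions, not the paper's settings or estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, beta = 100, 0.4, 1.0  # toy values, not the paper's settings

# Row-normalised nearest-neighbour weight matrix on a line graph.
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)

x = rng.normal(size=n)
eps = rng.normal(size=n)
# Latent SAR(1) outcome: y* = (I - rho*W)^(-1) (x*beta + eps); observe y = 1{y* > 0}.
y_star = np.linalg.solve(np.eye(n) - rho * W, x * beta + eps)
y = (y_star > 0).astype(int)
```

Estimating (rho, beta) from (y, x, W) is the hard part the paper addresses; the simulation only shows why the covariance of y* is dense, which motivates the finite-sum approximation mentioned in the abstract.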
By:  Helmut Lütkepohl; Tomasz Woźniak 
Abstract:  In order to identify structural shocks that affect economic variables, restrictions need to be imposed on the parameters of structural vector autoregressive (SVAR) models. Economic theory is the primary source of such restrictions. However, only overidentifying restrictions can be tested with statistical methods, which limits the statistical validation of many just-identified SVAR models. In this study, Bayesian inference is developed for SVAR models in which the structural parameters are identified via Markov-switching heteroskedasticity. In such a model, restrictions that are just-identifying in the homoskedastic case become overidentifying and can be tested. A set of parametric restrictions is derived under which the structural matrix is globally identified, and a Savage-Dickey density ratio is used to assess the validity of the identification conditions. For that purpose, a new probability distribution is defined that generalizes the beta, F, and compound gamma distributions. As an empirical example, monetary models are compared using heteroskedasticity as an additional device for identification. The empirical results support models with money in the interest rate reaction function. 
Keywords:  Identification through heteroskedasticity, Markov-switching models, Savage-Dickey density ratio, monetary policy shocks, Divisia money 
JEL:  C11 C12 C32 E32 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1707&r=ecm 
By:  Timothy B. Armstrong (Cowles Foundation, Yale University); Michal Kolesár (Princeton University) 
Abstract:  We consider estimation and inference on average treatment effects under unconfoundedness, conditional on the realizations of the treatment variable and covariates. We derive finite-sample optimal estimators and confidence intervals (CIs) under the assumption of normal errors when the conditional mean of the outcome variable is constrained only by nonparametric smoothness and/or shape restrictions. When the conditional mean is restricted to be Lipschitz with a large enough bound on the Lipschitz constant, we show that the optimal estimator reduces to a matching estimator with the number of matches set to one. In contrast to conventional CIs, our CIs use a larger critical value that explicitly takes into account the potential bias of the estimator; this adjustment is needed for correct coverage in finite samples and, in certain cases, asymptotically. We give conditions under which root-n inference is impossible, and we provide versions of our CIs that are feasible and asymptotically valid with unknown error distribution, including in this nonregular case. We apply our results in a numerical illustration and in an application to the National Supported Work Demonstration. 
Keywords:  Semiparametric estimation, Relative efficiency, Matching estimators, Treatment effects 
JEL:  C14 
Date:  2017–12 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:3015&r=ecm 
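The single-match estimator that the abstract shows to be optimal under a large Lipschitz bound can be sketched in a few lines. This is a generic one-nearest-neighbour matching estimator on a scalar covariate, under assumptions of my own (toy data, Euclidean distance); it is not the paper's finite-sample optimal procedure, which also involves the bias-adjusted CIs.

```python
import numpy as np

def matching_ate(y, d, x):
    """ATE via single nearest-neighbour matching on a scalar covariate x.

    For each unit, the missing potential outcome is imputed with the
    outcome of the closest unit in the opposite treatment arm.
    """
    y, d, x = map(np.asarray, (y, d, x))
    treated = np.where(d == 1)[0]
    control = np.where(d == 0)[0]
    effects = []
    for i in range(len(y)):
        pool = control if d[i] == 1 else treated
        j = pool[np.argmin(np.abs(x[pool] - x[i]))]  # closest opposite-arm unit
        effects.append(y[i] - y[j] if d[i] == 1 else y[j] - y[i])
    return float(np.mean(effects))
```

With overlapping covariate distributions and a dense design, the estimator recovers a constant treatment effect up to the match discrepancy in x, which is exactly the bias term the paper's larger critical values account for.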
By:  Pierre Chausse (Department of Economics, University of Waterloo); George Luta (Department of Biostatistics, Bioinformatics and Biomathematics, Georgetown University) 
Abstract:  In this paper, we propose a one-step method for estimating the average treatment effect when the assignment to treatment is not random. We use a misspecified generalized empirical likelihood (GEL) setup in which we constrain the sample to be balanced. We show that the implied probabilities we obtain play a role similar to the weights from weighting methods based on the propensity score. In Monte Carlo simulations, we show that GEL dominates many existing methods in terms of bias and root mean squared error. We then apply our method to the training program studied by Lalonde (1986). 
JEL:  C21 C13 J01 
Date:  2017–12 
URL:  http://d.repec.org/n?u=RePEc:wat:wpaper:1707&r=ecm 
By:  Kufenko, Vadim; Prettner, Klaus 
Abstract:  We assess the bias and the efficiency of state-of-the-art dynamic panel data estimators by means of model-based Monte Carlo simulations. The underlying data-generating process consists of a standard theoretical growth model of income convergence based on capital accumulation. While we impose a true underlying speed of convergence of around 5% in our simulated data, the results obtained with the different panel data estimators range from 0.03% to 17%. This implies a range of the half-life of a given income gap from 4 years up to several hundred years. In terms of the squared percent error, the pooled OLS, fixed effects, random effects, and difference GMM estimators perform worst, while the system GMM estimator with the full matrix of instruments and the corrected least squares dummy variable (LSDVC) estimator perform best relative to the other methods under consideration. The LSDVC estimator, initialized by the system GMM estimator with the full matrix of instruments, is the only one capturing the true speed of convergence within the 95% confidence interval for all scenarios. All other estimators yield point estimates that are substantially different from the true values and confidence intervals that do not include the true value in most scenarios. 
Keywords:  Monte Carlo Simulation, Dynamic Panel Data Estimators, Estimator Bias, Estimator Efficiency, International Income Convergence 
JEL:  C10 C33 O41 O47 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:zbw:tuweco:072017&r=ecm 
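The wide spread of convergence estimates reported above is driven largely by the well-known downward (Nickell) bias of the within/fixed-effects estimator in short dynamic panels. The Monte Carlo below is a generic illustration of that bias with parameter values of my own, not a replication of the paper's growth-model design.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, gamma = 200, 5, 0.8  # many units, short panel; toy persistence value

def fe_estimate():
    """Within (fixed-effects) estimate of the AR(1) coefficient in a panel."""
    alpha = rng.normal(size=N)               # unit-specific fixed effects
    y = np.zeros((N, T + 1))
    for t in range(1, T + 1):
        y[:, t] = gamma * y[:, t - 1] + alpha + rng.normal(size=N)
    y_lag, y_cur = y[:, :-1], y[:, 1:]
    # Within transformation: demean each unit's series, then pool OLS.
    yl = y_lag - y_lag.mean(axis=1, keepdims=True)
    yc = y_cur - y_cur.mean(axis=1, keepdims=True)
    return (yl * yc).sum() / (yl ** 2).sum()

est = np.mean([fe_estimate() for _ in range(200)])  # well below gamma for small T
```

Because the demeaned lag is correlated with the demeaned error, the estimate is biased toward zero at order 1/T, which in a convergence regression translates into a badly overstated speed of convergence, as the paper documents.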
By:  RojasPerilla, Natalia; Pannier, Sören; Schmid, Timo; Tzavidis, Nikos 
Abstract:  Small area models typically depend on the validity of model assumptions. For example, a commonly used version of the Empirical Best Predictor relies on the Gaussian assumptions of the error terms of the linear mixed model, a feature rarely observed in applications with real data. The present paper proposes to tackle the potential lack of validity of the model assumptions by using data-driven scaled transformations as opposed to transformations chosen ad hoc. Different types of transformations are explored, the estimation of the transformation parameters is studied in detail under a linear mixed model, and transformations are used in small area prediction of linear and nonlinear parameters. The use of scaled transformations is crucial as it allows for fitting the linear mixed model with standard software and hence simplifies the work of the data analyst. Mean squared error estimation that accounts for the uncertainty due to the estimation of the transformation parameters is explored using parametric and semiparametric (wild) bootstrap. The proposed methods are illustrated using real survey and census data for estimating income deprivation parameters for municipalities in the Mexican state of Guerrero. Extensive simulation studies and the results from the application show that using carefully selected, data-driven transformations can improve small area estimation. 
Keywords:  small area estimation, linear mixed regression model, MSE estimation, data-driven transformations, poverty mapping, maximum likelihood theory 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:zbw:fubsbe:201730&r=ecm 
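As a minimal example of a data-driven transformation, the snippet below estimates a Box-Cox parameter by profile maximum likelihood on a simple i.i.d. sample. The grid search, sample, and the choice of the (unscaled) Box-Cox family are illustrative assumptions; the paper works with scaled transformations inside a linear mixed model.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.exp(rng.normal(size=2000))  # log-normal sample: lambda = 0 is (near) optimal
logx = np.log(x)

def profile_loglik(lam):
    """Profile log-likelihood of the Box-Cox parameter (sigma^2 concentrated out)."""
    y = logx if lam == 0 else (x ** lam - 1) / lam   # Box-Cox transform
    # -n/2 * log(residual variance) plus the Jacobian term of the transform.
    return -0.5 * len(x) * np.log(np.var(y)) + (lam - 1) * logx.sum()

grid = np.linspace(-1, 1, 201)
lam_hat = grid[np.argmax([profile_loglik(l) for l in grid])]  # close to 0 here
```

Choosing the parameter by maximum likelihood rather than fixing it in advance is the "data-driven" idea; the paper's contribution is doing this under a mixed model and propagating the resulting uncertainty into the MSE estimates.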
By:  Offer Lieberman (BarIlan University); Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  Two approaches have dominated formulations designed to capture small departures from unit root autoregressions. The first involves deterministic departures that include local-to-unity (LUR) and mildly (or moderately) integrated (MI) specifications where departures shrink to zero as the sample size tends to infinity. The second approach allows for stochastic departures from unity, leading to stochastic unit root (STUR) specifications. This paper introduces a hybrid local stochastic unit root (LSTUR) specification that has both LUR and STUR components and allows for endogeneity in the time-varying coefficient that introduces structural elements to the autoregression. This hybrid model generates trajectories that, upon normalization, have nonlinear diffusion limit processes that link closely to models that have been studied in mathematical finance, particularly with respect to option pricing. It is shown that some LSTUR parameterizations have a mean and variance which are the same as a random walk process but with a kurtosis exceeding 3, a feature which is consistent with much financial data. We develop limit theory and asymptotic expansions for the process and document how inference in LUR and STUR autoregressions is affected asymptotically by ignoring one or the other component in the more general hybrid generating mechanism. In particular, we show how confidence belts constructed from the LUR model are affected by the presence of a STUR component in the generating mechanism. The import of these findings for empirical research is explored in an application to the spreads on US investment grade corporate debt. 
Keywords:  Autoregression, Nonlinear diffusion, Stochastic unit root, Time-varying coefficient 
JEL:  C22 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:3013&r=ecm 
By:  Zacharias Psaradakis (Birkbeck, University of London); Marián Vávra (National Bank of Slovakia) 
Abstract:  The paper considers the problem of testing for normality of the one-dimensional marginal distribution of a strictly stationary and weakly dependent stochastic process. The possibility of using an autoregressive sieve bootstrap procedure to obtain critical values and p-values for normality tests is explored. The small-sample properties of a variety of tests are investigated in an extensive set of Monte Carlo experiments. The bootstrap version of the classical skewness-kurtosis test is shown to have the best overall performance in small samples. 
Keywords:  Autoregressive sieve bootstrap; Normality test; Weak dependence. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:bbk:bbkefp:1706&r=ecm 
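A bare-bones version of the idea can be sketched as follows: fit an AR(p) approximation to the series, then generate bootstrap series from the fitted AR to obtain a reference distribution for the skewness-kurtosis statistic. This is my own simplified variant, which imposes the Gaussian null by drawing bootstrap innovations from a normal law with the residual variance; the paper's procedure, lag selection, and innovation resampling scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

def jb_stat(z):
    """Skewness-kurtosis (Jarque-Bera type) statistic."""
    z = (z - z.mean()) / z.std()
    s, k = (z ** 3).mean(), (z ** 4).mean()
    return len(z) * (s ** 2 / 6 + (k - 3) ** 2 / 24)

def fit_ar(y, p):
    """OLS fit of an AR(p); returns coefficients and residuals."""
    Y = y[p:]
    X = np.column_stack([y[p - j:len(y) - j] for j in range(1, p + 1)])
    phi = np.linalg.lstsq(X, Y, rcond=None)[0]
    return phi, Y - X @ phi

def sieve_bootstrap_pvalue(y, p=2, B=199):
    """AR sieve bootstrap p-value for marginal normality (Gaussian-null variant)."""
    phi, resid = fit_ar(y, p)
    obs = jb_stat(y)
    exceed = 0
    for _ in range(B):
        e = rng.normal(0.0, resid.std(), size=len(y) + 50)  # 50 burn-in draws
        yb = np.zeros(len(y) + 50)
        for t in range(p, len(yb)):
            yb[t] = yb[t - p:t][::-1] @ phi + e[t]
        if jb_stat(yb[50:]) >= obs:
            exceed += 1
    return (exceed + 1) / (B + 1)

def simulate_ar1(innov, phi=0.5):
    y = np.zeros(len(innov))
    for t in range(1, len(y)):
        y[t] = phi * y[t - 1] + innov[t]
    return y

p_gauss = sieve_bootstrap_pvalue(simulate_ar1(rng.normal(size=400)))
p_chi = sieve_bootstrap_pvalue(simulate_ar1(rng.chisquare(1, size=400) - 1))
```

The point of the sieve step is that the bootstrap series inherit the serial dependence of the data, so the reference distribution of the test statistic is valid under weak dependence, unlike the usual i.i.d. critical values.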
By:  Pierre Chausse (Department of Economics, University of Waterloo) 
Abstract:  In this paper, we explore the finite sample properties of the generalized empirical likelihood for a continuum, applied to a linear model with endogenous regressors and many discrete moment conditions. In particular, we show that the estimator from this regularized version of GEL has finite moments. It therefore solves the no-moment problem of empirical likelihood. We propose a data-driven method to select the regularization parameter based on a cross-validation criterion, and show that the method outperforms many existing methods when the number of instruments exceeds 20. 
JEL:  C13 C30 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:wat:wpaper:1708&r=ecm 
By:  Breunig, Christoph (Humboldt-Universität zu Berlin); Mammen, Enno (Universität Heidelberg); Simoni, Anna (CREST) 
Abstract:  This paper addresses the problem of estimating a nonparametric regression function from selectively observed data when selection is endogenous. Our approach relies on independence between covariates and selection conditionally on potential outcomes. Endogeneity of regressors is also allowed for. In both the exogenous and the endogenous case, consistent two-step estimation procedures are proposed and their rates of convergence are derived. The pointwise asymptotic distributions of the estimators are established. In addition, bootstrap uniform confidence bands are obtained. Finite sample properties are illustrated in a Monte Carlo simulation study and an empirical illustration. 
Keywords:  endogenous selection; instrumental variable; sieve minimum distance; regression estimation; inverse problem; inverse probability weighting; convergence rate; asymptotic normality; bootstrap uniform confidence bands; 
JEL:  C14 C26 
Date:  2017–12–20 
URL:  http://d.repec.org/n?u=RePEc:rco:dpaper:58&r=ecm 
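One of the keywords above, inverse probability weighting, can be illustrated in its simplest form: reweighting selected observations by the inverse of the observation probability to undo selection on a covariate. The data-generating process and the assumption of a known propensity are mine; the paper's setting (endogenous selection, sieve minimum distance estimation) is considerably more general.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
x = rng.uniform(-1, 1, size=n)
y = 2 + x + rng.normal(scale=0.1, size=n)    # outcome; population mean is 2

# Selection depends on x (propensity assumed known here for illustration).
p = 1 / (1 + np.exp(-2 * x))                 # probability of being observed
s = rng.uniform(size=n) < p                  # selection indicator

naive = y[s].mean()                          # biased: over-represents high-x units
ipw = (s * y / p).sum() / (s / p).sum()      # Hajek inverse-probability-weighted mean
```

The naive mean of the observed outcomes is biased upward because selection favours large x, while the weighted mean recovers the population mean; with unknown or endogenous selection the propensity itself must be estimated, which is where the paper's machinery comes in.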
By:  Galina Besstremyannaya (CEFIR at New Economic School); Jaak Simm (University of Leuven); Sergei Golovan (New Economic School) 
Abstract:  The paper proposes a bootstrap methodology for robust estimation of cost efficiency in data envelopment analysis. Our algorithm resamples "naive" input-oriented efficiency scores, rescales original inputs to bring them to the frontier, and then re-estimates cost efficiency scores for the rescaled inputs. We consider the cases with and without environmental variables. Simulation analyses with a multi-input multi-output production function demonstrate consistency of the new algorithm in terms of the coverage of the confidence intervals for true cost efficiency. Finally, we offer real-data estimates for the Japanese banking industry. Using a nationwide sample of Japanese banks in 2009, we show that the bias of cost efficiency scores may be linked to the bank charter and the presence of the environmental variables in the model. A package `rDEA', developed in the R language, is available from GitHub and the CRAN repository. 
Keywords:  data envelopment analysis, cost efficiency, bias, bootstrap, banking 
JEL:  C44 C61 G21 
Date:  2017–12 
URL:  http://d.repec.org/n?u=RePEc:cfr:cefirw:w0244&r=ecm 
By:  Xiao Xiao; Chen Zhou 
Abstract:  This paper investigates the maximum entropy method for estimating the option implied volatility, skewness and kurtosis. The maximum entropy method allows for nonparametric estimation of the risk neutral distribution and construction of confidence intervals around the implied volatility. A numerical study shows that the maximum entropy method outperforms existing methods such as the Black-Scholes model and the model-free method when the underlying risk neutral distribution exhibits heavy tails and skewness. By applying this method to S&P 500 index options, we find that the entropy-based implied volatility outperforms the Black-Scholes implied volatility and model-free implied volatility in terms of in-sample fit and out-of-sample predictive power. The differences between entropy-based and model-free implied moments can be explained by the level of the higher-order implied moments of the underlying distribution. 
Keywords:  Option pricing; risk neutral distribution; higher order moments 
JEL:  C14 G13 G17 
Date:  2017–12 
URL:  http://d.repec.org/n?u=RePEc:dnb:dnbwpp:581&r=ecm 
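The core maximum-entropy idea can be shown on a toy discrete problem: among all distributions on a fixed support with a given mean, the entropy-maximizing one is the exponentially tilted (Gibbs) distribution, found by solving for a single Lagrange multiplier. The support, target mean, and bisection solver below are my own illustrative choices; the paper applies the principle to risk-neutral densities with option-price constraints.

```python
import numpy as np

# Support points and a target mean the distribution must match.
x = np.linspace(-3, 3, 61)
target_mean = 0.5

def tilted(lmb):
    """Exponentially tilted distribution on the support x."""
    w = np.exp(lmb * x)
    return w / w.sum()

# Solve for the Lagrange multiplier by bisection: the mean is increasing in lmb.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if (tilted(mid) * x).sum() < target_mean:
        lo = mid
    else:
        hi = mid
p = tilted(0.5 * (lo + hi))   # max-entropy distribution with the required mean
```

Adding further moment constraints (variance, skewness, kurtosis, or option prices) adds one multiplier per constraint, which is how the implied risk-neutral distribution in the paper is pinned down.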
By:  Antoine Lejay (TOSCA); Paolo Pigato (WIAS) 
Abstract:  In financial markets, low prices are generally associated with high volatilities and vice versa; this well-known stylized fact is usually referred to as the leverage effect. We propose a local volatility model, given by a stochastic differential equation with piecewise constant coefficients, which accounts for leverage and mean-reversion effects in the dynamics of the prices. This model exhibits a regime switch in the dynamics according to a certain threshold. It can be seen as a continuous-time version of the Self-Exciting Threshold Autoregressive (SETAR) model. We propose an estimation procedure for the volatility and drift coefficients as well as for the threshold level. Tests are performed on the daily prices of 21 assets. They show empirical evidence for leverage and mean-reversion effects, consistent with the results in the literature. 
Date:  2017–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1712.08329&r=ecm 
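A discrete-time caricature of the threshold idea: simulate a two-regime (SETAR-like) autoregression whose persistence changes at a threshold, then recover the threshold by least squares over a grid of candidate levels. Parameter values and the quantile grid are my own assumptions; the paper estimates drift, volatility, and threshold for a continuous-time diffusion.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4000
# Two-regime series: stronger persistence below the (true) threshold 0.
y = np.zeros(n)
for t in range(1, n):
    phi = 0.9 if y[t - 1] < 0 else 0.3
    y[t] = phi * y[t - 1] + rng.normal(scale=0.5)

def ssr(r):
    """Sum of squared residuals when splitting the AR(1) fit at threshold r."""
    lo_mask = y[:-1] < r
    total = 0.0
    for mask in (lo_mask, ~lo_mask):
        yl, yc = y[:-1][mask], y[1:][mask]
        phi_hat = (yl @ yc) / (yl @ yl)          # regime-specific OLS slope
        total += ((yc - phi_hat * yl) ** 2).sum()
    return total

# Restrict candidates to interior quantiles so both regimes keep observations.
grid = np.quantile(y, np.linspace(0.15, 0.85, 71))
r_hat = grid[np.argmin([ssr(g) for g in grid])]  # least-squares threshold estimate
```

The grid restriction mirrors the usual trimming in threshold estimation: without it, a candidate near the extremes leaves one regime with too few observations to fit.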
By:  Anshul Verma; Riccardo Junior Buonocore; Tiziana di Matteo 
Abstract:  We introduce a new factor model for log volatilities that performs dimensionality reduction and considers contributions globally, through the market, and locally, through cluster structure and its interactions. We do not assume a priori the number of clusters in the data; instead we use the Directed Bubble Hierarchical Tree (DBHT) algorithm to fix the number of factors. We use the factor model and a new integrated nonparametric proxy to study how volatilities contribute to volatility clustering. Globally, only the market contributes to the volatility clustering. Locally, for some clusters, the cluster itself contributes statistically to volatility clustering. This is significantly advantageous over other factor models, since the factors can be chosen statistically, whilst also keeping economically relevant factors. Finally, we show that the log volatility factor model explains a similar amount of memory to a Principal Components Analysis (PCA) factor model and an exploratory factor model. 
Date:  2017–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1712.02138&r=ecm 
By:  Klein, Ingo 
Abstract:  Jaynes (1957a,b) formulates the maximum entropy (ME) principle as the search for a distribution maximizing a given entropy under some given constraints. Kapur (1984) and Kesavan & Kapur (1989) introduce the generalized maximum entropy principle as the derivation of an entropy for which a given distribution has the maximum entropy property under some given constraints. Both principles will be considered for cumulative entropies. Such entropies depend either on the distribution (direct), on the survival function (residual), or on both (paired). Maximizing this entropy without any constraint gives an extremely U-shaped (= bipolar) distribution. Under the constraint of fixed mean and variance, maximizing the cumulative entropy tries to transform a distribution in the direction of a bipolar distribution as far as the constraints allow. A bipolar distribution represents so-called contradictory information, in contrast to minimum or no information. Only a few maximum entropy distributions for cumulative entropies have been derived in the literature. We extend the results to well-known flexible distributions (like the generalized logistic distribution) and derive some special distributions (like the skewed logistic, the skewed Tukey λ and the extended Burr XII distribution). The generalized maximum entropy principle will be applied to the generalized Tukey λ distribution and the Fechner family of skewed distributions. Finally, cumulative entropies will be estimated under the assumption that the data were drawn from an ME distribution. This estimator will be applied to the daily S&P 500 returns and the time duration between mine explosions. 
Keywords:  cumulative entropy, maximum entropy distribution, generalized Tukey λ distribution, generalized logistic distribution, skewed logistic distribution, skewed Tukey λ distribution, skewed normal distribution, Weibull distribution, extended Burr XII distribution 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:zbw:iwqwdp:252017&r=ecm 
By:  Harald Oberhofer (WIFO); Michael Pfaffermayr (WIFO) 
Abstract:  This paper proposes a new panel data structural gravity approach for estimating the trade and welfare effects of Brexit. The suggested Constrained Poisson Pseudo Maximum Likelihood Estimator exhibits some useful properties for trade policy analysis and allows one to obtain estimates and confidence intervals which are consistent with structural trade theory. Assuming different counterfactual post-Brexit scenarios, our main findings suggest that the UK's exports of goods to the EU are likely to decline within a range between 7.2 percent and 45.7 percent (the EU's exports to the UK by 5.9 percent to 38.2 percent) six years after Brexit has taken place. For the UK, the negative trade effects are only partially offset by an increase in domestic goods trade and trade with third countries, inducing a decline in the UK's real income between 1.4 percent and 5.7 percent under the hard Brexit scenario. The estimated welfare effects for the EU are negligible in magnitude and statistically not different from zero. 
Keywords:  Constrained Poisson Pseudo Maximum Likelihood Estimation, Panel Data, International Trade, Structural Gravity Estimation, Trade Policy, Brexit 
Date:  2017–12–19 
URL:  http://d.repec.org/n?u=RePEc:wfo:wpaper:y:2017:i:546&r=ecm 
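The Poisson pseudo-maximum-likelihood (PPML) estimator at the heart of the approach above solves the score equations of a Poisson regression, which Newton-Raphson handles in a few lines. The simulated data and the two-regressor design below are my own toy setup, not the paper's constrained panel gravity specification with its theory-consistent restrictions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
# Toy gravity-style design: intercept plus one regressor (e.g. log distance).
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, -0.7])
y = rng.poisson(np.exp(X @ beta_true))       # count-like trade flows

# Poisson pseudo-maximum likelihood via Newton-Raphson on the score equations.
beta = np.zeros(2)
for _ in range(50):
    mu = np.exp(X @ beta)                    # conditional mean exp(X beta)
    grad = X.T @ (y - mu)                    # score
    hess = X.T @ (X * mu[:, None])           # (negative) Hessian
    step = np.linalg.solve(hess, grad)
    beta = beta + step
    if np.abs(step).max() < 1e-10:
        break
```

PPML only requires the conditional mean to be correctly specified, which is why it is robust to heteroskedasticity and handles zero flows naturally; the constrained version in the paper additionally imposes the structural gravity restrictions during estimation.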
By:  Massimo Franchi ("Sapienza" University of Rome); Paolo Paruolo (European Commission, Joint Research Centre) 
Abstract:  This paper derives a generalization of the Granger-Johansen Representation Theorem valid for H-valued autoregressive (AR) processes, where H is an infinite-dimensional separable Hilbert space, under the assumption that 1 is an eigenvalue of finite type of the AR operator function and that no other nonzero eigenvalue lies within or on the unit circle. A necessary and sufficient condition for integration of order d = 1, 2, ... is given in terms of the decomposition of the space H into the direct sum of d+1 closed subspaces S_h, h = 0, ..., d, each one associated with components of the process integrated of order h. These results mirror those recently obtained in the finite-dimensional case, with the only difference that the number of cointegrating relations of order 0 is infinite. 
Keywords:  Functional autoregressive process, Unit roots, Cointegration, Common Trends, Granger-Johansen Representation Theorem. 
JEL:  C12 C33 
Date:  2017–12 
URL:  http://d.repec.org/n?u=RePEc:sas:wpaper:20175&r=ecm 
By:  Leopoldo Catania (DEF, University of Rome "Tor Vergata"); Stefano Grassi (DEF, University of Rome "Tor Vergata") 
Abstract:  This paper studies the behaviour of crypto-currency financial time series, of which Bitcoin is the most prominent example. The dynamics of these series are quite complex, displaying extreme observations, asymmetries and several nonlinear characteristics which are difficult to model. We develop a new dynamic model able to account for long memory and asymmetries in the volatility process as well as for the presence of time-varying skewness and kurtosis. The empirical application, carried out on a large set of crypto-currencies, shows evidence of long memory and a leverage effect that contributes substantially to the volatility dynamics. Going forward, as this new and unexplored market develops, our results will be important for investment and risk management purposes. 
Keywords:  Crypto-currency; Bitcoin; Score-driven model; Leverage effect; Long memory; Higher-order moments 
JEL:  C01 C22 C51 C58 
Date:  2017–12–11 
URL:  http://d.repec.org/n?u=RePEc:rtv:ceisrp:417&r=ecm 
By:  Hoechle, Daniel; Schmid, Markus; Zimmermann, Heinz 
Abstract:  In empirical asset pricing, it is standard to sort assets into portfolios based on a characteristic, and then compare the top (e.g., decile) portfolio's risk-adjusted return with that of the bottom portfolio. We show that such an analysis assumes the random effects assumption to hold. Therefore, results from portfolio sorts are valid if and only if firm-specific effects are uncorrelated with the characteristic underlying the portfolio sort. We propose a novel, regression-based approach to analyzing asset returns. Relying on standard econometrics, our technique handles multiple dimensions and continuous firm characteristics. Moreover, it nests all variants of sorting assets into portfolios as a special case, provides a means for testing the random effects assumption, and allows for the inclusion of firm fixed effects in the analysis. Our empirical results demonstrate that the random effects assumption underlying portfolio sorts is often violated, and that certain characteristics-based factors that are well-known from empirical asset pricing studies do not withstand tests accounting for firm fixed effects. 
Keywords:  Portfolio sorts, Random effects assumption, Cross-section of expected returns, Fama-French three-factor model 
JEL:  C21 G14 D1 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:usg:sfwpfi:2017:17&r=ecm 
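The nesting claim above, that portfolio sorts are a special case of a regression, can be verified numerically in the simplest cross-sectional case: the top-minus-bottom decile return equals the difference of OLS coefficients on top and bottom decile dummies. The simulated cross-section below is an assumption of mine; the paper's framework extends this to panels with firm fixed effects.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
char = rng.normal(size=n)                    # firm characteristic used for sorting
ret = 0.5 * char + rng.normal(size=n)        # returns related to the characteristic

# Sort approach: decile portfolios, top-minus-bottom mean return.
cuts = np.quantile(char, np.arange(0.1, 1.0, 0.1))
deciles = np.searchsorted(cuts, char)        # decile label 0..9 for each firm
top_bottom = ret[deciles == 9].mean() - ret[deciles == 0].mean()

# Regression approach: OLS on top/bottom dummies recovers the same quantity,
# because without an intercept the coefficients are exactly the group means.
d_bot = (deciles == 0).astype(float)
d_top = (deciles == 9).astype(float)
coef = np.linalg.lstsq(np.column_stack([d_bot, d_top]), ret, rcond=None)[0]
```

Once the sort is written as a regression, adding firm fixed effects or continuous characteristics is just a change of regressors, which is the practical payoff the abstract emphasizes.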
By:  Rajae R. Azrak; Guy Melard 
Keywords:  Nonstationary process; time series; time-dependent model; time-varying model; locally stationary processes 
Date:  2017–12 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/262612&r=ecm 
By:  Diewert, W. Erwin; Feenstra, Robert 
Abstract:  A major challenge facing statistical agencies is the problem of adjusting price and quantity indexes for changes in the availability of commodities. This problem arises in the scanner data context as products in a commodity stratum appear and disappear in retail outlets. Hicks suggested a reservation price methodology for dealing with this problem in the context of the economic approach to index number theory. Feenstra and Hausman suggested specific methods for implementing the Hicksian approach. The present paper evaluates these approaches and suggests some alternative approaches to the estimation of reservation prices. The various approaches are implemented using some scanner data on frozen juice products that are available online. 
Keywords:  Hicksian reservation prices, virtual prices, Laspeyres, Paasche, Fisher, Törnqvist and Sato-Vartia price indexes 
JEL:  C33 C43 C81 D11 D60 E31 
Date:  2017–12–19 
URL:  http://d.repec.org/n?u=RePEc:ubc:pmicro:tina_marandola201712&r=ecm 