on Econometrics |
By: | COUDIN, Élise; DUFOUR, Jean-Marie |
Abstract: | We study the problem of estimating the parameters of a linear median regression without any assumption on the shape of the error distribution – including no condition on the existence of moments – allowing for heterogeneity (or heteroskedasticity) of unknown form, noncontinuous distributions, and very general serial dependence (linear and nonlinear). This is done through a reverse inference approach, based on a distribution-free testing theory [Coudin and Dufour (2009, The Econometrics Journal)], from which confidence sets and point estimators are subsequently generated. The estimation problem is tackled in two complementary ways. First, we show how confidence distributions for model parameters can be applied in such a context. Such distributions – which can be interpreted as a form of fiducial inference – provide a frequency-based method for associating probabilities with subsets of the parameter space (like posterior distributions do in a Bayesian setup) without the introduction of prior distributions. We consider generalized confidence distributions applicable to multidimensional parameters, and we suggest the use of a projection technique for confidence inference on individual model parameters. Second, we propose point estimators, which have a natural association with confidence distributions. These estimators are based on maximizing test p-values and inherit robustness properties from the generating distribution-free tests. Both finite-sample and large-sample properties of the proposed estimators are established under weak regularity conditions. We show they are median unbiased (under symmetry and estimator uniqueness) and possess equivariance properties. Consistency and asymptotic normality are established without any moment existence assumption on the errors, allowing for noncontinuous distributions, heterogeneity and serial dependence of unknown form. These conditions are considerably weaker than those used to show corresponding results for LAD estimators. In a Monte Carlo study of bias and RMSE, we show that sign-based estimators perform better than LAD-type estimators in heteroskedastic settings. We present two empirical applications, which involve financial and macroeconomic data, both affected by heavy tails (non-normality) and heteroskedasticity: a trend model for the S&P index, and an equation used to study β-convergence of output levels across U.S. States. |
Keywords: | sign-based methods, median regression, test inversion, Hodges-Lehmann estimators, confidence distributions, p-value function, least absolute deviation estimators, quantile regressions, sign test, simultaneous inference, Monte Carlo tests, projection methods, non-normality, heteroskedasticity, serial dependence, GARCH, stochastic volatility |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:mtl:montec:01-2017&r=ecm |
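The sign-based estimation idea in the Coudin–Dufour abstract above lends itself to a compact illustration. The sketch below is a minimal reading of the approach, not the paper's implementation: it forms a quadratic statistic in the aligned signs for a candidate median-regression coefficient, simulates the statistic's distribution under the median-zero assumption (signs i.i.d. ±1) to obtain a Monte Carlo p-value, and takes as point estimate the coefficient minimizing the statistic (equivalently, maximizing the p-value). All tuning choices and the simulated design are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

def sign_statistic(beta, X, y):
    """Quadratic form in the aligned signs: SF(beta)' (X'X)^{-1} SF(beta)."""
    SF = X.T @ np.sign(y - X @ beta)
    return SF @ np.linalg.solve(X.T @ X, SF)

def mc_pvalue(beta, X, y, n_rep=999):
    """Monte Carlo p-value: under a median-zero error, the signs are i.i.d. +/-1."""
    XtX = X.T @ X
    stat = lambda s: (X.T @ s) @ np.linalg.solve(XtX, X.T @ s)
    obs = stat(np.sign(y - X @ beta))
    sims = np.array([stat(rng.choice([-1.0, 1.0], size=len(y))) for _ in range(n_rep)])
    return (1 + np.sum(sims >= obs)) / (n_rep + 1)

# Simulated median regression with heteroskedastic, heavy-tailed (Cauchy) errors.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta0 = np.array([1.0, 2.0])
y = X @ beta0 + (1 + np.abs(X[:, 1])) * rng.standard_cauchy(n)

# Point estimate: minimize the sign statistic (equivalently, maximize the p-value).
res = differential_evolution(sign_statistic, bounds=[(-10, 10), (-10, 10)], args=(X, y), seed=1)
print("sign-based estimate:", res.x.round(3),
      "  simulated p-value at the estimate:", mc_pvalue(res.x, X, y))
```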
By: | Stammann, Amrei; Heiß, Florian; McFadden, Daniel |
Abstract: | For the parametric estimation of logit models with individual time-invariant effects, the conditional and unconditional fixed effects maximum likelihood estimators exist. The conditional fixed effects logit (CL) estimator is consistent, but it has the drawback that it does not deliver estimates of the fixed effects or marginal effects. It is also computationally costly if the number of observations per individual T is large. The unconditional fixed effects logit estimator (UCL) can be computed by including a dummy variable for each individual (DVL). It suffers from the incidental parameters problem, which causes severe biases for small T. Another problem is that with a large number of individuals N, the computational costs of the DVL estimator can be prohibitive. We suggest a pseudo-demeaning algorithm in the spirit of Greene (2004) and Chamberlain (1980) that delivers results identical to those of the DVL estimator without its computational burden for large N. We also discuss how to correct for the incidental parameters bias of parameters and marginal effects. Monte Carlo evidence suggests that the bias-corrected estimator has properties similar to those of the CL estimator in terms of parameter estimation. Its computational burden is much lower than that of the CL or DVL estimators, especially with large N and/or T. |
JEL: | C01 C13 C80 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:zbw:vfsc16:145837&r=ecm |
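A small simulation makes the incidental-parameters problem discussed in the abstract above concrete. The sketch estimates the dummy-variable (unconditional) logit directly; it does not reproduce the authors' pseudo-demeaning algorithm or bias correction, and all design values are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
N, T, beta = 300, 4, 1.0

# Panel logit with individual fixed effects alpha_i and one regressor.
alpha = rng.normal(size=N)
x = rng.normal(size=(N, T))
p = 1 / (1 + np.exp(-(alpha[:, None] + beta * x)))
y = (rng.uniform(size=(N, T)) < p).astype(float)

# Drop individuals with no within variation in y (their dummies are not identified).
keep = (y.sum(axis=1) > 0) & (y.sum(axis=1) < T)
y, x = y[keep], x[keep]
n = y.shape[0]

# Dummy-variable (unconditional) logit: one intercept dummy per individual.
D = np.kron(np.eye(n), np.ones((T, 1)))              # (n*T) x n block of dummies
exog = np.column_stack([x.reshape(-1, 1), D])
fit = sm.Logit(y.reshape(-1), exog).fit(disp=0, maxiter=200)
print(f"true beta = {beta}, DVL estimate = {fit.params[0]:.3f} "
      "(illustrates the incidental-parameters bias when T is small)")
```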
By: | Choe, Chung (Hanyang University); Jung, Seeun (Inha University); Oaxaca, Ronald L. (University of Arizona) |
Abstract: | Probit and logit models typically require a normalization on the error variance for model identification. This paper shows that in the context of sample mean probability decompositions, error variance normalizations preclude estimation of the effects of group differences in the latent variable model parameters. An empirical example is provided for a model in which the error variances are identified. This identification allows the effects of group differences in the latent variable model parameters to be estimated. |
Keywords: | decompositions, probit, logit, identification |
JEL: | C35 J16 D81 J71 |
Date: | 2017–01 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp10530&r=ecm |
By: | Lorenzo Ricci |
Abstract: | This thesis is composed of three chapters that propose novel approaches to tail risk in financial markets and to forecasting in finance and macroeconomics. The first part of this dissertation focuses on financial market correlations and introduces a simple measure of tail correlation, TailCoR, while the second contribution addresses the identification of non-normal structural shocks in vector autoregressions, which is common in finance. The third part belongs to the vast literature on predicting economic growth; the problem is tackled using a Bayesian dynamic factor model to predict Norwegian GDP. Chapter I: TailCoR. The first chapter introduces a simple measure of tail correlation, TailCoR, which disentangles linear and non-linear correlation. The aim is to capture all features of financial market co-movement when extreme events (i.e. financial crises) occur. Indeed, tail correlations may arise because asset prices are either linearly correlated (i.e. the Pearson correlations are different from zero) or non-linearly correlated, meaning that asset prices are dependent at the tail of the distribution. Since it is based on quantiles, TailCoR has three main advantages: i) it is not based on asymptotic arguments, ii) it is very general as it applies with no specific distributional assumption, and iii) it is simple to use. We show that TailCoR also disentangles easily between linear and non-linear correlations. The measure has been successfully tested on simulated data. Several extensions, useful for practitioners, are presented, such as downside and upside tail correlations. In our empirical analysis, we apply this measure to eight major US banks for the period 2003-2012. For comparison purposes, we compute the upper and lower exceedance correlations and the parametric and non-parametric tail dependence coefficients. On the overall sample, results show that both the linear and non-linear contributions are relevant. The results suggest that co-movement increases during the financial crisis because of both the linear and non-linear correlations. Furthermore, the increase of TailCoR at the end of 2012 is mostly driven by the non-linearity, reflecting the risks of tail events and their spillovers associated with the European sovereign debt crisis. Chapter II: On the identification of non-normal shocks in structural VARs. The second chapter deals with the structural interpretation of the VAR using the statistical properties of the innovation terms. In general, financial markets are characterized by non-normal shocks. Under non-Gaussianity, we introduce a methodology based on the reduction of tail dependency to identify the non-normal structural shocks. Borrowing from statistics, the methodology can be summarized in two main steps: i) decorrelate the estimated residuals, and ii) rotate the uncorrelated residuals in order to obtain a vector of independent shocks, using a tail dependency matrix. We do not label the shocks a priori, but only after estimation, on the basis of economic judgement. Furthermore, we show in a Monte Carlo study how our approach allows us to identify all the shocks. In some cases, the method turns out to be more effective when the share of tail events is larger; therefore, the frequency of the series and the degree of non-normality are relevant for achieving accurate identification. Finally, we apply our method to two different VARs, both estimated on US data: i) a monthly trivariate model which studies the effects of oil market shocks, and ii) a VAR that focuses on the interaction between monetary policy and the stock market. In the first case, we validate the results obtained in the economic literature. In the second case, we cannot confirm the validity of an identification scheme based on a combination of short- and long-run restrictions which is used in part of the empirical literature. Chapter III: Nowcasting Norway. The third chapter consists of predictions of Norwegian Mainland GDP. Policy institutions have to set their policies without full knowledge of current economic conditions. We estimate a Bayesian dynamic factor model (BDFM) on a panel of macroeconomic variables (all followed by market operators) from 1990 until 2011. First, the BDFM is an extension of the dynamic factor model (DFM) to the Bayesian framework. The difference is that, compared with a DFM, the BDFM introduces more dynamics in order to accommodate the dynamic heterogeneity of different variables. However, introducing more dynamics means that the BDFM requires estimating a large number of parameters, which can easily lead to volatile predictions due to estimation uncertainty. This is why the model is estimated with Bayesian methods, which, by shrinking the factor model toward a simple naive prior model, are able to limit estimation uncertainty. The second aspect is the use of a small dataset. A common feature of the literature on DFMs is the use of large datasets. However, there is a literature that has shown how, for the purpose of forecasting, DFMs can be estimated on a small number of appropriately selected variables. Finally, through a pseudo real-time exercise, we show that the BDFM performs well both in terms of point forecasts and in terms of density forecasts. Results indicate that our model outperforms standard univariate benchmark models, that it performs as well as the Bloomberg Survey, and that it outperforms the predictions published by the Norges Bank in its monetary policy report. |
Keywords: | Tail correlation, tail risk, quantile, ellipticity, crises. JEL classification: C32, C51, G01.; Identification, Independent Component Analysis, Impulse Response Function, Vector Autoregression.; Real-Time Forecasting, Bayesian Factor model, Nowcasting. JEL classification: C32, C53, E37. |
Date: | 2017–02–13 |
URL: | http://d.repec.org/n?u=RePEc:ulb:ulbeco:2013/242122&r=ecm |
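The quantile-based tail co-movement measure described in Chapter I of the thesis above can be sketched as follows. The details (standardization by median and interquartile range, projection onto the 45-degree line, normalization by the independent-Gaussian benchmark) are my assumptions, not the thesis's exact definition of TailCoR, and the simulated factor model is purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def tailcor_like(x, y, xi=0.95):
    """Quantile-based tail co-movement in the spirit of TailCoR (details assumed).
    Values near 1 correspond to the independent standard-normal benchmark."""
    def standardize(z):
        q1, q2, q3 = np.quantile(z, [0.25, 0.5, 0.75])
        return (z - q2) / (q3 - q1)
    u = (standardize(x) + standardize(y)) / np.sqrt(2.0)     # projection on the 45-degree line
    iqr_xi = np.quantile(u, xi) - np.quantile(u, 1 - xi)     # xi-interquantile range
    benchmark = (norm.ppf(xi) - norm.ppf(1 - xi)) / (norm.ppf(0.75) - norm.ppf(0.25))
    return iqr_xi / benchmark

rng = np.random.default_rng(2)
z = rng.standard_t(df=3, size=5000)                          # common heavy-tailed factor
x, y = z + rng.normal(size=5000), z + rng.normal(size=5000)
print("tail co-movement (dependent pair) :", round(tailcor_like(x, y), 2))
print("tail co-movement (independent pair):",
      round(tailcor_like(rng.normal(size=5000), rng.normal(size=5000)), 2))
```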
By: | de Lazzer, Jakob |
Abstract: | The Regression Discontinuity Design (RDD) has become a popular method for program evaluation in recent years. While it is compelling in its simplicity and requires little in terms of a priori assumptions, it is vulnerable to bias introduced by self-selection into treatment or control group. The purpose of this article is to discuss the issue of non-monotonic self-selection, by which similar numbers of individuals select into and out of treatment simultaneously. This kind of selection has not been discussed in detail so far in the literature, and can be hard to detect with the commonly used methods for data-driven RDD specification testing. The focus of this article lies on selection in the context of close elections, since those are popular natural experiments for RDD applications, and because in this context the issue of non-monotonic selection is rarely considered in practice. I will present a slightly modified approach to specification testing, designed to detect non-monotonic self-selection and based on the density test by McCrary (2008). In order to demonstrate how RDDs can be affected by the issue, two existing RDD applications are analysed with respect to non-monotonic sorting. In the first, this article follows up and expands on the remarks made by Caughey & Sekhon (2011) about selection issues in the well-known RDD application by D. Lee (2008). The second application is based on the Mexican mayoral election RDD by Dell (2015). |
JEL: | C29 C52 H89 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:zbw:vfsc16:145845&r=ecm |
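A schematic density check at the cutoff illustrates the baseline diagnostic that the abstract above modifies. This is a simplified stand-in for the McCrary (2008) test (ad hoc binning, kernel weights and no formal standard error), and it is not the author's modified procedure; all parameter values are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def density_jump(running, cutoff=0.0, n_bins=40, bandwidth=0.5):
    """Simplified McCrary-style check: bin the running variable, fit a weighted
    linear regression of bin density on bin midpoint on each side of the cutoff,
    and compare the two fitted densities at the cutoff."""
    edges = np.linspace(running.min(), running.max(), n_bins + 1)
    counts, _ = np.histogram(running, bins=edges)
    mids = 0.5 * (edges[:-1] + edges[1:])
    dens = counts / (len(running) * np.diff(edges))          # histogram density estimate

    def fit_side(left):
        mask = (mids < cutoff) if left else (mids >= cutoff)
        w = np.exp(-0.5 * ((mids[mask] - cutoff) / bandwidth) ** 2)   # kernel weights
        Xd = sm.add_constant(mids[mask] - cutoff)
        return sm.WLS(dens[mask], Xd, weights=w).fit().params[0]      # density at the cutoff

    left, right = fit_side(True), fit_side(False)
    return left, right, np.log(right) - np.log(left)

rng = np.random.default_rng(3)
x = rng.normal(size=20_000)
# One-sided sorting: a share of units just below the cutoff manipulate themselves above it.
sorted_up = (x > -0.15) & (x < 0.0) & (rng.uniform(size=x.size) < 0.4)
x[sorted_up] += 0.15
left, right, log_jump = density_jump(x)
print(f"density just left: {left:.3f}, just right: {right:.3f}, log discontinuity: {log_jump:.3f}")
# With non-monotonic sorting (similar shares moving in both directions) the two
# densities can stay close, so this simple check may miss the manipulation -- the
# concern raised in the abstract above.
```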
By: | Chan Shen (University of Texas MD Anderson Cancer Center); Roger Klein (Rutgers University) |
Abstract: | It is well known that it is important to control the bias in estimating conditional expectations in order to obtain asymptotic normality for quantities of interest (e.g. a finite dimensional parameter vector in semiparametric models, or averages of marginal effects in the nonparametric case). For this purpose, higher-order kernel methods are often employed in developing the theory. However, such methods typically do not perform well at moderate sample sizes. Moreover, and perhaps related to their performance, non-optimal windows are selected, with undersmoothing needed to ensure the appropriate bias order. We propose a recursive differencing approach to bias reduction for a nonparametric estimator of a conditional expectation, where the order of the bias depends on the stage of the recursion. It performs much better at moderate sample sizes than regular or higher-order kernels while retaining a bias of any desired order and a convergence rate the same as that of higher-order kernels. We also propose an approach to implement this estimator under optimal windows, which ensures asymptotic normality in semiparametric multiple index models of arbitrary dimension. This mechanism further contributes to its very good finite sample performance. |
Keywords: | Bias Reduction, Nonparametric Expectations, Semiparametric Models |
JEL: | C14 |
Date: | 2017–02–15 |
URL: | http://d.repec.org/n?u=RePEc:rut:rutres:201701&r=ecm |
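One way to read "recursive differencing" is as iteratively re-smoothing residuals and adding them back, in the spirit of twicing-style bias reduction; the sketch below implements that reading for a Nadaraya-Watson smoother. It is not the authors' exact recursion or window-selection procedure, and the design and bandwidth are illustrative.

```python
import numpy as np

def nw_smoother(x, h):
    """Nadaraya-Watson smoother matrix S with a Gaussian kernel: m_hat = S @ y."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return K / K.sum(axis=1, keepdims=True)

def recursive_bias_reduction(x, y, h, stages=3):
    """Iteratively smooth the residuals and add them back; each extra stage is
    intended to remove a further order of smoothing bias (one plausible reading
    of a recursive-differencing scheme, not the paper's exact estimator)."""
    S = nw_smoother(x, h)
    m = S @ y
    for _ in range(stages - 1):
        m = m + S @ (y - m)
    return m

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(-2, 2, size=400))
y = np.sin(2 * x) + 0.3 * rng.normal(size=400)
truth = np.sin(2 * x)
m1 = recursive_bias_reduction(x, y, h=0.4, stages=1)   # plain Nadaraya-Watson
m3 = recursive_bias_reduction(x, y, h=0.4, stages=3)
print("RMSE, 1 stage :", np.sqrt(np.mean((m1 - truth) ** 2)))
print("RMSE, 3 stages:", np.sqrt(np.mean((m3 - truth) ** 2)))
```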
By: | Bonaccolto, Giovanni; Caporin, Massimiliano; Panzica, Roberto Calogero |
Abstract: | Causality is a widely-used concept in theoretical and empirical economics. The recent financial economics literature has used Granger causality to detect the presence of contemporaneous links between financial institutions and, in turn, to obtain a network structure. Subsequent studies combined the estimated networks with traditional pricing or risk measurement models to improve their fit to empirical data. In this paper, we provide two contributions: we show how to use a linear factor model as a device for estimating a combination of several networks that monitor the links across variables from different viewpoints; and we demonstrate that Granger causality should be combined with quantile-based causality when the focus is on risk propagation. The empirical evidence supports the latter claim. |
Keywords: | Granger causality, quantile causality, multi-layer network, network combination
JEL: | C58 C31 C32 G01 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:zbw:safewp:165&r=ecm |
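The basic ingredient of the paper above, a Granger-causality network over a panel of series, can be sketched with pairwise F-tests; the paper's contributions (combining several such networks through a linear factor model and complementing Granger causality with quantile-based causality) are not reproduced. Lag length, sample size and the simulated link are illustrative.

```python
import numpy as np
from scipy import stats

def granger_pvalue(y, x, p=2):
    """F-test of H0: lags of x do not help predict y, given y's own lags."""
    T = len(y)
    Y = y[p:]
    own = np.column_stack([y[p - k: T - k] for k in range(1, p + 1)])
    oth = np.column_stack([x[p - k: T - k] for k in range(1, p + 1)])
    X_r = np.column_stack([np.ones(len(Y)), own])           # restricted model
    X_u = np.column_stack([X_r, oth])                       # unrestricted model
    rss = lambda M: np.sum((Y - M @ np.linalg.lstsq(M, Y, rcond=None)[0]) ** 2)
    df2 = len(Y) - X_u.shape[1]
    F = ((rss(X_r) - rss(X_u)) / p) / (rss(X_u) / df2)
    return stats.f.sf(F, p, df2)

rng = np.random.default_rng(5)
T, n = 500, 4
data = rng.normal(size=(T, n))
for t in range(1, T):                                       # series 0 drives series 1
    data[t, 1] += 0.5 * data[t - 1, 0]

# Adjacency matrix of the Granger network: entry (i, j) = 1 if i "causes" j at the 5% level
# (spurious edges can appear at roughly that rate).
A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if i != j and granger_pvalue(data[:, j], data[:, i]) < 0.05:
            A[i, j] = 1
print(A)
```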
By: | Velinov, Anton |
Abstract: | Identification schemes are of essential importance in structural analysis. This paper focuses on testing a commonly used long-run structural parameter identification scheme claiming to identify fundamental and non-fundamental shocks to stock prices. Five related, widely used structural models for assessing stock price determinants are considered. All models are specified either in vector error correction (VEC) or in vector autoregressive (VAR) form. A Markov-switching-in-heteroskedasticity model is used to test the identifying restrictions. It is found that for two of the models considered, the long-run identification scheme appropriately classifies shocks as being either fundamental or non-fundamental. A small empirical exercise finds that the models with properly identified structural shocks deliver realistic conclusions, similar to those in some of the literature. On the other hand, models with identification schemes not supported by the data yield dubious conclusions on the importance of fundamentals for real stock prices. This is because their structural shocks are not properly identified, making any shock labelling ambiguous. Hence, in order to ensure that economic shocks of interest are properly captured, it is important to test the structural identification scheme. |
JEL: | C32 C34 G12 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:zbw:vfsc16:145581&r=ecm |
By: | Takaaki Koike; Mihoko Minami |
Abstract: | Determining risk contributions by unit exposures to portfolio-wide economic capital is an important task in financial risk management. Despite its practical demands, computation of risk contributions is challenging for most risk models because it often requires rare-event simulation. In this paper, we address the problem of estimating risk contributions when the total risk is measured by Value-at-Risk (VaR). We propose a new estimator of VaR contributions that utilizes the Markov chain Monte Carlo (MCMC) method. Unlike existing estimators, our MCMC-based estimator is computed from samples of the conditional loss distribution given the rare event of interest. The MCMC method makes it possible to generate such samples without evaluating the density of the total risk. Thanks to these features, our estimator has improved sample efficiency compared with the crude Monte Carlo method. Moreover, our method is widely applicable to various risk models specified by a joint portfolio loss density. In this paper, we show that our MCMC-based estimator has several attractive properties, such as consistency and asymptotic normality. Our numerical experiment also demonstrates that, in various risk models used in practice, our MCMC estimator has smaller bias and MSE than those of existing estimators. |
Date: | 2017–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1702.03098&r=ecm |
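A toy version of the idea in the abstract above can be written down for a trivariate Gaussian loss model: run a random-walk Metropolis chain on the loss vector restricted to a thin slab around the portfolio VaR, so the chain samples the conditional loss distribution given the rare event without ever evaluating the density of the total loss. The loss model, proposal, slab width and chain length are my illustrative choices, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(6)
d = 3
mu = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 0.3, 0.2], [0.3, 1.5, 0.4], [0.2, 0.4, 2.0]])
Sigma = A @ A.T
Sigma_inv = np.linalg.inv(Sigma)
log_density = lambda l: -0.5 * (l - mu) @ Sigma_inv @ (l - mu)   # joint loss density, up to a constant

# Step 1: crude Monte Carlo for the portfolio VaR at level alpha.
alpha, delta = 0.99, 0.05
losses = rng.multivariate_normal(mu, Sigma, size=200_000)
var_alpha = np.quantile(losses.sum(axis=1), alpha)

# Step 2: random-walk Metropolis restricted to the slab {l : |sum(l) - VaR| < delta},
# i.e. sampling the conditional loss distribution given the rare event.
current = mu * var_alpha / mu.sum()          # starting point that satisfies the constraint
draws = []
for _ in range(60_000):
    prop = current + 0.2 * rng.normal(size=d)
    if abs(prop.sum() - var_alpha) < delta and \
       np.log(rng.uniform()) < log_density(prop) - log_density(current):
        current = prop
    draws.append(current.copy())
draws = np.array(draws)[10_000:]             # discard burn-in
contrib = draws.mean(axis=0)                 # estimates of E[L_i | S = VaR]
print("VaR:", round(var_alpha, 3), " contributions:", contrib.round(3),
      " sum of contributions:", round(contrib.sum(), 3))
```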
By: | Adrian Pagan |
Abstract: | In a number of time series models there are I(1) variables that appear in data sets in differenced form. This note shows that an emerging practice of assuming that observed data relate to model variables through the use of “measurement error shocks” when estimating these models can imply that there is a lack of co-integration between model and data variables, and also between data variables themselves. An analysis is provided of what the nature of the measurement error would need to be if it were desired to reproduce the same co-integration information as seen in the data. Sometimes this adjustment can be complex. It is very unlikely that measurement error can be described properly with the white-noise shocks that are commonly used for this purpose. |
Date: | 2017–02 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2017-12&r=ecm |
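The mechanism in the note above can be illustrated with a short simulation, under my own illustrative parameterization: attaching a white-noise measurement error shock to a variable that enters the data in differenced form makes the level gap between observed and model variables a random walk, so the two levels cannot co-integrate.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(7)
T = 2000
model_level = np.cumsum(rng.normal(size=T))            # an I(1) model variable

# The variable enters the data set in differenced form; a white-noise
# "measurement error shock" is attached to the observed growth rate.
observed_growth = np.diff(model_level) + 0.5 * rng.normal(size=T - 1)
observed_level = model_level[0] + np.concatenate([[0.0], np.cumsum(observed_growth)])

# The gap between observed and model levels is the cumulated measurement error,
# a random walk by construction, so the two levels cannot co-integrate.
gap = observed_level - model_level
print("ADF p-value, gap in levels     :", round(adfuller(gap)[1], 3))           # typically large
print("ADF p-value, gap in differences:", round(adfuller(np.diff(gap))[1], 3))  # near zero
```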
By: | Carsten Chong; Claudia Klüppelberg
Abstract: | We develop a structural default model for interconnected financial institutions in a probabilistic framework. For all possible network structures we characterize the joint default distribution of the system using Bayesian network methodologies. Particular emphasis is given to the treatment and consequences of cyclic financial linkages. We further demonstrate how Bayesian network theory can be applied to detect contagion channels within the financial network, to measure the systemic importance of selected entities on others, and to compute conditional or unconditional probabilities of default for single or multiple institutions. |
Date: | 2017–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1702.04287&r=ecm |
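The Bayesian-network view in the abstract above can be illustrated with a three-institution toy example: specify conditional default probabilities along the network and obtain the joint default distribution by enumeration. The probabilities and the acyclic structure are my illustrative choices; the paper's treatment of cyclic linkages and its systemic-risk measures are not reproduced.

```python
import itertools

# Bank C's default probability depends on whether its counterparties A and B default.
p_A, p_B = 0.05, 0.08
p_C_given = {(0, 0): 0.02, (1, 0): 0.20, (0, 1): 0.15, (1, 1): 0.60}

joint = {}
for a, b, c in itertools.product([0, 1], repeat=3):
    pc = p_C_given[(a, b)]
    joint[(a, b, c)] = ((p_A if a else 1 - p_A) * (p_B if b else 1 - p_B)
                        * (pc if c else 1 - pc))

p_C = sum(p for (a, b, c), p in joint.items() if c == 1)
p_A_given_C = sum(p for (a, b, c), p in joint.items() if a == 1 and c == 1) / p_C
print("unconditional P(C defaults)  :", round(p_C, 4))
print("P(A defaulted | C defaulted) :", round(p_A_given_C, 4))   # a simple contagion diagnostic
```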
By: | Das, Tirthatanmoy (University of Central Florida); Polachek, Solomon (Binghamton University, New York) |
Abstract: | We derive a non-standard unit root serial correlation formulation for intertemporal adjustments in the labor force participation rate. This leads to a tractable three-error component model, which in contrast to other models embeds heterogeneity into the error structure. Unlike in the typical iid three-error component two-tier stochastic frontier model, our equation's error components are independent but not identically distributed. This leads to a complex nonlinear likelihood function requiring identification through a two-step estimation procedure, which we estimate using Current Population Survey (CPS) data. By transforming the basic equation linking labor force participation to the working age population, this paper devises a new method which can be used to identify labor market joiners and leavers. The method's advantage is its parsimonious data requirements, especially alleviating the need for survey based longitudinal data. |
Keywords: | two-tier stochastic frontier, identification, labor force dynamics |
JEL: | C23 C51 J21 |
Date: | 2017–01 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp10534&r=ecm |
By: | Angrist, Joshua (MIT); Pischke, Jörn-Steffen (London School of Economics) |
Abstract: | The past half‐century has seen economic research become increasingly empirical, while the nature of empirical economic research has also changed. In the 1960s and 1970s, an empirical economist's typical mission was to "explain" economic variables like wages or GDP growth. Applied econometrics has since evolved to prioritize the estimation of specific causal effects and empirical policy analysis over general models of outcome determination. Yet econometric instruction remains mostly abstract, focusing on the search for "true models" and technical concerns associated with classical regression assumptions. Questions of research design and causality still take a back seat in the classroom, in spite of having risen to the top of the modern empirical agenda. This essay traces the divergent development of econometric teaching and empirical practice, arguing for a pedagogical paradigm shift. |
Keywords: | econometrics, teaching |
JEL: | A22 |
Date: | 2017–01 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp10535&r=ecm |
By: | Bartalotti, Otávio C.; Brummet, Quentin O. |
Abstract: | Regression Discontinuity designs have become popular in empirical studies due to their attractive properties for estimating causal effects under transparent assumptions. Nonetheless, most popular procedures assume i.i.d. data, which is unreasonable in many common applications. To fill this gap, we derive the properties of traditional local polynomial estimators in a fixed-G setting that allows for cluster dependence in the error term. Simulation results demonstrate that accounting for clustering in the data while selecting bandwidths may lead to lower MSE while maintaining proper coverage. We then apply our cluster-robust procedure to an application examining the impact of Low-Income Housing Tax Credits on neighborhood characteristics and low-income housing supply. |
Date: | 2016–08–01 |
URL: | http://d.repec.org/n?u=RePEc:isu:genstf:201608010700001001&r=ecm |
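A bare-bones version of the estimator discussed in the abstract above is a kernel-weighted local linear regression at the cutoff with a cluster-robust sandwich variance. The sketch below uses a fixed bandwidth and the standard sandwich formula only; the fixed-G asymptotics and bandwidth selection discussed in the paper are not implemented, and the simulated design is illustrative.

```python
import numpy as np

def local_linear_rdd(y, r, g, cutoff=0.0, h=1.0):
    """Local linear RDD jump estimate with a cluster-robust (sandwich) SE; clusters in g."""
    w = np.maximum(0.0, 1 - np.abs((r - cutoff) / h))          # triangular kernel weights
    keep = w > 0
    y, r, g, w = y[keep], r[keep], g[keep], w[keep]
    D = (r >= cutoff).astype(float)
    X = np.column_stack([np.ones(len(y)), D, r - cutoff, D * (r - cutoff)])
    Xw = X * w[:, None]
    XtX = X.T @ Xw
    beta = np.linalg.solve(XtX, Xw.T @ y)
    u = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(g):                                     # sum of cluster score outer products
        s = (Xw[g == c] * u[g == c, None]).sum(axis=0)
        meat += np.outer(s, s)
    V = np.linalg.solve(XtX, np.linalg.solve(XtX, meat).T)     # sandwich variance
    return beta[1], np.sqrt(V[1, 1])                           # jump estimate and cluster-robust SE

rng = np.random.default_rng(8)
n, G = 4000, 40
g = rng.integers(0, G, size=n)
r = rng.uniform(-2, 2, size=n)
y = 0.5 * (r >= 0) + 0.3 * r + rng.normal(size=G)[g] + rng.normal(size=n)   # cluster-level shocks
print(local_linear_rdd(y, r, g, h=1.0))
```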
By: | Knapik, Oskar; Exterkate, Peter |
Abstract: | In a recent review paper, Weron (2014) pinpoints several crucial challenges outstanding in the area of electricity price forecasting. This research attempts to address all of them by i) showing the importance of considering fundamental price drivers in modeling, ii) developing new techniques for probabilistic (i.e. interval or density) forecasting of electricity prices, and iii) introducing a universal technique for model comparison. We propose a new regime-switching stochastic volatility model with three regimes (negative jump, normal price, positive jump/spike), where the transition matrix depends on explanatory variables. Bayesian inference is used to obtain predictive densities. The main focus of the paper is on short-term density forecasting in the Nord Pool intraday market. We show that the proposed model outperforms several benchmark models at this task. |
Keywords: | Electricity prices, density forecasting, Markov switching, stochastic volatility, fundamental price drivers, ordered probit model, Bayesian inference, seasonality, Nord Pool power market, electricity prices forecasting, probabilistic forecasting |
Date: | 2017–02 |
URL: | http://d.repec.org/n?u=RePEc:syd:wpaper:2017-02&r=ecm |
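The covariate-dependent transition matrix mentioned in the abstract above can be sketched with an ordered-probit link (the keywords point to an ordered probit model, but the thresholds, coefficients and covariates below are purely illustrative assumptions, not the paper's estimates).

```python
import numpy as np
from scipy.stats import norm

def regime_probabilities(z, gamma, cut1=-1.0, cut2=1.0):
    """Probabilities of (negative jump, normal, positive spike) from an ordered-probit
    link: the latent index z'gamma is compared with two thresholds."""
    idx = z @ gamma
    p_neg = norm.cdf(cut1 - idx)
    p_norm = norm.cdf(cut2 - idx) - norm.cdf(cut1 - idx)
    p_spike = 1 - norm.cdf(cut2 - idx)
    return np.column_stack([p_neg, p_norm, p_spike])

# One row per current regime: making gamma regime-specific yields a transition
# matrix that depends on the explanatory variables (fundamental price drivers).
z_today = np.array([[1.0, 0.8, -0.2]])   # e.g. constant, demand forecast, reserve margin (hypothetical)
gammas = [np.array([0.2, 0.5, -0.3]), np.array([0.0, 0.8, -0.5]), np.array([-0.1, 0.6, -0.4])]
transition_matrix = np.vstack([regime_probabilities(z_today, g) for g in gammas])
print(transition_matrix)                  # each row sums to one
```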
By: | Takahiro Omi; Yoshito Hirata; Kazuyuki Aihara |
Abstract: | A Hawkes process model with a time-varying background rate is developed for analyzing high-frequency financial data. In our model, the logarithm of the background rate is modeled by a linear model with variable-width basis functions, and the parameters are estimated by a Bayesian method. We find that the data are explained significantly better by our model than by the Hawkes model with a stationary background rate, which is commonly used in the field of quantitative finance. Our model can capture not only slow time-variation, such as intraday seasonality, but also rapid variation, such as that following a macroeconomic news announcement. We also demonstrate that the level of market endogeneity, quantified by the branching ratio of the Hawkes process, is overestimated if the time-variation is not considered. |
Date: | 2017–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1702.04443&r=ecm |
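The likelihood underlying a model like the one in the abstract above can be sketched directly: an exponential excitation kernel plus a background rate whose logarithm is a linear basis expansion. The hat-function basis, parameter values and event times below are illustrative assumptions; the paper's variable-width basis functions and Bayesian estimation are not reproduced. The branching ratio mentioned in the abstract corresponds here to `alpha`, the integral of the excitation kernel.

```python
import numpy as np

def hawkes_loglik(events, T, mu_coef, knots, alpha, beta):
    """Log-likelihood of a Hawkes process with kernel alpha*beta*exp(-beta*s) and
    time-varying background rate mu(t) = exp(basis(t) @ mu_coef)."""
    def basis(t):
        t = np.atleast_1d(t)
        width = knots[1] - knots[0]
        return np.maximum(0.0, 1.0 - np.abs(t[:, None] - knots[None, :]) / width)

    mu = lambda t: np.exp(basis(t) @ mu_coef)

    # Sum of log-intensities at the event times (recursive form of the excitation term).
    loglik, excite, last = 0.0, 0.0, None
    for t in events:
        if last is not None:
            excite = (excite + alpha * beta) * np.exp(-beta * (t - last))
        loglik += np.log(mu(t)[0] + excite)
        last = t

    # Compensator: integral of the intensity over [0, T].
    grid = np.linspace(0.0, T, 2001)
    mu_grid = mu(grid)
    integral_mu = np.sum(0.5 * (mu_grid[1:] + mu_grid[:-1]) * np.diff(grid))      # trapezoid rule
    integral_kernel = alpha * np.sum(1.0 - np.exp(-beta * (T - events)))          # exact
    return loglik - integral_mu - integral_kernel

rng = np.random.default_rng(9)
events = np.sort(rng.uniform(0, 100, size=300))      # placeholder event times (e.g. trade arrivals)
knots = np.linspace(0, 100, 11)
print(hawkes_loglik(events, T=100.0, mu_coef=np.full(11, 0.5), knots=knots, alpha=0.3, beta=2.0))
```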
By: | Minford, Patrick; Wickens, Michael R.; Xu, Yongdeng |
Abstract: | We propose a new type of test. Its aim is to test subsets of the structural equations of a DSGE model. The test draws on the statistical inference for limited information models and the use of indirect inference to test DSGE models. Using Monte Carlo experiments on two subsets of equations of the Smets-Wouters model, we show that the test has accurate size and good power in small samples. In a test of the Smets-Wouters model on US Great Moderation data, we reject the specification of the wage-price sector but not the expenditure sector, pointing to the first as the source of overall model rejection. |
Keywords: | indirect inference; limited information; Monte Carlo; power; sub-sectors of models; test size; testing DSGE model equations
Date: | 2017–01 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:11819&r=ecm |
By: | Helmut Lütkepohl; Anna Staszewska-Bystrova; Peter Winker |
Abstract: | There is evidence that estimates of long-run impulse responses of structural vector autoregressive (VAR) models based on long-run identifying restrictions may not be very accurate. This finding suggests that using short-run identifying restrictions may be preferable. We compare structural VAR impulse response estimates based on long-run and short-run identifying restrictions and find that long-run identifying restrictions can result in much more precise estimates for the structural impulse responses than restrictions on the impact effects of the shocks. |
Keywords: | Impulse responses, structural vector autoregressive model, long-run multipliers, short-run multipliers
JEL: | C32 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1642&r=ecm |
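The two identification schemes compared in the abstract above can be written down in a few lines for a bivariate VAR: short-run (recursive) restrictions take the impact matrix from a Cholesky factor of the residual covariance, while long-run (Blanchard-Quah-type) restrictions make the long-run impact matrix lower triangular. The simulated data and lag length below are illustrative; the paper's accuracy comparison of the resulting impulse responses is not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
T = 500
# Simulated bivariate system (placeholder data-generating process).
u = rng.normal(size=(T, 2)) @ np.array([[1.0, 0.0], [0.5, 0.8]]).T
data = np.zeros((T, 2))
for t in range(1, T):
    data[t] = np.array([[0.5, 0.1], [0.2, 0.4]]) @ data[t - 1] + u[t]

var = sm.tsa.VAR(data).fit(maxlags=2)
Sigma = var.sigma_u                       # residual covariance
A1 = np.eye(2) - sum(var.coefs)           # A(1) = I - A_1 - ... - A_p

# Short-run identification: impact matrix from a Cholesky factor of Sigma
# (recursive restrictions on the contemporaneous impact of the shocks).
B_short = np.linalg.cholesky(Sigma)

# Long-run identification: make the long-run impact matrix Xi = A(1)^{-1} B
# lower triangular, i.e. B = A(1) @ chol(A(1)^{-1} Sigma A(1)^{-1}').
F = np.linalg.inv(A1)
B_long = A1 @ np.linalg.cholesky(F @ Sigma @ F.T)

print("impact matrix, short-run restrictions:\n", B_short.round(3))
print("impact matrix, long-run restrictions:\n", B_long.round(3))
```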