on Econometrics |
By: | Robert F. Phillips (The George Washington University) |
Abstract: | This paper establishes the almost sure convergence and asymptotic normality of quasi maximum-likelihood (QML) estimators of a dynamic panel data model when the time series for each cross section is short. The QML estimators are robust with respect to initial conditions and misspecification of the log-likelihood, and results are provided for a general specification of the error variance-covariance matrix. The paper also provides procedures for computing QML estimates that improve on computational methods previously recommended in the literature. Moreover, it compares the finite sample performance of several QML estimators, the differenced GMM estimator, and the system GMM estimator. |
Keywords: | random effects; fixed effects; differenced QML; augmented dynamic panel data model |
JEL: | C23 |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:gwc:wpaper:2014-006&r=ecm |
By: | Anastasia Semykina (Department of Economics, Florida State University); Jeffrey M. Wooldridge (Department of Economics, Michigan State University) |
Abstract: | We consider estimating binary response models on an unbalanced panel, where the outcome of the dependent variable may be missing due to non-random selection, or there is self-selection into a treatment. In the present paper, we first consider estimation of sample selection models and treatment effects using a fully parametric approach, where the error distribution is assumed to be normal in both primary and selection equations. Arbitrary time dependence in errors is permitted. Estimation of both coefficients and partial effects, as well as tests for selection bias, are discussed. Furthermore, we consider a semiparametric estimator of binary response panel data models with sample selection that is robust to a variety of error distributions. The estimator employs a control function approach to account for endogenous selection and permits consistent estimation of scaled coefficients and relative effects. |
Keywords: | Binary response models, Sample selection, Panel data, Semiparametric, Treatment effect |
JEL: | C33 C34 C35 C14 |
Date: | 2015–02 |
URL: | http://d.repec.org/n?u=RePEc:fsu:wpaper:wp2015_05_01&r=ecm |
By: | Peter Reinhard Hansen (European University Institute and CREATES); Guillaume Horel (Serenitas Credit L.p.); Asger Lunde (Aarhus University and CREATES); Ilya Archakov (European University Institute) |
Abstract: | We introduce a multivariate estimator of financial volatility that is based on the theory of Markov chains. The Markov chain framework takes advantage of the discreteness of high-frequency returns. We study the finite sample properties of the estimator in a simulation study and apply it to high-frequency commodity prices. |
Keywords: | Markov chain, Multivariate Volatility, Quadratic Variation, Integrated Variance, Realized Variance, High Frequency Data |
JEL: | C10 C22 C80 |
Date: | 2015–03–30 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2015-19&r=ecm |
By: | Takuya Hasebe (Faculty of Liberal Arts, Sophia University) |
Abstract: | We derive the asymptotic variance of the Blinder-Oaxaca decomposition effects. We show that the delta method approach that builds on the assumption of fixed regressors understates true variability of the decomposition effects when regressors are stochastic. Our proposed variance estimator takes randomness of regressors into consideration. Our approach is applicable to both the linear and nonlinear decompositions, for the latter of which only a bootstrap method is an option. As our derivation follows the general framework of m-estimation, it is straightforward to extend to the cluster-robust variance estimator. We demonstrate the finite-sample performance of our variance estimator with a Monte Carlo study and present a real-data application. |
Keywords: | Decomposition Analysis, Asymptotic Variance, m-estimation |
JEL: | C10 J70 |
Date: | 2015–04–15 |
URL: | http://d.repec.org/n?u=RePEc:cgc:wpaper:006&r=ecm |
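The twofold Blinder-Oaxaca decomposition underlying the paper above can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' variance estimator; all variable names and the data-generating process are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols(X, y):
    # Ordinary least squares coefficients via least squares.
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Synthetic data for two groups A and B (intercept + one regressor).
nA, nB = 500, 500
xA = rng.normal(1.0, 1.0, nA)
xB = rng.normal(0.5, 1.0, nB)
yA = 1.0 + 2.0 * xA + rng.normal(0, 1, nA)
yB = 0.5 + 1.5 * xB + rng.normal(0, 1, nB)

XA = np.column_stack([np.ones(nA), xA])
XB = np.column_stack([np.ones(nB), xB])
bA, bB = ols(XA, yA), ols(XB, yB)

# Twofold decomposition of the mean outcome gap, using group B's
# coefficients as the reference.
gap = yA.mean() - yB.mean()
explained = (XA.mean(0) - XB.mean(0)) @ bB    # endowments effect
unexplained = XA.mean(0) @ (bA - bB)          # coefficients effect
```

Because OLS with an intercept fits the group means exactly, the two components sum to the raw mean gap; the paper's contribution is the asymptotic variance of these components when the regressors are treated as stochastic.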
By: | Patrick Bajari; Victor Chernozhukov; Han Hong; Denis Nekipelov |
Abstract: | In this paper, we study the identification and estimation of a dynamic discrete game allowing for discrete or continuous state variables. We first provide a general nonparametric identification result under the imposition of an exclusion restriction on agent payoffs. Next we analyze large sample statistical properties of nonparametric and semiparametric estimators for the econometric dynamic game model. We also show how to achieve semiparametric efficiency of dynamic discrete choice models using a sieve based conditional moment framework. Numerical simulations are used to demonstrate the finite sample properties of the dynamic game estimators. An empirical application to the dynamic demand of the potato chip market shows that this technique can provide a useful tool to distinguish long term demand from short term demand by heterogeneous consumers. |
JEL: | C01 C14 C7 C73 L0 |
Date: | 2015–04 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:21125&r=ecm |
By: | Steven E. Pav |
Abstract: | The upsilon distribution, the sum of independent chi random variates and a normal, is introduced. As a special case, the upsilon distribution includes Lecoutre's lambda-prime distribution. The upsilon distribution finds application in Frequentist inference on the Sharpe ratio, including hypothesis tests on independent samples, confidence intervals, and prediction intervals, as well as their Bayesian counterparts. These tests are extended to the case of factor models of returns. |
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1505.00829&r=ecm |
By: | Ignace De Vos; Gerdie Everaert; Ilse Ruyssen (-) |
Abstract: | This article describes a new Stata routine, xtbcfe, that performs the iterative bootstrap-based bias correction for the fixed effects (FE) estimator in dynamic panels proposed by Everaert and Pozzi (Journal of Economic Dynamics and Control, 2007). We first simplify the core of their algorithm using the invariance principle and subsequently extend it to allow for unbalanced and higher order dynamic panels. We implement various bootstrap error resampling schemes to account for general heteroscedasticity and contemporaneous cross-sectional dependence. Inference can be performed using a bootstrapped variance-covariance matrix or percentile intervals. Monte Carlo simulations show that the simplification of the original algorithm results in a further bias reduction for very small T. The Monte Carlo results also support the bootstrap-based bias correction in higher order dynamic panels and panels with cross-sectional dependence. We illustrate the routine with an empirical example estimating a dynamic labour demand function. |
Keywords: | st0001, xtbcfe, bootstrap-based bias correction, dynamic panel data, unbalanced, higher order, heteroscedasticity, cross-sectional dependence, Monte Carlo, labour demand |
Date: | 2015–04 |
URL: | http://d.repec.org/n?u=RePEc:rug:rugwps:15/906&r=ecm |
By: | Goodwin, Roger L |
Abstract: | This book is a practical reference guide accompanied by an Excel workbook. It gives an elementary introduction to the weighted standard deviational ellipse and also presents the computational aspects of weighted exponential distributions. For the examples given, calculations are performed using VBA for Excel. The book makes comparisons (and shows the computations via VBA for Excel) using the likelihood functions of the weighted ellipses with spatial data. Lastly, the book covers spherical statistics. Throughout the text, readers can see how to perform these difficult calculations and learn to adapt the code for their own research. |
Keywords: | spatial, remote sensing, longitudinal data, statistics, weibull distribution, exponential distribution, weighted data, map point, google earth, weighted regression, ellipse, area, random variables, spherical statistics, VBA for Excel, mean center, eccentricity, axis length, rotation, weighted mean center, mean latitude, mean longitude, distribution fitting, spherical variance, eigenvalue, eigenvector |
JEL: | C46 |
Date: | 2014–12–06 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:64101&r=ecm |
By: | Yuki Kawakubo (Graduate School of Economics, The University of Tokyo); Tatsuya Kubokawa (Faculty of Economics, The University of Tokyo); Muni S. Srivastava (Department of Statistics, University of Toronto) |
Abstract: | We propose an information criterion which measures the prediction risk of the predictive density based on the Bayesian marginal likelihood from a frequentist point of view. We derive the criteria for selecting variables in linear regression models by putting a prior on the regression coefficients, and discuss the relationship between the proposed criteria and other related ones. There are three advantages of our method. Firstly, it is a compromise between the frequentist and Bayesian standpoints because it evaluates the frequentist risk of the Bayesian model, and is thus less influenced by prior misspecification. Secondly, a non-informative improper prior can also be used for constructing the criterion. When the uniform prior is assumed on the regression coefficients, the resulting criterion is identical to the residual information criterion (RIC) of Shi and Tsai (2002). Lastly, the criteria have the consistency property for selecting the true model. |
Date: | 2015–04 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2015cf971&r=ecm |
By: | Dinda, Soumyananda |
Abstract: | This paper assesses the impact of a treatment or programme by applying the difference-in-differences (DD) approach. The study also identifies conditions under which DD estimators are biased. |
Keywords: | Difference-in-Difference, Treatment group, Control group, DD Estimator |
JEL: | C1 C18 C52 H4 H43 O1 O4 |
Date: | 2015–04–27 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:63949&r=ecm |
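In its simplest 2×2 form, the DD estimator discussed above compares outcome changes between treatment and control groups across two periods. A minimal sketch with made-up group means:

```python
import numpy as np

# Mean outcomes: rows = (control, treatment), cols = (pre, post).
# The numbers are purely illustrative.
means = np.array([[10.0, 12.0],    # control group
                  [11.0, 16.0]])   # treatment group

# Difference-in-differences:
# (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
dd = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])
```

The control-group change (12 − 10 = 2) nets out the common time trend, so the estimated treatment effect is 5 − 2 = 3. The paper's point is that this identification fails when the parallel-trends assumption behind the subtraction does not hold.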
By: | Maria Guadarrama Sanz; Isabel Molina Peralta; J. N. K. Rao |
Abstract: | Poverty maps are an important source of information on the regional distribution of poverty and are currently used to support regional policy making and to allocate funds to local jurisdictions. But obtaining accurate poverty maps at low levels of disaggregation is not straightforward because of insufficient sample size of official surveys in some of the target regions. Direct estimates, obtained with the region-specific sample data, are unstable in the sense of having very large sampling errors for regions with small sample size. Very unstable poverty estimates might make the seemingly poorer regions in one period appear as the richer in the next period, which can be inconsistent. On the other hand, very stable but biased estimates (e.g., too homogeneous across regions) might make identification of the poorer regions difficult. Here we review the main small area estimation methods for poverty mapping. In particular, we consider direct estimation, the Fay-Herriot area level model, the method of Elbers, Lanjouw and Lanjouw (2003) used by the World Bank, the empirical Best/Bayes (EB) method of Molina and Rao (2010) and its extension, the Census EB, and finally the hierarchical Bayes proposal of Molina, Nandram and Rao (2014). We take the point of view of a practitioner and discuss, as objectively as possible, the benefits and drawbacks of each method, illustrating some of them through simulation studies. |
Keywords: | Area level model , Unit level models , Empirical best estimator , Hierarchical Bayes , Poverty mapping |
Date: | 2015–04 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:ws1505&r=ecm |
By: | Comola, Margherita; Fafchamps, Marcel |
Abstract: | Many studies have used self-reported dyadic data without exploiting the pattern of discordant answers. In this paper we propose a maximum likelihood estimator that deals with mis-reporting in a systematic way. We illustrate the methodology using dyadic data on inter-household transfers from the village of Nyakatoke in Tanzania, investigating the role of wealth in link formation. Our results suggest that observed transfers are grounded in mutual self-interest, and we show that not taking reporting bias into account leads to incorrect inference and serious underestimation of the total amount of transfers between villagers. The method introduced here is applicable whenever the researcher has two discordant measurements of the same dependent variable. |
Keywords: | dyadic data; informal transfer; reporting bias; social networks |
JEL: | C13 C51 D85 |
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:10575&r=ecm |
By: | Korobilis, Dimitris |
Abstract: | There is a vast literature that specifies Bayesian shrinkage priors for vector autoregressions (VARs) of possibly large dimensions. In this paper I argue that many of these priors are not appropriate for multi-country settings, which motivates me to develop priors for panel VARs (PVARs). The parametric and semi-parametric priors I suggest not only perform valuable shrinkage in large dimensions, but also allow for soft clustering of variables or countries which are homogeneous. I discuss the implications of these new priors for modelling interdependencies and heterogeneities among different countries in a panel VAR setting. Monte Carlo evidence and an empirical forecasting exercise show clear and important gains of the new priors compared to existing popular priors for VARs and PVARs. |
Keywords: | Bayesian model selection; shrinkage; spike and slab priors; forecasting; large vector autoregression |
JEL: | C11 C32 C33 C52 |
Date: | 2015–04 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:64143&r=ecm |
By: | G. Papadopoulos; D. Kugiumtzis |
Abstract: | A new method is proposed to compute connectivity measures on multivariate time series with gaps. Rather than removing or filling the gaps, the rows of the joint data matrix containing empty entries are removed and the calculations are done on the remainder matrix. The method, called measure adapted gap removal (MAGR), can be applied to any connectivity measure that uses a joint data matrix, such as cross correlation, cross mutual information and transfer entropy. Using these three measures, MAGR compares favorably to a number of known gap-filling techniques, as well as to gap closure. The superiority of MAGR is illustrated on time series from synthetic systems and on financial time series. |
Date: | 2015–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1505.00003&r=ecm |
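The core idea of MAGR — drop rows of the joint data matrix that contain gaps and compute the measure on the remainder — can be sketched for lagged cross-correlation. This is our own minimal illustration (function name and data are ours); the paper applies the same row-removal idea to cross mutual information and transfer entropy as well:

```python
import numpy as np

def magr_crosscorr(x, y, lag=1):
    """Lagged cross-correlation of y on x, computed only on rows of the
    joint matrix [x_t, y_{t+lag}] that contain no gaps (NaN entries)."""
    joint = np.column_stack([x[:-lag], y[lag:]])
    keep = ~np.isnan(joint).any(axis=1)   # remove rows with empty entries
    a, b = joint[keep, 0], joint[keep, 1]
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = 0.8 * np.roll(x, 1) + 0.2 * rng.normal(size=1000)  # y lags x by one step
y[rng.choice(1000, 100, replace=False)] = np.nan       # introduce gaps
r = magr_crosscorr(x, y, lag=1)
```

Only the gapped rows are discarded, so the estimate is computed on genuine joint observations rather than interpolated values.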
By: | Nicola Mingotti; Rosa E. Lillo; Juan Romo |
Abstract: | In this paper we introduce a random walk test for functional autoregressive processes of order one. The test is nonparametric, based on the bootstrap and functional principal components. The power of the test is shown through an extensive Monte Carlo simulation. We apply the test to two real datasets, Bitcoin prices and electrical energy consumption in France. |
Keywords: | Autoregressive Process , FAR(1) , Unit root , Bootstrap , Computational Statistics , Hypothesis test , Principal Components |
Date: | 2015–04 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:ws1506&r=ecm |
By: | Peter Reinhard Hansen (European University Institute and CREATES) |
Abstract: | We consider a multivariate time series whose increments are given from a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed-form. This representation is useful for the analysis of time series that are confined to a grid, such as financial high frequency data. |
Keywords: | Markov Chain; Martingale; Beveridge-Nelson Decomposition |
JEL: | C10 C22 C58 |
Date: | 2015–04–01 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2015-18&r=ecm |
By: | Wei Qian; Craig A. Rolling; Gang Cheng; Yuhong Yang |
Abstract: | It is often reported in forecast combination literature that a simple average of candidate forecasts is more robust than sophisticated combining methods. This phenomenon is usually referred to as the "forecast combination puzzle". Motivated by this puzzle, we explore its possible explanations including estimation error, invalid weighting formulas and model screening. We show that existing understanding of the puzzle should be complemented by the distinction of different forecast combination scenarios known as combining for adaptation and combining for improvement. Applying combining methods without consideration of the underlying scenario can itself cause the puzzle. Based on our new understandings, both simulations and real data evaluations are conducted to illustrate the causes of the puzzle. We further propose a multi-level AFTER strategy that can integrate the strengths of different combining methods and adapt intelligently to the underlying scenario. In particular, by treating the simple average as a candidate forecast, the proposed strategy is shown to avoid the heavy cost of estimation error and, to a large extent, solve the forecast combination puzzle. |
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1505.00475&r=ecm |
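The combining-for-adaptation idea in the paper above — treat the simple average as one more candidate and let a performance-weighted scheme adapt to it — can be sketched with a simplified exponential-weighting rule. This AFTER-flavoured update is our own stand-in, not the authors' multi-level AFTER strategy, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
truth = np.cumsum(rng.normal(size=T))

# Two hypothetical candidate forecasters with independent unit-variance errors.
f1 = truth + rng.normal(0, 1.0, T)
f2 = truth + rng.normal(0, 1.0, T)
cands = np.vstack([f1, f2, (f1 + f2) / 2])  # simple average as a candidate

# Sequential exponentially weighted combination: the weight of each
# candidate at time t depends only on its past squared errors.
w = np.ones(3) / 3
combined = np.empty(T)
for t in range(T):
    combined[t] = w @ cands[:, t]          # forecast before truth[t] is seen
    sq_err = (cands[:, t] - truth[t]) ** 2
    w = w * np.exp(-0.5 * sq_err)          # downweight poor performers
    w /= w.sum()
```

Because the simple average halves the error variance of either individual forecaster here, the weighting scheme gravitates toward it, which is exactly the mechanism the paper proposes for sidestepping the combination puzzle.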
By: | James Rockey; Jonathan Temple |
Abstract: | The issue of model uncertainty is central to the empirical study of economic growth. Many recent papers use Bayesian Model Averaging to address model uncertainty, but Ciccone and Jarociński (2010) have questioned the approach on theoretical and empirical grounds. They argue that a standard ‘agnostic’ approach is too sensitive to small changes in the dependent variable, such as those associated with different vintages of the Penn World Table (PWT). This paper revisits their theoretical arguments and empirical illustration, drawing on more recent vintages of the PWT, and introducing an approach that limits the degree of agnosticism. |
Keywords: | Bayesian Model Averaging, Growth Regressions, Growth Econometrics. |
JEL: | C51 O40 O47 |
Date: | 2015–05–05 |
URL: | http://d.repec.org/n?u=RePEc:bri:uobdis:15/656&r=ecm |
By: | Damien Challet |
Abstract: | Sharpe ratios are much used in finance, yet cannot be measured directly because price returns are non-Gaussian. On the other hand, the number of records of a discrete-time random walk in a given time interval follows a Gaussian distribution provided that its increment distribution has finite variance. As a consequence, record statistics of uncorrelated, biased random walks provide an attractive new estimator of Sharpe ratios. First, I derive an approximate expression for the expected number of price records in a given time interval when the increments follow Student's t distribution with tail exponent equal to 4, in the limit of vanishing Sharpe ratios. Remarkably, this expression explicitly links the expected record numbers to Sharpe ratios and suggests estimating the average Sharpe ratio from record statistics. Numerically, the asymptotic efficiency of a permutation estimator of Sharpe ratios based on record statistics is several times larger than that of the t-statistics for uncorrelated returns with a Student's t distribution with tail exponent of 4. |
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1505.01333&r=ecm |
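The record-counting primitive behind the estimator above is simple to state: a record occurs whenever the cumulated price path reaches a new running maximum. A minimal sketch (the drift values and counting convention are our own illustrative choices; the increments use Student's t with tail exponent 4, as in the paper):

```python
import numpy as np

def count_records(path):
    """Number of upper records (new running maxima) along a path,
    counting the first observation as a record."""
    return int(np.sum(path == np.maximum.accumulate(path)))

rng = np.random.default_rng(3)
n = 10_000
# Walks with Student-t(4) increments and opposite drifts.
up = np.cumsum(rng.standard_t(df=4, size=n) + 0.2)
down = np.cumsum(rng.standard_t(df=4, size=n) - 0.2)
r_up, r_down = count_records(up), count_records(down)
```

A positive drift (positive Sharpe ratio) produces systematically more records than a negative one, which is the monotone link the paper inverts to estimate the Sharpe ratio from record counts.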
By: | Sohei Kaihatsu (Bank of Japan); Jouchi Nakajima (Bank of Japan) |
Abstract: | This paper proposes a new econometric framework for estimating trend inflation and the slope of the Phillips curve with a regime-switching model. As a unique aspect of our approach, we assume regimes for the trend inflation at one-percent intervals, and estimate the probability of the trend inflation being in each regime. Describing trend inflation in this discrete manner provides an easily interpretable explanation of the estimation results as well as a robust estimate. An empirical result indicates that Japan's trend inflation stayed at zero percent for about 15 years after the late 1990s, and then shifted away from zero percent after the introduction of the price stability target and the quantitative and qualitative monetary easing. The U.S. result shows a considerably stable trend inflation at two percent since the late 1990s. |
Keywords: | Phillips curve; Regime-switching model; Trend inflation |
JEL: | C22 E31 E42 E52 E58 |
Date: | 2015–05–01 |
URL: | http://d.repec.org/n?u=RePEc:boj:bojwps:wp15e03&r=ecm |
By: | Francisco J. Bahamonde-Birke; Uwe Kunert; Heike Link; Juan de Dios Ortúzar |
Abstract: | We provide an in-depth theoretical discussion of the differences between attitudes and perceptions, as well as an empirical exercise to analyze their effects. This discussion is important, as the large majority of papers considering attitudinal latent variables simply treat them as attributes directly affecting the utility of a certain alternative, while systematic taste variations are rarely taken into account and perceptions are normally ignored altogether. The results of our case study show that perceptions may indeed affect the decision-making process and that they are able to capture a significant part of the variability that is normally explained by alternative-specific constants. Along the same lines, our results indicate that attitudes may be a source of systematic taste variations, and that a proper categorization of the latent variables, in accordance with the underlying theory, may outperform the customary assumption of linearity. |
Keywords: | Hybrid Discrete Choice Modelling, Latent Variables, Attitudes, Perceptions |
JEL: | C50 |
Date: | 2015 |
URL: | http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1474&r=ecm |
By: | Nicholas M. Kiefer (Cornell University, Ithaca, and CREATES); C. Erik Larson (Promontory Financial Group, LLC) |
Abstract: | Counting processes provide a very flexible framework for modeling discrete events occurring over time. Estimation and interpretation are easy, and links to more familiar approaches are at hand. The key is to think of data as "event histories," a record of times of switching between states in a discrete state space. In a simple case, the states could be default/non-default; in other models relevant for credit modeling the states could be credit scores or payment status (30 dpd, 60 dpd, etc.). Here we focus on the use of stochastic counting processes for mortgage default modeling, using data on high-LTV mortgages. Borrowers seeking to finance more than 80% of a house's value with a mortgage usually either purchase mortgage insurance, allowing a first mortgage greater than 80% from many lenders, or use second mortgages. Are there differences in performance between loans financed by these different methods? We address this question in the counting process framework. In fact, MI is associated with lower default rates for both fixed rate and adjustable rate first mortgages. |
Keywords: | Econometrics, Aalen Estimator, Duration Modeling, Mortgage Insurance, Loan-to-Value |
JEL: | C51 C52 C58 C33 C35 |
Date: | 2015–04–28 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2015-17&r=ecm |
By: | Dietmar Pfeifer; Hervé Awoumlac Tsatedem |
Abstract: | We construct new multivariate copulas on the basis of a generalized infinite partition-of-unity approach. This approach allows - in contrast to finite partition-of-unity copulas - for tail-dependence as well as for asymmetry. A possibility of fitting such copulas to real data from quantitative risk management is also pointed out. |
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1505.00288&r=ecm |