New Economics Papers on Econometrics |
By: | Marx, Benjamin M. |
Abstract: | An increasingly common technique for studying behavioral elasticities uses bunching estimation of the excess mass in a distribution around a price or policy threshold. This paper shows how serial dependence of the choice variable and extensive-margin responses may bias these estimates. It then proposes new bunching designs that take greater advantage of panel data to avoid these biases and estimate new parameters. Standard methods over-reject in simulations using household income data and over-estimate bunching in an application with charities. Designs exploiting panel data provide unbiased bunching estimates, improved heterogeneity analysis, and the ability to estimate extensive-margin responses and long-run effects. |
Keywords: | bunching, estimation, panel |
JEL: | C23 D92 H21 |
Date: | 2018–08–23 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:88647&r=ecm |
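The baseline cross-sectional method the paper critiques can be sketched in a few lines: fit a flexible counterfactual to the choice-variable histogram while excluding a window around the threshold, then measure the excess mass in that window. A minimal illustration on simulated data, with hypothetical choices of bin width, excluded window, and polynomial degree (this is the standard estimator, not the paper's panel designs):

```python
# Illustrative sketch of a standard bunching estimator; all parameter
# choices and the simulated data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
kink, width = 10_000.0, 100.0            # policy threshold and bin width
z = rng.lognormal(mean=9.2, sigma=0.5, size=200_000)
z[(z > kink) & (z < kink * 1.1)] = kink  # inject bunching at the kink

bins = np.arange(5_000, 20_000, width)
counts, edges = np.histogram(z, bins=bins)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit a polynomial counterfactual, excluding a window around the kink.
exclude = np.abs(centers - kink) <= 3 * width
u = (centers - kink) / 1000.0            # rescale to avoid ill-conditioning
coef = np.polyfit(u[~exclude], counts[~exclude], deg=5)
cf = np.polyval(coef, u)

# Excess mass: observed minus counterfactual counts in the window,
# scaled by the counterfactual count at the threshold.
b = (counts[exclude] - cf[exclude]).sum() / cf[np.argmin(np.abs(centers - kink))]
print(f"normalized excess mass b = {b:.2f}")
```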
By: | Zheng, Y.; Gohin, A. |
Abstract: | This paper proposes a generalized maximum entropy (GME) approach to estimating nonlinear dynamic stochastic decision models. For these models, the state variables are latent and a solution process is required to obtain the state space representation. To our knowledge, this method has not been used to estimate dynamic stochastic general equilibrium (DSGE) or DSGE-like models. Based on Monte Carlo experiments with simulated data, we show that the GME approach yields precise estimates of the unknown structural parameters and the structural shocks. In particular, the preference parameter, which captures both risk preference and intertemporal preference, is also estimated relatively precisely. Compared with the more widely used filtering methods, the GME approach provides a similar level of accuracy but much higher computational efficiency for nonlinear models. Moreover, the proposed approach shows favorable properties in small samples. |
Keywords: | Agricultural and Food Policy |
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:ags:iaae18:276001&r=ecm |
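For readers unfamiliar with GME, the core idea is to reparameterize unknown coefficients and errors as expected values over discrete support points and to maximize the entropy of the support weights subject to the data constraints. A toy linear-regression version in the spirit of Golan, Judge and Miller follows; the paper's DSGE application is far richer, and the supports and sizes below are illustrative assumptions:

```python
# Toy GME estimation of y = X b + e; supports and sizes are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, K, M = 20, 2, 3
X = rng.normal(size=(T, K))
b_true = np.array([1.0, -0.5])
y = X @ b_true + rng.normal(scale=0.3, size=T)

zb = np.array([-5.0, 0.0, 5.0])      # support points for each coefficient
zv = np.array([-1.0, 0.0, 1.0])      # support points for each error

def unpack(theta):
    p = theta[:K * M].reshape(K, M)  # coefficient weights
    w = theta[K * M:].reshape(T, M)  # error weights
    return p, w

def neg_entropy(theta):              # minimizing this maximizes entropy
    t = np.clip(theta, 1e-10, None)
    return np.sum(t * np.log(t))

cons = [{"type": "eq",               # each weight vector sums to one
         "fun": lambda th: unpack(th)[0].sum(axis=1) - 1.0},
        {"type": "eq",
         "fun": lambda th: unpack(th)[1].sum(axis=1) - 1.0},
        {"type": "eq",               # data constraints y = X(Z p) + V w
         "fun": lambda th: X @ (unpack(th)[0] @ zb)
                           + unpack(th)[1] @ zv - y}]

theta0 = np.full(K * M + T * M, 1.0 / M)
res = minimize(neg_entropy, theta0, constraints=cons,
               bounds=[(1e-10, 1.0)] * theta0.size,
               method="SLSQP", options={"maxiter": 500})
p_hat, _ = unpack(res.x)
print("GME estimates:", p_hat @ zb)  # compare with b_true
```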
By: | Giuseppe Arbia (Catholic University of the Sacred Heart, Rome, Department of Statistical Science); Anna Gloria Billé (Free University of Bozen-Bolzano, Faculty of Economics and Management) |
Abstract: | Modeling individual choices is one of the main aims of microeconometrics. Discrete choice models have been widely used to describe economic agents' utility functions, and most of them play a paramount role in applied health economics. Spatial econometrics, in turn, collects a series of econometric tools that are particularly useful when dealing with spatially distributed data sets. It has been demonstrated that accounting for spatial dependence can avoid inconsistency problems in commonly used estimators. However, the complex structure of spatial dependence in most nonlinear models still precludes a wide diffusion of these spatial techniques. The purpose of this paper is therefore twofold. The first is to review the main methodological problems and their different solutions in spatial discrete choice modeling as they have appeared in the econometric literature. The second is to review applications to health issues, especially from the last few years, highlighting at least two main reasons why spatial neighboring effects should be considered in discrete choice models and suggesting possible future lines of development for this emerging field. |
Keywords: | Discrete Choice Modeling, Health Economics, Spatial Econometrics |
JEL: | C31 C35 C51 I10 |
Date: | 2018–09 |
URL: | http://d.repec.org/n?u=RePEc:bzn:wpaper:bemps54&r=ecm |
By: | Autcha Araveeporn (King Mongkut's Institute of Technology Ladkrabang) |
Abstract: | The goal of this research is to estimate the parameters of the logistic regression model. The coefficient parameters are estimated by maximum likelihood, ridge regression, and Markov chain Monte Carlo methods. The logistic regression considers the relationship between a binary dependent variable and 2, 3, or 4 independent variables, which are generated from the normal distribution, a contaminated normal distribution, and the t distribution. The maximum likelihood estimator is obtained by differentiating the log-likelihood function with respect to the coefficients. Ridge regression chooses the unknown ridge parameter by cross-validation, so the ridge estimator takes the form of a maximum likelihood estimator augmented by the ridge parameter. The Markov chain Monte Carlo estimator is approximated by a Gibbs sampling algorithm from the posterior distribution, which combines the likelihood with a prior probability distribution. The performance of these methods is compared by the percentage of accurately predicted values. The results show that ridge regression is preferable when the independent variables are simulated from the normal distribution, while maximum likelihood outperforms it on the other distributions. |
Keywords: | Maximum Likelihood, Ridge Regression, Markov Chain Monte Carlo |
JEL: | C13 C15 |
Date: | 2018–06 |
URL: | http://d.repec.org/n?u=RePEc:sek:iacpro:6409196&r=ecm |
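A compact sketch of the three estimators being compared, fitted to simulated data. The paper uses a Gibbs sampler for the Bayesian estimator; a simple random-walk Metropolis sampler with a flat prior is substituted here for brevity, so everything below is illustrative rather than the paper's procedure:

```python
# Maximum likelihood vs. cross-validated ridge vs. MCMC for logistic
# regression on simulated data; all settings are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

rng = np.random.default_rng(2)
n, k = 500, 3
X = rng.normal(size=(n, k))
beta = np.array([1.0, -2.0, 0.5])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ beta)))).astype(int)

ml = LogisticRegression(penalty=None).fit(X, y)
ridge = LogisticRegressionCV(Cs=10, cv=5).fit(X, y)   # L2 by default

def log_post(b):                 # flat prior => posterior ∝ likelihood
    eta = X @ b
    return np.sum(y * eta - np.log1p(np.exp(eta)))

draws, b = [], np.zeros(k)
for _ in range(5000):            # random-walk Metropolis sampler
    prop = b + 0.1 * rng.normal(size=k)
    if np.log(rng.random()) < log_post(prop) - log_post(b):
        b = prop
    draws.append(b)
post_mean = np.mean(draws[1000:], axis=0)   # discard burn-in

print("ML:   ", ml.coef_.ravel())
print("Ridge:", ridge.coef_.ravel())
print("MCMC: ", post_mean)
```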
By: | Daouia, Abdelaati; Girard, Stéphane; Stupfler, Gilles |
Abstract: | Expectiles define a least squares analogue of quantiles. They are determined by tail expectations rather than tail probabilities. For this reason and many other theoretical and practical merits, expectiles have recently received a lot of attention, especially in actuarial and financial risk management. Their estimation, however, typically requires considering non-explicit asymmetric least squares estimates rather than the traditional order statistics used for quantile estimation. This makes the study of the tail expectile process a lot harder than that of the standard tail quantile process. Under the challenging model of heavy-tailed distributions, we derive joint weighted Gaussian approximations of the tail empirical expectile and quantile processes. We then use this powerful result to introduce and study new estimators of extreme expectiles and the standard quantile-based expected shortfall, as well as a novel expectile-based form of expected shortfall. Our estimators are built on general weighted combinations of both top order statistics and asymmetric least squares estimates. Some numerical simulations and applications to actuarial and financial data are provided. |
Keywords: | Asymmetric least squares; Coherent risk measures; Expected shortfall; Expectile; Extrapolation; Extremes; Heavy tails; Tail index |
Date: | 2018–08 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:32890&r=ecm |
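The expectile itself is the solution of an asymmetric least squares problem, which makes the contrast with order-statistic-based quantiles concrete. A minimal sample estimator by fixed-point iteration on the first-order condition; the paper's extreme-value extrapolation beyond the sample is not reproduced:

```python
# Sample tau-expectile via asymmetric least squares; illustrative only.
import numpy as np

def expectile(y, tau, tol=1e-10, max_iter=1000):
    """argmin_e E[|tau - 1{y<=e}| * (y-e)^2], solved by fixed point."""
    e = y.mean()                          # tau = 0.5 gives the mean
    for _ in range(max_iter):
        w = np.where(y > e, tau, 1.0 - tau)
        e_new = np.sum(w * y) / np.sum(w)  # first-order condition
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e

rng = np.random.default_rng(3)
y = rng.pareto(3.0, size=100_000)          # heavy-tailed sample
print("0.99-expectile:", expectile(y, 0.99))
print("0.99-quantile: ", np.quantile(y, 0.99))
```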
By: | Asai, M.; Peiris, S.; McAleer, M.J.; Allen, D.E. |
Abstract: | Recent developments in econometric methods enable estimation and testing of general long memory processes, which include the general Gegenbauer process. This paper considers the error correction model for a vector general long memory process, which encompasses the vector autoregressive fractionally-integrated moving average and general Gegenbauer processes. We modify the tests for unit roots and cointegration based on the concept of heterogeneous autoregression. Monte Carlo simulations show that the finite sample properties of the modified tests are satisfactory, while the conventional tests suffer from size distortion. Empirical results for interest rate series for the U.S.A. and Australia indicate that: (1) the modified unit root test detects unit roots for all series, (2) after differencing, all series favour the general Gegenbauer process, (3) the modified test for cointegration finds only two cointegrating vectors, and (4) the zero interest rate policy in the U.S.A. has no effect on the cointegrating vector for the two countries. |
Keywords: | Long Memory Processes, Gegenbauer Process, Dickey-Fuller Tests, Cointegration, Differencing, Interest Rates |
JEL: | C22 C32 C51 |
Date: | 2018–08–01 |
URL: | http://d.repec.org/n?u=RePEc:ems:eureir:110018&r=ecm |
By: | Anna Gloria Billé (Free University of Bolzano‐Bozen, Faculty of Economics, Italy); Leopoldo Catania (Aarhus University, Department of Economics and Business Economics and CREATES, Denmark) |
Abstract: | We propose a new spatio-temporal model with time-varying spatial weighting matrices. We allow for a general parameterization of the spatial matrix, such as: (i) a function of the inverse distances among pairs of units raised to the power of an unknown time-varying distance decay parameter, and (ii) a negative exponential function of the time-varying parameter as in (i). The filtering procedure for the time-varying parameters uses the information in the score of the conditional distribution of the observables. An extensive Monte Carlo simulation study investigating the finite sample properties of the ML estimator is reported. We analyze the association between the perceived risk of eight European countries, finding that the economically strong countries see their perceived risk increased by their spatial connection with the economically weaker countries, and we investigate the evolution of the spatial connection between house prices in different areas of the UK, identifying periods when the usually adopted sparse weighting matrix is not sufficient to describe the underlying spatial process. |
Keywords: | Dynamic spatial autoregressive models, Time-varying weighting matrices, Distance decay functions |
JEL: | C33 C61 C58 |
Date: | 2018–09 |
URL: | http://d.repec.org/n?u=RePEc:bzn:wpaper:bemps55&r=ecm |
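The two weighting schemes in (i) and (ii) are straightforward to construct once the decay parameter is given; the paper's score-driven filtering of that parameter over time is the substantive contribution and is not reproduced in this sketch:

```python
# Row-normalized spatial weight matrices under the two decay schemes;
# the distance matrix and decay values are hypothetical.
import numpy as np

def spatial_weights(D, phi, scheme="power"):
    """W_ij = d_ij^{-phi} (power) or exp(-phi * d_ij) (exponential)."""
    off = D > 0                       # leave the diagonal at zero
    W = np.zeros_like(D)
    if scheme == "power":
        W[off] = D[off] ** -phi
    else:
        W[off] = np.exp(-phi * D[off])
    return W / W.sum(axis=1, keepdims=True)

D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.5],
              [2.0, 1.5, 0.0]])
for phi in (0.5, 2.0):                # larger phi => faster decay
    print(spatial_weights(D, phi))
```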
By: | Jagjit S. Chadha (Centre for Macroeconomics (CFM); National Institute of Economic and Social Research (NIESR)); Katsuyuki Shibayama (University of Kent) |
Abstract: | Koop, Pesaran and Smith (2013) suggest a simple diagnostic indicator for the Bayesian estimation of the parameters of a DSGE model. They show that, if a parameter is well identified, the precision of the posterior should improve as the (artificial) data size T increases, and the indicator checks the speed at which precision improves. As it does not require any additional programming, a researcher just needs to generate artificial data and estimate the model with increasing sample size, T. We apply this indicator to the benchmark Smets and Wouters (2007) DSGE model of the US economy and suggest how to implement this indicator for DSGE models. |
Keywords: | Bayesian estimation, Dynamic stochastic general equilibrium |
JEL: | C51 C52 E32 |
Date: | 2018–09 |
URL: | http://d.repec.org/n?u=RePEc:cfm:wpaper:1825&r=ecm |
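The indicator is easy to emulate: simulate ever-larger artificial samples, re-estimate, and check whether posterior precision grows linearly in T. A conjugate normal-mean toy stands in for the DSGE posterior here, purely to show the scaling one should see for a well-identified parameter:

```python
# For an identified parameter, 1/posterior-variance grows like T,
# so T * posterior variance should stabilize; toy conjugate example.
import numpy as np

rng = np.random.default_rng(4)
sigma, mu_true = 1.0, 0.7
for T in (100, 400, 1600, 6400):
    y = rng.normal(mu_true, sigma, size=T)
    prec = 1.0 + T / sigma**2              # N(0,1) prior, known sigma
    post_mean = (T / sigma**2) * y.mean() / prec
    post_var = 1.0 / prec
    print(f"T={T:5d}  post mean={post_mean:.3f}  "
          f"post sd={np.sqrt(post_var):.4f}  T*var={T * post_var:.3f}")
```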
By: | Carvalho, Carlos (Central Bank of Brazil); Nechio, Fernanda (Federal Reserve Bank of San Francisco); Tristao, Tiago (Opus) |
Abstract: | Ordinary Least Squares (OLS) estimation of monetary policy rules produces potentially inconsistent estimates of policy parameters. The reason is that central banks react to variables, such as inflation and the output gap, which are endogenous to monetary policy shocks. Endogeneity implies a correlation between regressors and the error term, and hence, an asymptotic bias. In principle, Instrumental Variables (IV) estimation can solve this endogeneity problem. In practice, IV estimation poses challenges as the validity of potential instruments also depends on other economic relationships. We argue in favor of OLS estimation of monetary policy rules. To that end, we show analytically in the three-equation New Keynesian model that the asymptotic OLS bias is proportional to the fraction of the variance of regressors accounted for by monetary policy shocks. Using Monte Carlo simulation, we then show that this relationship also holds in a quantitative model of the U.S. economy. As monetary policy shocks explain only a small fraction of the variance of regressors typically included in monetary policy rules, the endogeneity bias is small. Using simulations, we show that, for realistic sample sizes, the OLS estimator of monetary policy parameters outperforms IV estimators. |
JEL: | E47 E50 E52 E58 |
Date: | 2018–09–06 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedfwp:2018-11&r=ecm |
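The analytical point translates directly into a small Monte Carlo: make policy shocks a larger share of regressor variance and the OLS bias grows in proportion. A deliberately simplified one-equation toy, not the authors' three-equation New Keynesian model:

```python
# OLS bias in a stylized policy rule i = phi*pi + eps, where inflation
# reacts to the policy shock: pi = u - a*eps. The plim of OLS is
# phi - a*var(eps)/var(pi), so the bias scales with the policy-shock
# share of var(pi). All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(5)
phi, a, T, reps = 1.5, 0.5, 500, 2000
for sd_eps in (0.05, 0.2, 0.5):        # size of monetary policy shocks
    est = []
    for _ in range(reps):
        eps = rng.normal(scale=sd_eps, size=T)   # policy shock
        u = rng.normal(scale=1.0, size=T)        # non-policy shock
        pi = u - a * eps                         # inflation reacts to policy
        i = phi * pi + eps                       # Taylor-type rule
        est.append(np.polyfit(pi, i, 1)[0])      # OLS slope
    share = (a * sd_eps) ** 2 / (1.0 + (a * sd_eps) ** 2)
    print(f"policy-shock share of var(pi)={share:.3f}  "
          f"mean OLS bias={np.mean(est) - phi:+.3f}")
```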
By: | Dario Buono; George Kapetanios; Massimiliano Marcellino; Gianluigi Mazzi; Fotis Papailias |
Abstract: | This paper aims to provide a primer on the use of big data in macroeconomic nowcasting and early estimation. We discuss: (i) a typology of big data characteristics relevant for macroeconomic nowcasting and early estimates, (ii) methods for extracting features from unstructured big data into usable time series, (iii) econometric methods that could be used for nowcasting with big data, (iv) some empirical nowcasting results for key target variables for four EU countries, and (v) ways to evaluate nowcasts and flash estimates. We conclude by providing a set of recommendations to assess the pros and cons of the use of big data in a specific empirical nowcasting context. |
Keywords: | Big Data, Nowcasting, Early Estimates, Econometric Methods |
JEL: | C32 C53 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:baf:cbafwp:cbafwp1882&r=ecm |
By: | Ying Fan (University of Michigan); Christopher Sullivan |
Abstract: | This paper provides a theoretically founded empirical model to simultaneously investigate firm competition and estimate markups. The model nests the standard oligopoly model but also allows for firm collusion. Unlike conduct parameter models, our model is consistent with a series of theoretical models. We show that a nonparametric marginal cost function can be identified, which gives an estimate of markups. Through Monte Carlo simulations, we show that our approach works better in estimating markups than a standard oligopoly model or a conduct parameter model. |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:red:sed018:764&r=ecm |
By: | Victor Chernozhukov; Whitney K Newey; Rahul Singh |
Abstract: | Many objects of interest can be expressed as an L2 continuous functional of a regression, including average treatment effects, economic average consumer surplus, expected conditional covariances, and discrete choice parameters that depend on expectations. Debiased machine learning (DML) of these objects requires learning a Riesz representer (RR). We provide here Lasso and Dantzig learners of the RR and corresponding learners of affine and other nonlinear functionals. We give an asymptotic variance estimator for DML. We allow for a wide variety of regression learners that can converge at relatively slow rates. We give conditions for root-n consistency and asymptotic normality of the functional learner. We give results for non-affine functionals in addition to affine functionals. |
Date: | 2018–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1809.05224&r=ecm |
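For orientation, the leading affine functional here is the average treatment effect, whose Riesz representer has the familiar inverse-propensity form. The sketch below plugs that known form into a cross-fitted DML moment; the paper's contribution, learning the representer itself by Lasso or Dantzig methods, is not reproduced:

```python
# Cross-fitted DML for the ATE with the known Riesz representer
# alpha(x) = d/e(x) - (1-d)/(1-e(x)); data and models illustrative.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegressionCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(6)
n, p = 2000, 10
X = rng.normal(size=(n, p))
e = 1.0 / (1.0 + np.exp(-X[:, 0]))          # true propensity
d = (rng.random(n) < e).astype(int)
y = 2.0 * d + X[:, 0] + rng.normal(size=n)  # true ATE = 2

Z = np.column_stack([d, X])
psi = np.zeros(n)
for train, test in KFold(5, shuffle=True, random_state=0).split(X):
    g = LassoCV(cv=5).fit(Z[train], y[train])            # outcome model
    m = LogisticRegressionCV(cv=5).fit(X[train], d[train])
    ehat = np.clip(m.predict_proba(X[test])[:, 1], 0.01, 0.99)
    g1 = g.predict(np.column_stack([np.ones(len(test)), X[test]]))
    g0 = g.predict(np.column_stack([np.zeros(len(test)), X[test]]))
    alpha = d[test] / ehat - (1 - d[test]) / (1 - ehat)  # Riesz representer
    resid = y[test] - g.predict(Z[test])
    psi[test] = g1 - g0 + alpha * resid                  # debiased moment

ate, se = psi.mean(), psi.std(ddof=1) / np.sqrt(n)
print(f"ATE = {ate:.3f} (se {se:.3f})")
```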
By: | Sebastian Calonico; Matias D. Cattaneo; Max H. Farrell |
Abstract: | Modern empirical work in Regression Discontinuity (RD) designs employs local polynomial estimation and inference with a mean square error (MSE) optimal bandwidth choice. This bandwidth yields an MSE-optimal RD treatment effect estimator, but is by construction invalid for inference. Robust bias corrected (RBC) inference methods are valid when using the MSE-optimal bandwidth, but we show they yield suboptimal confidence intervals in terms of coverage error. We establish valid coverage error expansions for RBC confidence interval estimators and use these results to propose new inference-optimal bandwidth choices for forming these intervals. We find that the standard MSE-optimal bandwidth for the RD point estimator must be shrunk when the goal is to construct RBC confidence intervals with the smallest coverage error rate. We further optimize the constant terms behind the coverage error to derive new optimal choices for the auxiliary bandwidth required for RBC inference. Our expansions also establish that RBC inference yields higher-order refinements (relative to traditional undersmoothing) in the context of RD designs. Our main results cover sharp and sharp kink RD designs under conditional heteroskedasticity, and we discuss extensions to fuzzy and other RD designs, clustered sampling, and pre-intervention covariate adjustments. The theoretical findings are illustrated with a Monte Carlo experiment and an empirical application, and the main methodological results are available in R and Stata packages. |
Date: | 2018–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1809.00236&r=ecm |
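A bare-bones sharp-RD point estimator helps fix ideas about what the bandwidth controls: local linear fits on each side of the cutoff with kernel weighting. This sketch uses a hand-picked bandwidth and none of the paper's coverage-error-optimal machinery, which ships in the authors' R and Stata packages:

```python
# Local linear sharp-RD estimate with a triangular kernel; the
# bandwidth h and simulated design are hypothetical choices.
import numpy as np

def rd_local_linear(x, y, cutoff=0.0, h=0.5):
    est = {}
    for side, mask in (("+", x >= cutoff), ("-", x < cutoff)):
        u = (x[mask] - cutoff) / h
        w = np.clip(1 - np.abs(u), 0, None)       # triangular kernel
        keep = w > 0
        W = np.diag(w[keep])
        Z = np.column_stack([np.ones(keep.sum()), u[keep]])
        beta = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ y[mask][keep])
        est[side] = beta[0]                       # intercept at cutoff
    return est["+"] - est["-"]

rng = np.random.default_rng(7)
x = rng.uniform(-1, 1, 3000)
y = 0.4 * (x >= 0) + np.sin(x) + rng.normal(scale=0.3, size=3000)
print("RD effect estimate:", rd_local_linear(x, y))  # true effect 0.4
```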
By: | Billio, Monica; Caporin, Massimiliano; Frattarolo, Lorenzo; Pelizzon, Loriana |
Abstract: | We propose a spatiotemporal approach for modeling risk spillovers using time-varying proximity matrices based on observable financial networks and introduce a new bilateral specification. We study covariance stationarity and identification of the model, and analyze consistency and asymptotic normality of the quasi-maximum-likelihood estimator. We show how to isolate risk channels and discuss how to compute target exposures able to reduce system variance. An empirical analysis of Euro-area cross-country holdings shows that Italy and Ireland are key players in spreading risk, France and Portugal are the major risk receivers, and we uncover Spain's non-trivial role as a risk middleman. |
Keywords: | spatial GARCH, network, risk spillover, financial spillover |
JEL: | C58 G10 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:zbw:safewp:225&r=ecm |
By: | Xolisa Vayi (Department of Economics, Nelson Mandela University); Andrew Phiri (Department of Economics, Nelson Mandela University) |
Abstract: | This study extends the recently introduced sequential panel selection method (SPSM) to a cointegration framework, which we use to investigate Wagner's law for the 9 South African provinces between 2001 and 2016. When applying single province/region estimates we fail to find evidence of cointegration, whereas in panel regressions cointegration effects are present for the entire dataset. Further applying the SPSM, we observe significant Wagner's effects for panels that include the Gauteng, Eastern Cape and KwaZulu-Natal provinces; when these provinces are excluded from the panels, cointegration effects are not observed. |
Keywords: | Sequential Panel selection method (SPSM), cointegration, Wagner’s law, Provincial analysis, South Africa. |
JEL: | C22 C23 C52 H70 |
Date: | 2018–09 |
URL: | http://d.repec.org/n?u=RePEc:mnd:wpaper:1831&r=ecm |
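The SPSM logic is easy to emulate: test each panel member, remove the unit with the strongest evidence of cointegration, and repeat until no remaining unit rejects. A stylized version on simulated data, with hypothetical province labels and pairwise Engle-Granger tests standing in for the panel tests used in the paper:

```python
# Stylized sequential panel selection for cointegration; data simulated.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(8)
T = 64                                    # e.g. quarterly, 2001-2016
units = {}
for name in ["prov_A", "prov_B", "prov_C", "prov_D"]:
    x = np.cumsum(rng.normal(size=T))     # I(1) regressor (e.g. income)
    cointegrated = name in ("prov_A", "prov_B")
    noise = (rng.normal(size=T) if cointegrated
             else np.cumsum(rng.normal(size=T)))
    units[name] = (x + noise, x)          # (spending, income) pair

remaining = dict(units)
while remaining:
    pvals = {k: coint(v[0], v[1])[1] for k, v in remaining.items()}
    best = min(pvals, key=pvals.get)      # strongest rejection
    if pvals[best] > 0.05:                # no unit rejects: stop
        break
    print(f"{best}: cointegration found (p={pvals[best]:.3f})")
    del remaining[best]
print("non-cointegrated units:", sorted(remaining))
```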
By: | Francis X. Diebold; Minchul Shin |
Abstract: | Despite the clear success of forecast combination in many economic environments, several important issues remain incompletely resolved. The issues relate to selection of the set of forecasts to combine, and whether some form of additional regularization (e.g., shrinkage) is desirable. Against this background, and also considering the frequently-found good performance of simple-average combinations, we propose a LASSO-based procedure that sets some combining weights to zero and shrinks the survivors toward equality ("partially-egalitarian LASSO"). Ex-post analysis reveals that the optimal solution has a very simple form: The vast majority of forecasters should be discarded, and the remainder should be averaged. We therefore propose and explore direct subset-averaging procedures motivated by the structure of partially-egalitarian LASSO and the lessons learned, which, unlike LASSO, do not require choice of a tuning parameter. Intriguingly, in an application to the European Central Bank Survey of Professional Forecasters, our procedures outperform simple average and median forecasts – indeed they perform approximately as well as the ex-post best forecaster. |
JEL: | C53 |
Date: | 2018–08 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:24967&r=ecm |
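The ex-post prescription ("discard most forecasters, average the rest") suggests a simple select-then-average procedure. The sketch below uses plain LASSO for the selection step rather than the paper's partially-egalitarian penalty, which shrinks weights toward 1/k instead of 0, so it approximates the idea rather than the proposed estimator:

```python
# Select forecasters by LASSO, then average the survivors equally;
# simulated forecasts, illustrative settings throughout.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(9)
T, K = 200, 30                        # periods, forecasters
y = rng.normal(size=T)                # target variable
skill = np.linspace(0.2, 2.0, K)      # heterogeneous forecaster noise
F = y[:, None] + rng.normal(size=(T, K)) * skill

lasso = LassoCV(cv=5, fit_intercept=False).fit(F, y)
survivors = np.flatnonzero(lasso.coef_ != 0)
combo = F[:, survivors].mean(axis=1)  # equal weights on the survivors

print(f"{len(survivors)} of {K} forecasters kept")
print("subset-average MSE:", np.mean((y - combo) ** 2))
print("grand-average MSE: ", np.mean((y - F.mean(axis=1)) ** 2))
```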
By: | Jiang, Yuan; House, Lisa A. |
Keywords: | Research Methods/Statistical Methods, Demand and Price Analysis, Marketing |
Date: | 2017–07–03 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea17:258342&r=ecm |
By: | Ellen Garbarino (University of Sydney Business School, Department of Marketing, Abercrombie building, Sydney, NSW 2006, Australia); Robert Slonim (University of Sydney, Department of Economics, Merewether building, Sydney, NSW 2006, Australia; IZA, Bonn, Germany); Marie Claire Villeval (Univ Lyon, CNRS, GATE L-SE UMR 5824, F-69131 Ecully, France) |
Abstract: | Studying the likelihood that individuals cheat requires a valid statistical measure of dishonesty. We develop an easy empirical method to measure and compare lying behavior within and across studies to correct for sampling errors. This method estimates the full distribution of lying when agents privately observe the outcome of a random process (e.g., die roll) and can misreport what they observed. It provides a precise estimate of the mean and confidence interval (offering lower and upper bounds on the proportion of people lying) over the full distribution, allowing for a vast range of statistical inferences not generally available with existing methods. |
Keywords: | Dishonesty, lying, econometric estimation, sampling errors, experimental economics |
JEL: | C91 C81 D03 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:gat:wpaper:1816&r=ecm |
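In the canonical die-roll setting, a naive moment condition already gives a point estimate of the share of liars; the paper's contribution is recovering the full distribution of lying with valid sampling-error bounds, which the naive benchmark below does not deliver. Simulated reports, with the simplifying assumption that liars always report the highest payoff:

```python
# Naive estimate of the share of liars in a die-roll experiment.
import numpy as np

rng = np.random.default_rng(10)
n, true_liar_share = 300, 0.25
rolls = rng.integers(1, 7, size=n)           # honest outcomes
liars = rng.random(n) < true_liar_share
reports = np.where(liars, 6, rolls)          # liars always report 6

share6 = np.mean(reports == 6)
# If a fraction L lies (reporting 6) and the rest report honestly:
# P(report 6) = L + (1 - L)/6  =>  L = (share6 - 1/6) / (5/6)
L_hat = (share6 - 1 / 6) / (5 / 6)
se = np.sqrt(share6 * (1 - share6) / n) / (5 / 6)
print(f"estimated share lying: {L_hat:.3f} (se {se:.3f})")
```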
By: | Farah Zawaideh (Irbid National University); Raed Sahawneh (Irbid National University) |
Abstract: | Automatic text categorization (TC) has become one of the most interesting fields for researchers in data mining, information retrieval, web text mining, and natural language processing, due to the vast number of new documents being retrieved by various information retrieval systems. This paper proposes a new TC technique, which classifies Arabic-language text documents using a naïve Bayesian classifier combined with a genetic algorithm model; the algorithm classifies documents by generating a random sample of chromosomes that represent documents in the corpus. The developed model aims to enhance the naïve Bayesian classifier by applying the genetic algorithm model. Experimental results show that precision and recall increase when testing a higher number of documents; precision ranged from 0.8 to 0.97 across different testing environments. The number of genes placed in each chromosome was also tested, and experiments show that the best value is 50 genes. |
Keywords: | Data mining, Text classification, Genetic algorithm, Naïve Bayesian Classifier, N-gram processing |
JEL: | C80 |
Date: | 2018–06 |
URL: | http://d.repec.org/n?u=RePEc:sek:iacpro:6409186&r=ecm |
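A plain naïve Bayesian text classifier, the baseline the paper augments with a genetic algorithm, takes only a few lines with standard tooling. The GA chromosome-selection step and the Arabic corpus are not reproduced; the documents below are hypothetical English stand-ins:

```python
# Multinomial naive Bayes text categorization baseline with n-grams.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["stocks fell on rate fears", "central bank raises rates",
        "team wins the championship", "striker scores twice"]
labels = ["finance", "finance", "sports", "sports"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(docs, labels)
print(clf.predict(["bank cuts rates", "the team scores"]))
```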
By: | Zhang, Rui |
Keywords: | Environmental Economics and Policy, Research Methods/Statistical Methods, Resource/Energy Economics and Policy |
Date: | 2017–07–03 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea17:258335&r=ecm |
By: | Song, Jingyu; Delgado, Michael; Preckel, Paul |
Keywords: | Research Methods/Statistical Methods, Resource/Energy Economics and Policy, Land Economics/Use |
Date: | 2017–06–30 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea17:258209&r=ecm |
By: | Haroon Mumtaz (Queen Mary University of London); Konstantinos Theodoridis (Bank of England) |
Abstract: | We propose an extended time-varying parameter Vector Autoregression that allows for an evolving relationship between the variances of the shocks. Using this model, we show that the relationship between the conditional variance of GDP growth and the long-term interest rate has become weaker over time in the US. Similarly, the co-movement between the variances of the long-term interest rate in the US and the UK declined over the 'Great Moderation' period. In contrast, the volatility of US and UK GDP growth appears to have become increasingly correlated in the recent past. |
Keywords: | Vector-Autoregressions, Time-varying parameters, Stochastic volatility |
JEL: | C15 C32 E32 |
Date: | 2016–11–02 |
URL: | http://d.repec.org/n?u=RePEc:qmw:qmwecw:804&r=ecm |