New Economics Papers on Econometrics
By: | Michael Creel; Dennis Kristensen |
Abstract: | Standard indirect inference (II) estimators take a given finite-dimensional statistic, Z_{n}, and then estimate the parameters by matching the sample statistic with the model-implied population moment. We here propose a novel estimation method that utilizes all available information contained in the distribution of Z_{n}, not just its first moment. This is done by computing the likelihood of Z_{n}, and then estimating the parameters by either maximizing the likelihood or computing the posterior mean for a given prior of the parameters. These are referred to as the maximum indirect likelihood (MIL) and Bayesian indirect likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of Z_{n}, the IL estimators are higher-order efficient relative to the standard II estimator. The likelihood of Z_{n} will in general be unknown and so simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite sample properties. |
Keywords: | Approximate Bayesian Computation; Indirect Inference; maximum-likelihood; simulation-based methods. |
JEL: | C13 C14 C15 C33 |
Date: | 2013–06–12 |
URL: | http://d.repec.org/n?u=RePEc:aub:autbar:931.13&r=ecm |
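As a rough illustration of the indirect likelihood idea described in the entry above, the sketch below simulates the auxiliary statistic Z_n under candidate parameter values, estimates its likelihood at the observed statistic with a kernel density, and forms MIL- and BIL-style estimates. The toy model (normal location), the choice of Z_n (the sample mean), the grid and the kernel are illustrative assumptions, not the authors' implementation.

    # Toy sketch of simulated maximum indirect likelihood (MIL) and a flat-prior
    # posterior-mean (BIL-style) estimate, assuming a normal location model and
    # the sample mean as the auxiliary statistic Z_n. All choices are illustrative.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    n, S = 200, 500                        # sample size, simulations per parameter value
    theta_true = 1.3
    y_obs = rng.normal(theta_true, 1.0, n)
    z_obs = y_obs.mean()                   # observed auxiliary statistic

    grid = np.linspace(0.5, 2.0, 61)
    loglik = np.empty_like(grid)
    for i, th in enumerate(grid):
        z_sim = rng.normal(th, 1.0, (S, n)).mean(axis=1)     # simulated draws of Z_n
        loglik[i] = np.log(gaussian_kde(z_sim)(z_obs)[0])    # estimated log-likelihood of Z_n

    mil = grid[loglik.argmax()]                              # simulated MIL estimate
    w = np.exp(loglik - loglik.max()); w /= w.sum()          # flat prior over the grid
    bil = (w * grid).sum()                                   # posterior-mean (BIL-style) estimate
    print(mil, bil)
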
By: | Esmeralda A. Ramalho (Departamento de Economia and CEFAGE-UE, Universidade de Évora); Joaquim J.S. Ramalho (Departamento de Economia and CEFAGE-UE, Universidade de Évora); José M.R. Murteira (Faculdade de Economia, Universidade de Coimbra, and CEMAPRE) |
Abstract: | This paper proposes a new conditional mean test to assess the validity of binary and fractional parametric regression models. The new test checks the joint significance of two simple functions of the fitted index and is based on a very flexible parametric generalization of the postulated model. A Monte Carlo study reveals a promising behaviour for the new test, which compares favourably with that of the well-known RESET test as well as with tests where the alternative model is nonparametric. |
Keywords: | Binary data; Fractional data; Conditional mean tests; LM tests. |
JEL: | C12 C25 |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:cfe:wpcefa:2013_09&r=ecm |
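The abstract compares its new test with the RESET benchmark; the sketch below shows that benchmark idea for a fractional logit model: refit the model with powers of the fitted index added and test their joint significance. This is not the authors' new statistic, and the simulated data and the choice of added terms are assumptions made purely for illustration.

    # RESET-style check for a fractional logit: add powers of the fitted index
    # and test their joint significance. Benchmark idea only, not the new test.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 1000
    x = rng.normal(size=(n, 2))
    X = sm.add_constant(x)
    mu = 1 / (1 + np.exp(-(0.5 + x[:, 0] - 0.8 * x[:, 1])))
    y = np.clip(mu + 0.1 * rng.normal(size=n), 0, 1)          # fractional outcome in [0, 1]

    fit0 = sm.GLM(y, X, family=sm.families.Binomial()).fit()  # fractional logit (QMLE)
    xb = X @ fit0.params                                      # fitted index
    Xa = np.column_stack([X, xb**2, xb**3])                   # two functions of the index
    fit1 = sm.GLM(y, Xa, family=sm.families.Binomial()).fit()
    R = np.zeros((2, Xa.shape[1])); R[0, -2] = 1; R[1, -1] = 1
    print(fit1.wald_test(R))                                  # joint test of the added terms
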
By: | Enrique Moral-Benito (Banco de España); Luis Serven (The World Bank) |
Abstract: | For reasons of empirical tractability, analysis of cointegrated economic time series is often developed in a partial setting, in which a subset of variables is explicitly modeled conditional on the rest. This approach yields valid inference only if the conditioning variables are weakly exogenous for the parameters of interest. This paper proposes a new test of weak exogeneity in panel cointegration models. The test has a limiting Gumbel distribution that is obtained by first letting T → ∞ and then letting N → ∞. We evaluate the accuracy of the asymptotic approximation in finite samples via simulation experiments. Finally, as an empirical illustration, we test weak exogeneity of disposable income and wealth in aggregate consumption. |
Keywords: | panel data, cointegration, weak exogeneity, Monte Carlo methods |
JEL: | C23 C32 |
Date: | 2013–05 |
URL: | http://d.repec.org/n?u=RePEc:bde:wpaper:1307&r=ecm |
By: | James Morley (School of Economics, The University of New South Wales); Irina B. Panovska (Washington University in St. Louis); Tara M. Sinclair (the George Washington University) |
Abstract: | Unobserved components (UC) models are widely used to estimate stochastic trends in macroeconomic time series, with the existence of a stochastic trend typically motivated by a stationarity test. However, given the small sample sizes available for most macroeconomic variables, standard Lagrange multiplier tests of stationarity will perform poorly when the data are highly persistent. To address this problem, we propose the alternative use of a likelihood ratio test of stationarity based on a UC model and demonstrate that a bootstrap version of this test has far better small-sample properties for empirically relevant data generating processes than bootstrap versions of the standard tests. An application to U.S. real GDP produces stronger support for the presence of large permanent shocks when using the likelihood ratio test as compared to standard tests. |
JEL: | C12 C15 C22 E23 |
Date: | 2013–05 |
URL: | http://d.repec.org/n?u=RePEc:swe:wpaper:2012-41a&r=ecm |
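A minimal sketch of the bootstrap likelihood ratio logic in the entry above, assuming a local level unobserved components model under the alternative and an i.i.d. constant mean under the null; the paper's UC specification, bootstrap design and critical values differ, so this only fixes ideas.

    # Parametric bootstrap LR test of "no stochastic trend": local level UC model
    # (alternative) versus an i.i.d. constant-mean model (null). Sketch only.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    def lr_stat(y):
        alt = sm.tsa.UnobservedComponents(y, level="local level").fit(disp=0)
        mu, sig = y.mean(), y.std(ddof=0)                  # Gaussian MLE under the null
        return 2 * (alt.llf - norm.logpdf(y, mu, sig).sum()), mu, sig

    rng = np.random.default_rng(2)
    y = rng.normal(0, 1, 150) + 0.02 * rng.normal(0, 1, 150).cumsum()
    lr_obs, mu0, sig0 = lr_stat(y)

    B = 99                                                 # small B to keep the sketch fast
    lr_boot = [lr_stat(rng.normal(mu0, sig0, len(y)))[0] for _ in range(B)]
    print(lr_obs, np.mean(np.array(lr_boot) >= lr_obs))    # LR statistic and bootstrap p-value
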
By: | Federico Bassetti (Department of Mathematics, University of Pavia); Roberto Casarin (Department of Economics, University of Venice Cà Foscari); Fabrizio Leisen (Department of Statistics, Universidad Carlos III de Madrid) |
Abstract: | Multiple time series data may exhibit clustering over time and the clustering effect may change across different series. This paper is motivated by the Bayesian non–parametric modelling of the dependence between clustering effects in multiple time series analysis. We follow a Dirichlet process mixture approach and define a new class of multivariate dependent Pitman-Yor processes (DPY). The proposed DPY are represented in terms of a vector of stick-breaking processes which determines dependent clustering structures in the time series. We follow a hierarchical specification of the DPY base measure to account for various degrees of information pooling across the series. We discuss some theoretical properties of the DPY and use them to define Bayesian non–parametric repeated measurement and vector autoregressive models. We provide efficient Markov chain Monte Carlo algorithms for posterior computation of the proposed models and illustrate the effectiveness of the method with a simulation study and an application to the United States and the European Union business cycles. |
Keywords: | Bayesian non–parametrics; Dirichlet process; Panel Time-series non–parametrics; Pitman-Yor process; Stick-breaking process; Vector autoregressive process; Repeated measurements non-parametrics |
JEL: | C11 C14 C32 |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:ven:wpaper:2013:13&r=ecm |
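For readers unfamiliar with the stick-breaking representation mentioned above, a minimal univariate Pitman-Yor sketch follows; the discount d, concentration a, truncation level K and base measure are arbitrary illustrative choices, and the dependence across series that is the paper's actual contribution is not modelled here.

    # Univariate Pitman-Yor stick-breaking: V_k ~ Beta(1 - d, a + k*d) and
    # w_k = V_k * prod_{j<k}(1 - V_j). Truncation level K is arbitrary.
    import numpy as np

    def pitman_yor_weights(a, d, K, rng):
        v = rng.beta(1 - d, a + d * np.arange(1, K + 1))
        return v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))

    rng = np.random.default_rng(3)
    w = pitman_yor_weights(a=1.0, d=0.3, K=50, rng=rng)
    atoms = rng.normal(0, 1, 50)            # atoms from a N(0, 1) base measure
    print(w[:5], w.sum())                   # weights decay and sum to (almost) one
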
By: | Yoichi Arai (National Graduate Institute for Policy Studies); Hidehiko Ichimura (The University of Tokyo) |
Abstract: | We consider the problem of choosing two bandwidths simultaneously for estimating the difference of two functions at given points. When the asymptotic approximation of the mean squared error (AMSE) criterion is used, we show that the minimization problem is not well defined when the sign of the product of the second derivatives of the underlying functions at the estimated points is positive. To address this problem, we theoretically define and construct estimators of the asymptotically first-order optimal (AFO) bandwidths, which are well defined regardless of the sign. They are based on objective functions which incorporate a second-order bias term. Our approach is general enough to cover estimation problems related to densities and regression functions at interior and boundary points. We provide a detailed treatment of the sharp regression discontinuity design. |
Date: | 2013–06 |
URL: | http://d.repec.org/n?u=RePEc:ngi:dpaper:13-09&r=ecm |
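A small numerical illustration of the abstract's central point, under assumed (arbitrary) bias and variance constants: when the two leading bias coefficients share the same sign, the squared bias in the AMSE of a difference can be made to cancel along a path on which both bandwidths grow, so the criterion keeps decreasing and has no finite minimizer.

    # AMSE of a difference of two estimators with leading biases B1*h1^2 and B2*h2^2.
    # With B1*B2 > 0, the biases cancel along h2 = sqrt(B1/B2)*h1 while the variance
    # terms keep falling, so the criterion has no finite minimizer. Constants are arbitrary.
    import numpy as np

    def amse(h1, h2, B1, B2, V1, V2, n=500):
        bias = B1 * h1**2 - B2 * h2**2
        return bias**2 + V1 / (n * h1) + V2 / (n * h2)

    B1, B2, V1, V2 = 0.6, 0.4, 1.0, 1.2                # same-sign second derivatives
    for h1 in [0.2, 0.5, 1.0, 2.0, 5.0, 10.0]:
        h2 = np.sqrt(B1 / B2) * h1                     # path on which the biases cancel
        print(h1, amse(h1, h2, B1, B2, V1, V2))        # decreases without bound
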
By: | Anders Bredahl Kock (Aarhus University and CREATES) |
Abstract: | This paper is concerned with high-dimensional panel data models where the number of regressors can be much larger than the sample size. Under the assumption that the true parameter vector is sparse we establish finite sample upper bounds on the estimation error of the Lasso under two different sets of conditions on the covariates as well as the error terms. Upper bounds on the estimation error of the unobserved heterogeneity are also provided under the assumption of sparsity. Next, we show that our upper bounds are essentially optimal in the sense that they can only be improved by multiplicative constants. These results are then used to show that the Lasso can be consistent in even very large models where the number of regressors increases at an exponential rate in the sample size. Conditions under which the Lasso does not discard any relevant variables asymptotically are also provided. In the second part of the paper we give lower bounds on the probability with which the adaptive Lasso selects the correct sparsity pattern in finite samples. These results are then used to give conditions under which the adaptive Lasso can detect the correct sparsity pattern asymptotically. We illustrate our finite sample results by simulations and apply the methods to search for covariates explaining growth in the G8 countries. |
Keywords: | Panel data, Lasso, Adaptive Lasso, Oracle inequality, Nonasymptotic bounds, High-dimensional models, Sparse models, Consistency, Variable selection, Asymptotic sign consistency. |
JEL: | C01 C13 C23 |
Date: | 2013–12–06 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2013-20&r=ecm |
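A small sketch of the Lasso and a two-step adaptive Lasso on simulated sparse data; it ignores the paper's fixed-effects panel structure and nonasymptotic theory, and the cross-validated penalties and adaptive weights are illustrative choices only.

    # Lasso and two-step adaptive Lasso on a sparse high-dimensional design
    # (p > n). The adaptive step is implemented by rescaling the columns.
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(4)
    n, p, s = 100, 300, 5                              # more regressors than observations
    X = rng.normal(size=(n, p))
    beta = np.zeros(p); beta[:s] = [2, -1.5, 1, -1, 0.5]
    y = X @ beta + rng.normal(size=n)

    lasso = LassoCV(cv=5).fit(X, y)                    # first-step Lasso
    w = 1.0 / (np.abs(lasso.coef_) + 1e-6)             # adaptive weights from the first step
    ada = LassoCV(cv=5).fit(X / w, y)                  # adaptive Lasso via rescaled design
    coef_ada = ada.coef_ / w
    print("selected (lasso):   ", np.flatnonzero(lasso.coef_))
    print("selected (adaptive):", np.flatnonzero(coef_ada))
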
By: | Tomás del Barrio Castro (Department of Applied Economics, University of the Balearic Islands); Paulo M.M. Rodrigues (Banco de Portugal, NOVA School of Business and Economics, Universidade Nova de Lisboa, CEFAGE); A.M. Robert Taylor (Granger Centre for Time Series Econometrics, University of Nottingham) |
Abstract: | In this paper we provide a detailed analysis of the impact of persistent cycles on the well-known semi-parametric unit root tests of Phillips and Perron (1988, Biometrika 75, 335–346). It is shown analytically and through Monte Carlo simulations that the presence of complex (near) unit roots can severely distort the size properties of these unit root test procedures. |
Keywords: | Phillips-Perron unit root test; Non-stationarity; Serial correlation; Cyclicality; Business cycles. |
JEL: | C12 C22 |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:cfe:wpcefa:2013_11&r=ecm |
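A toy Monte Carlo in the spirit of the entry above, assuming the `arch` package for the Phillips-Perron test: the data generating process combines an exact unit root with a persistent cyclical (complex near-unit-root) AR component, and the empirical rejection rate at the nominal 5% level is recorded. The parameter values and sample size are arbitrary and the exercise is far cruder than the paper's analysis.

    # Size of the Phillips-Perron test when the DGP is
    # (1 - L)(1 - 2*rho*cos(theta)*L + rho^2*L^2) y_t = e_t  (unit root is true).
    import numpy as np
    from arch.unitroot import PhillipsPerron

    rng = np.random.default_rng(5)
    rho, theta, T, reps = 0.95, np.pi / 3, 200, 200
    a1 = 1 + 2 * rho * np.cos(theta)
    a2 = -(rho**2 + 2 * rho * np.cos(theta))
    a3 = rho**2

    rej = 0
    for _ in range(reps):
        e = rng.normal(size=T + 50)
        y = np.zeros(T + 50)
        for t in range(3, T + 50):
            y[t] = a1 * y[t - 1] + a2 * y[t - 2] + a3 * y[t - 3] + e[t]
        if PhillipsPerron(y[50:]).pvalue < 0.05:       # null (unit root) is true
            rej += 1
    print("empirical size:", rej / reps)               # well above 0.05 indicates distortion
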
By: | Burcu Kapar; William Pouliot |
Abstract: | Many procedures have been developed that are suited to testing for multiple changes in parameters of regression models which occur at unknown times. Most notably, Brown, Durbin and Evans [11] and Dufour [15] have developed or extended existing techniques, but said extensions lack power for detecting changes in the intercept parameter of linear regression models (cf. Kramer, Ploberger and Alt [24] and Pouliot [32]). Orasch [26] has developed a stochastic process that easily accommodates testing for many change-points that occur at unknown times. A slight modification of his process is suggested here which improves the power of statistics fashioned from it. These statistics are then used to construct tests to detect multiple changes in intercept in linear regression models. It is also shown here that this slightly altered process, when weighted by appropriately chosen functions, is sensitive to detection of multiple changes in intercept that occur both early and late in the sample, while maintaining sensitivity to changes that occur in the middle of the sample. |
Keywords: | Structural Breaks, U-Statistics, Brownian Bridge, Linear Regression Model |
JEL: | C1 C2 |
Date: | 2013–05 |
URL: | http://d.repec.org/n?u=RePEc:bir:birmec:13-13&r=ecm |
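To fix ideas only, the sketch below computes the classical CUSUM of OLS residuals, the familiar benchmark that is sensitive to intercept shifts; it is not the U-statistic-based process studied in the paper, and the simulated break and data are hypothetical.

    # Classical OLS-residual CUSUM for an intercept shift in a linear regression.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 200
    x = rng.normal(size=n)
    y = 1.0 + 0.5 * x + rng.normal(size=n)
    y[n // 2:] += 1.0                                    # hypothetical intercept shift at mid-sample

    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    cusum = np.cumsum(u) / (u.std(ddof=2) * np.sqrt(n))  # Brownian-bridge-type partial-sum process
    print("max |CUSUM|:", np.abs(cusum).max())           # compare with ~1.36, the asymptotic 5%
                                                         # bound for the sup of a Brownian bridge
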
By: | Kociecki, Andrzej |
Abstract: | The aim of the paper is to study the nature of normalization in Structural VAR models. Noting that normalization is an integral part of identification of a model, we provide a general characterization of the normalization. As a consequence, some easy-to-check conditions for a Structural VAR to be normalized are worked out. An extensive comparison between our approach and that of Waggoner and Zha (2003a) is made. Lastly, we illustrate our approach with the help of a five-variable monetary Structural VAR model. |
Keywords: | Normalization, Identification, Impulse Response Function |
JEL: | C30 C32 C51 |
Date: | 2013–06–17 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:47645&r=ecm |
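A minimal sketch of why a normalization is needed at all: flipping the sign of a column of the structural impact matrix leaves the reduced-form covariance unchanged, so some rule must pick one representative. The positive-diagonal rule used below is a common convention, not necessarily the characterization developed in the paper.

    # Column sign flips of the structural impact matrix B are observationally
    # equivalent (B B' is unchanged); a normalization rule selects one of them.
    import numpy as np

    rng = np.random.default_rng(7)
    B = np.linalg.cholesky(np.cov(rng.normal(size=(3, 500))))   # some impact matrix
    flip = np.diag([1.0, -1.0, 1.0])                            # flip the sign of the second column
    B_alt = B @ flip
    print(np.allclose(B @ B.T, B_alt @ B_alt.T))                # True: same reduced-form covariance

    B_norm = B_alt @ np.diag(np.sign(np.diag(B_alt)))           # conventional fix: positive diagonal
    print(np.diag(B_norm) > 0)
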
By: | Artuc, Erhan |
Abstract: | This paper introduces a computationally efficient method for estimating structural parameters of dynamic discrete choice models with large choice sets. The method is based on Poisson pseudo maximum likelihood (PPML) regression, which is widely used in the international trade and migration literature to estimate the gravity equation. Unlike most of the existing methods in the literature, it does not require strong parametric assumptions on agents' expectations, thus it can accommodate macroeconomic and policy shocks. The regression requires count data as opposed to choice probabilities; therefore it can handle sparse decision transition matrices caused by small sample sizes. As an example application, the paper estimates sectoral worker mobility in the United States. |
Keywords: | Economic Theory & Research, Science Education, Scientific Research & Science Parks, Statistical & Mathematical Sciences, Econometrics |
Date: | 2013–06–01 |
URL: | http://d.repec.org/n?u=RePEc:wbk:wbrwps:6480&r=ecm |
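A minimal PPML sketch, assuming statsmodels: a Poisson regression of simulated origin-to-destination transition counts on a hypothetical switching-cost variable with origin and destination fixed effects. The structural mapping from the gravity-style regression to mobility parameters developed in the paper is not reproduced.

    # PPML on simulated transition counts: Poisson GLM with origin and destination
    # fixed effects and robust standard errors. Data and covariate are made up.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)
    sectors = list("ABCDEF")
    rows = []
    for o in sectors:
        for d in sectors:
            cost = 0.0 if o == d else rng.uniform(0.5, 2.0)   # hypothetical switching cost
            rows.append({"orig": o, "dest": d, "cost": cost,
                         "count": rng.poisson(np.exp(3.0 - 1.2 * cost))})
    df = pd.DataFrame(rows)

    ppml = smf.glm("count ~ cost + C(orig) + C(dest)", data=df,
                   family=sm.families.Poisson()).fit(cov_type="HC0")
    print(ppml.params["cost"])                                # roughly the -1.2 used in the DGP
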
By: | Antonio Lijoi (Department of Economics and Management, University of Pavia and Collegio Carlo Alberto); Bernardo Nipoti (University of Turin and Collegio Carlo Alberto); Igor Prünster (University of Turin and Collegio Carlo Alberto) |
Abstract: | Most of the Bayesian nonparametric models for non–exchangeable data that are used in applications are based on some extension to the multivariate setting of the Dirichlet process, the best known being MacEachern’s dependent Dirichlet process. A comparison of two recently introduced classes of vectors of dependent nonparametric priors, based on the Dirichlet and the normalized σ–stable processes respectively, is provided. These priors are used to define dependent hierarchical mixture models whose distributional properties are investigated. Furthermore, their inferential performance is examined through an extensive simulation study. The models exhibit different features, especially in terms of the clustering behavior and the borrowing of information across studies. Compared to popular Dirichlet process based models, mixtures of dependent normalized σ–stable processes turn out to be a valid choice, being capable of detecting more effectively the clustering structure featured by the data. |
Keywords: | Bayesian Nonparametrics; Dependent Process; Dirichlet process; Generalized Pólya urn scheme; Mixture models; Normalized σ–stable process; Partially exchangeable random partition. |
Date: | 2013–06 |
URL: | http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0046&r=ecm |
By: | Gedikoglu, Haluk; Parcell, Joseph |
Abstract: | Previous studies that analyzed multiple imputation using survey data did not take into account the survey sampling design. The objective of the current study is to analyze the impact of the survey sampling design on missing data imputation, using a multivariate multiple imputation method. The results of the current study show that multiple imputation methods result in lower standard errors for regression analysis than the regression using only complete observations. Furthermore, the standard errors for all regression coefficients are found to be higher for multiple imputation when the survey sampling design is taken into account than when it is not. Hence, sampling-based estimation leads to more realistic standard errors. |
Keywords: | Multiple Imputation, Sampling Based Estimation, Missing Data, Research Methods/Statistical Methods |
Date: | 2013–05 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea13:149679&r=ecm |
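A compact sketch of multiple imputation with Rubin's combining rules, assuming a simple normal imputation model and ignoring the survey design (which is precisely the complication the paper studies); it is an "improper" imputation in that the imputation-model parameters are not redrawn across imputations.

    # Multiple imputation of a covariate with Rubin's combining rules:
    # pooled variance T = W_bar + (1 + 1/M) * B.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    n = 500
    x = rng.normal(size=n)
    y = 1 + 2 * x + rng.normal(size=n)
    x_mis = x.copy(); x_mis[rng.random(n) < 0.3] = np.nan        # 30% of x missing

    obs = ~np.isnan(x_mis)
    imp_fit = sm.OLS(x_mis[obs], sm.add_constant(y[obs])).fit()  # impute x from y
    M, betas, variances = 20, [], []
    for _ in range(M):
        x_fill = x_mis.copy()
        pred = imp_fit.predict(sm.add_constant(y[~obs]))
        x_fill[~obs] = pred + rng.normal(0, np.sqrt(imp_fit.scale), (~obs).sum())
        fit = sm.OLS(y, sm.add_constant(x_fill)).fit()
        betas.append(fit.params[1]); variances.append(fit.bse[1] ** 2)

    qbar = np.mean(betas)                                        # pooled point estimate
    W, B = np.mean(variances), np.var(betas, ddof=1)
    T = W + (1 + 1 / M) * B                                      # Rubin's total variance
    print(qbar, np.sqrt(T))
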
By: | Francesco Andreoli (THEMA University of Cergy-Pontoise and University of Verona) |
Abstract: | This note presents an innovative inference procedure for assessing if a pair of distributions can be ordered according to inverse stochastic dominance (ISD). At orders 1 and 2, ISD coincides respectively with rank and generalized Lorenz dominance, and it selects the distribution preferred by all social evaluation functions that are monotonic and display inequality aversion. At orders higher than the second, ISD is associated with dominance for classes of linear rank-dependent evaluation functions. This paper focuses on the class of conditional single-parameter Gini social evaluation functions and illustrates that these functions can be linearly decomposed into their empirically tractable influence functions. This approach gives estimators for ISD that are asymptotically normal with a variance-covariance structure which is robust to non-simple randomization sampling schemes, a common case in many surveys used in applied distribution analysis. One of these surveys, the French Labor Force Survey, is selected to test the robustness of Equality of Opportunity evaluations in France through ISD comparisons at order 3. The ISD tests proposed in this paper are operationalized through the user-written “isdtest” Stata routine. |
Keywords: | Inverse stochastic dominance, inference, influence functions, inequality. |
JEL: | C12 D31 I32 |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:inq:inqwps:ecineq2013-295&r=ecm |
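A bare-bones check of second-order ISD, i.e. generalized Lorenz dominance, between two simulated samples; the influence-function-based inference, the survey-design-robust variance and the order-3 comparisons that constitute the paper's contribution are not attempted here.

    # Generalized Lorenz dominance check between two samples:
    # GL(p) = mean of the p*n smallest observations, cumulated per capita.
    import numpy as np

    def generalized_lorenz(y, grid):
        y = np.sort(y)
        cum = np.concatenate(([0.0], np.cumsum(y))) / len(y)
        idx = np.floor(grid * len(y)).astype(int)
        return cum[idx]

    rng = np.random.default_rng(10)
    a = rng.lognormal(0.0, 0.5, 2000)
    b = rng.lognormal(0.1, 0.6, 2000)
    p = np.linspace(0.01, 1.0, 100)
    gla, glb = generalized_lorenz(a, p), generalized_lorenz(b, p)
    print("b GL-dominates a:", np.all(glb >= gla))
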
By: | Ryan Greenaway-McGrevy (Bureau of Economic Analysis) |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:bea:wpaper:0100&r=ecm |
By: | Chiara Perricone (University of Rome "Tor Vergata") |
Abstract: | Many papers have highlighted that some macroeconomic time series present structural instability. The causes of these remarkable changes in the reduced-form properties of the macroeconomy are a matter of debate. In the literature this issue is handled with three main econometric methodologies: structural breaks, regime switching and time-varying parameters (TVP). Nevertheless, all these approaches need some ex ante structure in order to model the change. Based on the Recurrent Chinese Restaurant Process, I specify a model for an autoregressive process, estimated via particle filter using a conjugate prior, which applies the idea of evolutionary clustering to the study of the instability in output and inflation for the US after World War II. This procedure displays some advantages; in particular, it does not require a strong ex ante structure either to detect the breaks or to manage the evolution of the parameters. The application of the clustering procedure to GDP growth and the inflation rate for the US from 1957 to 2011 shows a good ability to fit the data; moreover, it produces a clusterization of the time series that can be interpreted in terms of economic history, and it is able to recover key data features without making restrictive assumptions, as in “one-break” or TVP models. |
JEL: | C18 C22 C51 E17 |
Date: | 2013–06–11 |
URL: | http://d.repec.org/n?u=RePEc:rtv:ceisrp:283&r=ecm |
By: | Raoul Golan; Austin Gerig |
Abstract: | Financial time series exhibit a number of interesting properties that are difficult to explain with simple models. These properties include fat tails in the distribution of price fluctuations (or returns) that are slowly removed at longer timescales, strong autocorrelations in absolute returns but zero autocorrelation in returns themselves, and multifractal scaling. Although the underlying cause of these features is unknown, there is growing evidence they originate in the behavior of volatility, i.e., in the behavior of the magnitude of price fluctuations. In this paper, we posit a feedback mechanism for volatility that reproduces many of the non-trivial properties of empirical prices. The model is parsimonious, requires only two parameters to fit a specific financial time series, and can be grounded in a straightforward framework where volatility fluctuations are driven by the estimation error of an exogenous Poisson rate. |
Date: | 2013–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1306.4975&r=ecm |
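Two of the stylized facts mentioned above are easy to check on any return series: autocorrelations of returns are near zero while autocorrelations of absolute returns are strong. The sketch below uses a generic GARCH-type toy series as a stand-in for real data; it is not the feedback model proposed in the paper.

    # Compare the autocorrelation of returns with that of absolute returns.
    import numpy as np
    from statsmodels.tsa.stattools import acf

    rng = np.random.default_rng(11)
    n, r, sig2 = 5000, np.empty(5000), 1.0
    for t in range(n):                          # simple GARCH(1,1)-type volatility dynamics
        sig2 = 0.05 + 0.1 * (r[t - 1] ** 2 if t else 0.0) + 0.85 * sig2
        r[t] = np.sqrt(sig2) * rng.standard_normal()

    print("acf(r)   lags 1-5:", np.round(acf(r, nlags=5)[1:], 3))
    print("acf(|r|) lags 1-5:", np.round(acf(np.abs(r), nlags=5)[1:], 3))
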
By: | McIntosh, Christopher S.; Mittelhammer, Ron C.; Middleton, Jonathan N. |
Abstract: | Ever since 1927, when Al Jolson spoke in the first “talkie” film The Jazz Singer, there has been little doubt that sound adds a valuable perceptual dimension to visual media. However, despite the advances of over 80 years, and the complete integration of sound and vision that has occurred in entertainment applications, the use of sound to channel data occurring in everyday life has remained rather primitive, limited to such things as computer beeps and jingles for certain mouse and key actions, low-battery alarms on mobile devices, and other sounds that simply indicate when some trigger state has been reached – the information content of such sounds is not high. Non-binary, but still technically rather simple, data applications include the familiar rattling sound of a Geiger counter, talking clocks and thermometers, or the sound output of a hospital EKG machine. What if deletion of larger and/or more recently accessed computer files resulted in a more complex sound than for deleting smaller or rarely accessed files, increasing the user’s awareness of the loss of larger or more recent work efforts? All of these are examples of data sonification. While sonification seems to be pursued mostly by those wishing to generate tuneful results, many are undertaking the process simply to provide another method of presenting data. Many examples are available at https://soundcloud.com/tags/sonification including some very tuneful arrangements of the Higgs boson. Indeed, with complex data series one can often hear patterns or persistent pitches that would be difficult to show visually. Musical pitches are periodic components of sound, and repetition over time can be readily discerned by the listener. Sonification techniques have been applied to a variety of topics (Pauletto and Hunt, 2009; Scaletti and Craig, 1991; Sturm, 2005; Dunn and Clark, 1999). To the authors’ knowledge, sonification has yet to be applied in any substantive way to economic data. Our goal is not to produce tuneful results. Rather, the purpose of this paper is to explore the potential application of sonification techniques for informing and assessing the specification of econometric models for representing economic data outcomes. The purpose of this seminal and exploratory analysis is to investigate whether there appears to be significant promise in adding the data sonification approach to the empirical economists’ toolkit for interpreting economic data and specifying econometric models. In particular, is there an advantage to using both the empirical analyst’s eyes and ears when investigating empirical economic problems? |
Keywords: | Research and Development/Tech Change/Emerging Technologies, Research Methods/Statistical Methods |
JEL: | C01 C18 C52 |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea13:150702&r=ecm |
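A minimal data-sonification sketch for the entry above: rescale a numeric series onto a frequency range, render one short tone per observation, and write the result to a WAV file using only numpy and the Python standard library. The frequency range, note length and the random-walk example series are arbitrary choices.

    # Map a data series to pitches and write a 16-bit mono WAV file.
    import wave
    import numpy as np

    def sonify(series, path="series.wav", fs=44100, note=0.25, fmin=220.0, fmax=880.0):
        x = np.asarray(series, dtype=float)
        x = (x - x.min()) / (x.max() - x.min() + 1e-12)          # rescale data to [0, 1]
        freqs = fmin + x * (fmax - fmin)                         # one pitch per observation
        t = np.arange(int(fs * note)) / fs
        tones = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
        pcm = (0.8 * 32767 * tones).astype(np.int16)
        with wave.open(path, "wb") as w:
            w.setnchannels(1); w.setsampwidth(2); w.setframerate(fs)
            w.writeframes(pcm.tobytes())

    sonify(np.cumsum(np.random.default_rng(12).normal(size=60)))  # e.g. a random-walk "series"
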
By: | Chavas, Jean-Paul; Kim, Kwansoo |
Abstract: | This paper investigates the nonparametric analysis of technology under non-convexity. The analysis extends two approaches now commonly used in efficiency and productivity analysis: Data Envelopment Analysis (DEA), where convexity is imposed, and Free Disposal Hull (FDH) models. We argue that, while the FDH model allows for non-convexity, its representation of non-convexity is too extreme. We propose a new nonparametric model that relies on a neighborhood-based technology assessment which allows for less extreme forms of non-convexity. The distinctive feature of our approach is that it allows for non-convexity to arise in any part of the feasible set. We show how it can be implemented empirically by solving simple linear programming problems, and we illustrate the usefulness of the approach in an empirical application to the analysis of technical and scale efficiency on Korean farms. |
Keywords: | technology, productivity, nonparametric, non-convexity, Farm Management, Production Economics, Productivity Analysis, Research Methods/Statistical Methods |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea13:149684&r=ecm |
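A minimal sketch of the convex DEA benchmark mentioned in the abstract: an input-oriented, constant-returns efficiency score obtained by solving a small linear program with scipy. The paper's neighborhood-based non-convex estimator is not implemented, and the input-output data are made up.

    # Input-oriented, constant-returns DEA: min theta s.t. X lam <= theta*x_j,
    # Y lam >= y_j, lam >= 0, solved with scipy's linprog for each firm j.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[2.0, 3.0, 4.0, 6.0],        # inputs  (m x n): rows = inputs, cols = firms
                  [3.0, 2.0, 5.0, 6.0]])
    Y = np.array([[2.0, 3.0, 4.0, 5.0]])       # outputs (s x n)

    def dea_score(j):
        m, n = X.shape
        s = Y.shape[0]
        c = np.concatenate(([1.0], np.zeros(n)))              # minimize theta
        A_in = np.hstack([-X[:, [j]], X])                      # X lam - theta*x_j <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y])              # -Y lam <= -y_j
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.concatenate([np.zeros(m), -Y[:, j]])
        bounds = [(0, None)] + [(0, None)] * n
        return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x[0]

    print([round(dea_score(j), 3) for j in range(X.shape[1])])
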
By: | Tobias Scholl (House of Logistics and Mobility (HOLM), Frankfurt and Economic Geography and Location Research, Philipps-University, Marburg); Thomas Brenner (Philipps-Universität Marburg) |
Abstract: | Distance-based methods for measuring spatial concentration, such as the Duranton-Overman index, enjoy increasing popularity in the spatial econometrics community. However, a limiting factor for their usage is their computational complexity, since both their memory requirements and running time are O(n²). In this paper, we present an algorithm with constant memory requirements and an improved running time, enabling the Duranton-Overman index and related distance-based methods to be used in big-data analyses. Furthermore, we discuss the index by Scholl and Brenner (2012), whose mathematical concept allows an even faster computation for large datasets than the improved algorithm does. |
Keywords: | Spatial concentration, Duranton-Overman index, big-data analysis, MAUP, distance-based measures. |
JEL: | C40 M13 R12 |
Date: | 2013–10–06 |
URL: | http://d.repec.org/n?u=RePEc:pum:wpaper:2013-09&r=ecm |
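For reference, the naive O(n²) construction the abstract refers to: a kernel-smoothed density of all pairwise distances between establishments, as in the Duranton-Overman approach. The coordinates below are hypothetical, and the counterfactual bootstrap bands as well as the paper's constant-memory algorithm are omitted.

    # Naive Duranton-Overman-style distance density: all pairwise distances plus
    # a kernel smoother, which is exactly the O(n^2) bottleneck the paper addresses.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(13)
    coords = rng.uniform(0, 100, size=(400, 2))    # hypothetical establishment locations (km)
    d = pdist(coords)                              # all n*(n-1)/2 pairwise distances
    kd = gaussian_kde(d)                           # smoothed distance density
    grid = np.linspace(0, 140, 50)
    print(np.round(kd(grid)[:5], 5))
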