New Economics Papers on Econometrics
By: | Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan) |
Abstract: | We construct a Kolmogorov-Smirnov test for the null hypothesis that the conditional average treatment effect is non-negative conditional on every possible value of the covariates. The null hypothesis can be characterized as a conditional moment inequality under the unconfoundedness assumption, and we employ the instrumental variable method in Andrews and Shi (2013) to convert the conditional moment inequality into an infinite number of unconditional moment inequalities without information loss. A Kolmogorov-Smirnov test is constructed based on these unconditional moment inequalities. It is shown that our test can control the size uniformly over a broad set of data generating processes asymptotically, is consistent against fixed alternatives and is unbiased against some N^{-1/2} local alternatives. We also consider several extensions of our test. |
Keywords: | Hypothesis testing, treatment effects, test consistency, propensity score |
JEL: | C01 C12 C21 |
Date: | 2013–03 |
URL: | http://d.repec.org/n?u=RePEc:sin:wpaper:13-a003&r=ecm |
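[Editor's note] A minimal numerical sketch of the instrumental-function idea in the abstract above, assuming a known propensity score, a scalar covariate, and interval "boxes"; the grid, function names, and statistic form are illustrative, and the simulated/bootstrap critical value the paper requires is omitted.

```python
# Hedged sketch of a KS-type test for H0: E[Y(1) - Y(0) | X = x] >= 0 for all x,
# via the Andrews-Shi instrumental-function idea: check the IPW moments
# E[(DY/p(X) - (1-D)Y/(1-p(X))) * 1{X in box}] >= 0 over a grid of boxes.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(size=n)
p = 0.5 * np.ones(n)                     # known propensity score (assumption)
d = (rng.uniform(size=n) < p).astype(float)
y = 1.0 + rng.normal(size=n)             # no treatment effect: CATE(x) = 0,
                                         # the boundary of the null
w = d * y / p - (1 - d) * y / (1 - p)    # IPW signal: E[w | X] = CATE(X)

def ks_statistic(w, x, n_grid=20):
    """Sup over interval boxes [a, b) of the studentized negative part."""
    grid = np.linspace(0, 1, n_grid + 1)
    stat = -np.inf
    for i in range(n_grid):
        for j in range(i + 1, n_grid + 1):
            sel = w * ((x >= grid[i]) & (x < grid[j]))
            m, s = sel.mean(), sel.std(ddof=1)
            if s > 0:
                stat = max(stat, -np.sqrt(len(w)) * m / s)
    return stat

print("KS statistic:", ks_statistic(w, x))
# A critical value would come from a bootstrap/simulated approximation of the
# null distribution, which this sketch omits.
```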
By: | Millimet, Daniel L. (Southern Methodist University); McDonough, Ian K. (Southern Methodist University) |
Abstract: | With the increased availability of longitudinal data, dynamic panel data models have become commonplace. Moreover, the properties of various estimators of such models are well known. However, we show that these estimators break down when the data are irregularly spaced along the time dimension. Unfortunately, this is an increasingly frequent occurrence, as many longitudinal surveys are collected at non-uniform intervals and no solution is currently available when time-varying covariates are included in the model. In this paper, we propose several new estimators for dynamic panel data models when data are irregularly spaced and compare their finite sample performance to the naïve application of existing estimators. We illustrate the practical importance of this issue with two applications on early childhood development. |
Keywords: | panel data, irregular spacing, interactive fixed effects, student achievement, obesity |
JEL: | C23 C51 I21 |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp7359&r=ecm |
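[Editor's note] A rough illustration of why irregular spacing breaks naive dynamic-panel estimation, abstracting from fixed effects and covariates (so this is not one of the paper's proposed estimators): under an AR(1) law, observations separated by a gap of d periods are related through rho**d, so regressing on the last observed lag blends rho**d across gaps.

```python
# Hedged illustration: with y_it = rho * y_i,t-1 + e_it, observations a gap of
# d periods apart satisfy y_t = rho**d * y_{t-d} + (aggregated noise). A naive
# regression on the last *observed* lag therefore estimates a blend of rho**d.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
rho, N, T = 0.8, 500, 12
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + rng.normal(size=N)

obs_times = np.array([0, 1, 2, 5, 6, 9, 11])   # irregular survey waves
yo = y[:, obs_times]
gaps = np.diff(obs_times)

# Naive: pool (y_now, y_prev_observed) pairs, OLS through the origin.
ynow, yprev = yo[:, 1:].ravel(), yo[:, :-1].ravel()
rho_naive = (yprev @ ynow) / (yprev @ yprev)

# Gap-aware: least squares on the implied relation y_now = rho**d * y_prev.
d = np.tile(gaps, N)                           # gap length for each pooled pair
obj = lambda r: np.mean((ynow - r ** d * yprev) ** 2)
rho_gap = minimize_scalar(obj, bounds=(0.01, 0.99), method="bounded").x

print(f"true rho = {rho}, naive = {rho_naive:.3f}, gap-aware = {rho_gap:.3f}")
```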
By: | Sebastian Giesen; Rolf Scheufele |
Abstract: | In this paper we analyze the small sample properties of full information and limited information estimators in a potentially misspecified DSGE model. To this end, we conduct a simulation study based on a standard New Keynesian model including price and wage rigidities. We then study the effects of omitted variable problems on the structural parameter estimates of the model. We find that FIML performs best when the model is correctly specified. In cases where some of the model characteristics are omitted, the performance of FIML is highly unreliable, whereas GMM estimates remain approximately unbiased and significance tests are mostly reliable. |
Keywords: | FIML, GMM, finite sample bias, misspecification, Monte Carlo, DSGE |
JEL: | C26 C36 C51 E17 |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:iwh:dispap:8-13&r=ecm |
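[Editor's note] A hedged toy version of the abstract's comparison in a deliberately simple linear model rather than a DSGE: the likelihood of a misspecified model (here, one omitting a regressor) inherits the omitted-variable bias, while a moment-based estimator using an instrument uncorrelated with the omission stays approximately unbiased.

```python
# Hedged toy comparison (linear, not a DSGE): ML under the wrong model is
# biased; GMM/IV with an instrument orthogonal to the omission is not.
import numpy as np

rng = np.random.default_rng(2)
reps, n, b1, b2 = 500, 400, 1.0, 0.7
ml_est, gmm_est = [], []
for _ in range(reps):
    z = rng.normal(size=n)               # instrument: drives x1 only
    x2 = rng.normal(size=n)              # omitted in the fitted model
    x1 = z + 0.8 * x2 + rng.normal(size=n)
    y = b1 * x1 + b2 * x2 + rng.normal(size=n)
    # ML under the wrong model y = b1*x1 + Gaussian error is just OLS:
    ml_est.append((x1 @ y) / (x1 @ x1))
    # GMM/IV with moment E[z * (y - b1*x1)] = 0:
    gmm_est.append((z @ y) / (z @ x1))

print(f"ML (misspecified) mean: {np.mean(ml_est):.3f}")   # biased upward
print(f"GMM/IV mean:            {np.mean(gmm_est):.3f}")  # near 1.0
```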
By: | Yoshitsugu Kitazawa (Faculty of Economics, Kyushu Sangyo University) |
Abstract: | This paper proposes transformations for dynamic fixed effects logit models. First, the transformations construct valid moment conditions (including stationarity moment conditions) for the case without explanatory variables. Combining portions of these valid moment conditions gives exactly the first-order condition of the conditional MLE proposed by Chamberlain (1985). Next, valid moment conditions are constructed using the transformations for the case with strictly exogenous continuous explanatory variables, when the number of time periods is greater than or equal to four. This implies that, for the dynamic fixed effects logit model with strictly exogenous continuous explanatory variables, consistent and asymptotically normal estimators can be constructed whose convergence rate equals the inverse of the square root of the cross-sectional sample size. In addition, the small sample properties of the GMM estimators using these moment conditions are investigated in Monte Carlo experiments. |
Keywords: | dynamic fixed effects logit models; moment conditions; stationarity; strictly exogenous continuous explanatory variables; root-N consistent estimators; Monte Carlo experiments |
JEL: | C23 C25 |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:kyu:dpaper:60&r=ecm |
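[Editor's note] A minimal sketch of the generic two-step GMM machinery that Monte Carlo studies like the abstract's rely on; the moment function below is a plain cross-sectional logit stand-in, not Kitazawa's transformed dynamic fixed-effects logit moments.

```python
# Hedged sketch of two-step GMM: step 1 uses identity weighting, step 2 the
# efficient weight matrix estimated from step-1 moment contributions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(0.5 + 1.0 * x)))).astype(float)
Z = np.column_stack([np.ones(n), x])           # instruments = regressors here

def moments(theta):
    resid = y - 1 / (1 + np.exp(-(theta[0] + theta[1] * x)))
    return Z * resid[:, None]                  # n x k moment contributions

def gmm_objective(theta, W):
    gbar = moments(theta).mean(axis=0)
    return gbar @ W @ gbar

th1 = minimize(gmm_objective, x0=[0.0, 0.0], args=(np.eye(2),)).x
W2 = np.linalg.inv(np.cov(moments(th1), rowvar=False))
th2 = minimize(gmm_objective, x0=th1, args=(W2,)).x
print("two-step GMM estimate:", th2)           # near (0.5, 1.0)
```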
By: | Johannes, Jan |
Abstract: | We consider the estimation of the value of a linear functional of the slope parameter in functional linear regression, where scalar responses are modeled in dependence of random functions. In Johannes and Schenk [2010] it has been shown that a plug-in estimator based on dimension reduction and additional thresholding can attain minimax optimal rates of convergence up to a constant. However, this estimation procedure requires an optimal choice of a tuning parameter with regard to certain characteristics of the slope function and the covariance operator associated with the functional regressor. As these are unknown in practice, we investigate a fully data-driven choice of the tuning parameter based on a combination of model selection and Lepski’s method, which is inspired by the recent work of Goldenshluger and Lepski [2011]. The tuning parameter is selected as the minimizer of a stochastic penalized contrast function imitating Lepski’s method among a random collection of admissible values. We show that this adaptive procedure attains the lower bound for the minimax risk up to a logarithmic factor over a wide range of classes of slope functions and covariance operators. In particular, our theory covers point-wise estimation as well as the estimation of local averages of the slope parameter. |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:ner:louvai:info:hdl:2078.1/127330&r=ecm |
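[Editor's note] A hedged sketch of the overall scheme: a spectral-cutoff plug-in estimate of a linear functional of the slope, with the cutoff chosen by a Goldenshluger-Lepski-style comparison over a collection of admissible values. The penalty constant and form below are illustrative assumptions, not the paper's calibrated penalty.

```python
# Hedged sketch: estimate theta = <beta, h> in Y = <X, beta> + eps by a
# spectral cutoff on the empirical covariance operator, then pick the cutoff
# with a Lepski-type penalized comparison.
import numpy as np

rng = np.random.default_rng(4)
n, p = 400, 50                                 # n curves on a p-point grid
t = np.linspace(0, 1, p)
X = np.cumsum(rng.normal(size=(n, p)), axis=1) / np.sqrt(p)  # rough curves
beta = np.sin(2 * np.pi * t)                   # true slope function
h = np.ones(p)                                 # functional: theta = mean of beta
ip = lambda a, b: (a * b).mean(axis=-1)        # discretized L2 inner product
y = ip(X, beta) + 0.1 * rng.normal(size=n)

# Empirical covariance operator and its eigendecomposition.
Xc = X - X.mean(axis=0)
C = (Xc.T @ Xc) / (n * p)
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1] * np.sqrt(p)  # ip-normalized eigenfunctions

def theta_hat(m):
    """Plug-in estimate of <beta, h> with spectral cutoff m."""
    g = np.array([np.mean(y * ip(X, phi[:, j])) for j in range(m)])
    b = phi[:, :m] @ (g / lam[:m])
    return ip(b, h)

M = 15
est = np.array([theta_hat(m) for m in range(1, M + 1)])
pen = 2.0 * np.arange(1, M + 1) * np.log(n) / n  # illustrative penalty
crit = [max((est[m] - est[k]) ** 2 - pen[k] for k in range(m, M)) + pen[m]
        for m in range(M)]
m_hat = int(np.argmin(crit))
print("selected cutoff:", m_hat + 1, " theta_hat:", est[m_hat],
      " true theta:", ip(beta, h))
```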
By: | Tommaso Proietti (University of Rome "Tor Vergata"); Alessandra Luati (University of Bologna) |
Abstract: | The exponential model for the spectrum of a time series and its fractional extensions are based on the Fourier series expansion of the logarithm of the spectral density. The coefficients of the expansion form the cepstrum of the time series. After deriving the cepstrum of important classes of time series processes, including those featuring long memory, we discuss likelihood inference based on the periodogram, for which the estimation of the cepstrum yields a generalized linear model for exponential data with logarithmic link, focusing on the issue of separating the contribution of the long memory component to the log-spectrum. We then propose two extensions. The first replaces the logarithmic link with a more general Box-Cox link, which also encompasses the identity and inverse links: this enables nesting alternative spectral estimation methods (autoregressive, exponential, etc.) under the same likelihood-based framework. Second, we propose a gradient boosting algorithm for the estimation of the log-spectrum and illustrate its potential for distilling its long memory component. |
Keywords: | Frequency Domain Methods; Generalized linear models; Long Memory; Boosting. |
Date: | 2013–04–19 |
URL: | http://d.repec.org/n?u=RePEc:rtv:ceisrp:272&r=ecm |
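[Editor's note] A minimal sketch of the Whittle/GLM idea in the abstract: periodogram ordinates are approximately independent exponential with mean f(w_j), and the exponential model makes log f linear in cosines, so the cepstral coefficients can be fit by a Gamma GLM with log link and dispersion fixed at one. The simulated series and truncation order K are illustrative.

```python
# Hedged sketch: fit Bloomfield's exponential model to the periodogram via a
# GLM with log link; Gamma with scale fixed at 1 plays the Exponential role.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T, K = 512, 4
e = rng.normal(size=T)
x = np.convolve(e, [1.0, 0.6], mode="same")      # a simple MA-type series

# Periodogram at Fourier frequencies (excluding 0 and pi).
freqs = 2 * np.pi * np.arange(1, T // 2) / T
dft = np.fft.rfft(x - x.mean())[1:T // 2]
I = (np.abs(dft) ** 2) / (2 * np.pi * T)

# Design matrix of cosines: log f(w) = c0 + 2 * sum_k ck * cos(k w).
Xd = np.column_stack([np.ones_like(freqs)] +
                     [2 * np.cos(k * freqs) for k in range(1, K + 1)])
glm = sm.GLM(I, Xd, family=sm.families.Gamma(link=sm.families.links.Log()))
res = glm.fit(scale=1.0)                         # Exponential = Gamma, scale 1
print("estimated cepstral coefficients:", res.params)
```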
By: | Segers, Johan |
Abstract: | Multivariate extreme-value analysis is concerned with the extremes in a multivariate random sample, that is, points of which at least some components have exceptionally large values. Mathematical theory suggests the use of max-stable models for univariate and multivariate extremes. A comprehensive account is given of the various ways in which max-stable models are described. Furthermore, a construction device is proposed for generating parametric families of max-stable distributions. Although the device is not new, its role as a model generator seems not yet to have been fully exploited. |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:ner:louvai:info:hdl:2078.1/127113&r=ecm |
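[Editor's note] A hedged numerical illustration of max-stability itself, rather than of the paper's construction device: for unit Fréchet margins, the rescaled component-wise maximum of n i.i.d. copies has the same law as a single draw.

```python
# Check max-stability of the unit Frechet law: max(Z_1,...,Z_n)/n has the
# same distribution as Z, since F(x)^n = exp(-n/x) = F(x/n).
import numpy as np

rng = np.random.default_rng(6)
m, n = 100_000, 50
u = rng.uniform(size=(m, n))
z = -1.0 / np.log(u)                 # unit Frechet draws: P(Z <= x) = exp(-1/x)
rescaled_max = z.max(axis=1) / n     # should match the law of a single draw

qs = [0.25, 0.5, 0.75, 0.9]
print("single draw quantiles:  ", np.quantile(z[:, 0], qs))
print("rescaled max quantiles: ", np.quantile(rescaled_max, qs))
print("theoretical quantiles:  ", [-1 / np.log(q) for q in qs])
```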
By: | Demiris, Nikolaos; Kypraios, Theodore; Smith, L. Vanessa |
Abstract: | This paper proposes a framework for modelling financial contagion that is based on SIR (Susceptible-Infected-Recovered) transmission models from epidemic theory. This class of models addresses two important features of contagion modelling that are a common shortcoming of most existing empirical approaches: the direct modelling of the inherent dependencies involved in the transmission mechanism, and an associated canonical measure of crisis severity. The proposed methodology naturally implies a control mechanism, which is required when evaluating prospective immunisation policies intended to mitigate the impact of a crisis. It can be implemented not only as a way of learning from past experiences, but also at the onset of a contagious financial crisis. The approach is illustrated on a number of currency crisis episodes, using both historical final-outcome and temporal data. The latter require the introduction of a novel hierarchical model that we call the Hidden Epidemic Model (HEM), which embeds the stochastic financial epidemic as a latent process. The empirical results suggest, among other things, an increasing trend for global transmission of currency crises over time. |
Keywords: | Financial crisis, contagion, stochastic epidemic model, random graph, MCMC |
JEL: | C11 C15 C51 G01 G15 |
Date: | 2012–11 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:46693&r=ecm |
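[Editor's note] A minimal sketch of the epidemic analogy: a discrete-time chain-binomial SIR process over countries on a complete contact graph. The transmission and recovery probabilities are illustrative, and the paper's Bayesian/MCMC inference layer is omitted.

```python
# Hedged sketch: each susceptible country is independently infected by each
# currently "infected" (crisis) country with probability p per period.
import numpy as np

rng = np.random.default_rng(7)
n_countries, p_transmit, p_recover = 30, 0.05, 0.3
state = np.zeros(n_countries, dtype=int)   # 0=susceptible, 1=infected, 2=recovered
state[0] = 1                               # crisis starts in one country

history = []
while (state == 1).any():
    n_inf = (state == 1).sum()
    # A susceptible escapes all infected contacts w.p. (1 - p)**n_inf.
    attack = 1 - (1 - p_transmit) ** n_inf
    new_inf = (state == 0) & (rng.uniform(size=n_countries) < attack)
    recover = (state == 1) & (rng.uniform(size=n_countries) < p_recover)
    state[new_inf], state[recover] = 1, 2
    history.append((state == 1).sum())

print("final size (ever in crisis):", (state > 0).sum(), "of", n_countries)
print("infected counts per period:", history)
```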
By: | Kyoo il Kim; Amil Petrin |
Abstract: | We develop simple tests for endogenous prices arising from omitted demand factors in discrete choice models. Our approach only requires locating testing proxies that have some correlation with the omitted factors when prices are endogenous. We use the difference between prices and their predicted values given observed demand and supply factors. If prices are exogenous, these proxies should not explain demand given prices and other explanatory variables. We reject exogeneity if these proxies enter significantly in utility as additional explanatory variables. The tests are easy to implement, as we show with several Monte Carlo experiments and discuss in the context of three recent demand applications. |
JEL: | C3 L0 |
Date: | 2013–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:19011&r=ecm |
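[Editor's note] A hedged linear-demand caricature of the proposed test (the paper works with discrete-choice utilities; the linear model is a simplification): regress price on observed demand and supply factors, use the residual as the testing proxy, and check whether it enters the demand equation significantly given price.

```python
# Hedged sketch: a significant coefficient on the price residual signals
# price endogeneity; an insignificant one is consistent with exogeneity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 2000
cost = rng.normal(size=n)                       # observed supply shifter
xi = rng.normal(size=n)                         # unobserved demand factor
price = 1.0 + 0.5 * cost + 0.8 * xi + rng.normal(size=n)   # endogenous price
q = 2.0 - 1.0 * price + 1.5 * xi + rng.normal(size=n)

# Step 1: price residual given observed factors -> testing proxy.
proxy = price - sm.OLS(price, sm.add_constant(cost)).fit().fittedvalues
# Step 2: does the proxy explain demand, given price?
res = sm.OLS(q, sm.add_constant(np.column_stack([price, proxy]))).fit()
print(res.t_test("x2"))   # x2 = proxy; significant -> reject exogeneity
```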
By: | Baldwin, Kate; Bhavnani, Rikhil R. |
Abstract: | "Ancillary experiments" are a new technique whereby researchers use a completed experiment conducted by others to recover causal estimates of a randomized intervention on new outcomes. The method requires pairing new outcome data with randomized treatment |
Keywords: | experimental methods, government performance, ancillary experiments, downstream experiments, causal inference, research design |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:unu:wpaper:wp2013-024&r=ecm |
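[Editor's note] A minimal sketch of the basic ancillary-experiment estimator implied by the abstract: reuse the completed experiment's randomized assignment with newly collected outcomes and compute a difference in means. Variable names are illustrative.

```python
# Hedged sketch: difference in means of a *new* outcome across the original
# randomized treatment arms, with a standard error.
import numpy as np

rng = np.random.default_rng(9)
n = 800
treated = rng.integers(0, 2, size=n)               # original randomization
new_outcome = 0.3 * treated + rng.normal(size=n)   # outcome measured later

y1, y0 = new_outcome[treated == 1], new_outcome[treated == 0]
ate = y1.mean() - y0.mean()
se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
print(f"estimated effect on new outcome: {ate:.3f} (se {se:.3f})")
```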
By: | In Choi (Department of Economics, Sogang University, Seoul) |
Abstract: | This paper derives Akaike's (1973) Akaike information criterion (AIC), Hurvich and Tsai's (1989) corrected AIC, the Bayesian information criterion (BIC) of Akaike (1978) and Schwarz (1978), and Hannan and Quinn's (1979) information criterion for factor models and studies the consistency properties of these information criteria. It also reports extensive simulation results comparing the performance of the extant and new procedures for the selection of the number of factors. The data generating process for the simulation consists of serially correlated factors and serially and cross-sectionally correlated idiosyncratic errors. The idiosyncratic errors are either homoskedastic or heteroskedastic. Idiosyncratic errors with fat tails and those with outliers having a much larger variance than the rest of the errors are also considered. The simulation results show the difficulty of determining which criterion performs best. In practice, it is advisable to consider several criteria at the same time, especially BIC, Hannan and Quinn's information criterion, Bai and Ng's (2002) IC_p2 and BIC3, and Onatski's (2010) and Ahn and Horenstein's (2009) eigenvalue-based criteria. The model-selection criteria considered in this paper are also applied to Stock and Watson's (2002, 2005) data sets. The results differ considerably depending on the model-selection criterion in use, but evidence suggesting four factors for Stock and Watson's (2002) data and six or seven factors for Stock and Watson's (2005) data is obtainable. |
Keywords: | Factor model, Akaike information criterion, corrected Akaike information criterion, Bayesian information criterion, Hannan and Quinn's (1979) information criterion |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:sgo:wpaper:1209&r=ecm |
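[Editor's note] A minimal sketch of one criterion the abstract highlights, Bai and Ng's (2002) IC_p2: estimate factors by principal components and minimize log V(k) plus a penalty in k. The data-generating design here is illustrative and much simpler than the paper's.

```python
# Hedged sketch: select the number of factors by minimizing
# log V(k) + k * ((N+T)/(N*T)) * log(min(N, T))   (Bai-Ng IC_p2).
import numpy as np

rng = np.random.default_rng(10)
N, T, r = 100, 200, 4                     # true number of factors: 4
F = rng.normal(size=(T, r))
L = rng.normal(size=(N, r))
X = F @ L.T + rng.normal(size=(T, N))

Xs = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
kmax, C2 = 10, min(N, T)
ics = []
for k in range(1, kmax + 1):
    common = U[:, :k] * s[:k] @ Vt[:k]    # rank-k principal components fit
    V = np.mean((Xs - common) ** 2)       # residual variance V(k)
    pen = k * ((N + T) / (N * T)) * np.log(C2)
    ics.append(np.log(V) + pen)
print("selected number of factors:", int(np.argmin(ics)) + 1)
```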
By: | Gregory Schehr; Satya N. Majumdar |
Abstract: | While records and order statistics of independent and identically distributed (i.i.d.) random variables X_1, ..., X_N are fully understood, much less is known for strongly correlated random variables, which is often the situation encountered in statistical physics. Recently, it was shown in a series of works that the one-dimensional random walk (RW) is an interesting laboratory where the influence of strong correlations on records and order statistics can be studied in detail. We review here recent exact results on these questions for RWs, obtained using techniques borrowed from the study of first-passage problems. We also present a brief review of the well known (and not so well known) results for records and order statistics of i.i.d. variables. |
Date: | 2013–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1305.0639&r=ecm |
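[Editor's note] A hedged check of one classical result this review covers: for a symmetric random walk with a continuous jump distribution, the mean number of records after n steps grows universally like 2*sqrt(n/pi), a consequence of the Sparre Andersen theorem.

```python
# Simulate symmetric Gaussian random walks and count records, where a record
# at step k means x_k exceeds max(x_0, ..., x_{k-1}) with x_0 = 0.
import numpy as np

rng = np.random.default_rng(11)
reps, n = 5000, 1000
steps = rng.normal(size=(reps, n))
walks = np.concatenate([np.zeros((reps, 1)), np.cumsum(steps, axis=1)], axis=1)

prior_max = np.maximum.accumulate(walks, axis=1)[:, :-1]
is_record = walks[:, 1:] > prior_max
mean_records = is_record.sum(axis=1).mean() + 1   # +1: x_0 itself is a record

print("simulated mean # records:", mean_records)
print("asymptotic 2*sqrt(n/pi): ", 2 * np.sqrt(n / np.pi))
```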