New Economics Papers on Econometrics |
By: | Firpo, Sergio (São Paulo School of Economics) |
Abstract: | This paper presents semiparametric estimators of distributional impacts of interventions (treatment) when selection into the program is based on observable characteristics. Distributional impacts of a treatment are calculated as differences in inequality measures of the potential outcomes of receiving and not receiving the treatment. These differences are called “Inequality Treatment Effects” (ITE). The estimation procedure involves a first non-parametric step in which the probability of receiving treatment given covariates, the propensity score, is estimated. In the second step, weighted sample versions of inequality measures are computed using weights based on the estimated propensity score. Root-N consistency, asymptotic normality, semiparametric efficiency and validity of inference based on the bootstrap are shown for the semiparametric estimators proposed. In addition to being easily implementable and computationally simple, the estimators perform well in small samples: results from a Monte Carlo exercise reveal that their good relative performance is robust to changes in the distribution of latent selection variables. Finally, as an illustration of the method, we apply the estimator to a real data set collected for the evaluation of a job training program, using several popular inequality measures to capture distributional impacts of the program. |
Keywords: | inequality measures, treatment effects, semiparametric efficiency, reweighting estimator |
JEL: | C1 C3 |
Date: | 2010–03 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp4841&r=ecm |
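A minimal sketch of the two-step reweighting idea described in the abstract, assuming a logit in place of the paper's nonparametric propensity-score step and the Gini coefficient as the inequality measure; the names y (outcomes), d (treatment indicator) and x (covariate matrix) are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_gini(y, w):
    """Gini coefficient of y under (normalized) weights w."""
    order = np.argsort(y)
    y, w = y[order], w[order] / w[order].sum()
    cum_pop = np.cumsum(w)                       # cumulative population share
    cum_inc = np.cumsum(w * y) / np.sum(w * y)   # cumulative outcome share
    # trapezoid approximation of the area under the Lorenz curve
    area = np.sum(np.diff(np.r_[0.0, cum_pop]) * (cum_inc + np.r_[0.0, cum_inc[:-1]])) / 2
    return 1.0 - 2.0 * area

def inequality_treatment_effect(y, d, x):
    # Step 1: propensity score p(x) = Pr(D = 1 | X = x)
    # (a parametric logit stands in for the paper's nonparametric first step)
    p = LogisticRegression(max_iter=1000).fit(x, d).predict_proba(x)[:, 1]
    # Step 2: reweighted inequality measures of the two potential-outcome distributions
    gini1 = weighted_gini(y[d == 1], 1.0 / p[d == 1])         # treated, weight 1/p
    gini0 = weighted_gini(y[d == 0], 1.0 / (1 - p[d == 0]))   # control, weight 1/(1-p)
    return gini1 - gini0                                      # ITE for the Gini measure
```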
By: | Ji-Liang Shiu and Yingyao Hu |
Abstract: | This paper considers nonparametric identification of nonlinear dynamic models for panel data with unobserved covariates. Including such unobserved covariates may control for both the individual-specific unobserved heterogeneity and the endogeneity of the explanatory variables. Without specifying the distribution of the initial condition together with the unobserved covariates, we show that the models are nonparametrically identified from three periods of data. The main identifying assumption requires that the evolution of the observed covariates depend on the unobserved covariates but not on the lagged dependent variable. We also propose a sieve maximum likelihood estimator (MLE) and focus on two classes of nonlinear dynamic panel data models, i.e., dynamic discrete choice models and dynamic censored models. We present the asymptotic properties of the sieve MLE and investigate the finite sample properties of these sieve-based estimators through a Monte Carlo study. An intertemporal female labor force participation model is estimated as an empirical illustration using a sample from the Panel Study of Income Dynamics (PSID). |
Date: | 2010–04 |
URL: | http://d.repec.org/n?u=RePEc:jhu:papers:557&r=ecm |
By: | Pitarakis, Jean-Yves |
Abstract: | We explore the properties of a Wald type test statistic for detecting the presence of threshold effects in time series when the underlying process could be nearly integrated as opposed to having an exact unit root. We derive its limiting distribution and establish its equivalence to a normalised squared Brownian Bridge process. More importantly, we show that the limiting random variable no longer depends on the noncentrality parameter characterising the nearly integrated DGP. This is an unusual occurrence, in stark contrast with the existing literature on conducting inference under persistent regressors, where it is well known that the noncentrality parameter appears in the limiting distribution of test statistics, making them impractical for inference purposes. |
Keywords: | Threshold Autoregressive Models, Near Unit Root, Noncentrality Parameter, Nonlinear time series |
JEL: | C22 |
Date: | 2010–03–01 |
URL: | http://d.repec.org/n?u=RePEc:stn:sotoec:1007&r=ecm |
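A purely illustrative sketch of the mechanics, not the paper's exact specification: simulate a nearly integrated series and compute a Wald statistic for equality of slopes across two threshold regimes, maximised over a trimmed grid of candidate thresholds (a common way of handling an unknown threshold).

```python
import numpy as np

rng = np.random.default_rng(0)
T, c = 500, -5.0
rho = 1.0 + c / T                         # near-unit-root coefficient, noncentrality c
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + rng.standard_normal()

dy, ylag = np.diff(y), y[:-1]

def wald(threshold):
    below = (ylag <= threshold).astype(float)
    X = np.column_stack([ylag * below, ylag * (1 - below)])   # two threshold regimes
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    V = s2 * np.linalg.inv(X.T @ X)
    diff = beta[0] - beta[1]                                   # H0: no threshold effect
    return diff ** 2 / (V[0, 0] + V[1, 1] - 2 * V[0, 1])

grid = np.quantile(ylag, np.linspace(0.15, 0.85, 50))          # trimmed threshold grid
sup_wald = max(wald(g) for g in grid)
print(sup_wald)
```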
By: | David F. Hendry (Department of Economics, Oxford University, Manor Rd. Building, Oxford, OX1 3UQ, United Kingdom.); Kirstin Hubrich (Research Department, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.) |
Abstract: | To forecast an aggregate, we propose adding disaggregate variables, instead of combining forecasts of those disaggregates or forecasting by a univariate aggregate model. New analytical results show the effects of changing coefficients, mis-specification, estimation uncertainty and mis-measurement error. Forecast-origin shifts in parameters affect absolute, but not relative, forecast accuracies; mis-specification and estimation uncertainty induce forecast-error differences, which variable-selection procedures or dimension reductions can mitigate. In Monte Carlo simulations, different stochastic structures and interdependencies between disaggregates imply that including disaggregate information in the aggregate model improves forecast accuracy. Our theoretical predictions and simulations are corroborated when forecasting aggregate US inflation pre- and post-1984 using disaggregate sectoral data. |
Keywords: | Aggregate forecasts, disaggregate information, forecast combination, inflation |
JEL: | C51 C53 E31 |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20101155&r=ecm |
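A hedged simulation sketch of the three strategies compared in the abstract: a univariate model for the aggregate, the aggregate model augmented with lagged disaggregates, and summing forecasts from separate disaggregate models. Simple AR(1)-type regressions on simulated data; the DGP, sample split and persistence parameters are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 400, 4
phi = np.array([0.8, 0.5, 0.2, -0.3])                 # heterogeneous persistence
x = np.zeros((T, n))
for t in range(1, T):                                 # disaggregate components
    x[t] = phi * x[t - 1] + rng.standard_normal(n)
agg = x.sum(axis=1)                                   # the aggregate to be forecast

y = agg[1:]                                           # target: one-step-ahead aggregate
X_ar = np.column_stack([np.ones(T - 1), agg[:-1]])    # (i) lagged aggregate only
X_dis = np.column_stack([np.ones(T - 1), x[:-1]])     # (ii) lagged disaggregates added
split = 300

def msfe(X):
    b, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    return np.mean((y[split:] - X[split:] @ b) ** 2)

# (iii) separate AR(1) forecasts of each disaggregate, then summed
fc = np.zeros(T - 1 - split)
for j in range(n):
    Xj = np.column_stack([np.ones(T - 1), x[:-1, j]])
    bj, *_ = np.linalg.lstsq(Xj[:split], x[1:, j][:split], rcond=None)
    fc += Xj[split:] @ bj

print("aggregate AR:", msfe(X_ar))
print("augmented with disaggregates:", msfe(X_dis))
print("sum of disaggregate forecasts:", np.mean((y[split:] - fc) ** 2))
```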
By: | Gabriele Fiorentini (RCEA and Università di Firenze, Italy); Enrique Sentana (CEMFI, Madrid, Spain) |
Abstract: | We derive computationally simple score tests of serial correlation in the levels and squares of common and idiosyncratic factors in static factor models. The implicit orthogonality conditions resemble the orthogonality conditions of models with observed factors, but the weighting matrices reflect their unobservability. We derive more powerful tests for elliptically symmetric distributions, which can be either parametrically or semiparametrically specified, and robustify the Gaussian tests against general non-normality. Our Monte Carlo exercises assess the finite sample reliability and power of our proposed tests, and compare them to other existing procedures. Finally, we apply our methods to monthly US stock returns. |
Keywords: | ARCH, Financial returns, Kalman filter, LM tests, Predictability |
JEL: | C32 C13 C12 C14 C16 |
Date: | 2010–01 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:04_10&r=ecm |
By: | Arvid Raknerud and Øivind Skare (Statistics Norway) |
Abstract: | This paper extends the ordinary quasi-likelihood estimator for stochastic volatility models based on non-Gaussian Ornstein-Uhlenbeck (OU) processes to vector processes. Despite the fact that multivariate modeling of asset returns is essential for portfolio optimization and risk management -- major areas of financial analysis -- the literature on multivariate modeling of asset prices in continuous time is sparse, with regard to both theoretical and applied results. This paper uses non-Gaussian OU processes as building blocks for multivariate models of high frequency financial data. The OU framework allows exact discrete-time transition equations that can be represented in linear state space form. We show that a computationally feasible quasi-likelihood function can be constructed by means of the Kalman filter even in the case of high-dimensional vector processes. The framework is applied to Euro/NOK and US Dollar/NOK exchange rate data for the period 2.1.1989-4.2.2010. |
Keywords: | multivariate stochastic volatility; exchange rates; Ornstein-Uhlenbeck processes; quasi-likelihood; factor models; state space representation |
JEL: | C13 C22 C51 G10 |
Date: | 2010–03 |
URL: | http://d.repec.org/n?u=RePEc:ssb:dispap:614&r=ecm |
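A minimal univariate sketch, under stated simplifications, of the quasi-likelihood idea in the abstract: the exact discrete-time transition implied by a Gaussian OU process plus a noisy observation equation, evaluated with a Kalman filter via the prediction-error decomposition. The paper's multivariate, non-Gaussian OU setup is considerably richer; the parameter names lam, sigma, tau are placeholders.

```python
import numpy as np

def ou_quasi_loglik(y, lam, sigma, tau, dt=1.0):
    """Gaussian quasi-log-likelihood of y_t = x_t + eps_t, x an OU process."""
    phi = np.exp(-lam * dt)                          # exact OU transition coefficient
    q = sigma ** 2 * (1 - phi ** 2) / (2 * lam)      # transition noise variance
    h = tau ** 2                                     # measurement noise variance
    a, p = 0.0, sigma ** 2 / (2 * lam)               # start from the stationary distribution
    ll = 0.0
    for obs in y:
        f = p + h                                    # prediction-error variance
        v = obs - a                                  # prediction error
        ll += -0.5 * (np.log(2 * np.pi * f) + v ** 2 / f)
        k = p / f                                    # Kalman gain
        a, p = a + k * v, p * (1 - k)                # filtering (update) step
        a, p = phi * a, phi ** 2 * p + q             # prediction step for next period
    return ll
```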
By: | Francesco Audrino; Fulvio Corsi; Kameliya Filipova |
Abstract: | We propose a simple but effective estimation procedure to extract the level and the volatility dynamics of a latent macroeconomic factor from a panel of observable indicators. Our approach is based on a multivariate conditionally heteroskedastic exact factor model that can accommodate the heteroskedasticity exhibited by most macroeconomic variables, and it relies on an iterated Kalman filter procedure. In simulations we show the unbiasedness of the proposed estimator and its superiority over alternative approaches proposed in the literature. The simulation results are confirmed in applications to real inflation data with the goal of forecasting long-term bond risk premia. Moreover, we find that the extracted level and conditional variance of the latent factor for inflation are strongly related to NBER business cycles. |
Keywords: | Macroeconomic variables; Exact factor model; Kalman filter; Heteroskedasticity; Forecasting bond risk premia; Inflation measures; Business cycles |
JEL: | C13 C33 C53 C82 E31 E47 |
Date: | 2010–03 |
URL: | http://d.repec.org/n?u=RePEc:usg:dp2010:2010-09&r=ecm |
By: | Jean Pietro Bonaldi |
Abstract: | This article analyzes identification problems that may arise when linearizing and solving DSGE models. A criterion is proposed to determine whether or not a set of parameters is partially identifiable, in the sense of Canova and Sala (2009), based on the computation of a basis for the null space of the Jacobian matrix of the function mapping the parameters to the coefficients of the solution of the model. |
Date: | 2010–03–28 |
URL: | http://d.repec.org/n?u=RePEc:col:000094:006859&r=ecm |
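A hedged numerical sketch of the criterion described above: differentiate the mapping from structural parameters to solution coefficients and inspect a basis of the Jacobian's null space; a non-trivial basis flags directions in which parameters can move without changing the solution. The mapping g below is a toy placeholder, not an actual DSGE solution.

```python
import numpy as np
from scipy.linalg import null_space

def g(theta):
    """Placeholder for the map from deep parameters to solution coefficients."""
    a, b, c = theta
    return np.array([a * b, a * b + c, c ** 2])      # a and b enter only through a*b

def numerical_jacobian(f, theta, eps=1e-6):
    f0 = f(theta)
    J = np.zeros((len(f0), len(theta)))
    for j in range(len(theta)):                      # forward finite differences
        step = np.zeros_like(theta)
        step[j] = eps
        J[:, j] = (f(theta + step) - f0) / eps
    return J

J = numerical_jacobian(g, np.array([0.5, 0.9, 0.2]))
N = null_space(J)          # basis for the null space of the Jacobian
print(N)                   # a non-empty basis signals partial identification
```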
By: | Russell Cooper; John Haltiwanger; Jonathan L. Willis |
Abstract: | This paper studies capital adjustment at the establishment level. Our goal is to characterize capital adjustment costs, which are important for understanding both the dynamics of aggregate investment and the impact of various policies on capital accumulation. Our estimation strategy searches for parameters that minimize ex post errors in an Euler equation. This strategy is quite common in models for which adjustment occurs in each period. Here, we extend that logic to the estimation of parameters of dynamic optimization problems in which non-convexities lead to extended periods of investment inactivity. In doing so, we create a method to take into account censored observations stemming from intermittent investment. This methodology allows us to take the structural model directly to the data, avoiding time-consuming simulation-based methods. To study the effectiveness of this methodology, we first undertake several Monte Carlo exercises using data generated by the structural model. We then estimate capital adjustment costs for U.S. manufacturing establishments in two sectors. |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp10-04&r=ecm |
By: | Maria Grith; Wolfgang Karl Härdle; Melanie Schienle |
Abstract: | This chapter deals with nonparametric estimation of the risk neutral density. We present three different approaches which do not require parametric functional assumptions on either the underlying asset price dynamics or the distributional form of the risk neutral density. The first estimator is a kernel smoother of the second derivative of call prices, while the second procedure applies kernel type smoothing in the implied volatility domain. In the conceptually different third approach we assume the existence of a stochastic discount factor (pricing kernel) which establishes the risk neutral density conditional on the physical measure of the underlying asset. Via direct series type estimation of the pricing kernel we can derive an estimate of the risk neutral density by solving a constrained optimization problem. The methods are compared using European call option prices. The focus of the presentation is on practical aspects, such as the appropriate choice of smoothing parameters, in order to facilitate the application of the techniques. |
Keywords: | Risk neutral density, Pricing kernel, Kernel smoothing, Local polynomials, Series methods |
JEL: | C13 C14 G12 |
Date: | 2010–03 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2010-021&r=ecm |
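A sketch of the first approach listed in the abstract, assuming the Breeden-Litzenberger relation: the risk neutral density is the discounted second derivative of the call price curve in the strike, obtained here by local quadratic (kernel-weighted) smoothing. The inputs strikes, calls, r, T and bandwidth are placeholders; bandwidth choice is precisely the practical issue the chapter emphasises.

```python
import numpy as np

def rnd_from_calls(strikes, calls, grid, r, T, bandwidth):
    """Risk neutral density on `grid` from observed call prices across strikes."""
    density = np.empty_like(grid, dtype=float)
    for i, k0 in enumerate(grid):
        u = (strikes - k0) / bandwidth
        w = np.exp(-0.5 * u ** 2)                           # Gaussian kernel weights
        X = np.column_stack([np.ones_like(strikes),
                             strikes - k0, (strikes - k0) ** 2])
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ calls)      # weighted local quadratic fit
        d2c = 2.0 * beta[2]                                 # second strike derivative at k0
        density[i] = np.exp(r * T) * d2c                    # Breeden-Litzenberger formula
    return density
```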
By: | James Mitchell; A. Garratt; S.P. Vahey |
Abstract: | We propose a methodology for producing density forecasts for the output gap in real time using a large number of vector autoregressions in inflation and output gap measures. Density combination utilizes a linear mixture of experts framework to produce potentially non-Gaussian ensemble densities for the unobserved output gap. In our application, we show that data revisions substantially alter our probabilistic assessments of the output gap using a variety of output gap measures derived from univariate detrending filters. The resulting ensemble produces well-calibrated forecast densities for US inflation in real time, in contrast to those from simple univariate autoregressions which ignore the contribution of the output gap. Broadening our empirical analysis to consider output gap measures derived from linear time trends, as well as more flexible trends, generates very different point estimates of the output gap. Combining evidence from both linear trends and more flexible univariate detrending filters induces strong multi-modality in the predictive densities for the unobserved output gap. The peaks associated with these two detrending methodologies indicate output gaps of opposite sign for some observations, reflecting the pervasive nature of model uncertainty in our US data. |
Date: | 2009–10 |
URL: | http://d.repec.org/n?u=RePEc:nsr:niesrd:342&r=ecm |
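A minimal sketch of the linear opinion pool ("mixture of experts") combination described above: component predictive densities for the output gap are averaged with combination weights. The two Gaussian components and the equal weights are illustrative placeholders; with well-separated components the ensemble density becomes bimodal, as in the abstract.

```python
import numpy as np
from scipy.stats import norm

grid = np.linspace(-6, 6, 601)                 # evaluation points for the output gap
components = [norm(loc=-1.5, scale=0.8),       # e.g. a flexible-trend based measure
              norm(loc=2.0, scale=1.0)]        # e.g. a linear-trend based measure
weights = np.array([0.5, 0.5])                 # in practice estimated recursively

# linear opinion pool: weighted average of component predictive densities
ensemble = sum(w * c.pdf(grid) for w, c in zip(weights, components))
# well-separated components yield the bimodal ensemble density described above
```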
By: | Nikolay Gospodinov; Raymond Kan; Cesare Robotti |
Abstract: | We provide an in-depth analysis of the theoretical and statistical properties of the Hansen-Jagannathan (HJ) distance that incorporates a no-arbitrage constraint. We show that for stochastic discount factors (SDF) that are spanned by the returns on the test assets, testing the equality of HJ distances with no-arbitrage constraints is the same as testing the equality of HJ distances without no-arbitrage constraints. A discrepancy can exist only when at least one SDF is a function of factors that are poorly mimicked by the returns on the test assets. Under a joint normality assumption on the SDF and the returns, we derive explicit solutions for the HJ distance with a no-arbitrage constraint, the associated Lagrange multipliers, and the SDF parameters in the case of linear SDFs. This solution allows us to show that nontrivial differences between HJ distances with and without no-arbitrage constraints can arise only when the volatility of the unspanned component of an SDF is large and the Sharpe ratio of the tangency portfolio of the test assets is very high. Finally, we present the appropriate limiting theory for estimation, testing, and comparison of SDFs using the HJ distance with a no-arbitrage constraint. |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedawp:2010-04&r=ecm |
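For reference, a sketch of the standard (unconstrained) Hansen-Jagannathan distance for a linear SDF m_t = f_t'theta; the no-arbitrage-constrained version studied in the paper additionally imposes non-negativity of the SDF, which requires solving a constrained problem and is not shown here. R and F are placeholder arrays of gross returns and factors (including a constant).

```python
import numpy as np

def hj_distance(R, F):
    """Unconstrained HJ distance and SDF parameters for m_t = F_t @ theta.

    R: (T, N) gross returns on test assets; F: (T, K) factors incl. a constant.
    """
    T, N = R.shape
    D = R.T @ F / T                        # sample E[R f']
    q = np.ones(N)                         # pricing targets for gross returns
    W = np.linalg.inv(R.T @ R / T)         # weighting matrix E[R R']^{-1}
    theta = np.linalg.solve(D.T @ W @ D, D.T @ W @ q)   # distance-minimizing parameters
    e = D @ theta - q                      # pricing errors
    return np.sqrt(e @ W @ e), theta
```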
By: | Giuseppe Cavaliere; A. M. Robert Taylor; Carsten Trenkler |
Abstract: | In this paper we investigate the role of deterministic components and initial values in bootstrap likelihood ratio type tests of co-integration rank. A number of bootstrap procedures have been proposed in the recent literature some of which include estimated deterministic components and non-zero initial values in the bootstrap recursion while others do the opposite. To date, however, there has not been a study into the relative performance of these two alternative approaches. In this paper we fill this gap in the literature and consider the impact of these choices on both OLS and GLS de-trended tests, in the case of the latter proposing a new bootstrap algorithm as part of our analysis. Overall, for OLS de-trended tests our findings suggest that it is preferable to take the computationally simpler approach of not including estimated deterministic components in the bootstrap recursion and setting the initial values of the bootstrap recursion to zero. For GLS de-trended tests, we find that the approach of Trenkler (2009), who includes a restricted estimate of the deterministic component in the bootstrap recursion, can improve finite sample behaviour further. |
Keywords: | Co-integration; trace tests; i.i.d. bootstrap; OLS and GLS de-trending |
Date: | 2010–03 |
URL: | http://d.repec.org/n?u=RePEc:not:notgts:10/04&r=ecm |
By: | Christian Calmès (Département des sciences administratives, Université du Québec (Outaouais), et Chaire d'information financière et organisationnelle, ESG-UQAM); Denis Cormier (Département de stratégie des affaires, Université du Québec (Montréal), et Chaire d'information financière et organisationnelle, ESG-UQAM); Francois Racicot (Département des sciences administratives, Université du Québec (Outaouais), et Chaire d'information financière et organisationnelle, ESG-UQAM); Raymond Théoret (Département de stratégie des affaires, Université du Québec (Montréal), et Chaire d'information financière et organisationnelle, ESG-UQAM) |
Abstract: | We formulate well-known discretionary accruals models in an investment setting. Given that accruals basically consist of short-term investment, we introduce (i) cash flows, as a proxy for financial constraints and other financial market imperfections, and (ii) Tobin’s q, as a measure of capital return. Because accounting data and Tobin’s q are measured with error, we propose an econometric method based on a modified version of the Hausman artificial regression, which features an optimal weighting matrix of higher-moment instrumental variable estimators. The empirical results suggest that all the key parameters of the discretionary accruals models studied are systematically biased by measurement error. |
Keywords: | Discretionary accruals; Earnings management; Investment; Measurement errors; Higher moments; Instrumental variable estimators. |
JEL: | M41 C12 D92 |
Date: | 2010–01–01 |
URL: | http://d.repec.org/n?u=RePEc:pqs:wpaper:012010&r=ecm |
By: | Massimiliano Marcellino (European University Institute, Badia Fiesolana - Via dei Roccettini 9, I-50014 San Domenico di Fiesole (FI), Italy. Bocconi University and CEPR.); Alberto Musso (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.) |
Abstract: | This paper provides evidence on the reliability of euro area real-time output gap estimates. A genuine real-time data set for the euro area is used, including vintages of several sets of euro area output gap estimates available from 1999 to 2006. It turns out that real-time estimates of the output gap are characterised by a high degree of uncertainty, much higher than that resulting from model and estimation uncertainty only. In particular, the evidence indicates that both the magnitude and the sign of the real-time estimates of the euro area output gap are very uncertain. The uncertainty is mostly due to parameter instability, while data revisions seem to play a minor role. To benchmark our results, we repeat the analysis for the US over the same sample. It turns out that US real-time estimates are much more strongly correlated with final estimates than euro area ones, and that data revisions play a larger role, but overall the unreliability in real time of US output gap measures detected in earlier studies is confirmed in the more recent period. Moreover, despite some differences across output gap estimates and forecast horizons, the results point clearly to a lack of usefulness of real-time output gap estimates for inflation forecasting, both in the short term (one-quarter and one-year ahead) and in the medium term (two-year and three-year ahead). By contrast, some evidence is provided indicating that several output gap estimates are useful for forecasting real GDP growth, particularly in the short term, and some also appear useful in the medium term. No single output gap measure appears superior to all others in all respects. |
Keywords: | Output gap, real-time data, euro area, inflation forecasts, real GDP forecasts, data revisions |
JEL: | E31 E37 E52 E58 |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20101157&r=ecm |