New Economics Papers on Econometrics
By: | Mohitosh Kejriwal; Pierre Perron |
Abstract: | Perron and Yabu (2008) consider the problem of testing for a break occurring at an unknown date in the trend function of a univariate time series when the noise component can be either stationary or integrated. This paper extends their work by proposing a sequential test that allows one to test the null hypothesis of, say, l breaks, versus the alternative hypothesis of (l + 1) breaks. The test enables consistent estimation of the number of breaks. In both stationary and integrated cases, it is shown that asymptotic critical values can be obtained from the relevant quantiles of the limit distribution of the test for a single break. Monte Carlo simulations suggest that the procedure works well in finite samples. |
Keywords: | structural change, sequential procedure, feasible gls, unit root, structural breaks |
JEL: | C22 |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:pur:prukra:1217&r=ecm |
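The Perron and Yabu (2008) and Kejriwal and Perron procedures rely on a feasible GLS correction so that the same critical values apply whether the noise is I(0) or I(1); that correction is not reproduced here. The following is only a minimal OLS sketch of the basic building block, a sup-Wald grid search over candidate dates for a single break in the slope of a linear trend, with an illustrative trimming fraction and simulated data.

```python
import numpy as np

def sup_wald_trend_break(y, trim=0.15):
    """Grid-search sup-Wald statistic for a single break in the slope of a
    linear trend, using plain OLS (no GLS correction for integrated noise)."""
    T = len(y)
    t = np.arange(1, T + 1, dtype=float)
    best_stat, best_tb = -np.inf, None
    for tb in range(int(trim * T), int((1 - trim) * T)):
        # slope-shift regressor: (t - tb) after the candidate break, 0 before
        shift = np.where(t > tb, t - tb, 0.0)
        X = np.column_stack([np.ones(T), t, shift])
        beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (T - X.shape[1])
        XtX_inv = np.linalg.inv(X.T @ X)
        wald = beta[2] ** 2 / (sigma2 * XtX_inv[2, 2])  # test: no slope shift
        if wald > best_stat:
            best_stat, best_tb = wald, tb
    return best_stat, best_tb

# toy example: slope change halfway through the sample
rng = np.random.default_rng(0)
T = 200
trend = 0.05 * np.arange(T) + np.where(np.arange(T) > 100, 0.1 * (np.arange(T) - 100), 0.0)
y = trend + rng.normal(scale=1.0, size=T)
print(sup_wald_trend_break(y))
```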
By: | Malik, Sheheryar (Department of Economics, University of Warwick); Pitt, Michael K (Department of Economics, University of Warwick) |
Abstract: | In this paper we provide a unified methodology in order to conduct likelihood-based inference on the unknown parameters of a general class of discrete-time stochastic volatility models, characterized by both a leverage effect and jumps in returns. Given the non-linear/non-Gaussian state-space form, the likelihood for the parameters is approximated using output generated by the particle filter. Methods are employed to ensure that the approximating likelihood is continuous as a function of the unknown parameters, thus enabling the use of Newton-Raphson type maximization algorithms. Our approach is robust and efficient relative to alternative Markov chain Monte Carlo schemes employed in such contexts. In addition, it provides a feasible basis for undertaking the non-trivial task of model comparison. The technique is applied to daily returns data for various stock price indices. We find strong evidence in favour of a leverage effect in all cases. Jumps are an important component in two out of the four series we consider. |
Keywords: | Particle filter ; Simulation ; SIR ; State space ; Leverage effect ; Jumps |
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:wrk:warwec:897&r=ecm |
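As a rough illustration of the machinery described above, here is a plain bootstrap (SIR) particle filter that approximates the log-likelihood of a basic stochastic volatility model without leverage or jumps. The model, parameter values and the multinomial resampling step are simplifying assumptions of this sketch; Malik and Pitt instead use methods that keep the approximate likelihood continuous in the parameters.

```python
import numpy as np

def sv_particle_loglik(y, mu, phi, sigma_eta, n_particles=1000, seed=0):
    """Bootstrap (SIR) particle filter log-likelihood for a basic SV model:
        h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t,   y_t = exp(h_t/2)*eps_t."""
    rng = np.random.default_rng(seed)
    # initialise particles from the stationary distribution of h_t
    h = mu + rng.normal(scale=sigma_eta / np.sqrt(1 - phi ** 2), size=n_particles)
    loglik = 0.0
    for t in range(len(y)):
        # propagate particles through the state equation
        h = mu + phi * (h - mu) + sigma_eta * rng.normal(size=n_particles)
        # measurement density N(0, exp(h)) evaluated at y_t
        w = np.exp(-0.5 * (np.log(2 * np.pi) + h + y[t] ** 2 * np.exp(-h)))
        loglik += np.log(w.mean())
        # multinomial resampling (a smooth resampler would be used to keep
        # the approximate likelihood continuous in the parameters)
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        h = h[idx]
    return loglik

# toy usage with simulated data
rng = np.random.default_rng(1)
T, mu, phi, sig = 500, -1.0, 0.95, 0.2
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sig * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)
print(sv_particle_loglik(y, mu, phi, sig))
```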
By: | Conniffe, Denis (National University of Ireland, Maynooth); O'Neill, Donal (National University of Ireland, Maynooth) |
Abstract: | A common approach to dealing with missing data is to estimate the model on the common subset of data, by necessity throwing away potentially useful data. We derive a new probit-type estimator for models with missing covariate data where the dependent variable is binary. For the benchmark case of conditional multinormality we show that our estimator is efficient and provide exact formulae for its asymptotic variance. Simulation results show that our estimator outperforms popular alternatives and is robust to departures from the benchmark case. We illustrate our estimator by examining the portfolio allocation decision of Italian households. |
Keywords: | missing data, probit model, portfolio allocation, risk aversion |
JEL: | C25 G11 |
Date: | 2009–03 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp4081&r=ecm |
By: | Olivier Ledoit; Sandrine Péché |
Abstract: | We consider sample covariance matrices constructed from real or complex i.i.d. variates with finite 12th moment. We assume that the population covariance matrix is positive definite and its spectral measure almost surely converges to some limiting probability distribution as the number of variables and the number of observations go to infinity together, with their ratio converging to a finite positive limit. We quantify the relationship between sample and population eigenvectors, by studying the asymptotics of a broad family of functionals that generalizes the Stieltjes transform of the spectral measure. This is then used to compute the asymptotically optimal bias correction for sample eigenvalues, paving the way for a new generation of improved estimators of the covariance matrix and its inverse. |
Keywords: | Asymptotic distribution, bias correction, eigenvectors and eigenvalues, principal component analysis, random matrix theory, sample covariance matrix, shrinkage estimator, Stieltjes transform. |
JEL: | C13 |
Date: | 2009–03 |
URL: | http://d.repec.org/n?u=RePEc:zur:iewwpx:407&r=ecm |
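The central object in the Ledoit and Péché analysis is the Stieltjes transform of the spectral measure of the sample covariance matrix, which their functionals generalize to carry eigenvector information. The asymptotically optimal bias correction itself is not reproduced here; the sketch below merely computes sample eigenvalues and evaluates the Stieltjes transform of their empirical distribution at a point in the upper half of the complex plane, with arbitrary dimensions and evaluation point.

```python
import numpy as np

def stieltjes_transform(eigenvalues, z):
    """Stieltjes transform m(z) = (1/p) * sum_i 1/(lambda_i - z) of the
    empirical spectral distribution, for complex z with Im(z) > 0."""
    return np.mean(1.0 / (eigenvalues - z))

# sample covariance matrix from i.i.d. data (p variables, n observations)
rng = np.random.default_rng(0)
p, n = 100, 300
X = rng.standard_normal((n, p))
S = X.T @ X / n                      # sample covariance matrix
lam = np.linalg.eigvalsh(S)          # sample eigenvalues (ascending)

z = 1.0 + 1e-2j                      # evaluation point just above the real axis
print(stieltjes_transform(lam, z))
```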
By: | Ferroni, Filippo |
Abstract: | DSGE models are currently estimated with a two-step approach: the data are first filtered and then the DSGE structural parameters are estimated. Two-step procedures have problems, ranging from trend misspecification to wrong assumptions about the correlation between trends and cycles. In this paper, I present a one-step method in which the DSGE structural parameters are estimated jointly with the filtering parameters. I show that different data transformations imply different structural estimates; the two-step approach lacks a statistically based criterion to select among them. The one-step approach allows one to test hypotheses about the most likely trend specification for individual series and/or to use the resulting information to construct robust estimates by Bayesian averaging. The role of investment shocks as a source of GDP volatility is reconsidered. |
Keywords: | DSGE models; Filters; Structural estimation; Business Cycles |
JEL: | C32 E32 C11 |
Date: | 2009–04–04 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:14550&r=ecm |
By: | Ringle, Christian M.; Götz, Oliver; Wetzels, Martin; Wilson, Bradley (METEOR) |
Abstract: | The broader goal of this paper is to provide social researchers with some analytical guidelines when investigating structural equation models (SEM) with predominantly a formative specification. This research is the first to investigate the robustness and precision of parameter estimates of a formative SEM specification. Two distinctive scenarios (normal and non-normal data scenarios) are compared with the aid of a Monte Carlo simulation study for various covariance-based structural equation modeling (CBSEM) estimators and various partial least squares path modeling (PLS-PM) weighting schemes. Thus, this research is also one of the first to compare CBSEM and PLS-PM within the same simulation study. We establish that the maximum likelihood (ML) covariance-based discrepancy function provides accurate and robust parameter estimates for the formative SEM model under investigation when the methodological assumptions are met (e.g., adequate sample size, distributional assumptions, etc.). Under these conditions, ML-CBSEM outperforms PLS-PM. We also demonstrate that the accuracy and robustness of CBSEM decreases considerably when methodological requirements are violated, whereas PLS-PM results remain comparatively robust, e.g. irrespective of the data distribution. These findings are important for researchers and practitioners when having to choose between CBSEM and PLS-PM methodologies to estimate formative SEM in their particular research situation. |
Keywords: | marketing |
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:dgr:umamet:2009014&r=ecm |
By: | Abadie, Alberto (Harvard University); Imbens, Guido W. (Harvard University) |
Abstract: | Matching estimators are widely used in statistical data analysis. However, the distribution of matching estimators has been derived only for particular cases (Abadie and Imbens, 2006). This article establishes a martingale representation for matching estimators. This representation allows the use of martingale limit theorems to derive the asymptotic distribution of matching estimators. As an illustration of the applicability of the theory, we derive the asymptotic distribution of a matching estimator when matching is carried out without replacement, a result previously unavailable in the literature. |
Keywords: | matching, martingales, treatment effects, hot-deck imputation |
JEL: | C13 C14 C21 |
Date: | 2009–03 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp4073&r=ecm |
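As a concrete (and deliberately simple) illustration of matching without replacement, the sketch below implements a greedy single-nearest-neighbour estimator of the average treatment effect on the treated using one scalar covariate. The greedy matching order, the single covariate and the simulated data are assumptions of this sketch, not the estimator analysed in the paper.

```python
import numpy as np

def att_matching_without_replacement(y, d, x):
    """Greedy single-nearest-neighbour matching on a scalar covariate x,
    without replacement, estimating the average treatment effect on the
    treated (ATT). d is a 0/1 treatment indicator."""
    treated = np.flatnonzero(d == 1)
    controls = list(np.flatnonzero(d == 0))
    effects = []
    for i in treated:
        # pick the unused control closest in x, then remove it from the pool
        j = min(controls, key=lambda c: abs(x[c] - x[i]))
        controls.remove(j)
        effects.append(y[i] - y[j])
    return np.mean(effects)

# toy example with a known constant treatment effect of 1.0
rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)
d = (rng.uniform(size=n) < 1 / (1 + np.exp(-x))).astype(int)  # selection on x
y = x + d * 1.0 + rng.normal(scale=0.5, size=n)
print(att_matching_without_replacement(y, d, x))
```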
By: | Henderson, Daniel J. (Binghamton University, New York); Parmeter, Christopher F. (Virginia Tech) |
Abstract: | Economic conditions such as convexity, homogeneity, homotheticity, and monotonicity are all important assumptions on, or consequences of assumptions on, the economic functionals to be estimated. Recent research has seen renewed interest in imposing such constraints in nonparametric regression. We survey the available methods in the literature, discuss the challenges that arise when implementing these methods empirically, and extend an existing method to handle general nonlinear constraints. A heuristic discussion of the empirical implementation of methods that use sequential quadratic programming is provided, and simulated and empirical evidence on the distinction between constrained and unconstrained nonparametric regression surfaces is presented. |
Keywords: | constraint weighted bootstrapping, Hessian, concavity, identification, earnings function |
JEL: | J20 J30 C14 |
Date: | 2009–03 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp4103&r=ecm |
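Constraint weighted bootstrapping replaces the uniform observation weights 1/n in a kernel regression with weights chosen, for instance by sequential quadratic programming, so that the fitted surface satisfies the constraint. The sketch below shows only the building block assumed by that approach: a Nadaraya-Watson style estimator with free observation weights and a check of a monotonicity constraint on a grid; the weight-selection step itself is not implemented.

```python
import numpy as np

def weighted_nw(x_grid, x, y, p, h):
    """Nadaraya-Watson style estimator with observation weights p
    (p = 1/n for all i reproduces the ordinary estimator)."""
    # Gaussian kernel weights, one row per grid point
    K = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    num = (K * p[None, :] * y[None, :]).sum(axis=1)
    den = (K * p[None, :]).sum(axis=1)
    return num / den

def violates_monotonicity(fit_on_grid):
    """True if the fitted values are not non-decreasing along the grid."""
    return np.any(np.diff(fit_on_grid) < 0)

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.log(1 + 5 * x) + rng.normal(scale=0.3, size=n)   # monotone true function
grid = np.linspace(0.05, 0.95, 50)
p = np.full(n, 1.0 / n)                                  # unconstrained weights
fit = weighted_nw(grid, x, y, p, h=0.05)
print(violates_monotonicity(fit))
```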
By: | Valerien O. Pede; Raymond J.G.M. Florax; Matthew T. Holt (Department of Agricultural Economics, Purdue University, West Lafayette, IN) |
Abstract: | Spatial regression models incorporating non-stationarity in the regression coefficients are popular. We propose a spatial variant of the Smooth Transition AutoRegressive (STAR) model that is more parsimonious than commonly used approaches and endogenously determines the extent of spatial parameter variation. Uncomplicated estimation and inference procedures are demonstrated using a neoclassical convergence model for United States counties. |
Keywords: | spatial autoregression, smooth transition, spatial econometrics, STAR, GWR |
JEL: | C21 C51 R11 R12 |
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:pae:wpaper:09-03&r=ecm |
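In a smooth transition specification, a coefficient moves continuously between two regimes according to a logistic transition function of a transition variable (here, some spatial characteristic). The sketch below shows only that generic mechanism with illustrative parameter values, not the paper's spatial STAR model or its estimation.

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Smooth transition function G(s; gamma, c) in [0, 1]; gamma controls
    the speed of transition between regimes, c is the threshold location."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def regime_varying_coefficient(s, beta_low, beta_high, gamma, c):
    """Coefficient that moves smoothly from beta_low to beta_high as the
    transition variable s (e.g. a spatial characteristic) crosses c."""
    G = logistic_transition(s, gamma, c)
    return beta_low + (beta_high - beta_low) * G

s = np.linspace(-2, 2, 5)
print(regime_varying_coefficient(s, beta_low=0.2, beta_high=0.8, gamma=4.0, c=0.0))
```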
By: | Frank Schorfheide; Keith Sill; Maxym Kryshko |
Abstract: | This paper develops and illustrates a simple method to generate a DSGE model-based forecast for variables that do not explicitly appear in the model (non-core variables). We use auxiliary regressions that resemble measurement equations in a dynamic factor model to link the non-core variables to the state variables of the DSGE model. Predictions for the non-core variables are obtained by applying their measurement equations to DSGE model-generated forecasts of the state variables. Using a medium-scale New Keynesian DSGE model, we apply our approach to generate and evaluate recursive forecasts for PCE inflation, core PCE inflation, the unemployment rate, and housing starts along with predictions for the seven variables that have been used to estimate the DSGE model. |
JEL: | C11 C32 C53 E27 E47 |
Date: | 2009–04 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:14872&r=ecm |
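The two-step idea described above can be summarised as: regress each non-core variable on the (estimated) DSGE state variables, then apply the fitted measurement equation to model-generated forecasts of those states. A minimal linear sketch follows, assuming filtered states and their forecasts are already available as arrays; the variable names and simulated inputs are illustrative.

```python
import numpy as np

def fit_measurement_equation(noncore_hist, states_hist):
    """OLS auxiliary regression of a non-core variable on the DSGE state
    variables (plus an intercept), as in a dynamic-factor measurement equation."""
    Z = np.column_stack([np.ones(len(states_hist)), states_hist])
    coef, _, _, _ = np.linalg.lstsq(Z, noncore_hist, rcond=None)
    return coef

def forecast_noncore(coef, states_fcst):
    """Apply the fitted measurement equation to forecasted states."""
    Z = np.column_stack([np.ones(len(states_fcst)), states_fcst])
    return Z @ coef

# toy usage with simulated filtered states and state forecasts
rng = np.random.default_rng(0)
states_hist = rng.normal(size=(120, 3))          # 120 quarters, 3 states
noncore_hist = states_hist @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.1, size=120)
states_fcst = rng.normal(size=(8, 3))            # 8-quarter-ahead state forecasts
coef = fit_measurement_equation(noncore_hist, states_hist)
print(forecast_noncore(coef, states_fcst))
```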
By: | Jim Malley; Ulrich Woitek |
Abstract: | This paper contributes to the on-going empirical debate regarding the role of the RBC model and in particular of technology shocks in explaining aggregate fluctuations. To this end we estimate the model’s posterior density using Markov-Chain Monte-Carlo (MCMC) methods. Within this framework we extend Ireland’s (2001, 2004) hybrid estimation approach to allow for a vector autoregressive moving average (VARMA) process to describe the movements and co-movements of the model’s errors not explained by the basic RBC model. The results of marginal likelihood ratio tests reveal that the more general model of the errors significantly improves the model’s fit relative to the VAR and AR alternatives. Moreover, despite setting the RBC model a more difficult task under the VARMA specification, our analysis, based on forecast error and spectral decompositions, suggests that the RBC model is still capable of explaining a significant fraction of the observed variation in macroeconomic aggregates in the post-war U.S. economy. |
Keywords: | Real Business Cycle, Bayesian estimation, VARMA errors |
JEL: | C11 C52 E32 |
Date: | 2009–04 |
URL: | http://d.repec.org/n?u=RePEc:zur:iewwpx:408&r=ecm |
By: | Gottlieb, Daniel; Kushnir, Leonid |
Abstract: | In this paper we develop a methodology for identifying a population group that is only latently observed in the (target) survey relevant for further processing, for example poverty calculations, but is surveyed explicitly in another (source) survey that is not suitable for such processing. Identification is achieved by transferring the binary information from the source survey to the target survey by means of a logistic regression determining group affiliation in the source survey, using variables that are also available in the target survey. In the proposed methodology we improve on common matching procedures by optimizing the cut-value of the probability which assigns group affiliation in the target survey. This contrasts with the commonly used "Hosmer-Lemeshow" cut-value for binary categorization, which equates the sensitivity and specificity curves. Instead, we improve group identification by minimizing the sum of total errors as a percentage of total true outcomes. The Jewish ultra-orthodox population in Israel serves as a case study. This idiosyncratic community, committed to the observance of the Bible, is only latently observed in the surveys typically used for poverty calculation. It is explicitly captured in the social survey, which is not suitable for poverty measurement. This procedure is useful for ex-post enhancement of survey data in general. |
Keywords: | Group identification, binary variables, optimal cutoff value, poverty, targeting |
JEL: | C15 D63 I38 Z12 |
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:zbw:ifwedp:7543&r=ecm |
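The key step of the proposed procedure is to choose the classification threshold that minimizes total misclassifications relative to the number of true group members, rather than a Hosmer-Lemeshow style cut-off that equates sensitivity and specificity. A minimal sketch of that scan follows, taking fitted logistic probabilities and true group labels as given; reading "total errors as a percentage of total true outcomes" as (false positives + false negatives) divided by the number of true group members is an assumption of this sketch.

```python
import numpy as np

def optimal_cut_value(p_hat, y_true, grid=None):
    """Scan candidate cut-values and pick the one minimizing
    (false positives + false negatives) / (number of true group members)."""
    if grid is None:
        grid = np.linspace(0.01, 0.99, 99)
    n_true = (y_true == 1).sum()
    best_c, best_loss = None, np.inf
    for c in grid:
        pred = (p_hat >= c).astype(int)
        fp = np.sum((pred == 1) & (y_true == 0))
        fn = np.sum((pred == 0) & (y_true == 1))
        loss = (fp + fn) / n_true
        if loss < best_loss:
            best_c, best_loss = c, loss
    return best_c, best_loss

# toy usage: probabilities loosely informative about group membership
rng = np.random.default_rng(0)
y = (rng.uniform(size=1000) < 0.1).astype(int)          # rare group (~10%)
p = np.clip(0.1 + 0.5 * y + rng.normal(scale=0.2, size=1000), 0, 1)
print(optimal_cut_value(p, y))
```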
By: | Mohitosh Kejriwal |
Abstract: | Recent empirical studies find little evidence of a change in euro area inflation persistence over the post-1970 period. Their methodology is primarily based on standard unit root and structural break tests on the persistence parameter in an autoregressive specification for the inflation process. These procedures are, however, not designed to detect a change in persistence when a sub-sample of the data has a unit root, i.e., when the process shifts from stationarity to non-stationarity or vice versa. In this paper, we use four classes of tests for a change in persistence that allow for such shifts to argue that euro area inflation shifted from a unit root process to a stationary one at some point in the sample. Statistical methods to select the break date identify the change in the second quarter of 1993, around the time of the Maastricht Treaty, which established the groundwork for the European Monetary Union, with an explicit mandate for price stability as the primary objective of monetary policy. Bootstrap estimates of the persistence parameter, half-life estimates and confidence intervals for the largest autoregressive root all suggest a marked decline in persistence after the break. We also illustrate that the hypothesis of stationarity with a mean shift but a stable persistence parameter is not compatible with the data. The evidence presented is therefore consistent with the view that the degree of inflation persistence varies with the transparency and credibility of the monetary regime. |
Keywords: | persistence, price stability, unit root, monetary policy |
JEL: | C22 E3 E5 |
Date: | 2009–03 |
URL: | http://d.repec.org/n?u=RePEc:pur:prukra:1218&r=ecm |
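One of the summary measures cited above is the half-life implied by the autoregressive root before and after the candidate break date. A minimal sketch of that calculation for an AR(1) fitted by OLS on each sub-sample follows; the break date, lag order and simulated series are illustrative and do not reproduce the paper's persistence-change tests.

```python
import numpy as np

def ar1_persistence(x):
    """OLS estimate of rho in x_t = c + rho*x_{t-1} + e_t."""
    y, x_lag = x[1:], x[:-1]
    X = np.column_stack([np.ones(len(x_lag)), x_lag])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def half_life(rho):
    """Half-life of a shock implied by an AR(1) root rho in (0, 1)."""
    return np.log(0.5) / np.log(rho)

# toy usage: split an inflation-like series at a candidate break date
rng = np.random.default_rng(0)
pre = np.cumsum(rng.normal(size=90))                 # near-unit-root segment
post = np.zeros(60)
for t in range(1, 60):
    post[t] = 0.5 * post[t - 1] + rng.normal()       # clearly stationary segment
series = np.concatenate([pre, post])
rho_pre, rho_post = ar1_persistence(series[:90]), ar1_persistence(series[90:])
print(rho_pre, rho_post, half_life(rho_post))
```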
By: | Bryan S. Graham; Guido W. Imbens; Geert Ridder |
Abstract: | This paper presents methods for evaluating the effects of reallocating an indivisible input across production units, taking into account resource constraints by keeping the marginal distribution of the input fixed. When the production technology is nonseparable, such reallocations, although leaving the marginal distribution of the reallocated input unchanged by construction, may nonetheless alter average output. Examples include reallocations of teachers across classrooms composed of students of varying mean ability. We focus on the effects of reallocating one input, while holding the assignment of another, potentially complementary, input fixed. We introduce a class of such reallocations -- correlated matching rules -- that includes the status quo allocation, a random allocation, and both the perfect positive and negative assortative matching allocations as special cases. We also characterize the effects of local (relative to the status quo) reallocations. For estimation we use a two-step approach. In the first step we nonparametrically estimate the production function. In the second step we average the estimated production function over the distribution of inputs induced by the new assignment rule. These methods build upon the partial mean literature, but require extensions involving boundary issues. We derive the large sample properties of our proposed estimators and assess their small sample properties via a limited set of Monte Carlo experiments. |
JEL: | C14 C21 C52 |
Date: | 2009–04 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:14860&r=ecm |
By: | George A. Christodoulakis (Manchester Business School, University of Manchester); Emmanuel C. Mamatzakis (Department of Economics, University of Macedonia) |
Abstract: | This paper focuses on labour market dynamics in the EU-15, applying Markov chains to proportions of aggregate data for the first time in this literature. We apply a Bayesian approach, which employs a Monte Carlo integration procedure that uncovers the entire empirical posterior distribution of the transition probabilities from full-time employment to part-time employment, temporary employment and unemployment, and vice versa. Thus, statistical inferences are readily available. Our results show that there are substantial variations in the transition probabilities across countries, implying that the convergence of the EU-15 labour markets is far from complete. However, some common patterns are observed, as countries with flexible labour markets exhibit similar transition probabilities between different states of the labour market. |
Keywords: | Employment, Unemployment, Markov Chains. |
JEL: | C53 E24 E27 E37 |
Date: | 2009–04 |
URL: | http://d.repec.org/n?u=RePEc:mcd:mcddps:2009_07&r=ecm |
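With counts (or proportions scaled to counts) of moves between labour-market states, the posterior of each row of the transition matrix under a Dirichlet prior is again Dirichlet, so Monte Carlo draws of the transition probabilities are straightforward. The sketch below shows only this generic conjugate case, not the specific Monte Carlo integration procedure of the paper; the states, counts and prior are illustrative.

```python
import numpy as np

def sample_transition_matrix(counts, prior=1.0, n_draws=5000, seed=0):
    """Posterior draws of a Markov transition matrix: each row gets an
    independent Dirichlet(counts_row + prior) posterior (conjugate update)."""
    rng = np.random.default_rng(seed)
    k = counts.shape[0]
    draws = np.empty((n_draws, k, k))
    for i in range(k):
        draws[:, i, :] = rng.dirichlet(counts[i] + prior, size=n_draws)
    return draws

# illustrative transition counts between full-time, part-time,
# temporary employment and unemployment
counts = np.array([[900, 40, 30, 30],
                   [60, 250, 20, 20],
                   [50, 30, 180, 40],
                   [80, 30, 40, 350]], dtype=float)
draws = sample_transition_matrix(counts)
print(draws.mean(axis=0))                            # posterior mean transition matrix
print(np.quantile(draws[:, 0, 3], [0.025, 0.975]))   # full-time -> unemployment interval
```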
By: | Ricardo Gimeno (Banco de España); José Manuel Marqués (Banco de España) |
Abstract: | In this paper we propose an affine model that uses as observed factors the Nelson and Siegel (NS) components summarising the term structure of interest rates. By doing so, we are able to reformulate the Diebold and Li (2006) approach to forecasting the yield curve in a way that allows us to incorporate a no-arbitrage condition and risk aversion into the model. These conditions seem to improve the forecasting ability of the term structure components and provide us with an estimate of the risk premia. Our approach is broadly equivalent to the recent contribution of Christensen, Diebold and Rudebusch (2008). However, not only does it seem to be more intuitive and far easier to estimate, it also improves on that model in terms of fitting and forecasting properties. Moreover, with this framework it is possible to incorporate the inflation rate directly as an additional factor without reducing the forecasting ability of the model. The augmented model produces an estimate of market expectations about inflation that is free of liquidity, counterparty and term premia. We provide a comparison of the properties of this indicator with others usually employed to proxy inflation expectations, such as the break-even rate, inflation swaps and professional surveys. |
Keywords: | Interest Rate Forecast, Inflation Expectations, Affine Model, Diebold and Li |
JEL: | G12 E43 E44 C53 |
Date: | 2009–04 |
URL: | http://d.repec.org/n?u=RePEc:bde:wpaper:0906&r=ecm |
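The observed factors in the model above are the Nelson and Siegel level, slope and curvature components. As in Diebold and Li (2006), for a fixed decay parameter they can be recovered by a cross-sectional OLS regression of yields on the NS loadings; the sketch below performs only that step, with illustrative maturities and the conventional decay value for maturities in months, and says nothing about the affine no-arbitrage restrictions or the risk premia estimation.

```python
import numpy as np

def ns_loadings(maturities, lam):
    """Nelson-Siegel loadings for the level, slope and curvature factors."""
    m = np.asarray(maturities, dtype=float)
    slope = (1 - np.exp(-lam * m)) / (lam * m)
    curvature = slope - np.exp(-lam * m)
    return np.column_stack([np.ones_like(m), slope, curvature])

def fit_ns_factors(yields, maturities, lam=0.0609):
    """Cross-sectional OLS of observed yields on the NS loadings
    (lam held fixed, as in Diebold and Li)."""
    X = ns_loadings(maturities, lam)
    beta, _, _, _ = np.linalg.lstsq(X, yields, rcond=None)
    return beta   # level, slope and curvature factors

# toy usage: yields generated from known factors plus noise
maturities = np.array([3, 6, 12, 24, 36, 60, 84, 120], dtype=float)  # months
true_factors = np.array([5.0, -2.0, 1.5])
rng = np.random.default_rng(0)
yields = ns_loadings(maturities, 0.0609) @ true_factors + rng.normal(scale=0.02, size=len(maturities))
print(fit_ns_factors(yields, maturities))
```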
By: | Todd E. Elder; John H. Goddeeris; Steven J. Haider |
Abstract: | We analyze four methods to measure unexplained gaps in mean outcomes: three decompositions based on the seminal work of Oaxaca (1973) and Blinder (1973) and an approach involving a seemingly naïve regression that includes a group indicator variable. Our analysis yields two principal findings. We first show that a commonly used pooling decomposition systematically overstates the contribution of observable characteristics to mean outcome differences, thereby understating unexplained differences. We also show that the coefficient on a group indicator variable from an OLS regression is an attractive approach for obtaining a single measure of the unexplained gap. We then provide three empirical examples that explore the practical importance of our analytic results. |
Keywords: | decompositions, discrimination |
JEL: | J24 J31 J15 J16 |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:auu:dpaper:600&r=ecm |
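A minimal sketch of the two objects compared in the abstract above: a pooled-coefficients Oaxaca-Blinder decomposition of the mean gap into explained and unexplained parts, and the coefficient on a group indicator in a single pooled OLS regression. The data-generating process and variable names are simulated and illustrative.

```python
import numpy as np

def ols(X, y):
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 2000
g = (rng.uniform(size=n) < 0.5).astype(float)          # group indicator
x = rng.normal(loc=0.3 * g, size=n)                    # characteristic differs by group
y = 1.0 + 0.8 * x + 0.25 * g + rng.normal(scale=0.5, size=n)   # true unexplained gap 0.25

# (1) pooled-coefficients Oaxaca-Blinder decomposition of the mean gap
Xp = np.column_stack([np.ones(n), x])
beta_pool = ols(Xp, y)                                 # pooled coefficients (no indicator)
gap = y[g == 1].mean() - y[g == 0].mean()
explained = (x[g == 1].mean() - x[g == 0].mean()) * beta_pool[1]
unexplained_ob = gap - explained

# (2) coefficient on the group indicator in a single pooled regression
Xd = np.column_stack([np.ones(n), x, g])
unexplained_dummy = ols(Xd, y)[2]

print(gap, explained, unexplained_ob, unexplained_dummy)
```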