New Economics Papers on Econometrics |
By: | Peter C.B. Phillips (Cowles Foundation, Yale University); Degui Li (University of York); Jiti Gao (The University of Adelaide and Monash University) |
Abstract: | This paper studies nonlinear cointegration models in which the structural coefficients may evolve smoothly over time. These time-varying coefficient functions are well-suited to many practical applications and can be estimated conveniently by nonparametric kernel methods. It is shown that the usual asymptotic methods of kernel estimation completely break down in this setting when the functional coefficients are multivariate. The reason for this breakdown is a kernel-induced degeneracy in the weighted signal matrix associated with the nonstationary regressors, a new phenomenon in the kernel regression literature. Some new techniques are developed to address the degeneracy and resolve the asymptotics, using a path-dependent local coordinate transformation to re-orient coordinates and accommodate the degeneracy. The resulting asymptotic theory is fundamentally different from the existing kernel literature, giving two different limit distributions with different convergence rates in the different directions (or combinations) of the (functional) parameter space. Both rates are faster than the usual √(nh) rate for nonlinear models with smoothly changing coefficients and local stationarity. Hence two types of super-consistency apply in nonparametric kernel estimation of time-varying coefficient cointegration models. The higher rate of convergence (n√h) lies in the direction of the nonstationary regressor vector at the local coordinate point. The lower rate (nh) lies in the degenerate directions but is still super-consistent for nonparametric estimators. In addition, local linear methods are used to reduce asymptotic bias and a fully modified kernel regression method is proposed to deal with the general endogenous nonstationary regressor case. Simulations are conducted to explore the finite sample properties of the methods and a practical application is given to examine time-varying empirical relationships involving consumption, disposable income, investment and real interest rates. |
Keywords: | Cointegration, Endogeneity, Kernel degeneracy, Nonparametric regression, Super-consistency, Time varying coefficients |
JEL: | C13 C14 C32 |
Date: | 2013–09 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1910&r=ecm |
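A minimal Python sketch of the local-level kernel estimator the abstract describes, on toy data (all names, the bandwidth choice and the data-generating process are illustrative, not the authors' code). The kernel-weighted signal matrix S assembled here is exactly the object whose degeneracy the paper analyzes when the nonstationary regressor is multivariate.

```python
import numpy as np

def kernel_tv_coef(y, X, tau, h):
    """Kernel-weighted least squares estimate of beta(tau), tau in (0,1)."""
    n = len(y)
    u = (np.arange(1, n + 1) / n - tau) / h
    w = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)  # Epanechnikov kernel
    Xw = X * w[:, None]
    S = Xw.T @ X        # weighted signal matrix: sum_t K_h(t/n - tau) x_t x_t'
    return np.linalg.solve(S, Xw.T @ y)

# Toy data: I(1) regressors, smoothly time-varying coefficients
rng = np.random.default_rng(0)
n = 2000
X = np.cumsum(rng.normal(size=(n, 2)), axis=0)            # random-walk regressors
tgrid = np.arange(1, n + 1) / n
beta = np.column_stack([np.sin(np.pi * tgrid), 1 + 0.5 * tgrid])
y = np.sum(beta * X, axis=1) + rng.normal(size=n)
print(kernel_tv_coef(y, X, tau=0.5, h=n ** (-0.2)))       # approx beta(0.5) = [1, 1.25]
```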
By: | Jiti Gao (The University of Adelaide and Monash University); Peter C.B. Phillips (Cowles Foundation, Yale University) |
Abstract: | This paper studies a general class of nonlinear varying coefficient time series models with possible nonstationarity in both the regressors and the varying coefficient components. The model accommodates a cointegrating structure and allows for endogeneity with contemporaneous correlation among the regressors, the varying coefficient drivers, and the residuals. This framework allows for a mixture of stationary and nonstationary data and is well suited to a variety of models that are commonly used in applied econometric work. Nonparametric and semiparametric estimation methods are proposed to estimate the varying coefficient functions. The analytical findings reveal some important differences, including convergence rates, that can arise in the conduct of semiparametric regression with nonstationary data. The results include some new asymptotic theory for nonlinear functionals of nonstationary and stationary time series that are of wider interest and applicability and subsume much earlier research on such systems. The finite sample properties of the proposed econometric methods are analyzed in simulations. An empirical illustration examines nonlinear dependencies in aggregate consumption function behavior in the US over the period 1960–2009. |
Keywords: | Aggregate consumption, Asymptotic theory, Cointegration, Density, Local time, Nonlinear functional, Nonparametric estimation, Semiparametric, Time series, Varying coefficient model |
JEL: | C13 C14 C23 |
Date: | 2013–09 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1911&r=ecm |
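For the functional-coefficient setting described above, a sketch in the same spirit, with the coefficient driven by a stationary covariate rather than by time. The paper's semiparametric estimators and asymptotics are considerably more involved; this only illustrates the basic kernel-weighted regression, with all names and tuning choices assumed for the example.

```python
import numpy as np

def func_coef(y, X, z, z0, h):
    """Local-constant estimate of beta(z0) in y_t = beta(z_t)' x_t + e_t."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)      # Gaussian kernel weights
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                          # stationary coefficient driver
X = np.cumsum(rng.normal(size=(n, 1)), axis=0)  # nonstationary regressor
y = (1 + z**2) * X[:, 0] + rng.normal(size=n)
print(func_coef(y, X, z, z0=0.0, h=0.3))        # approx beta(0) = 1
```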
By: | Granziera, Eleonora; Hubrich, Kirstin; Moon, Hyungsik Roger |
Abstract: | In this paper we introduce quasi-likelihood ratio tests for one-sided multivariate hypotheses to evaluate the null that a parsimonious model performs as well as a small number of models that nest the benchmark. We show that the limiting distributions of the test statistics are non-standard. For critical values we consider two approaches: (i) bootstrapping and (ii) simulation assuming normality of the mean square prediction error (MSPE) differences. The size and power of the tests are compared via Monte Carlo experiments with existing equal and superior predictive ability tests for multiple model comparison. We find that our proposed tests are well sized for one-step-ahead as well as multi-step-ahead forecasts when critical values are bootstrapped. The power experiments reveal that the superior predictive ability test performs worst, while the ranking between the quasi-likelihood ratio test and the other equal predictive ability tests depends on the simulation settings. Finally, we apply our tests to draw conclusions about the predictive ability of a Phillips-type curve for US core inflation. |
Keywords: | direct multi-step forecasts, fixed regressors bootstrap, multi-model comparison, out-of-sample, point-forecast evaluation, predictive ability |
Date: | 2013–08 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131580&r=ecm |
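A hedged sketch of the general workflow the paper refines: out-of-sample MSPE comparison of a benchmark against a nesting model, with a bootstrapped null distribution. This uses a simple i.i.d. bootstrap of the loss differential and a plain t-type statistic, not the paper's quasi-likelihood ratio statistic or fixed-regressor bootstrap; everything here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
T, R = 400, 200                       # sample size, initial estimation window
x = rng.normal(size=T)
y = 0.3 * np.roll(x, 1) + rng.normal(size=T)   # lagged x has predictive content

e0, e1 = [], []
for t in range(R, T - 1):             # recursive one-step-ahead forecasts
    yt, xt = y[: t + 1], x[: t + 1]
    e0.append(y[t + 1] - yt.mean())                  # benchmark: prevailing mean
    b = np.polyfit(xt[:-1], yt[1:], 1)               # nesting model adds lagged x
    e1.append(y[t + 1] - np.polyval(b, x[t]))
d = np.array(e0) ** 2 - np.array(e1) ** 2            # squared-error loss differential

se = d.std(ddof=1) / np.sqrt(len(d))
stat = d.mean() / se
boot = [rng.choice(d - d.mean(), size=len(d), replace=True).mean() / se
        for _ in range(999)]          # i.i.d. bootstrap of the centered differential
print(stat, np.quantile(boot, 0.95))  # one-sided test: reject if stat > quantile
```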
By: | Ulrich Hounyo (Oxford-Man Institute of Quantitative Finance and CREATES); Sílvia Gonçalves (Département de sciences économiques, CIREQ and CIRANO, Université de Montréal); Nour Meddahi (Toulouse School of Economics) |
Abstract: | The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure that combines the wild bootstrap with the blocks of blocks bootstrap. We provide a proof of the first order asymptotic validity of this method for percentile intervals. Our Monte Carlo simulations show that the wild blocks of blocks bootstrap improves the finite sample properties of the existing first order asymptotic theory. An empirical application illustrates the use of the method in practice. |
Keywords: | High frequency data, realized volatility, pre-averaging, market microstructure noise, wild bootstrap, block bootstrap |
JEL: | C15 C22 C58 |
Date: | 2013–08–29 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2013-28&r=ecm |
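The pre-averaging construction itself is easy to reproduce; a minimal sketch follows. The resampling step below (one external wild weight per block of pre-averaged returns, blocks kept in place) is only a stylized stand-in for the paper's wild blocks of blocks scheme, whose exact weights and blocking rules differ; treat the second half as an assumption-laden illustration, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
n, kn = 23400, 30                             # sample size, pre-averaging window
r = 0.0005 * rng.normal(size=n)               # toy noisy high-frequency returns

j = np.arange(1, kn)
g = np.minimum(j / kn, 1 - j / kn)            # weight function g(x) = min(x, 1-x)
rbar = np.convolve(r, g[::-1], mode="valid")  # all overlapping pre-averaged returns

bn = 3 * kn                                   # block length (grows with kn in theory)
rbar_star = rbar.copy()
for s in range(0, len(rbar) - bn + 1, bn):
    rbar_star[s:s + bn] *= rng.normal()       # one external weight per block (E[w^2]=1):
                                              # keeps within-block dependence and the
                                              # block's own local volatility level
stat_star = np.sum(rbar_star ** 2)            # one bootstrap draw of the
print(stat_star)                              # pre-averaged sum of squares
```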
By: | Mariano Kulish (University of New South Wales); Adrian Pagan (University of Sydney) |
Abstract: | Structural change has been conjectured to lead to an upward bias in the estimated forward expectations coefficient in New-Keynesian Phillips curves. We present a simple New-Keynesian model that enables us to assess this proposition. In particular, we investigate the issue of upward bias in the estimated coefficients of the expectations variable in the New-Keynesian Phillips curve using a model in which we can see what causes the structural breaks and how to control for them. We find that structural breaks in the means of the series can often change the properties of instruments a great deal, and may well be a bigger source of small-sample bias than that due to specification error. We also find that the direction of the specification bias is not predictable. It is necessary to check for weak instruments before deciding that the magnitude of any estimator bias reflects specification errors coming from structural change. |
Keywords: | expectations; structural change; regime change; weak instruments; IV estimation; Phillips curves |
JEL: | C13 C32 C63 E52 |
Date: | 2013–09 |
URL: | http://d.repec.org/n?u=RePEc:rba:rbardp:rdp2013-11&r=ecm |
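A stylized sketch of the two ingredients the abstract emphasizes: IV/2SLS estimation of an equation with a forward expectations term instrumented by lags, and the first-stage F statistic one should inspect before attributing estimator bias to specification error. The data-generating process is a simple backward-looking toy model, not the paper's New-Keynesian model; all names are illustrative.

```python
import numpy as np

def two_sls(y, X, Z):
    """2SLS: project regressors on instruments, regress y on the projections."""
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]

def first_stage_F(x, Z):
    """F statistic for joint significance of the first-stage instruments."""
    b = np.linalg.lstsq(Z, x, rcond=None)[0]
    rss1 = np.sum((x - Z @ b) ** 2)
    rss0 = np.sum((x - x.mean()) ** 2)
    k = Z.shape[1] - 1                 # slopes, excluding the constant
    return ((rss0 - rss1) / k) / (rss1 / (len(x) - Z.shape[1]))

rng = np.random.default_rng(4)
T = 400
x = rng.normal(size=T)                 # marginal-cost / activity proxy
pi = np.zeros(T)
for t in range(1, T):
    pi[t] = 0.5 * pi[t - 1] + 0.2 * x[t] + rng.normal()

y = pi[1:-1]                                    # pi_t
Xmat = np.column_stack([pi[2:], x[1:-1]])       # realized pi_{t+1} and x_t
Z = np.column_stack([np.ones(T - 2), pi[:-2], x[:-2]])  # lagged instruments
print(two_sls(y, Xmat, Z), first_stage_F(pi[2:], Z))    # check F before trusting IV
```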
By: | Haiqing Xu (Department of Economics, University of Texas at Austin) |
Abstract: | This paper focuses on the identification and estimation of static games of incomplete information with correlated types. Instead of making a (conditional) independence assumption on players' types to simplify the set of equilibria, I establish a method that identifies subsets of the space of covariates (i.e., publicly observed state variables in payoff functions) for which there exists a unique Bayesian Nash Equilibrium (BNE) and the equilibrium strategies are monotone functions. The unique monotone pure strategy BNE can be characterized in a simple manner; based on this characterization, I propose an estimation procedure that exploits the information contained in this subset of the covariate space, and I establish the consistency and the limiting distribution of the estimator. |
Keywords: | Incomplete Information Game, Monotone Pure Strategy BNE, Maximum Likelihood Estimation |
JEL: | C35 C62 C72 |
URL: | http://d.repec.org/n?u=RePEc:tex:wpaper:130909&r=ecm |
By: | Román Mínguez; María Durbán; José María Montero; Dae-Jin Lee |
Abstract: | In this work we propose combining P-splines with traditional spatial econometric models in a way that allows for their representation as a mixed model. The advantages of combining these models include: (i) dealing with complex non-linear and non-separable trends, (ii) estimating short-range spatial correlation together with the large-scale spatial trend, (iii) decomposing the systematic spatial variation into those two components and (iv) estimating the smoothing parameters included in the penalized splines together with the other parameters of the model. The performance of the proposed spatial non-parametric models is checked by both simulation and an empirical study. More specifically, we simulate 3,600 datasets generated by those models (with both linear and non-linear, non-separable global spatial trends). As for the empirical case, we use the well-known Lucas County data on housing prices. Our results indicate that the proposed models perform better than the traditional spatial strategies, especially in the presence of nonlinear trends. |
Keywords: | Global spatial trend, Mixed models, P-splines, PS-SAR, PS-SEM |
JEL: | C14 C15 C21 |
Date: | 2013–09 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:ws132925&r=ecm |
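A minimal sketch of the P-spline building block in its mixed-model representation: a truncated-power basis in which only the knot coefficients are penalized (equivalently, treated as random effects), fitted by penalized least squares. The paper couples such penalties with SAR/SEM spatial structures and estimates the smoothing parameter jointly; neither is attempted here, and all names and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 1, 300))
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=300)

knots = np.linspace(0, 1, 22)[1:-1]           # 20 interior knots
B = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                    [np.clip(x - kj, 0, None) ** 3 for kj in knots])
P = np.diag([0.0] * 4 + [1.0] * len(knots))   # penalize only knot coefficients:
                                              # the mixed-model representation
                                              # (knot coefficients ~ random effects)
lam = 1e-2                                    # smoothing parameter; fixed here,
                                              # estimated with the model in the paper
a = np.linalg.solve(B.T @ B + lam * P, B.T @ y)
fit = B @ a
print(np.sqrt(np.mean((fit - y) ** 2)))       # in-sample RMSE of the smooth
```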
By: | Fadaei Oshyani, Masoud (KTH); Sundberg, Marcus (KTH); Karlström, Anders (KTH) |
Abstract: | GPS and nomadic devices are increasingly used to provide data from individuals in urban traffic networks. In many different applications, it is important to predict the continuation of an observed path and also, given sparse data, to infer where the individual (or vehicle) has been. Estimating the perceived cost functions is a difficult statistical estimation problem, for several reasons. First, the choice set is typically very large. Second, it may be important to take into account the correlation between the (generalized) costs of different routes and thus allow for realistic substitution patterns. Third, due to technical or privacy considerations, the data may be temporally and spatially sparse, with only partially observed paths. Finally, the positions of vehicles may be subject to measurement error. We address all of these problems using an indirect inference approach. We demonstrate the feasibility of the proposed estimator in a model with random link costs, allowing for a natural correlation structure across paths, in which the full choice set is considered. |
Keywords: | GPS; Route choice model; Indirect inference; Sparse data; Statistical estimation problem. |
JEL: | R40 |
Date: | 2013–09–16 |
URL: | http://d.repec.org/n?u=RePEc:hhs:ctswps:2013_011&r=ecm |
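The indirect-inference loop itself is simple to illustrate: pick the structural parameter so that an auxiliary statistic computed on simulated data matches the same statistic computed on the observed data. The route-choice setting (random link costs, sparse GPS traces) is replaced below by a scalar toy model; only the estimation principle carries over.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(theta, n, seed):
    """Structural model: AR(1) with an MA(1) error, parameter theta."""
    r = np.random.default_rng(seed)
    e = r.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + e[t] + 0.5 * e[t - 1]
    return x

def auxiliary(x):
    """Auxiliary statistic: first-order autocorrelation (cheap to compute)."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

x_obs = simulate(0.6, 2000, seed=42)     # pretend this is the observed data
beta_hat = auxiliary(x_obs)

def distance(theta):                     # fixed simulation seed: smooth objective
    return (auxiliary(simulate(theta, 2000, seed=7)) - beta_hat) ** 2

print(minimize_scalar(distance, bounds=(0.0, 0.95), method="bounded").x)
```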
By: | Roger E.A. Farmer; Vadim Khramov |
Abstract: | We propose a method for solving and estimating linear rational expectations models that exhibit indeterminacy and we provide step-by-step guidelines for implementing this method in the Matlab-based packages Dynare and Gensys. Our method redefines a subset of expectational errors as new fundamentals. This redefinition allows us to treat indeterminate models as determinate and to apply standard solution algorithms. We provide a selection method, based on Bayesian model comparison, to decide which errors to pick as fundamental and we present simulation results to show how our procedure works in practice. |
JEL: | C11 C13 C54 |
Date: | 2013–09 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:19457&r=ecm |
By: | Díaz-Emparanza Herrero, Ignacio; Moral Zuazo, María Paz |
Abstract: | The seasonal stability tests of Canova & Hansen (1995) (CH) provide a method complementary to that of Hylleberg et al. (1990) for testing for seasonal unit roots. But the distributions of the CH test statistics are unknown in small samples. We present a method to compute numerically the critical values and P-values of the CH tests for any sample size and any seasonal periodicity. The method applies not only to the types of seasonality in common use but also to any other. |
Keywords: | unit roots, surface response analysis, seasonality, Canova-Hansen |
JEL: | C22 C63 C87 |
URL: | http://d.repec.org/n?u=RePEc:ehu:biltok:10577&r=ecm |
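A hedged sketch of the surface-response idea behind the paper: simulate the null distribution of a statistic at several sample sizes, then regress the simulated critical values on functions of 1/n so that a critical value can be interpolated for any n. A toy statistic stands in for the CH statistic; the regressor set and everything else here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def stat(n):
    """Toy statistic whose finite-sample quantiles depend on n."""
    e = rng.normal(size=n)
    return n * np.mean(np.cumsum(e) / n) ** 2 / np.var(e)

sizes = [50, 100, 200, 400, 800]
cv = []
for n in sizes:
    draws = np.array([stat(n) for _ in range(2000)])
    cv.append(np.quantile(draws, 0.95))           # simulated 5% critical value

# Response surface: cv(n) ~ b0 + b1/n + b2/n^2
ns = np.asarray(sizes, float)
Xs = np.column_stack([np.ones(len(ns)), 1 / ns, 1 / ns**2])
b = np.linalg.lstsq(Xs, np.array(cv), rcond=None)[0]
n_new = 137
print(b @ [1, 1 / n_new, 1 / n_new**2])           # interpolated critical value
```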
By: | James J. Heckman (University of Chicago); Rodrigo Pinto (University of Chicago) |
Abstract: | Haavelmo's seminal 1943 paper is the first rigorous treatment of causality. In it, he distinguished the definition of causal parameters from their identification. He showed that causal parameters are defined using hypothetical models that assign variation to some of the inputs determining outcomes while holding all other inputs fixed. He thus formalized and made operational Marshall's (1890) ceteris paribus analysis. We embed Haavelmo's framework into the recursive framework of Directed Acyclic Graphs (DAG) used in one influential recent approach to causality (Pearl, 2000) and in the related literature on Bayesian nets (Lauritzen, 1996). We compare an approach based on Haavelmo's methodology with a standard approach in the causal literature of DAGs -- the "do-calculus" of Pearl (2009). We discuss the limitations of DAGs and in particular of the do-calculus of Pearl in securing identification of economic models. We extend our framework to consider models for simultaneous causality, a central contribution of Haavelmo (1944). In general cases, DAGs cannot be used to analyze models for simultaneous causality, but Haavelmo's approach naturally generalizes to cover it. |
Keywords: | Causality, Identification, Do-Calculus, Directed Acyclic Graphs, Simultaneous Treatment Effects |
JEL: | C10 C18 |
Date: | 2013–09 |
URL: | http://d.repec.org/n?u=RePEc:hka:wpaper:2013-008&r=ecm |
By: | Richard A. Ashley; Christopher F. Parmeter |
Abstract: | Credible inference requires attention to the possible fragility of the results (p-values for key hypothesis tests) to flaws in the model assumptions, notably including the validity of the instruments used. Past sensitivity analysis has mainly consisted of experimentation with alternative model specifications and with tests of over-identifying restrictions. We provide a feasible sensitivity analysis of two-stage least squares estimation, quantifying the fragility/robustness of inference with respect to possible flaws in the exogeneity assumptions made, and also indicating which of these assumptions are most crucial. The method is illustrated with an empirical application focusing on the education-earnings relationship. |
Keywords: | Robustness, invalid instruments, flawed instruments, instrumental variables, sensitivity analysis, two-stage least squares. |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:vpi:wpaper:e07-37&r=ecm |
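A minimal sketch of the sensitivity-analysis idea: posit that the instrument z enters the outcome equation directly with coefficient gamma (so exogeneity fails unless gamma = 0), subtract gamma*z from y, re-run 2SLS over a grid of gamma, and watch how quickly the key p-value becomes fragile. This is a simplified just-identified, scalar version under assumed names and parameters, not the authors' implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 2000
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous regressor
y = 0.15 * x + u                             # true effect of x is 0.15

def tsls(y_adj, x, z):
    xhat = z * (z @ x) / (z @ z)             # first-stage fitted values
    b = (xhat @ y_adj) / (xhat @ x)          # just-identified 2SLS slope
    e = y_adj - b * x
    se = np.sqrt((e @ e) / (n - 1) / (xhat @ xhat))
    return b, b / se

for gamma in [0.0, 0.05, 0.1, 0.2]:          # posited exogeneity violations
    b, t = tsls(y - gamma * z, x, z)         # strip the direct effect gamma*z
    p = 2 * (1 - stats.norm.cdf(abs(t)))
    print(f"gamma={gamma:.2f}  b={b:.3f}  p={p:.4f}")
```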
By: | Joshua D. Angrist; Òscar Jordà; Guido Kuersteiner |
Abstract: | We develop a flexible semiparametric time series estimator that is then used to assess the causal effect of monetary policy interventions on macroeconomic aggregates. Our estimator captures the average causal response to discrete policy interventions in a macro-dynamic setting, without the need for assumptions about the process generating macroeconomic outcomes. The proposed procedure, based on propensity score weighting, easily accommodates asymmetric and nonlinear responses. Application of this estimator to the effects of monetary restraint suggest contractionary policy slows real economic activity. By contrast, the Federal Reserve's ability to stimulate real economic activity through monetary expansion appears to be much more limited. Estimates for recent financial crisis years are similar to those for the earlier, pre-crisis period. |
JEL: | C32 C54 E52 E58 E65 |
Date: | 2013–08 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:19355&r=ecm |
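A hedged sketch of the core estimator: model the probability of a discrete policy intervention given observables (the propensity score), then weight outcomes inversely by it to recover the average causal response. A static toy cross-section stands in for the paper's macro-dynamic, multi-horizon setting; all names and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n = 5000
x = rng.normal(size=n)                       # observable driving the policy choice
d = (0.8 * x + rng.logistic(size=n) > 0).astype(float)   # discrete intervention
y = -0.5 * d + 0.7 * x + rng.normal(size=n)  # outcome; true causal effect is -0.5

def negloglik(b):                            # logit propensity-score model
    idx = b[0] + b[1] * x
    return np.sum(np.logaddexp(0.0, idx) - d * idx)

b = minimize(negloglik, np.zeros(2)).x
p = 1.0 / (1.0 + np.exp(-(b[0] + b[1] * x)))  # estimated propensity scores

ate = np.mean(d * y / p - (1 - d) * y / (1 - p))   # inverse-propensity weighting
print(ate)                                    # close to -0.5 in large samples
```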
By: | Jonathan Bartlett (London School of Hygiene and Tropical Medicine) |
Abstract: | Multiple imputation (MI) is a popular approach to handling missing data, and an extensive range of MI commands is now available in official Stata. A common problem is that of missing values in covariates of regression models. When the substantive model for the outcome contains nonlinear covariate effects or interactions, correctly specifying an imputation model for covariates becomes problematic. We present simulation results illustrating the biases that can occur when standard imputation models are used to impute covariates in linear regression models with a quadratic effect or interaction effect. We then describe a modification of the full conditional specification (FCS) or chained equations approach to MI, which ensures that covariates are imputed from a model which is compatible with a user-specified substantive model. We present the smcfcs Stata command, which implements substantive model compatible FCS and illustrate its application to a dataset. |
Date: | 2013–09–16 |
URL: | http://d.repec.org/n?u=RePEc:boc:usug13:03&r=ecm |
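A sketch (in Python rather than Stata) of the substantive-model-compatible idea behind smcfcs: impute a missing covariate by rejection sampling, so that accepted draws are compatible with a substantive model containing x^2. The substantive parameters are held fixed here for clarity, whereas smcfcs redraws them within its chained-equations iterations; treat everything below as an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 1000
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + 0.8 * x**2 + rng.normal(size=n)  # substantive model has x^2
miss = rng.random(size=n) < 0.3                       # 30% of x missing

b0, b1, b2, sig = 1.0, 0.5, 0.8, 1.0   # substantive parameters, fixed for clarity

def accept_prob(cand, yi):
    """Substantive-model density f(y | x), normalized to lie in (0, 1]."""
    mu = b0 + b1 * cand + b2 * cand**2
    return np.exp(-0.5 * ((yi - mu) / sig) ** 2)

x_imp = x.copy()
for i in np.where(miss)[0]:
    while True:                          # rejection sampling from f(x | y)
        cand = rng.normal()              # proposal from the covariate model
        if rng.random() < accept_prob(cand, y[i]):
            x_imp[i] = cand              # accepted draw is compatible with y | x, x^2
            break
print(np.mean(x_imp[miss] ** 2))         # imputations preserve the quadratic signal
```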
By: | Jean Dubé (Université du Québec à Rimouski); Diègo Legros (LEG/AMIE - CNRS FRE 3496 - Université de Bourgogne) |
Abstract: | This paper presents the particular features of spatial data pooled over time and shows why such data cannot be treated in the same way as spatial panel data or purely cross-sectional spatial data. The temporal dimension implies unidirectional relations, in contrast to multidirectional spatial relations. Constructing a single spatio-temporal weights matrix from separate spatial and temporal weights matrices makes it possible to use the models and tests developed for spatial data while accounting for both dimensions simultaneously. A series of empirical applications shows that neglecting the temporal dimension overstates the measured degree of spatial dependence as well as the estimated spatial autoregressive coefficients. Finally, accounting for both the spatial and the temporal dimension allows the construction of new dynamic explanatory variables, comparable to peer effects, which prove significant in explaining real estate sale prices. |
Keywords: | Border effects; Regional blocs; Spatial autocorrelation; Sub-Saharan Africa |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:lat:legeco:e2013-01&r=ecm |
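A minimal sketch of the paper's central construction: combine a (multidirectional) spatial weights matrix S with a strictly lower-triangular temporal matrix T, so that only past sales can influence later ones, and use their element-wise product as the spatio-temporal weights matrix. Cutoffs, kernels and scales below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200
xy = rng.uniform(0, 10, size=(n, 2))          # sale locations
t = np.sort(rng.uniform(0, 5, size=n))        # sale dates, ordered in time

d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
S = np.where((d > 0) & (d < 2.0), 1.0 / d, 0.0)  # inverse distance, cutoff at 2 units

# Strictly lower triangular: sale i can only be influenced by earlier sales
# j < i, and here only those within one time unit before it.
T = np.tril(t[:, None] - t[None, :] < 1.0, k=-1).astype(float)

W = S * T                                     # spatio-temporal weights matrix
W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-standardize
print(W.shape, float((W.sum(axis=1) > 0).mean()))  # share of sales with neighbors
```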
By: | Maria Cipollina; Luca Salvatici; Luca De Benedictis; Claudio Vicarelli |
Abstract: | The use of the gravity model to evaluate the effect of policies in a cross-country framework is largely predominant in the empirical international economics literature. This literature usually implements importer and exporter fixed effects to account for the theoretical multilateral trade resistances, while preferential trade policies are approximated through the use of dummy variables. Results from a Monte Carlo experiment confirm that the identification of trade policy impact using a gravity equation including fixed effects is severely limited. Moreover, the consequences of measurement error in the policy variable are magnified by the fixed-effects control for unobserved heterogeneity. |
Keywords: | Gravity model, Policy evaluation, Monte Carlo analysis |
JEL: | C13 C14 F10 F43 |
Date: | 2013–09 |
URL: | http://d.repec.org/n?u=RePEc:rtr:wpaper:0180&r=ecm |
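One draw of a Monte Carlo design in the spirit of the abstract: bilateral trade generated with importer and exporter effects plus a preferential-policy dummy that the econometrician observes with classification error, estimated with importer/exporter fixed effects. Parameter values and error rates are assumptions for illustration, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(12)
N = 30                                         # countries
imp, exp_ = [g.ravel() for g in np.meshgrid(np.arange(N), np.arange(N))]
keep = imp != exp_                             # drop internal trade
imp, exp_ = imp[keep], exp_[keep]

ai, aj = rng.normal(size=N), rng.normal(size=N)          # multilateral resistances
policy = (rng.random(len(imp)) < 0.2).astype(float)      # true preferential dummy
y = ai[imp] + aj[exp_] + 0.5 * policy + rng.normal(size=len(imp))

flip = rng.random(len(imp)) < 0.1              # 10% of dummies misclassified
policy_obs = np.abs(policy - flip)             # observed, error-ridden dummy

Di = (imp[:, None] == np.arange(N)).astype(float)        # importer dummies
Dj = (exp_[:, None] == np.arange(1, N)).astype(float)    # exporter dummies (one dropped)
X = np.column_stack([policy_obs, Di, Dj])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print(b[0])                                    # attenuated relative to the true 0.5
```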
By: | Òscar Jordà; Alan M. Taylor |
Abstract: | Elevated government debt levels in advanced economies have risen rapidly as sovereigns absorbed private-sector losses and cyclical deficits blew up in the Global Financial Crisis and subsequent slump. A rush to fiscal austerity followed but its justifications and impacts have been heavily debated. Research on the effects of austerity on macroeconomic aggregates remains unsettled, mired by the difficulty of identifying multipliers from observational data. This paper reconciles seemingly disparate estimates of multipliers within a unified framework. We do this by first evaluating the validity of common identification assumptions used by the literature and find that they are largely violated in the data. Next, we use new propensity score methods for time-series data with local projections to quantify how contractionary austerity really is, especially in economies operating below potential. We find that the adverse effects of austerity may have been understated. |
JEL: | C54 C99 E32 E62 H20 H5 N10 |
Date: | 2013–09 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:19414&r=ecm |
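A minimal sketch of the local-projection step this literature builds on: the dynamic response of an outcome at horizon h is read off a separate regression of y_{t+h} on the policy variable and controls, one regression per horizon. The paper additionally reweights observations by inverse propensity scores; that step is omitted here, and the data-generating process is a toy assumption.

```python
import numpy as np

rng = np.random.default_rng(13)
T = 600
d = rng.normal(size=T)                        # policy shock / treatment proxy
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + 0.4 * d[t] + rng.normal()

H = 8
irf = []
for h in range(H + 1):                        # one regression per horizon
    yh = y[h:]                                # y_{t+h}
    X = np.column_stack([np.ones(T - h), d[: T - h],
                         np.r_[0.0, y[: T - h - 1]]])    # lagged-y control
    bh = np.linalg.lstsq(X, yh, rcond=None)[0]
    irf.append(bh[1])                         # response of y_{t+h} to d_t
print(np.round(irf, 2))                       # approx 0.4 * 0.6**h
```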
By: | Ray C. Fair (Cowles Foundation, Yale University) |
Abstract: | I have been doing research in macroeconomics since the late 1960s, almost 50 years. In this paper I pause and take stock. The paper is part personal reflections on macroeconometric modeling, part a road map of the techniques of macroeconometric modeling, and part comments on what I think I have learned about how the macroeconomy works from my research in this area. |
Keywords: | Macroeconometric modeling |
JEL: | E10 |
Date: | 2013–09 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1908&r=ecm |