NEP: New Economics Papers on Econometrics |
By: | Patrick Richard (GREDI, Département d'économique, Université de Sherbrooke) |
Abstract: | We study the problem of bias correction in the estimation of low order ARMA(p, q) time series models. We introduce a new method to estimate the bias of the parameters of an ARMA(p, q) process, based on the analytical form of the GLS transformation matrix of Galbraith and Zinde-Walsh (1992). We show that the resulting bias corrected estimator is consistent and asymptotically normal. We also argue that, in the case of an MA(q) model, our method may be considered as an iteration of the analytical indirect inference technique of Galbraith and Zinde-Walsh (1994). The potential of our method is illustrated through a series of Monte Carlo experiments. |
Keywords: | ARMA; bias correction; GLS |
JEL: | C13 C22 |
Date: | 2007 |
URL: | http://d.repec.org/n?u=RePEc:shr:wpaper:07-19&r=ecm |
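The paper's analytical bias formula, built from the GLS transformation matrix, is not reproduced in the abstract; as a rough illustration of the underlying recipe (evaluate the estimator's finite-sample bias at the fitted parameter, then subtract it), here is a minimal Python sketch for an AR(1) model. This is a generic simulation-based bias correction, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(rho, T, rng):
    """Simulate a zero-mean AR(1) series of length T."""
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return y

def ols_ar1(y):
    """OLS estimate of the AR(1) coefficient (downward biased in small samples)."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

T = 50
y = simulate_ar1(0.8, T, rng)          # "observed" series
rho_hat = ols_ar1(y)

# Estimate the bias by simulating at rho_hat, then subtract it.
reps = 2000
mean_est = np.mean([ols_ar1(simulate_ar1(rho_hat, T, rng)) for _ in range(reps)])
rho_bc = rho_hat - (mean_est - rho_hat)
print(f"OLS: {rho_hat:.3f}  bias-corrected: {rho_bc:.3f}")
```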
By: | Luc BAUWENS (UNIVERSITE CATHOLIQUE DE LOUVAIN, Department of Economics); Fausto Galli (Swiss Finance Institute of Lugano) |
Abstract: | The evaluation of the likelihood function of the stochastic conditional duration (SCD) model requires computing an integral that has the dimension of the sample size. We apply the efficient importance sampling (EIS) method to compute this integral. We compare EIS-based ML estimation with QML estimation based on the Kalman filter. We find that EIS-ML estimation is statistically more precise, at the cost of an acceptable loss in computational speed. We illustrate this with simulated and real data. We also show that the EIS-ML method is easy to apply to extensions of the SCD model. |
Keywords: | Stochastic conditional duration, importance sampling |
JEL: | C13 C15 C41 |
Date: | 2007–09–18 |
URL: | http://d.repec.org/n?u=RePEc:ctl:louvec:2007032&r=ecm |
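The integral in question has one latent variable per observation, and EIS builds the importance density sequentially to keep the weights well behaved. The static toy below shows only the basic importance-sampling step for a single duration with a latent log-scale and a hand-picked Gaussian proposal; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy latent-variable duration model:
#   lam ~ N(mu, sigma^2),  y | lam ~ Exponential(mean = exp(lam))
mu, sigma = 0.0, 0.5
y_obs = 1.3

def log_norm_pdf(x, mean, sd):
    return -0.5 * np.log(2 * np.pi * sd**2) - 0.5 * ((x - mean) / sd) ** 2

# Importance sampling with a N(m, s^2) proposal; EIS would instead choose
# (m, s) to minimise the variance of the weights.
m, s = 0.2, 0.6
draws = rng.normal(m, s, size=10_000)
log_p_y = -draws - y_obs * np.exp(-draws)                # Exponential log-density
log_w = log_norm_pdf(draws, mu, sigma) - log_norm_pdf(draws, m, s)
L_hat = np.mean(np.exp(log_p_y + log_w))                 # estimate of p(y_obs)
print(f"IS estimate of the likelihood contribution: {L_hat:.4f}")
```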
By: | Fabio Nieto; Eliana González |
Abstract: | Testing for unit roots is a common practice for observable stochastic processes, and there is abundant literature on this topic. Sometimes, however, one faces the same problem for processes that are latent or unobservable. In this paper, empirical distributions of the usual unit-root test statistics are obtained for the trend component of some particular structural models, based on optimal predictions of the trend stochastic process (used as the observed data). It is found that these statistical tests tend to be more powerful than the usual Dickey-Fuller tests. |
Date: | 2007–09–22 |
URL: | http://d.repec.org/n?u=RePEc:col:000163:004063&r=ecm |
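The mechanics behind such empirical distributions can be sketched as follows: simulate the structural model, replace the latent trend by an estimate, compute a Dickey-Fuller-type statistic on that estimate, and tabulate the results. In the hedged Python sketch below, a crude moving-average smoother stands in for the model-based optimal predictor used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def df_tstat(x):
    """Dickey-Fuller t-statistic from the regression dx_t = a * x_{t-1} + e_t."""
    dx, lag = np.diff(x), x[:-1]
    a = (lag @ dx) / (lag @ lag)
    resid = dx - a * lag
    se = np.sqrt((resid @ resid) / (len(dx) - 1) / (lag @ lag))
    return a / se

T, reps, stats = 200, 2000, []
for _ in range(reps):
    trend = np.cumsum(rng.standard_normal(T))        # latent random-walk trend
    y = trend + rng.standard_normal(T)               # observed series
    trend_hat = np.convolve(y, np.ones(5) / 5, mode="valid")   # crude stand-in
    stats.append(df_tstat(trend_hat))

# Empirical 5% critical value of the statistic computed on the estimated trend
print(f"empirical 5% critical value: {np.quantile(stats, 0.05):.2f}")
```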
By: | Arnab Bhattacharjee |
Keywords: | Covariate dependence; Continuous covariate; Two-sample tests; Trend tests; Proportional hazards; Frailty; Linear transformation model |
JEL: | C12 C14 C41 |
URL: | http://d.repec.org/n?u=RePEc:san:wpecon:0708&r=ecm |
By: | Montero Minerva; Guerra Valia |
Abstract: | Montero et al. (2002) proposed a strategy to formulate multilevel models related to a contingency table sample. This methodology is based on the application of the general linear model to hierarchical categorical data. In this paper we apply the method to a multilevel logistic regression model using simulated data. We find that the estimates of the random parameters are inadmissible in some circumstances; large bias and negative variance estimates are to be expected for unbalanced data sets. To correct the estimates, we propose a numerical technique based on the truncated singular value decomposition (TSVD) for solving the generalized least squares problem associated with the estimation of the random parameters. Finally, a simulation study is presented to show the effectiveness of this technique in reducing the bias of the estimates. |
Date: | 2007–09–22 |
URL: | http://d.repec.org/n?u=RePEc:col:000163:004065&r=ecm |
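The TSVD step itself is standard: discard the directions associated with near-zero singular values before solving the least squares problem. A self-contained sketch of that generic step (the paper applies the same idea inside the GLS estimation of the random parameters):

```python
import numpy as np

def tsvd_solve(A, b, tol=1e-6):
    """Least squares solution of A x ~ b, truncating singular values below
    tol * s_max to stabilise an ill-conditioned problem."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

rng = np.random.default_rng(3)
x1 = rng.standard_normal(100)
A = np.column_stack([x1, x1 + 1e-9 * rng.standard_normal(100)])  # near-collinear
b = A @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(100)
print("TSVD estimate:", tsvd_solve(A, b))    # stable, unlike the naive solve
```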
By: | Simon D. Woodcock (Simon Fraser University); Gary Benedetto (US Census Bureau) |
Abstract: | One approach to limiting disclosure risk in public-use microdata is to release multiply-imputed, partially synthetic data sets. These are data on actual respondents, but with confidential data replaced by multiply-imputed synthetic values. When imputing confidential values, a mis-specified model can invalidate inferences, because the distribution of synthetic data is determined by the model used to generate them. We present a practical method to generate synthetic values when the imputer has only limited information about the true data generating process. We combine a simple imputation model (such as regression) with a series of density-based transformations to preserve the distribution of the confidential data, up to sampling error, on specified subdomains. We demonstrate through simulation and a large scale application that our approach preserves important statistical properties of the confidential data, including higher moments, with low disclosure risk. |
Keywords: | statistical disclosure limitation, confidentiality, privacy, multiple imputation, partially synthetic data |
JEL: | C1 C4 C5 |
Date: | 2007–09 |
URL: | http://d.repec.org/n?u=RePEc:sfu:sfudps:dp07-15&r=ecm |
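One simple density-based transformation of the kind the abstract alludes to maps imputed values, through their ranks, onto the empirical quantiles of the confidential values within a subdomain, so the synthetic data inherit the confidential marginal distribution up to sampling error. A hypothetical sketch of that single step, not the authors' exact procedure:

```python
import numpy as np

def match_distribution(imputed, confidential):
    """Map imputed values, via their ranks, onto the empirical quantiles of
    the confidential values, preserving the latter's distribution."""
    ranks = np.argsort(np.argsort(imputed))        # 0 .. n-1
    probs = (ranks + 0.5) / len(imputed)
    return np.quantile(confidential, probs)

rng = np.random.default_rng(4)
confidential = rng.lognormal(mean=10.0, sigma=1.0, size=500)   # skewed values
imputed = rng.normal(confidential.mean(), confidential.std(), size=500)
synthetic = match_distribution(imputed, confidential)
# Marginal moments, including the higher ones, now track the confidential data.
print(np.mean(confidential), np.mean(synthetic))
```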
By: | Pesaran, M.H.; Assenmacher-Wesche, K. |
Abstract: | We investigate the effect of forecast uncertainty in a cointegrating vector error correction model for Switzerland. Forecast uncertainty is evaluated in three different dimensions. First, we investigate the effect on forecasting performance of averaging over forecasts from different models. Second, we look at different estimation windows. We find that averaging over estimation windows is at least as effective as averaging over different models, and the two approaches complement each other. Third, we explore whether using weighting schemes from the machine learning literature improves the average forecast. Compared to equal weights, the effect of the weighting scheme on forecast accuracy is small in our application. |
Keywords: | Bayesian model averaging, choice of observation window, long-run structural vector autoregression. |
JEL: | C53 C32 |
Date: | 2007–09 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:0746&r=ecm |
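Averaging over estimation windows is mechanically simple: fit the same model on the last w observations for several choices of w and combine the one-step forecasts. A minimal sketch with an AR(1) and equal weights; the machine-learning weighting schemes the paper considers would replace the plain mean at the end.

```python
import numpy as np

def ar1_forecast(y):
    """One-step AR(1) forecast, coefficient fitted by OLS on the given sample."""
    rho = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
    return rho * y[-1]

rng = np.random.default_rng(5)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()

windows = [50, 100, 200, 300]                    # different estimation windows
forecasts = [ar1_forecast(y[-w:]) for w in windows]
print("individual:", np.round(forecasts, 3),
      " average:", round(float(np.mean(forecasts)), 3))
```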
By: | A. Colin Cameron; Jonah B. Gelbach; Douglas L. Miller |
Abstract: | Researchers have increasingly realized the need to account for within-group dependence when estimating standard errors of regression parameter estimates. The usual solution is to calculate cluster-robust standard errors that permit heteroskedasticity and within-cluster error correlation, but presume that the number of clusters is large. Standard asymptotic tests can over-reject, however, with few (5-30) clusters. We investigate inference using cluster bootstrap-t procedures that provide asymptotic refinement. These procedures are evaluated in Monte Carlo experiments, including the example of Bertrand, Duflo and Mullainathan (2004). Rejection rates of ten percent using standard methods can be reduced to the nominal size of five percent using our methods. |
JEL: | C12 C15 C21 |
Date: | 2007–09 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberte:0344&r=ecm |
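A pairs cluster bootstrap-t resamples whole clusters, recomputes the cluster-robust t-statistic in each resample, and compares the original statistic with the bootstrap distribution rather than the asymptotic normal. A compact sketch under equal cluster sizes (the paper also studies wild cluster bootstrap variants):

```python
import numpy as np

def ols_cluster(y, X, g):
    """OLS coefficients and cluster-robust (sandwich) standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta
    meat = sum(np.outer(X[g == c].T @ u[g == c], X[g == c].T @ u[g == c])
               for c in np.unique(g))
    return beta, np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

rng = np.random.default_rng(6)
G, n = 10, 30                                   # few clusters, as in the paper
g = np.repeat(np.arange(G), n)
x = rng.standard_normal(G * n)
y = 0.5 * x + rng.standard_normal(G)[g] + rng.standard_normal(G * n)
X = np.column_stack([np.ones(G * n), x])
beta, se = ols_cluster(y, X, g)
t_obs = (beta[1] - 0.5) / se[1]                 # t-statistic under the true null

t_boot = []
for _ in range(999):
    cs = rng.choice(G, G, replace=True)          # resample whole clusters
    idx = np.concatenate([np.where(g == c)[0] for c in cs])
    b_s, se_s = ols_cluster(y[idx], X[idx], np.repeat(np.arange(G), n))
    t_boot.append((b_s[1] - beta[1]) / se_s[1])  # centre at full-sample estimate
print(f"bootstrap-t p-value: {np.mean(np.abs(t_boot) >= np.abs(t_obs)):.3f}")
```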
By: | Giulio PALOMBA ([n.a.]); Luca FANELLI (Università di Bologna, Dip. di Scienze Statistiche) |
Abstract: | In this paper we propose simulation-based techniques to investigate the finite-sample performance of likelihood ratio (LR) tests for the nonlinear restrictions that arise when a class of forward-looking (FL) models, typically used in monetary policy analysis, is evaluated with Vector Autoregressive (VAR) models. We consider both `one-shot' tests and sequences of tests under a particular form of adaptive learning dynamics, where `boundedly rational' agents use VARs recursively to update their beliefs. The analysis is based on the comparison of the likelihood of the unrestricted and restricted VAR, and the p-values associated with the LR statistics are computed by Monte Carlo simulation. We also address the case where the variables of the FL model are approximated as non-stationary cointegrated processes. Application to the New Keynesian Phillips Curve in the euro area shows that the FL model of inflation dynamics is not rejected once the suggested simulation-based tests are applied. The result is robust to specification of the VAR as a stationary (albeit highly persistent) or cointegrated system. However, in the second case the imposition of cointegration restrictions changes the estimated degree of price stickiness. |
Keywords: | Monte Carlo test, VAR, adaptive learning, cross-equation restrictions, forward-looking model, new Keynesian Phillips curve, simulation techniques |
JEL: | C12 C32 C52 D83 E10 |
Date: | 2007–09 |
URL: | http://d.repec.org/n?u=RePEc:anc:wpaper:298&r=ecm |
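The Monte Carlo p-value logic is generic: estimate the restricted model, simulate artificial samples from it, recompute the LR statistic on each, and take the exceedance frequency as the p-value. The sketch below applies it to a deliberately trivial restriction (zero slope in a Gaussian regression), not to the paper's nonlinear cross-equation VAR restrictions.

```python
import numpy as np

rng = np.random.default_rng(7)

def lr_stat(y, x):
    """LR statistic for H0: zero slope, in a Gaussian linear model."""
    rss0 = np.sum((y - y.mean()) ** 2)            # restricted model
    b = np.polyfit(x, y, 1)                       # unrestricted model
    rss1 = np.sum((y - np.polyval(b, x)) ** 2)
    return len(y) * np.log(rss0 / rss1)

n = 30
x = rng.standard_normal(n)
y = 0.3 * x + rng.standard_normal(n)
lr_obs = lr_stat(y, x)

# Simulate under the *estimated* restricted model, recompute the LR each time.
sims = [lr_stat(y.mean() + y.std() * rng.standard_normal(n), x)
        for _ in range(999)]
p_mc = (1 + sum(s >= lr_obs for s in sims)) / 1000
print(f"LR = {lr_obs:.2f}, Monte Carlo p-value = {p_mc:.3f}")
```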
By: | Chiara Tommasi (University of Milano); Jesus Lopez Fidalgo (University of Castilla La Mancha (Spain)) |
Abstract: | Designs are found for discriminating between two non-Normal models in the presence of prior information. The KL-optimality criterion, where the true model is assumed to be completely known, is extended to a criterion where prior distributions of the parameters and a prior probability of each model to be true are assumed. Concavity of this criterion is proved. Thus, the results of optimal design theory apply in this context and optimal designs can be constructed and checked by the General Equivalence Theorem. Some illustrative examples are provided. |
Keywords: | KL-optimum designs, discrimination between models |
Date: | 2007–05–08 |
URL: | http://d.repec.org/n?u=RePEc:bep:unimip:1055&r=ecm |
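For Gaussian models with known variance, the KL distance between rival mean functions reduces to a scaled squared distance, so the criterion with a prior can be evaluated by Monte Carlo: average, over prior draws, the weighted squared gap between the true mean function and the best-fitting rival. A hypothetical numerical sketch of that evaluation (the mean functions, priors and candidate designs below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(13)
xs = np.linspace(0.0, 1.0, 5)                 # candidate design points on [0, 1]

def criterion(w, prior_draws):
    """Prior-averaged discrimination criterion between an exponential mean
    eta1(x) = a * exp(b * x) and a rival linear mean, Gaussian errors:
    weighted squared distance after the best weighted-LS rival fit."""
    X = np.column_stack([np.ones_like(xs), xs])
    W = np.diag(w)
    vals = []
    for a, b in prior_draws:
        eta1 = a * np.exp(b * xs)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ eta1)   # rival fit
        vals.append(w @ (eta1 - X @ beta) ** 2)
    return np.mean(vals)

prior = np.column_stack([rng.normal(1.0, 0.1, 200), rng.normal(1.0, 0.2, 200)])
w_uniform = np.full(5, 0.2)
w_concentrated = np.array([0.4, 0.0, 0.2, 0.0, 0.4])
print(criterion(w_uniform, prior), criterion(w_concentrated, prior))
```

Because the inner rival fit minimises an objective that is linear in the design weights, the resulting criterion is a prior average of pointwise minima of linear functions, hence concave in the weights, which is what makes equivalence-theorem checks of the kind the abstract mentions applicable.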
By: | Michael S. Rendall; Mark S. Handcock; Stefan H. Jonsson |
Abstract: | Previous studies have demonstrated both large efficiency gains and reductions in bias by incorporating population information in regression estimation with sample survey data. These studies, however, assume the population values are exact. This assumption is relaxed here through a Bayesian extension of constrained Maximum Likelihood estimation, applied to 1990s Hispanic fertility. Traditional elements of subjectivity in demographic evaluation and adjustment of survey and population data sources are quantified by this approach, and the inclusion of a larger set of objective data sources is facilitated by it. Compared to estimation from sample survey data only, the Bayesian constrained estimator results in much greater precision in the age pattern of the baseline fertility hazard and, under all but the most extreme assumptions about the uncertainty of the adjusted population data, substantially greater precision about the overall level of the hazard. |
Keywords: | Bayesian Estimation, Human Fertility, Population Forecasting |
JEL: | C11 J13 I1 |
Date: | 2007–05 |
URL: | http://d.repec.org/n?u=RePEc:ran:wpaper:496&r=ecm |
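The core idea, constrained estimation where the constraint itself is uncertain, can be reduced to one line of algebra: replace the hard constraint with a penalty equal to the log-density of the population value around the parameter. A toy Python sketch for a survey proportion with a noisy population benchmark (all numbers invented); letting tau go to zero recovers the hard-constrained MLE.

```python
import numpy as np
from scipy.optimize import minimize_scalar

k, n = 30, 200            # survey: k successes out of n
p0, tau = 0.20, 0.01      # population benchmark and its uncertainty

def neg_log_posterior(p):
    """Binomial log-likelihood plus a Gaussian 'soft constraint' at p0."""
    return -(k * np.log(p) + (n - k) * np.log(1.0 - p)
             - 0.5 * ((p - p0) / tau) ** 2)

res = minimize_scalar(neg_log_posterior, bounds=(1e-6, 1 - 1e-6),
                      method="bounded")
print(f"survey MLE: {k / n:.3f}  constrained-Bayes estimate: {res.x:.3f}")
```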
By: | Rodrigo Moreno-Serra |
Abstract: | The general aim of this paper is to review how matching methods try to solve the evaluation problem – with a particular focus on propensity score matching – and their usefulness for the particular case of health programme evaluation. The “classical” case of matching estimation with a single discrete treatment is presented as a basis for discussing recent developments concerning the application of matching methods for jointly evaluating the impact of multiple treatments and for evaluating the impact of a continuous treatment. For each case, I review the treatment effects parameters of interest, the required identification assumptions, the definition of the main matching estimators and their main theoretical properties and practical features. The relevance of the “classical” matching estimators and of their extensions for the multiple and continuous treatments settings is illustrated using the example of a health programme implemented with different levels of population coverage in different geographic areas. |
Keywords: | Evaluation methods, treatment effects, matching, propensity score, programme evaluation. |
JEL: | C14 C21 C33 I10 |
Date: | 2007–02 |
URL: | http://d.repec.org/n?u=RePEc:yor:hectdg:07/02&r=ecm |
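The "classical" single-treatment case the survey starts from fits in a few lines: estimate the propensity score, match each treated unit to its nearest control on the score, and average the outcome differences to get the ATT. A hedged sketch on simulated data (one-nearest-neighbour matching with replacement, no trimming or balance checks):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)

# Simulated data: treatment and outcome both depend on the covariates x
n = 2000
x = rng.standard_normal((n, 2))
d = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1]))))
y = 1.0 * d + x[:, 0] + rng.standard_normal(n)        # true ATT = 1.0

ps = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]   # propensity score

treated = np.where(d == 1)[0]
controls = np.where(d == 0)[0]
# Nearest control on the propensity score for each treated unit
matches = controls[np.abs(ps[controls][None, :]
                          - ps[treated][:, None]).argmin(axis=1)]
att = np.mean(y[treated] - y[matches])
print(f"ATT estimate: {att:.3f}")
```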
By: | Hassani, Hossein |
Abstract: | In recent years, Singular Spectrum Analysis (SSA) has been developed into a powerful technique of time series analysis and applied to many practical problems. In this paper, the performance of the SSA technique is assessed by applying it to a well-known time series data set, namely, monthly accidental deaths in the USA. The results are compared with those obtained using Box-Jenkins SARIMA models, the ARAR algorithm and the Holt-Winter algorithm (as described in Brockwell and Davis (2002)). The results show that the SSA technique gives a much more accurate forecast than the other methods indicated above. |
Keywords: | ARAR algorithm; Box-Jenkins SARIMA models; Holt-Winter algorithm; singular spectrum analysis (SSA); USA monthly accidental deaths series. |
JEL: | C14 C61 C53 |
Date: | 2007–04–01 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:4991&r=ecm |
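Basic SSA has three steps: embed the series into a trajectory (Hankel) matrix, take its SVD, and reconstruct the signal from the leading components by diagonal averaging; forecasting then adds a linear recurrence fitted to those components. A sketch of the decomposition and reconstruction steps only, with illustrative window length and rank:

```python
import numpy as np

def ssa_reconstruct(y, L, r):
    """Reconstruct a series from the r leading SSA components; L is the
    window length of the L x K trajectory (Hankel) matrix."""
    K = len(y) - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]              # rank-r approximation
    out, cnt = np.zeros(len(y)), np.zeros(len(y))
    for j in range(K):                            # diagonal (Hankel) averaging
        out[j:j + L] += Xr[:, j]
        cnt[j:j + L] += 1
    return out / cnt

rng = np.random.default_rng(9)
t = np.arange(120)
y = 0.05 * t + np.sin(2 * np.pi * t / 12) + 0.3 * rng.standard_normal(120)
smooth = ssa_reconstruct(y, L=24, r=3)   # trend + annual cycle in 3 components
print(f"residual std after reconstruction: {np.std(y - smooth):.3f}")
```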
By: | Anthony Garratt (School of Economics, Mathematics & Statistics, Birkbeck); Gary Koop; Emi Mise; Shaun P Vahey |
Abstract: | A popular account for the demise of the UK monetary targeting regime in the 1980s blames the weak predictive relationships between broad money and inflation and real output. In this paper, we investigate these relationships using a variety of monetary aggregates which were used as intermediate UK policy targets. We use both real-time and final vintage data and consider a large set of recursively estimated Vector Autoregressive (VAR) and Vector Error Correction models (VECM). These models differ in terms of lag length and the number of cointegrating relationships. Faced with this model uncertainty, we utilize Bayesian model averaging (BMA) and contrast it with a strategy of selecting a single best model. Using the real-time data available to UK policymakers at the time, we demonstrate that the in-sample predictive content of broad money fluctuates throughout the 1980s for both strategies. However, the strategy of choosing a single best model amplifies these fluctuations. Out-of-sample predictive evaluations rarely suggest that money matters for either inflation or real output, regardless of whether we select a single model or do BMA. Overall, we conclude that money was a weak (and unreliable) predictor for these key macroeconomic variables. But the view that the predictive content of UK broad money diminished during the 1980s receives little support using either the real-time or final vintage data. |
Keywords: | Money, Vector Error Correction Models, Model Uncertainty, Bayesian Model Averaging, Real Time Data |
JEL: | C11 C32 C53 E51 E52 |
Date: | 2007–09 |
URL: | http://d.repec.org/n?u=RePEc:bbk:bbkefp:0714&r=ecm |
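A common shortcut for BMA over model specifications approximates each model's posterior probability with its BIC and averages the forecasts with those weights; selecting the single best model instead puts all weight on the BIC minimiser. A minimal univariate sketch over AR(p) lag orders (the paper averages over VAR/VECM lag lengths and cointegration ranks):

```python
import numpy as np

def fit_ar(y, p):
    """OLS fit of an AR(p); returns the one-step forecast and the BIC."""
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))]
                        + [y[p - i:-i] for i in range(1, p + 1)])
    b = np.linalg.lstsq(X, Y, rcond=None)[0]
    rss = np.sum((Y - X @ b) ** 2)
    bic = len(Y) * np.log(rss / len(Y)) + (p + 1) * np.log(len(Y))
    x_new = np.concatenate([[1.0], y[-1:-(p + 1):-1]])   # lags 1..p at the end
    return x_new @ b, bic

rng = np.random.default_rng(10)
y = np.zeros(200)
for t in range(2, 200):
    y[t] = 0.5 * y[t - 1] + 0.2 * y[t - 2] + rng.standard_normal()

fc, bic = zip(*(fit_ar(y, p) for p in [1, 2, 3, 4]))
w = np.exp(-0.5 * (np.array(bic) - min(bic)))            # BIC model weights
w /= w.sum()
print("weights:", np.round(w, 3), " BMA forecast:", np.round(w @ np.array(fc), 3))
```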
By: | Joanne S. Ercolani |
Abstract: | It is undoubtedly desirable that econometric models capture the dynamic behaviour, like trends and cycles, observed in many economic processes. Building models with such capabilities has been an important objective in the continuous time econometrics literature, see for instance the cyclical growth models of Bergstrom (1966), the complete economy-wide macroeconometric models of, for example, Bergstrom and Wymer (1976), the unobserved stochastic trends of Harvey and Stock (1988 and 1993) and Bergstrom (1997), and the differential-difference equations of Chambers and McGarry (2002). This paper’s contribution is to examine cyclical trends formulated in continuous time, which complement the trend-plus-cycle models that are frequently used in the unobserved components literature. |
Keywords: | Cyclical Trends, continuous time models, stochastic differential equations, differential-difference equations |
JEL: | C22 |
Date: | 2007–09 |
URL: | http://d.repec.org/n?u=RePEc:bir:birmec:07-13&r=ecm |
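A concrete example of a cyclical component formulated in continuous time is the two-dimensional stochastic cycle dψ = Aψ dt + σ dW with a damped-rotation drift matrix; the sketch below simulates it by Euler-Maruyama and adds a deterministic trend. The parameter values are illustrative, and this is a generic stochastic cycle, not the specific formulations cited above.

```python
import numpy as np

rng = np.random.default_rng(11)

# Continuous-time stochastic cycle, d psi = A psi dt + sigma dW, with
# A = [[-c, lam], [-lam, -c]]: c is the damping rate, lam the frequency.
c, lam, sigma = 0.1, 2 * np.pi / 5.0, 0.5
dt, n = 0.01, 5_000                    # Euler-Maruyama step and horizon
A = np.array([[-c, lam], [-lam, -c]])
psi = np.zeros((n, 2))
for t in range(1, n):
    psi[t] = (psi[t - 1] + A @ psi[t - 1] * dt
              + sigma * np.sqrt(dt) * rng.standard_normal(2))

y = 0.02 * dt * np.arange(n) + psi[:, 0]   # trend-plus-cycle realisation
print(f"cycle standard deviation: {psi[:, 0].std():.3f}")
```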
By: | Ordás Criado, Carlos |
Abstract: | Although panel data have been used intensively in a wealth of studies investigating the GDP-pollution relationship, the poolability assumption used to model these data is almost never addressed. This paper applies a strategy to test the poolability assumption with methods robust to functional misspecification. Nonparametric poolability tests are performed to check the temporal and spatial homogeneity of the panel, and their results are compared with the conventional F-tests, for a balanced panel of 48 Spanish provinces on four air pollutant emissions (CH4, CO, CO2 and NMVOC) over the 1990-2002 period. We show that temporal homogeneity may allow the pooling of the data and lead to well-defined nonparametric and parametric cross-sectional inverted-U shapes for all air pollutants. However, the presence of spatial heterogeneity makes this shape compatible with different time-series patterns in every province, mainly increasing or decreasing depending on the pollutant. These results highlight the extreme sensitivity of the income-pollution relationship to region- or country-specific factors. |
Keywords: | Environmental Kuznets Curve; Air pollutants; Non/Semiparametric estimations; Poolability tests |
JEL: | C23 Q53 C14 O40 |
Date: | 2007–08–31 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:5036&r=ecm |
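The parametric benchmark here is a Chow-type poolability F-test: estimate the pooled regression and the province-by-province regressions, and test whether the restricted fit deteriorates significantly. A self-contained sketch with simulated heterogeneous slopes (the paper's nonparametric tests replace the linear regressions with kernel estimates):

```python
import numpy as np

def rss(y, X):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.sum((y - X @ b) ** 2)

rng = np.random.default_rng(12)
N, T, k = 48, 13, 2                             # provinces, years, coefficients
x = rng.standard_normal((N, T))
beta_i = 1.0 + 0.3 * rng.standard_normal(N)     # heterogeneous slopes
y = beta_i[:, None] * x + rng.standard_normal((N, T))

Xp = np.column_stack([np.ones(N * T), x.ravel()])
rss_pooled = rss(y.ravel(), Xp)                 # common intercept and slope
rss_separate = sum(rss(y[i], np.column_stack([np.ones(T), x[i]]))
                   for i in range(N))

df1, df2 = (N - 1) * k, N * (T - k)
F = ((rss_pooled - rss_separate) / df1) / (rss_separate / df2)
print(f"poolability F = {F:.2f} with ({df1}, {df2}) degrees of freedom")
```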
By: | Arnab Bhattacharjee; Madhuchhanda Bhattacharjee |
Keywords: | Bayesian nonparametrics; Nonproportional hazards; Frailty; Age-varying covariate effects; Ageing. |
URL: | http://d.repec.org/n?u=RePEc:san:wpecon:0707&r=ecm |