nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒05‒15
twenty-two papers chosen by
Sune Karlsson
Örebro University

  1. Efficient Estimation of an Additive Quantile Regression By Yebin Cheng; Jan G. De Gooijer; Dawit Zerom
  2. Forecasting with DSGE models By Kai Christoffel; Günter Coenen; Anders Warne
  3. Simulating copula-based distributions and estimating tail probabilities by means of Adaptive Importance Sampling By Marco Bee
  4. Higher Order Improvements for Approximate Estimators By Dennis Kristensen; Bernard Salanie
  5. Gender Wage Gap: A Semi-parametric Approach with Sample Selection Correction By Matteo PICCHIO; Chiara MUSSIDA
  6. A Threshold Stochastic Volatility Model with Realized Volatility By Dinghai Xu
  7. Simulation and Estimation of Loss Given Default By Stefan Hlawatsch; Sebastian Ostrowski
  8. Maximum likelihood estimator for the uneven power distribution: application to DJI returns By Krzysztof Kontek
  9. Spot Variance Path Estimation and its Application to High Frequency Jump Testing By Charles S. Bos; Pawel Janus; Siem Jan Koopman
  10. Do Jumps Matter? Forecasting Multivariate Realized Volatility Allowing for Common Jumps By Yin Liao; Heather Anderson; Farshid Vahid
  11. Forecasting from Mis-specified Models in the Presence of Unanticipated Location Shifts By Michael P. Clements; David F. Hendry
  12. Stable-1/2 Bridges and Insurance: a Bayesian approach to non-life reserving By Edward Hoyle; Lane P. Hughston; Andrea Macrina
  13. Simulation or cohort models? Continuous time simulation and discretized Markov models to estimate cost-effectiveness By Marta O Soares; L Canto e Castro
  14. Characterizing economic trends by Bayesian stochastic model specification search By Grassi, Stefano; Proietti, Tommaso
  15. The Euler-Maruyama approximations for the CEV model By V. Abramov; F. Klebaner; R. Liptser
  16. Modeling Asymmetric Volatility Clusters Using Copulas and High Frequency Data By Cathy Ning; Dinghai Xu; Tony Wirjanto
  17. Return Attribution Analysis of the UK Insurance Portfolios By Emmanuel Mamatzakis; George Christodoulakis
  18. Are Some Forecasters Really Better Than Others? By Antonello D’Agostino; Kieran McQuinn; Karl Whelan
  19. Slipping Anchor? Testing the Vignettes Approach to Identification and Correction of Reporting Heterogeneity By Teresa Bago d'Uva; Maarten Lindeboom; Owen O'Donnell; Eddy van Doorslaer
  20. Measuring the Effects of Fiscal Policy By Hafedh Bouakez; Foued Chihi; Michel Normandin
  21. Bounding Preference Parameters under Different Assumptions about Beliefs: a Partial Identification Approach By Charles Bellemare; Luc Bissonnette; Sabine Kröger
  22. Detrending moving average algorithm for multifractals By Gao-Feng Gu; Wei-Xing Zhou

  1. By: Yebin Cheng (Shanghai University of Finance); Jan G. De Gooijer (University of Amsterdam); Dawit Zerom (California State University at Fullerton)
    Abstract: In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By making use of an internally normalized kernel smoother, the proposed estimator reduces the computational requirement of the latter by the order of the sample size. The second estimator involves sequential fitting by univariate local polynomial quantile regressions for each additive component with the other additive components replaced by the corresponding estimates from the first estimator. The purpose of the extra local averaging is to reduce the variance of the first estimator. We show that the second estimator achieves oracle efficiency in the sense that each estimated additive component has the same variance as in the case when all other additive components were known. Asymptotic properties are derived for both estimators under dependent processes that are strictly stationary and absolutely regular. We also provide a demonstrative empirical application of additive quantile models to ambulance travel times using administrative data for the city of Calgary.
    Keywords: Additive models; Asymptotic properties; Dependent data; Internalized kernel smoother; Local polynomial; Oracle efficiency
    JEL: C01 C14
    Date: 2009–11–19
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20090104&r=ecm
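    The building block behind both estimators is local polynomial quantile regression. A minimal Python sketch of a univariate local linear quantile fit with a Gaussian kernel may help fix ideas; the bandwidth, kernel and starting values below are illustrative choices, not taken from the paper, and the paper's estimators additionally handle the additive structure and the internal normalization of the kernel.
      import numpy as np
      from scipy.optimize import minimize

      def check_loss(u, tau):
          # quantile ("check") loss: tau*u for u >= 0 and (tau - 1)*u otherwise
          return np.where(u >= 0, tau * u, (tau - 1) * u)

      def local_linear_quantile(x_eval, X, Y, tau=0.5, h=0.5):
          # Gaussian kernel weights centred at the evaluation point
          w = np.exp(-0.5 * ((X - x_eval) / h) ** 2)
          def obj(theta):
              a, b = theta
              return np.sum(w * check_loss(Y - a - b * (X - x_eval), tau))
          res = minimize(obj, x0=[np.median(Y), 0.0], method="Nelder-Mead")
          return res.x[0]  # estimated tau-th conditional quantile at x_eval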
  2. By: Kai Christoffel (Directorate General Research, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Günter Coenen (Directorate General Research, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Anders Warne (Directorate General Research, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.)
    Abstract: In this paper we review the methodology of forecasting with log-linearised DSGE models using Bayesian methods. We focus on the estimation of their predictive distributions, with special attention being paid to the mean and the covariance matrix of h-step ahead forecasts. In the empirical analysis, we examine the forecasting performance of the New Area-Wide Model (NAWM) that has been designed for use in the macroeconomic projections at the European Central Bank. The forecast sample covers the period following the introduction of the euro and the out-of-sample performance of the NAWM is compared to nonstructural benchmarks, such as Bayesian vector autoregressions (BVARs). Overall, the empirical evidence indicates that the NAWM compares quite well with the reduced-form models and the results are therefore in line with previous studies. Yet there is scope for improving the NAWM’s forecasting performance. For example, the model is not able to explain the moderation in wage growth over the forecast evaluation period and, therefore, it tends to overestimate nominal wages. As a consequence, both the multivariate point and density forecasts, evaluated using the log determinant and the log predictive score respectively, suggest that a large BVAR can outperform the NAWM. JEL Classification: C11, C32, E32, E37.
    Keywords: Bayesian inference, DSGE models, euro area, forecasting, open-economy macroeconomics, vector autoregression.
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20101185&r=ecm
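    The two evaluation criteria mentioned in the abstract can be computed along the following lines, under the simplifying assumption that each h-step-ahead predictive distribution is summarized by a Gaussian mean and covariance matrix; the function and argument names are illustrative, not taken from the paper.
      import numpy as np
      from scipy.stats import multivariate_normal

      def log_predictive_score(realised, fc_means, fc_covs):
          # average log density of the realised vectors under Gaussian approximations
          # to the predictive distributions (higher is better)
          return np.mean([multivariate_normal.logpdf(y, m, S)
                          for y, m, S in zip(realised, fc_means, fc_covs)])

      def log_det_criterion(realised, fc_means):
          # multivariate point-forecast criterion: log determinant of the mean
          # outer product of the forecast errors (lower is better)
          errors = np.asarray(realised) - np.asarray(fc_means)
          return np.linalg.slogdet(errors.T @ errors / len(errors))[1]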
  3. By: Marco Bee
    Abstract: Copulas are an essential tool for the construction of non-standard multivariate probability distributions. In the actuarial and financial field they are particularly important because of their relationship with non-linear dependence and multivariate extreme value theory. In this paper we use a recently proposed generalization of Importance Sampling, called Adaptive Importance Sampling, for simulating copula-based distributions and computing tail probabilities. Unlike existing methods for copula simulation, this algorithm is general, in the sense that it can be used for any copula whose margins have unbounded support. After working out the details for Archimedean copulas, we develop an extension, based on an appropriate transformation of the marginal distributions, for sampling extreme value copulas. Extensive Monte Carlo experiments show that the method works well and its implementation is simple. An example with equity data illustrates the potential of the algorithm for practical applications.
    Keywords: Adaptive Importance Sampling, Copula, Multivariate Extreme Value Theory, Tail probability.
    JEL: C15 C63
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:trn:utwpde:1003&r=ecm
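    As a rough illustration of the adaptive importance sampling idea, the sketch below estimates a joint tail probability for a Clayton copula with standard normal margins by adapting a Gaussian proposal through moment matching; the copula family, parameter values, threshold and update rule are illustrative choices and not the paper's algorithm.
      import numpy as np
      from scipy.stats import norm, multivariate_normal

      theta, a = 2.0, 2.5                      # Clayton parameter and tail threshold (illustrative)
      rng = np.random.default_rng(0)

      def clayton_density(u, v, theta):
          return ((1 + theta) * (u * v) ** (-theta - 1)
                  * (u ** -theta + v ** -theta - 1) ** (-2 - 1 / theta))

      def target_density(x):                   # Clayton copula coupled with standard normal margins
          u, v = norm.cdf(x[:, 0]), norm.cdf(x[:, 1])
          return clayton_density(u, v, theta) * norm.pdf(x[:, 0]) * norm.pdf(x[:, 1])

      mu, cov = np.array([a, a]), np.eye(2)    # initial Gaussian proposal centred on the tail corner
      for _ in range(5):                       # adaptation loop
          x = rng.multivariate_normal(mu, cov, size=20000)
          in_tail = (x[:, 0] > a) & (x[:, 1] > a)
          w = np.where(in_tail, target_density(x), 0.0) / multivariate_normal.pdf(x, mu, cov)
          if w.sum() > 0:                      # moment-match the proposal to the weighted sample
              mu = (w[:, None] * x).sum(0) / w.sum()
              d = x - mu
              cov = (w[:, None, None] * np.einsum('ni,nj->nij', d, d)).sum(0) / w.sum() + 1e-6 * np.eye(2)

      print("estimated P(X1 > a, X2 > a):", w.mean())   # importance sampling estimate from the last iteration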
  4. By: Dennis Kristensen (Columbia University - Department of Economics); Bernard Salanie (Columbia University - Department of Economics)
    Abstract: Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer degree of approximation. The NR step removes some or all of the additional bias and variance of the initial approximate estimator. A Monte Carlo simulation on the mixed logit model shows that noticeable improvements can be obtained rather cheaply.
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:clu:wpaper:0910-15&r=ecm
  5. By: Matteo PICCHIO (Tilburg University, Department of Economics, CentER, ReflecT and IZA, Germany); Chiara MUSSIDA (Department of Economics and Social Sciences, Università Cattolica, Milan and Prometeia spa, Bologna)
    Abstract: Sizeable gender differences in employment rates are observed in many countries. Sample selection into the workforce might therefore be a relevant issue when estimating gender wage gaps. This paper proposes a new semi-parametric estimator of densities in the presence of covariates which incorporates sample selection. We describe a simulation algorithm to implement counterfactual comparisons of densities. The proposed methodology is used to investigate the gender wage gap in Italy. It is found that, when sample selection is taken into account, the gender wage gap widens, especially at the bottom of the wage distribution. Explanations are offered for this empirical finding.
    Keywords: gender wage gap, hazard function, sample selection, glass ceiling, sticky floor
    JEL: C21 C41 J16 J31 J71
    Date: 2010–03–12
    URL: http://d.repec.org/n?u=RePEc:ctl:louvir:2010005&r=ecm
  6. By: Dinghai Xu (Department of Economics, University of Waterloo)
    Abstract: Rapid developments in computer technology have made financial transaction data observable at the finest possible level. The realized volatility, as a proxy for the "true" volatility, can be constructed using the high frequency data. This paper extends a threshold stochastic volatility specification proposed in So, Li and Lam (2002) by incorporating the high frequency volatility measures. Due to the availability of the volatility time series, parameter estimation can be easily implemented via standard maximum likelihood estimation (MLE) rather than simulation-based Bayesian methods. In the Monte Carlo section, several mis-specification and sensitivity experiments are conducted. The proposed methodology shows good performance according to the Monte Carlo results. In the empirical study, three stock indices are examined under the threshold stochastic volatility structure. Empirical results show that in different regimes, the returns and volatilities exhibit asymmetric behavior. In addition, this paper allows the threshold in the model to be flexible and uses a sequential optimization based on MLE to search for the "optimal" threshold value. We find that the model with a flexible threshold is always preferred to the model with a fixed threshold according to the log-likelihood measure. Interestingly, the "optimal" threshold is found to be stable across realized volatility measures constructed at different sampling frequencies.
    JEL: C01 C51
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:wat:wpaper:1003&r=ecm
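    A simplified stand-in for the estimation strategy described above treats log realized variance as directly observed and fits a two-regime AR(1), with the regime switched by the lagged return relative to a threshold chosen by a grid search over candidate values. The model, starting values and grid below are illustrative; the paper's threshold stochastic volatility specification is richer than this sketch.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def negloglik(params, lrv, lag_ret, thr):
          c1, phi1, s1, c2, phi2, s2 = params
          low = lag_ret < thr                                 # regime indicator from the lagged return
          mu = np.where(low, c1 + phi1 * lrv[:-1], c2 + phi2 * lrv[:-1])
          sd = np.where(low, abs(s1), abs(s2))
          return -norm.logpdf(lrv[1:], mu, sd).sum()          # Gaussian log-likelihood of log realized variance

      def fit_threshold_ar(lrv, ret, thresholds):
          best = None
          for thr in thresholds:                              # sequential search over the threshold value
              res = minimize(negloglik, x0=[0.0, 0.9, 0.3, 0.0, 0.9, 0.3],
                             args=(lrv, ret[:-1], thr), method="Nelder-Mead")
              if best is None or res.fun < best[0]:
                  best = (res.fun, thr, res.x)
          return best                                         # (neg. log-likelihood, threshold, parameters)

      # e.g. thresholds = np.quantile(ret, np.linspace(0.1, 0.9, 17))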
  7. By: Stefan Hlawatsch (Faculty of Economics and Management, Otto-von-Guericke University Magdeburg); Sebastian Ostrowski (Faculty of Economics and Management, Otto-von-Guericke University Magdeburg)
    Abstract: The aim of our paper is the development of an adequate estimation model for the loss given default, which incorporates the empirically observed bimodality and bounded nature of the distribution. Therefore we introduce an adjusted Expectation Maximization algorithm to estimate the parameters of a univariate mixture distribution, consisting of two beta distributions. Subsequently these estimations are compared with the Maximum Likelihood estimators to test the efficiency and accuracy of both algorithms. Furthermore, we compare our estimation model with estimation models proposed in the literature on a synthesized loan portfolio. The simulated loan portfolio consists of possibly loss-influencing parameters that are merged with loss given default observations via a quasi-random approach. Our results show that our proposed model exhibits more accurate loss given default estimators than the benchmark models for different simulated data sets comprising obligor-specific parameters with either high predictive power or low predictive power for the loss given default.
    Keywords: Bimodality, EM Algorithm, Loss Given Default, Maximum Likelihood, Mixture Distribution, Portfolio Simulation
    JEL: C01 C13 C15 C16 C5
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:mag:wpaper:100010&r=ecm
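    A bare-bones EM algorithm for a two-component beta mixture, with the M-step done by numerical weighted maximum likelihood, looks roughly as follows. The starting values, iteration count and the assumption that all observations lie strictly inside (0,1) are illustrative simplifications; the paper's adjusted EM algorithm contains further refinements.
      import numpy as np
      from scipy.stats import beta
      from scipy.optimize import minimize

      def fit_beta_mixture(x, n_iter=50):
          # initial guesses: one component near 0, one near 1, equal weights (illustrative)
          p = 0.5
          pars = [np.array([2.0, 5.0]), np.array([5.0, 2.0])]
          for _ in range(n_iter):
              # E-step: posterior probability that each observation belongs to component 1
              d1 = p * beta.pdf(x, *pars[0])
              d2 = (1 - p) * beta.pdf(x, *pars[1])
              r = d1 / (d1 + d2 + 1e-300)
              # M-step: update the mixing weight and each component's (a, b) by weighted MLE
              p = r.mean()
              for k, w in enumerate([r, 1 - r]):
                  nll = lambda ab, w=w: -(w * beta.logpdf(x, abs(ab[0]), abs(ab[1]))).sum()
                  pars[k] = np.abs(minimize(nll, pars[k], method="Nelder-Mead").x)
          return p, pars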
  8. By: Krzysztof Kontek (Artal Investments, Warsaw)
    Abstract: This paper deals with estimating peaked densities over the interval [0,1] using the Uneven Two-Sided Power Distribution (UTP). This distribution is the most complex of all the bounded power distributions introduced by Kotz and van Dorp (2004). The UTP maximum likelihood estimator, a result not derived by Kotz and van Dorp, is presented. The UTP is used to estimate the daily return densities of the DJI and stocks comprising this index. As the returns are found to have high kurtosis values, the UTP provides much more accurate estimations than a smooth distribution. The paper presents the program written in Mathematica which calculates maximum likelihood estimators for all members of the bounded power distribution family. The paper demonstrates that the UTP distribution may be extremely useful in estimating peaked densities over the interval [0,1] and in studying financial data.
    Keywords: Density Distribution, Maximum Likelihood Estimation, Stock Returns
    JEL: C01 C02 C13 C16 C46 C87 G10
    Date: 2010–05–08
    URL: http://d.repec.org/n?u=RePEc:wse:wpaper:43&r=ecm
  9. By: Charles S. Bos (VU University Amsterdam); Pawel Janus (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam)
    Abstract: This paper considers spot variance path estimation from datasets of intraday high frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to extend an existing high frequency jump test statistic, to detect arrival times of jumps and to obtain distributional characteristics of detected jumps. The effectiveness of our approach is explored through Monte Carlo simulations. It is shown that sparse sampling for mitigating the impact of microstructure noise has an adverse effect on both spot variance estimation and jump detection. In our approach we can analyze high frequency price observations that are contaminated with microstructure noise without the need for sparse sampling, say at fifteen minute intervals. An empirical illustration is presented for the intraday EUR/USD exchange rates. Our main finding is that fewer jumps are detected when sampling intervals increase.
    Keywords: high frequency; intraday periodicity; jump testing; leverage effect; microstructure noise; pre-averaged bipower variation; spot variance
    JEL: C12 C13 C22 G10 G14
    Date: 2009–12–04
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20090110&r=ecm
  10. By: Yin Liao; Heather Anderson; Farshid Vahid
    Abstract: Realized volatility of stock returns is often decomposed into two distinct components that are attributed to continuous price variation and jumps. This paper proposes a tobit multivariate factor model for the jumps coupled with a standard multivariate factor model for the continuous sample path to jointly forecast volatility in three Chinese Mainland stocks. Out-of-sample forecast analysis shows that separate multivariate factor models for the two volatility processes outperform a single multivariate factor model of realized volatility, and that a single multivariate factor model of realized volatility outperforms univariate models.
    JEL: C13 C32 C52 C53 G32
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:acb:cbeeco:2010-520&r=ecm
  11. By: Michael P. Clements; David F. Hendry
    Abstract: This chapter describes the issues confronting any realistic context for economic forecasting, which is inevitably based on unknowingly mis-specified models, usually estimated from mis-measured data, facing intermittent and often unanticipated location shifts. We focus on mitigating the systematic forecast failures that result in such settings, and describe the background to our approach, the difficulties of evaluating forecasts, and the devices that are more robust when change occurs.
    Keywords: Economic forecasting, Location shifts, Mis-specified models, Robust forecasts
    JEL: C51 C22
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:484&r=ecm
  12. By: Edward Hoyle; Lane P. Hughston; Andrea Macrina
    Abstract: We develop a non-life reserving model using a stable-1/2 random bridge to simulate the accumulation of paid claims, allowing for an arbitrary choice of a priori distribution for the ultimate loss. Taking a Bayesian approach to the reserving problem, we derive the process of the conditional distribution of the ultimate loss. The 'best-estimate ultimate loss process' is given by the conditional expectation of the ultimate loss. We derive explicit expressions for the best-estimate ultimate loss process, and for expected recoveries arising from aggregate excess-of-loss reinsurance treaties. Use of a deterministic time change allows for the matching of any initial (increasing) development pattern for the paid claims. We show that these methods are well-suited to the modelling of claims where there is a non-trivial probability of catastrophic loss. The generalized inverse-Gaussian (GIG) distribution is shown to be a natural choice for the a priori ultimate loss distribution. For particular GIG parameter choices, the best-estimate ultimate loss process can be written as a rational function of the paid-claims process. We extend the model to include a second paid-claims process, and allow the two processes to be dependent. The results obtained can be applied to the modelling of multiple lines of business or multiple origin years. The multidimensional model has the attractive property that the dimensionality of calculations remains low, regardless of the number of paid-claims processes. An algorithm is provided for the simulation of the paid-claims processes.
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1005.0496&r=ecm
  13. By: Marta O Soares (Centre for Health Economics, University of York, UK); L Canto e Castro (Department of Statistics and Operations Research, Faculty of Sciences, University of Lisbon, Portugal.)
    Abstract: The choice of model design for decision analytic models in cost-effectiveness analysis has been the subject of discussion. The current work addresses this issue by noting that, when time is to be explicitly modelled, we need to represent phenomena occurring in continuous time. Multistate models evaluated in continuous time might be used, but closed-form solutions of expected time in each state may not exist or may be difficult to obtain. Two approximations can then be used for cost-effectiveness estimation: (1) simulation models, where continuous time estimates are obtained through Monte Carlo simulation, and (2) discretized models. This work draws recommendations on their use by showing that, when these alternative models can be applied, it is preferable to implement a discretized cohort model rather than a simulation model. Whilst the bias from the first can be minimized by reducing the cycle length, the second is inherently stochastic. Even though specialized literature advocates this framework, the current practice in economic evaluation is to define clinically meaningful cycle lengths for discretized models, disregarding potential biases.
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:chy:respap:56cherp&r=ecm
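    The contrast drawn in the abstract can be made concrete with a toy three-state (well, sick, dead) model: a discretized cohort trace whose bias shrinks with the cycle length, versus a continuous-time microsimulation that is unbiased in the limit but stochastic. All rates, the horizon and the outcome (undiscounted life years) below are purely illustrative.
      import numpy as np
      from scipy.linalg import expm

      # illustrative continuous-time generator for states well, sick, dead (rates per year)
      Q = np.array([[-0.3,  0.2, 0.1],
                    [ 0.0, -0.5, 0.5],
                    [ 0.0,  0.0, 0.0]])
      horizon = 20.0

      def cohort_life_years(dt):
          # discretized cohort (Markov trace): expected years alive, left-endpoint rule,
          # so the discretization bias shrinks as the cycle length dt is reduced
          P = expm(Q * dt)
          state, ly = np.array([1.0, 0.0, 0.0]), 0.0
          for _ in range(int(horizon / dt)):
              ly += state[:2].sum() * dt
              state = state @ P
          return ly

      def simulated_life_years(n=10000, rng=np.random.default_rng(1)):
          # continuous-time microsimulation of the same process (inherently stochastic)
          total = 0.0
          for _ in range(n):
              s, t = 0, 0.0
              while s != 2 and t < horizon:
                  rate = -Q[s, s]
                  dwell = rng.exponential(1.0 / rate)
                  total += min(dwell, horizon - t)
                  t += dwell
                  if t < horizon:
                      # in this toy generator the only destinations are "dead" or "sick"
                      s = 2 if rng.random() < Q[s, 2] / rate else 1
          return total / n

      print(cohort_life_years(1.0), cohort_life_years(0.1), simulated_life_years())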
  14. By: Grassi, Stefano; Proietti, Tommaso
    Abstract: We apply a recently proposed Bayesian model selection technique, known as stochastic model specification search, for characterising the nature of the trend in macroeconomic time series. We illustrate that the methodology can be quite successfully applied to discriminate between stochastic and deterministic trends. In particular, we formulate autoregressive models with stochastic trend components and decide whether specific features of the series, i.e. the underlying level and/or the rate of drift, are fixed or evolving.
    Keywords: Bayesian model selection; stationarity; unit roots; stochastic trends; variable selection.
    JEL: E32 C52 C22
    Date: 2010–05–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:22569&r=ecm
  15. By: V. Abramov; F. Klebaner; R. Liptser
    Abstract: The CEV model is given by the stochastic differential equation $X_t=X_0+\int_0^t\mu X_s\,ds+\int_0^t\sigma (X_s^+)^p\,dW_s$, $\frac{1}{2}\le p<1$. It features a non-Lipschitz diffusion coefficient and gets absorbed at zero with a positive probability. We show the weak convergence of Euler-Maruyama approximations $X_t^n$ to the process $X_t$, $0\le t\le T$, in the Skorokhod metric. We give a new approximation by continuous processes which allows us to relax some technical conditions in the proof of weak convergence in \cite{HZa}, done there in terms of a discrete-time martingale problem. We calculate ruin probabilities as an example of such an approximation. We establish that the ruin probability evaluated by simulations is not guaranteed to converge to the theoretical one, because the point zero is a discontinuity point of the limiting distribution. To establish such convergence we use the Levy metric, and also confirm the convergence numerically. Although the result is given for the specific model, our method works in a more general case of non-Lipschitz diffusion with absorption.
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1005.0728&r=ecm
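    A minimal Euler-Maruyama sketch for the CEV dynamics in the abstract, with paths frozen at zero once absorbed, can be used to produce the simulated ruin probability discussed there. All parameter values are illustrative; as the abstract itself notes, this simulated probability need not converge to the theoretical one, since zero is a discontinuity point of the limiting distribution.
      import numpy as np

      def euler_cev(x0, mu, sigma, p, T, n_steps, n_paths, seed=0):
          """Euler-Maruyama paths for dX = mu*X dt + sigma*(X^+)^p dW, absorbed at zero."""
          rng = np.random.default_rng(seed)
          dt = T / n_steps
          x = np.full(n_paths, float(x0))
          alive = np.ones(n_paths, dtype=bool)
          for _ in range(n_steps):
              dW = rng.normal(0.0, np.sqrt(dt), n_paths)
              step = mu * x * dt + sigma * np.maximum(x, 0.0) ** p * dW
              x = np.where(alive, x + step, 0.0)
              alive &= x > 0.0                 # paths that reach zero stay absorbed
              x = np.where(alive, x, 0.0)
          return x, alive

      x_T, alive = euler_cev(x0=1.0, mu=0.05, sigma=0.4, p=0.5, T=5.0, n_steps=2000, n_paths=50000)
      print("simulated ruin (absorption) probability:", 1.0 - alive.mean())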
  16. By: Cathy Ning; Dinghai Xu; Tony Wirjanto (Department of Economics, University of Waterloo)
    Abstract: Volatility clustering is a well-known stylized feature of financial asset returns. In this paper, we investigate the asymmetric pattern of volatility clustering on both the stock and foreign exchange rate markets. To this end, we employ copula-based semi-parametric univariate time-series models that accommodate the clusters of both large and small volatilities in the analysis. Using daily realized volatilities of individual company stocks, stock indices and foreign exchange rates constructed from high frequency data, we find that volatility clustering is strongly asymmetric in the sense that clusters of large volatilities tend to be much stronger than those of small volatilities. In addition, the asymmetric pattern of volatility clusters continues to be visible even when the clusters are allowed to change over time, and the volatility clusters themselves remain persistent even after forty days.
    JEL: C51 G32
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:wat:wpaper:1001&r=ecm
  17. By: emmanuel, mamatzakis; george, christodoulakis
    Abstract: We examine the attribution of premium growth rates for the five main insurance sectors of the United Kingdom for the period 1969-2005: Property, Motor, Pecuniary, Health & Accident, and Liability. In each sector, the growth rates of aggregate insurance premiums are viewed as portfolio returns, which we attribute to a number of factors such as realized and expected losses and expenses, their uncertainty and market power, using the Sharpe (1988, 1992) Style Analysis. Our estimation method differs from the standard least-squares practice, which does not provide confidence intervals for style betas: we adopt a Bayesian approach, resulting in a robust estimate of the entire empirical distribution of each beta coefficient for the full sample. We also perform a rolling analysis of robust estimation for a window of seven overlapping samples. Our empirical findings show that there are notable differences across industries in the weights attributed to the underlying factors. Rolling regressions help us identify the variability of these weights over time, but also across industries.
    Keywords: Insurance Premiums; Monte Carlo Integration; Non-Negativity Constraints; Return Attribution; Sharpe Style Analysis
    JEL: C3 G22 C01
    Date: 2010–03–23
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:22516&r=ecm
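    The benchmark that the abstract's Bayesian treatment improves upon is the classical Sharpe style analysis, i.e. a least-squares regression of the premium growth rate on the factors with non-negativity and adding-up constraints on the weights. A minimal sketch of that classical point estimator (not of the paper's Bayesian procedure) follows; names are illustrative.
      import numpy as np
      from scipy.optimize import minimize

      def style_weights(r, F):
          """Classical Sharpe style analysis: minimise ||r - F w||^2 s.t. w >= 0 and sum(w) = 1."""
          k = F.shape[1]
          obj = lambda w: np.sum((r - F @ w) ** 2)
          cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
          res = minimize(obj, np.full(k, 1.0 / k), bounds=[(0.0, 1.0)] * k,
                         constraints=cons, method="SLSQP")
          return res.x                          # estimated style weights ("betas")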
  18. By: Antonello D’Agostino (Central Bank of Ireland); Kieran McQuinn (Central Bank of Ireland); Karl Whelan (University College Dublin)
    Abstract: In any dataset with individual forecasts of economic variables, some forecasters will perform better than others. However, it is possible that these ex post differences reflect sampling variation and thus overstate the ex ante differences between forecasters. In this paper, we present a simple test of the null hypothesis that all forecasters in the US Survey of Professional Forecasters have equal ability. We construct a test statistic that reflects both the relative and absolute performance of the forecaster and use bootstrap techniques to compare the empirical results with the equivalents obtained under the null hypothesis of equal forecaster ability. Results suggest limited evidence for the idea that the best forecasters are actually innately better than others, though there is evidence that a relatively small group of forecasters perform very poorly.
    Keywords: Forecasting, Bootstrap
    Date: 2010–04–15
    URL: http://d.repec.org/n?u=RePEc:ucn:wpaper:201012&r=ecm
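    One simple way to mimic the null hypothesis of equal ability is to treat squared forecast errors as exchangeable across forecasters within each period and resample accordingly, asking whether the best observed forecaster could be that good by luck alone. The sketch below does this for a balanced panel; it is only a caricature of the paper's procedure, whose statistic combines relative and absolute performance and copes with the unbalanced SPF panel.
      import numpy as np

      def equal_ability_bootstrap(E, n_boot=2000, rng=np.random.default_rng(0)):
          """E: (n_forecasters, n_periods) array of squared forecast errors (balanced panel)."""
          rel_mse = E.mean(axis=1) / E.mean()            # observed relative performance
          obs_best = rel_mse.min()
          n_f, n_t = E.shape
          count = 0
          for _ in range(n_boot):
              idx = rng.integers(0, n_f, size=(n_f, n_t))    # reassign errors within each period
              Eb = E[idx, np.arange(n_t)]
              rb = Eb.mean(axis=1) / Eb.mean()
              count += rb.min() <= obs_best
          return count / n_boot                          # p-value: is the best forecaster just lucky?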
  19. By: Teresa Bago d'Uva (Erasmus University Rotterdam, and Netspar); Maarten Lindeboom (VU University Amsterdam, and Netspar); Owen O'Donnell (University of Macedonia, University of Lausanne, and Netspar); Eddy van Doorslaer (Erasmus University Rotterdam)
    Abstract: Anchoring vignettes are increasingly used to identify and correct heterogeneity in the reporting of health, work disability, life satisfaction, political efficacy, etc. with the aim of improving interpersonal comparability of subjective indicators of these constructs. The method relies on two assumptions: vignette equivalence – the vignette description is perceived by all to correspond to the same state; and response consistency – individuals use the same response scales to rate the vignettes and their own situation. We propose tests of these assumptions. For vignette equivalence, we test a necessary condition of no systematic variation with observed characteristics in the perceived difference in states corresponding to any two vignettes. To test response consistency we rely on the assumption that objective indicators fully capture the covariation between the construct of interest and observed individual characteristics, and so offer an alternative way to identify response scales, which can then be compared with those identified from the vignettes. We also introduce a weaker test that is valid under a less stringent assumption. We apply these tests to cognitive functioning and mobility related health problems using data from the English Longitudinal Survey of Ageing. Response consistency is rejected for both health domains according to the first test, but the weaker test does not reject for cognitive functioning. The necessary condition for vignette equivalence is rejected for both health domains. These results cast some doubt on the validity of the vignettes approach, at least as applied to these health domains.
    Keywords: Reporting heterogeneity; Survey methods; Vignettes; Health; Cognition
    JEL: C35 C42 I12
    Date: 2009–11–04
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20090091&r=ecm
  20. By: Hafedh Bouakez; Foued Chihi; Michel Normandin
    Abstract: Measuring the effects of discretionary fiscal policy is both difficult and controversial, as some explicit or implicit identifying assumptions need to be made to isolate exogenous and unanticipated changes in taxes and government spending. Studies based on structural vector autoregressions typically achieve identification by restricting the contemporaneous interaction of fiscal and non-fiscal variables in a rather arbitrary way. In this paper, we relax those restrictions and identify fiscal policy shocks by exploiting the conditional heteroscedasticity of the structural disturbances. We use this methodology to evaluate the macroeconomic effects of fiscal policy shocks in the U.S. before and after 1979. Our results show substantive differences in the economy’s response to government spending and tax shocks across the two periods. Importantly, we find that increases in public spending are, in general, more effective than tax cuts in stimulating economic activity. A key contribution of this study is to provide a formal test of the identifying restrictions commonly used in the literature.
    Keywords: Fiscal policy, Government spending, Taxes, Primary deficit, Structural vector auto-regression, Identification
    JEL: C32 E62 H20 H50 H60
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:lvl:lacicr:1016&r=ecm
  21. By: Charles Bellemare; Luc Bissonnette; Sabine Kröger
    Abstract: We show how bounds around preference parameters can be estimated under various levels of assumptions concerning the beliefs of senders in the investment game. We contrast these bounds with point estimates of the preference parameters obtained using non-incentivized subjective belief data. Our point estimates suggest that expected responses and social preferences both play a significant role in determining investment in the game. Moreover, these point estimates fall within our most reasonable bounds. This suggests that credible inferences can be obtained using non-incentivized beliefs.
    Keywords: Partial identification, preferences, beliefs, decision making under uncertainty
    JEL: C81
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:lvl:lacicr:1017&r=ecm
  22. By: Gao-Feng Gu; Wei-Xing Zhou
    Abstract: Detrending moving average (DMA) is a widely used method to quantify the correlation of non-stationary signals. We generalize DMA to multifractal detrending moving average (MFDMA), and then extend the one-dimensional MFDMA to a two-dimensional version. In the paper, we elaborate the one-dimensional and two-dimensional MFDMA algorithms theoretically and apply the methods to synthetic multifractal measures. We find that the numerical estimations of the multifractal scaling exponent $\tau(q)$ and the multifractal spectrum $f(\alpha)$ are in good agreement with the theoretical values. We also compare the performance of MFDMA with MFDFA, and report that MFDMA is superior to MFDFA when they are applied to analyze the properties of one-dimensional and two-dimensional multifractal measures.
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1005.0877&r=ecm
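    A stripped-down one-dimensional MF-DMA, using a backward moving average and omitting the position parameter of the full algorithm, can be sketched as follows; the choice of scales and q values is left to the user, and for one-dimensional measures $\tau(q) = q h(q) - 1$.
      import numpy as np

      def mfdma_1d(x, scales, qs):
          """One-dimensional MF-DMA (backward moving average), returning h(q) and tau(q)."""
          y = np.cumsum(x - np.mean(x))                       # profile of the series
          hq = []
          for q in qs:
              logF = []
              for n in scales:
                  ma = np.convolve(y, np.ones(n) / n, mode="valid")   # backward moving average
                  eps = y[n - 1:] - ma                                # detrended residual
                  n_seg = len(eps) // n
                  F2 = (eps[:n_seg * n].reshape(n_seg, n) ** 2).mean(axis=1)   # per-segment fluctuation
                  Fq = (F2 ** (q / 2)).mean() ** (1 / q) if q != 0 else np.exp(0.5 * np.log(F2).mean())
                  logF.append(np.log(Fq))
              hq.append(np.polyfit(np.log(scales), logF, 1)[0])       # scaling exponent h(q)
          hq = np.array(hq)
          return hq, np.asarray(qs) * hq - 1.0                        # h(q) and tau(q)

      # e.g. scales = np.unique(np.logspace(1, 3, 20).astype(int)); qs = np.arange(-4, 5)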

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.