nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒05‒02
29 papers chosen by
Sune Karlsson
Orebro University

  1. Model Selection and Adaptive Markov chain Monte Carlo for Bayesian Cointegrated VAR model By Gareth W. Peters; Balakrishnan Kannan; Ben Lasscock; Chris Mellen
  2. Panel Data Inference under Spatial Dependence By Badi H. Baltagi; Alain Pirotte
  3. Evaluating a class of nonlinear time series models By Heinen, Florian
  4. On rank estimation in semidefinite matrices By Stephen G. Donald; Natércia Fortuna; Vladas Pipiras
  5. Forecast Densities for Economic Aggregates from Disaggregate Ensembles By Francesco Ravazzolo; Shaun P. Vahey
  6. Semiparametric inference in correlated long memory signal plus noise models. By Josu Arteche
  7. Density Based Regression for Inhomogeneous Data: Application to Lottery Experiments By Kontek, Krzysztof
  8. Multivariate Contemporaneous-Threshold Autoregressive Models By Michael J. Dueker; Zacharias Psaradakis; Martin Sola; Fabio Spagnolo
  9. Macroeconomic forecasting and structural change By Antonello D’Agostino; Luca Gambetti; Domenico Giannone
  10. The power of some standard tests of stationarity against changes in the unconditional variance By Ibrahim Ahamada; Mohamed Boutahar
  11. Higher Order Improvements for Approximate Estimators By Dennis Kristensen; Bernard Salanié
  12. Model selection, estimation and forecasting in VAR models with short-run and long-run restrictions By George Athanasopoulos; Osmani Teixeira de Carvalho Guillén; João Victor Issler; Farshid Vahid
  13. Monte Carlo-Based Tail Exponent Estimator By Jozef Barunik; Lukas Vacha
  14. Vast Volatility Matrix Estimation using High Frequency Data for Portfolio Selection By Jianqing Fan; Yingying Li; Ke Yu
  15. Loss Distributions By Burnecki, Krzysztof; Misiorek, Adam; Weron, Rafal
  16. When do improved covariance matrix estimators enhance portfolio optimization? An empirical comparative study of nine estimators By Ester Pantaleo; Michele Tumminello; Fabrizio Lillo; Rosario N. Mantegna
  17. Estimation of Peaked Densities Over the Interval [0,1] Using Two-Sided Power Distribution: Application to Lottery Experiments By Kontek, Krzysztof
  18. Classical vs wavelet-based filters: Comparative study and application to business cycle By Ibrahim Ahamada; Philippe Jolivaldt
  19. The K-Step Spatial Sign Covariance Matrix By Croux, C.; Dehon, C.; Yadine, A.
  20. General Equilibrium Restrictions for Dynamic Factor Models By David de Antonio Liedo
  21. On the Optimality of Multivariate S-Estimators By Croux, C.; Dehon, C.; Yadine, A.
  22. State-Dependent Threshold STAR Models By Michael J. Dueker; Zacharias Psaradakis; Martin Sola; Fabio Spagnolo
  23. A Multiple Break Panel Approach to Estimating United States Phillips Curves By Bill Russell; Anindya Banerjee; Issam Malki; Natalia Ponomareva
  24. Evaluating Nationwide Health Interventions When Standard Before-After Doesn’t Work: Malawi's ITN Distribution Program By Deuchert, Eva; Wunsch, Conny
  25. Evaluating Macroeconomic Forecasts: A Review of Some Recent Developments By Philip Hans Franses; Michael McAleer; Rianne Legerstee
  26. Joint Estimation of Supply and Use Tables By Temurshoev, Umed
  27. We Know What You Choose! External Validity of Discrete Choice Models By R. Karina Gallardo; Jaebong Chang
  28. Skew-symmetric Distributions and Fisher Information - A Tale of Two Densities By Marc Hallin; Christophe Ley
  29. Estimation of Risk-Neutral Density Surfaces By A. M. Monteiro; R. H. Tütüncü; L. N. Vicente

  1. By: Gareth W. Peters; Balakrishnan Kannan; Ben Lasscock; Chris Mellen
    Abstract: This paper develops a matrix-variate adaptive Markov chain Monte Carlo (MCMC) methodology for Bayesian Cointegrated Vector Auto Regressions (CVAR). We replace the popular approach to sampling Bayesian CVAR models, involving griddy Gibbs, with an automated efficient alternative, based on the Adaptive Metropolis algorithm of Roberts and Rosenthal (2009). Developing the adaptive MCMC framework for Bayesian CVAR models allows for efficient estimation of posterior parameters in significantly higher dimensional CVAR series than previously possible with existing griddy Gibbs samplers. For an n-dimensional CVAR series, the matrix-variate posterior is in dimension $3n^2 + n$, with significant correlation present between the blocks of matrix random variables. We also treat the rank of the CVAR model as a random variable and perform joint inference on the rank and model parameters. This is achieved with a Bayesian posterior distribution defined over both the rank and the CVAR model parameters, and inference is made via Bayes Factor analysis of rank. Practically, the adaptive sampler also aids in the development of automated Bayesian cointegration models for algorithmic trading systems considering instruments made up of several assets, such as currency baskets. Previously, the literature on financial applications of CVAR trading models has typically considered only pairs trading (n=2) due to the computational cost of the griddy Gibbs sampler. Under our adaptive framework we are able to extend to $n >> 2$ and demonstrate an example with n = 10, resulting in a posterior distribution with parameters up to dimension 310. By also considering the rank as a random quantity we can ensure our resulting trading models are able to adjust to potentially time-varying market conditions in a coherent statistical framework.
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1004.3830&r=ecm
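    Code sketch: a minimal Python illustration of the Roberts and Rosenthal (2009) adaptive Metropolis update that the abstract refers to; this is the generic algorithm applied to a placeholder log-posterior, not the authors' matrix-variate CVAR sampler, and all names are illustrative.
      import numpy as np

      def adaptive_metropolis(log_post, x0, n_iter=20000, beta=0.05, seed=0):
          """With probability 1 - beta propose from a Gaussian scaled by the
          empirical covariance of past draws; otherwise (and during the early
          iterations) propose from a small fixed Gaussian."""
          rng = np.random.default_rng(seed)
          x = np.asarray(x0, dtype=float)
          d = x.size
          lp = log_post(x)
          mean, M2 = x.copy(), np.zeros((d, d))      # running moments (Welford)
          draws = np.empty((n_iter, d))
          for t in range(n_iter):
              if t < 2 * d or rng.random() < beta:   # fixed, non-adaptive proposal
                  prop = x + rng.normal(scale=0.1 / np.sqrt(d), size=d)
              else:                                  # adaptive proposal
                  cov = (2.38 ** 2 / d) * (M2 / t) + 1e-10 * np.eye(d)
                  prop = rng.multivariate_normal(x, cov)
              lp_prop = log_post(prop)
              if np.log(rng.random()) < lp_prop - lp:
                  x, lp = prop, lp_prop
              draws[t] = x
              delta = x - mean                       # update running mean and scatter
              mean += delta / (t + 2)
              M2 += np.outer(delta, x - mean)
          return draws

      # toy usage: a correlated bivariate Gaussian posterior
      S_inv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
      chain = adaptive_metropolis(lambda th: -0.5 * th @ S_inv @ th, np.zeros(2), 5000)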
  2. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Alain Pirotte (ERMES (CNRS) and TEPP (CNRS), Université Panthéon-Assas Paris II, France INRETS-DEST, National Institute of Research on Transports and Safety, France)
    Abstract: This paper focuses on inference based on the usual panel data estimators of a one-way error component regression model when the true specification is a spatial error component model. Among the estimators considered are pooled OLS, random and fixed effects, maximum likelihood under normality, etc. The spatial effects capture the cross-section dependence, and the usual panel data estimators ignore this dependence. Two popular forms of spatial autocorrelation are considered, namely, spatial auto-regressive random effects (SAR-RE) and spatial moving average random effects (SMA-RE). We show that when the spatial coefficients are large, tests of hypotheses based on the usual panel data estimators that ignore spatial dependence can lead to misleading inference.
    Keywords: Panel data; Hausman test; Random effect; Spatial autocorrelation; Maximum Likelihood.
    JEL: C33
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:123&r=ecm
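    Code sketch: an illustrative Python generator of one-way error-component disturbances with a spatially autoregressive remainder term (the SAR-RE structure mentioned above); the line-contiguity weight matrix and the parameter values are placeholder assumptions, not the paper's design.
      import numpy as np

      def sar_re_errors(N, T, rho, sigma_mu=1.0, sigma_eps=1.0, seed=0):
          """u_it = mu_i + nu_it with nu_t = (I - rho*W)^(-1) eps_t, where W is a
          row-standardised contiguity matrix (here: neighbours on a line)."""
          rng = np.random.default_rng(seed)
          W = np.zeros((N, N))
          for i in range(N):
              for j in (i - 1, i + 1):
                  if 0 <= j < N:
                      W[i, j] = 1.0
          W /= W.sum(axis=1, keepdims=True)
          B_inv = np.linalg.inv(np.eye(N) - rho * W)
          mu = rng.normal(0.0, sigma_mu, N)            # individual random effects
          eps = rng.normal(0.0, sigma_eps, (T, N))     # idiosyncratic shocks
          return mu[None, :] + eps @ B_inv.T           # nu_t = B_inv @ eps_t, stacked by period

      u = sar_re_errors(N=25, T=10, rho=0.8)           # a T x N panel of disturbances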
  3. By: Heinen, Florian
    Abstract: We consider a recently proposed class of nonlinear time series models and focus mainly on misspecification testing for models of this type. Following the modeling cycle for nonlinear time series models of specification, estimation and evaluation, we first treat how to choose an adequate transition function and then contribute to the evaluation stage by proposing tests of no serial correlation, no remaining nonlinearity and parameter constancy. We also consider evaluation by generalized impulse response functions. The finite sample properties of the proposed tests are studied via simulation. We illustrate the use of these methods by an application to real exchange rate data.
    Keywords: Nonlinearities, Smooth transition, Specification testing, Real exchange rates
    JEL: C12 C22 C52
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-445&r=ecm
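    Code sketch: the abstract does not spell out the model class, so as a generic member of the smooth transition family that the proposed specification and evaluation tests target, here is a simulator of a two-regime logistic STAR(1) process; all parameter values are illustrative.
      import numpy as np

      def simulate_lstar1(T, phi1, phi2, gamma, c, sigma=1.0, seed=0):
          """y_t = (1 - G) * phi1 * y_{t-1} + G * phi2 * y_{t-1} + eps_t,
          with logistic transition G = 1 / (1 + exp(-gamma * (y_{t-1} - c)))."""
          rng = np.random.default_rng(seed)
          y = np.zeros(T)
          for t in range(1, T):
              G = 1.0 / (1.0 + np.exp(-gamma * (y[t - 1] - c)))
              y[t] = (1 - G) * phi1 * y[t - 1] + G * phi2 * y[t - 1] + rng.normal(0, sigma)
          return y

      y = simulate_lstar1(500, phi1=0.9, phi2=0.3, gamma=5.0, c=0.0)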
  4. By: Stephen G. Donald (Department of Economics, University of Texas at Austin); Natércia Fortuna (CEF.UP, Universidade do Porto); Vladas Pipiras (University of North Carolina at Chapel Hill)
    Abstract: This work concerns the problem of rank estimation in semidefinite matrices when either an indefinite or a semidefinite matrix estimator satisfying a typical asymptotic normality condition is available. Several rank tests are examined, based on either available rank tests or basic new results. A number of related issues are discussed, such as the choice of matrix estimators and rank tests based on finer assumptions than those of asymptotic normality of matrix estimators. Several examples where rank estimation in semidefinite matrices is of interest are studied and serve as a guide throughout the work.
    Keywords: rank, symmetric matrix, indefinite and semidefinite estimators, eigenvalues, matrix decompositions, estimation, asymptotic normality.
    JEL: C12 C13
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:por:cetedp:1002&r=ecm
  5. By: Francesco Ravazzolo; Shaun P. Vahey
    Abstract: We propose a methodology for producing forecast densities for economic aggregates based on disaggregate evidence. Our ensemble predictive methodology utilizes a linear mixture of experts framework to combine the forecast densities from potentially many component models. Each component represents the univariate dynamic process followed by a single disaggregate variable. The ensemble produced from these components approximates the many unknown relationships between the disaggregates and the aggregate by using time-varying weights on the component forecast densities. In our application, we use the disaggregate ensemble approach to forecast US Personal Consumption Expenditure inflation from 1997Q2 to 2008Q1. Our ensemble combining the evidence from 11 disaggregate series outperforms an aggregate autoregressive benchmark, and an aggregate time-varying parameter specification in density forecasting.
    JEL: C11 C32 C53 E37 E52
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:acb:camaaa:2010-10&r=ecm
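    Code sketch: a generic linear opinion pool with time-varying weights driven by discounted past log predictive scores, to illustrate how component forecast densities can be combined with weights that evolve over time; the Gaussian components, the score recursion and the discount factor are assumptions for illustration, not the paper's ensemble construction.
      import numpy as np
      from scipy.stats import norm

      def score_based_weights(pred_means, pred_sds, outcomes, gamma=0.95):
          """At each date, weight component model j in proportion to the exponential
          of its discounted cumulative log predictive score."""
          T, J = pred_means.shape
          log_scores = np.zeros(J)
          weights = np.empty((T, J))
          for t in range(T):
              w = np.exp(log_scores - log_scores.max())
              weights[t] = w / w.sum()
              log_lik = norm.logpdf(outcomes[t], loc=pred_means[t], scale=pred_sds[t])
              log_scores = gamma * log_scores + log_lik    # discount, then update
          return weights

      # pooled predictive density at date t, evaluated at a point y:
      #   np.sum(weights[t] * norm.pdf(y, pred_means[t], pred_sds[t]))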
  6. By: Josu Arteche (Dpt. Economía Aplicada III (Econometría y Estadística))
    Abstract: This paper proposes an extension of the log periodogram regression in perturbed long memory series that accounts for the added noise, also allowing for correlation between signal and noise, which represents a common situation in many economic and financial series. Consistency (for d < 1) and asymptotic normality (for d < 3/4) are shown with the same bandwidth restriction as required for the original log periodogram regression in a fully observable series, with the corresponding gain in asymptotic efficiency and faster convergence over competitors. Local Wald, Lagrange Multiplier and Hausman type tests of the hypothesis of no correlation between the latent signal and noise are also proposed.
    Keywords: long memory; signal plus noise; log periodogram regression; semiparametric inference
    JEL: C22 C13
    Date: 2010–04–27
    URL: http://d.repec.org/n?u=RePEc:ehu:biltok:201004&r=ecm
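    Code sketch: the standard Geweke/Porter-Hudak log periodogram regression for the memory parameter d of a fully observable series, i.e. the baseline that the paper extends to correlated signal-plus-noise settings; the bandwidth rule and names are illustrative.
      import numpy as np

      def gph_estimate(x, m=None):
          """Regress log periodogram ordinates on -log(4 sin^2(lambda_j / 2)); the
          slope estimates d, with asymptotic standard error pi / sqrt(24 m)."""
          x = np.asarray(x, dtype=float)
          n = x.size
          m = int(n ** 0.5) if m is None else m        # common bandwidth choice
          freqs = 2 * np.pi * np.arange(1, m + 1) / n
          dft = np.fft.fft(x - x.mean())[1:m + 1]
          I = np.abs(dft) ** 2 / (2 * np.pi * n)       # periodogram ordinates
          X = np.column_stack([np.ones(m), -np.log(4 * np.sin(freqs / 2) ** 2)])
          beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
          return beta[1], np.pi / np.sqrt(24 * m)      # (d_hat, standard error)

      rng = np.random.default_rng(0)
      print(gph_estimate(rng.normal(size=2048)))       # d close to 0 for white noise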
  7. By: Kontek, Krzysztof
    Abstract: This paper presents a regression procedure for inhomogeneous data characterized by varying variance, skewness and kurtosis or by an unequal amount of data over the estimation domain. The concept is based first on the estimation of the densities of an observed variable for given values of explanatory variable(s). These density functions are then used to estimate the relation between all the variables. The mean, quantile (including median) and mode regression estimators are proposed, with the last one appearing to be the maximum likelihood estimator in the density-based approach. The paper demonstrates the advantages of the proposed methodology, which eliminates most of the estimation problems arising from data inhomogeneity. These include the computational inconveniences of the standard quantile and mode regression techniques. The proposed methodology, when applied to lottery experiments, makes it possible to confirm and to extend the previously presented conclusion (Kontek, 2010) that lottery valuations are only nonlinear with respect to probability when medians and means are considered. Such nonlinearity disappears once modes are considered. This means that the most likely behavior of a group is fully rational. The regression procedure presented in this paper is, however, very general and may be applied in many other cases of data inhomogeneity and not just lottery experiments.
    Keywords: Density Distribution; Least Squares; Quantile; Median; Mode; Maximum Likelihood Estimators; Lottery experiments; Relative Utility Function; Prospect Theory.
    JEL: C16 C51 D81 C46 C91 C13 C81 C21 C01 D87
    Date: 2010–04–21
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:22268&r=ecm
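    Code sketch: a crude stand-in for the idea behind density-based mode regression (estimate the density of the response at given values of the explanatory variable, then take its mode); the quantile binning and Gaussian kernel are simplifying assumptions, much coarser than the paper's procedure.
      import numpy as np
      from scipy.stats import gaussian_kde

      def binned_mode_regression(x, y, n_bins=10, n_grid=400):
          """Within each quantile bin of x, estimate the density of y by a Gaussian
          kernel estimator and record the location of its peak (the conditional mode)."""
          x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
          grid = np.linspace(y.min(), y.max(), n_grid)
          edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
          centers, modes = [], []
          for lo, hi in zip(edges[:-1], edges[1:]):
              mask = (x >= lo) & (x <= hi)
              if mask.sum() < 5:
                  continue
              dens = gaussian_kde(y[mask])(grid)
              centers.append(0.5 * (lo + hi))
              modes.append(grid[np.argmax(dens)])
          return np.array(centers), np.array(modes)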
  8. By: Michael J. Dueker; Zacharias Psaradakis; Martin Sola; Fabio Spagnolo
    Abstract: This paper proposes a contemporaneous-threshold multivariate smooth transition autoregressive (C-MSTAR) model in which the regime weights depend on the ex ante probabilities that latent regime-specific variables exceed certain threshold values. A key feature of the model is that the transition function depends on all the parameters of the model as well as on the data. Since the mixing weights are also a function of the regime-specific innovation covariance matrix, the model can account for contemporaneous regime-specific co-movements of the variables. The stability and distributional properties of the proposed model are discussed, as well as issues of estimation, testing and forecasting. The practical usefulness of the C-MSTAR model is illustrated by examining the relationship between US stock prices and interest rates.
    Keywords: Nonlinear autoregressive model; Smooth transition; Stability; Threshold.
    JEL: C32 G12
    Date: 2010–04–22
    URL: http://d.repec.org/n?u=RePEc:aub:autbar:817.10&r=ecm
  9. By: Antonello D’Agostino (Central Bank and Financial Services Authority of Ireland – Economic Analysis and Research Department, PO Box 559 – Dame Street, Dublin 2, Ireland.); Luca Gambetti (Office B3.174, Departament d’Economia i Historia Economica, Edifici B, Universitat Autonoma de Barcelona, Bellaterra 08193, Barcelona, Spain.); Domenico Giannone (ECARES Université Libre de Bruxelles, 50, Avenue Roosevelt CP 114 Brussels, Belgium.)
    Abstract: The aim of this paper is to assess whether explicitly modeling structural change increases the accuracy of macroeconomic forecasts. We produce real time out-of-sample forecasts for inflation, the unemployment rate and the interest rate using a Time-Varying Coefficients VAR with Stochastic Volatility (TV-VAR) for the US. The model generates accurate predictions for the three variables. In particular for inflation the TV-VAR outperforms, in terms of mean square forecast error, all the competing models: fixed coefficients VARs, Time-Varying ARs and the naïve random walk model. These results are also shown to hold over the most recent period in which it has been hard to forecast inflation. JEL Classification: C32, E37, E47.
    Keywords: Forecasting, Inflation, Stochastic Volatility, Time Varying Vector Autoregression.
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20101167&r=ecm
  10. By: Ibrahim Ahamada (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I); Mohamed Boutahar (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579)
    Abstract: Abrupt changes in the unconditional variance of returns have recently been revealed in many empirical studies. In this paper, we show that traditional KPSS-based tests have a low power against nonstationarities stemming from changes in the unconditional variance. More precisely, we show that even under very strong abrupt changes in the unconditional variance, the asymptotic moments of the statistics of these tests remain unchanged. To overcome this problem, we use some CUSUM-based tests adapted for small samples. These tests do not compete with KPSS-based tests and can be considered complementary. CUSUM-based tests confirm the presence of strong abrupt changes in the unconditional variance of stock returns, whereas KPSS-based tests do not. Consequently, traditional stationary models are not always appropriate to describe stock returns. Finally, we show how a model allowing abrupt changes in the unconditional variance is well suited to CAC 40 stock returns.
    Keywords: KPSS test, panel stationarity test, unconditional variance, abrupt changes, stock returns, size-power curve.
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00476024_v1&r=ecm
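    Code sketch: the asymptotic Inclán-Tiao CUSUM-of-squares statistic for a break in the unconditional variance, representative of the CUSUM-type tests referred to above; the paper works with small-sample adaptations, which are not reproduced here.
      import numpy as np

      def cusum_of_squares(returns):
          """sqrt(T/2) * max_k | C_k / C_T - k / T |, with C_k the cumulative sum of
          squared demeaned returns; 1.358 is the asymptotic 5% critical value."""
          e = np.asarray(returns, dtype=float)
          e = e - e.mean()
          T = e.size
          D = np.cumsum(e ** 2) / np.sum(e ** 2) - np.arange(1, T + 1) / T
          stat = np.sqrt(T / 2.0) * np.max(np.abs(D))
          k_hat = int(np.argmax(np.abs(D))) + 1          # most likely break date
          return stat, k_hat, 1.358

      rng = np.random.default_rng(1)
      x = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 2, 500)])
      print(cusum_of_squares(x))                         # large statistic, break near observation 500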
  11. By: Dennis Kristensen (Columbia University); Bernard Salanié (Columbia University)
    Abstract: Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. Here we propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer degree of approximation. The NR step removes some or all of the additional bias and variance of the initial approximate estimator. A Monte Carlo simulation on the mixed logit model shows that noticeable improvements can be obtained rather cheaply.
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:kud:kuieca:2010_04&r=ecm
  12. By: George Athanasopoulos; Osmani Teixeira de Carvalho Guillén; João Victor Issler; Farshid Vahid
    Abstract: We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties as well as the traditional ones. We suggest a new two-step model selection procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties and we prove its consistency. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag-length and rank using our proposed procedure, relative to an unrestricted VAR or a cointegrated VAR estimated by the commonly used procedure of selecting the lag-length only and then testing for cointegration. Two empirical applications, forecasting Brazilian inflation and the growth rates of U.S. macroeconomic aggregates respectively, show the usefulness of the model-selection strategy proposed here. The gains in different measures of forecasting accuracy are substantial, especially for short horizons.
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:bcb:wpaper:205&r=ecm
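    Code sketch: lag-length selection for an unrestricted VAR by the traditional information criteria (AIC, BIC, HQ) that the paper's two-step procedure hybridises with criteria using data-dependent penalties; the implementation and names are illustrative, not the authors' code.
      import numpy as np

      def var_lag_ic(Y, max_p=8):
          """OLS-estimate VAR(p) for p = 1..max_p on a common sample and report
          log|Sigma_u|-based information criteria."""
          Y = np.asarray(Y, dtype=float)
          T_full, n = Y.shape
          T = T_full - max_p                            # common estimation sample
          out = {}
          for p in range(1, max_p + 1):
              Z = np.column_stack([np.ones(T)] +
                                  [Y[max_p - j:T_full - j] for j in range(1, p + 1)])
              Ydep = Y[max_p:]
              B, *_ = np.linalg.lstsq(Z, Ydep, rcond=None)
              U = Ydep - Z @ B
              logdet = np.linalg.slogdet(U.T @ U / T)[1]
              k = n * (n * p + 1)                       # number of estimated coefficients
              out[p] = {"AIC": logdet + 2 * k / T,
                        "BIC": logdet + k * np.log(T) / T,
                        "HQ": logdet + 2 * k * np.log(np.log(T)) / T}
          return out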
  13. By: Jozef Barunik (Institute of Economic Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic; Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Prague); Lukas Vacha (Institute of Economic Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic; Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Prague)
    Abstract: In this paper we study the finite sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations we show that the Hill estimator overestimates the true tail exponent and can hardly be used on small samples. Utilizing our results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our method is not sensitive to the choice of k and works well also on small samples. The new estimator gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on the main world stock market indices, estimating the tail exponent over the two separate periods 2002-2005 and 2006-2009.
    Keywords: Hill estimator, α-stable distributions, tail exponent estimation
    JEL: C1 C13 C15 G0
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:fau:wpaper:wp2010_06&r=ecm
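    Code sketch: the classical Hill estimator whose small-sample bias under α-stable data the paper documents; the Pareto check below is an illustrative toy example with true tail exponent 1.5.
      import numpy as np

      def hill_estimator(x, k):
          """alpha_hat = k / sum_{i=1}^{k} log( X_(i) / X_(k+1) ), computed from the
          k largest absolute observations."""
          x = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]     # descending order
          return k / np.sum(np.log(x[:k]) - np.log(x[k]))

      rng = np.random.default_rng(0)
      sample = rng.pareto(1.5, 10000) + 1.0              # Pareto with tail exponent 1.5
      print([round(hill_estimator(sample, k), 2) for k in (50, 200, 1000)])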
  14. By: Jianqing Fan; Yingying Li; Ke Yu
    Abstract: Portfolio allocation with gross-exposure constraint is an effective method to increase the efficiency and stability of selected portfolios among a vast pool of assets, as demonstrated in Fan et al (2008). The required high-dimensional volatility matrix can be estimated by using high frequency financial data. This enables us to better adapt to the local volatilities and local correlations among a vast number of assets and to increase significantly the sample size for estimating the volatility matrix. This paper studies the volatility matrix estimation using high-dimensional high-frequency data from the perspective of portfolio selection. Specifically, we adopt the "pairwise-refresh time" and "all-refresh time" methods proposed by Barndorff-Nielsen et al (2008) for the estimation of vast covariance matrices and compare their merits in portfolio selection. We also establish the concentration inequalities of the estimates, which guarantee desirable properties of the estimated volatility matrix in vast asset allocation with gross exposure constraints. Extensive numerical studies are made via carefully designed simulations. Compared with methods based on low-frequency daily data, our methods can capture the most recent trend of the time varying volatility and correlation, and hence provide more accurate guidance for the portfolio allocation in the next time period. The advantage of using high-frequency data is significant in our simulation and empirical studies, which consist of 50 simulated assets and 30 constituent stocks of the Dow Jones Industrial Average index.
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1004.4956&r=ecm
  15. By: Burnecki, Krzysztof; Misiorek, Adam; Weron, Rafal
    Abstract: This paper is intended as a guide to statistical inference for loss distributions. There are three basic approaches to deriving the loss distribution in an insurance risk model: empirical, analytical, and moment based. The empirical method is based on a sufficiently smooth and accurate estimate of the cumulative distribution function (cdf) and can be used only when large data sets are available. The analytical approach is probably the most often used in practice and certainly the most frequently adopted in the actuarial literature. It reduces to finding a suitable analytical expression which fits the observed data well and which is easy to handle. In some applications the exact shape of the loss distribution is not required. We may then use the moment based approach, which consists of estimating only the lowest characteristics (moments) of the distribution, such as the mean and variance. Having a large collection of distributions to choose from, we need to narrow our selection to a single model and a unique parameter estimate. The type of the objective loss distribution can be easily selected by comparing the shapes of the empirical and theoretical mean excess functions. Goodness-of-fit can be verified by plotting the corresponding limited expected value functions. Finally, the hypothesis that the modeled random event is governed by a certain loss distribution can be statistically tested.
    Keywords: Loss distribution; Insurance risk model; Random variable generation; Goodness-of-fit testing; Mean excess function; Limited expected value function
    JEL: G22 C46 C15
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:22163&r=ecm
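    Code sketch: the empirical mean excess function used above to compare the shapes of candidate loss distributions; the exponential/Pareto comparison is a toy illustration (roughly flat for exponential losses, increasing for Pareto-type tails).
      import numpy as np

      def empirical_mean_excess(losses, thresholds):
          """e(u) = average of (X - u) over observations with X > u."""
          x = np.asarray(losses, dtype=float)
          return np.array([(x[x > u] - u).mean() if (x > u).any() else np.nan
                           for u in thresholds])

      rng = np.random.default_rng(2)
      u = np.linspace(0.5, 5.0, 10)
      print(empirical_mean_excess(rng.exponential(1.0, 50000), u))   # approximately constant
      print(empirical_mean_excess(rng.pareto(3.0, 50000) + 1.0, u))  # increasing in u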
  16. By: Ester Pantaleo; Michele Tumminello; Fabrizio Lillo; Rosario N. Mantegna
    Abstract: The use of improved covariance matrix estimators as an alternative to the sample estimator is considered an important approach for enhancing portfolio optimization. Here we empirically compare the performance of 9 improved covariance estimation procedures by using daily returns of 90 highly capitalized US stocks for the period 1997-2007. We find that the usefulness of covariance matrix estimators strongly depends on the ratio between estimation period T and number of stocks N, on the presence or absence of short selling, and on the performance metric considered. When short selling is allowed, several estimation methods achieve a realized risk that is significantly smaller than the one obtained with the sample covariance method. This is particularly true when T/N is close to one. Moreover, many estimators reduce the fraction of negative portfolio weights, while little improvement is achieved in the degree of diversification. By contrast, when short selling is not allowed and T>N, the considered methods are unable to outperform the sample covariance in terms of realized risk but can give much more diversified portfolios than the one obtained with the sample covariance. When T<N, the use of the sample covariance matrix and of the pseudoinverse gives portfolios with very poor performance.
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1004.4272&r=ecm
  17. By: Kontek, Krzysztof
    Abstract: This paper deals with estimating peaked densities over the interval [0,1] using two-sided power distribution (Kotz, van Dorp, 2004). Such data were encountered in experiments determining certainty equivalents of lotteries (Kontek, 2010). This paper summarizes the basic properties of the two-sided power distribution (TP) and its generalized form (GTP). The GTP maximum likelihood estimator, a result not derived by Kotz and van Dorp, is presented. The TP and GTP are used to estimate certainty equivalent densities in two data sets from lottery experiments. The obtained results show that even a two-parametric TP distribution provides more accurate estimates than the smooth three-parametric generalized beta distribution GBT (Libby, Novick, 1982) in one of the considered data sets. The three-parametric GTP distribution outperforms GBT for these data. The results are, however, the very opposite for the second data set, in which the data are greatly scattered. The paper demonstrates that the TP and GTP distributions may be extremely useful in estimating peaked densities over the interval [0,1] and in studying the relative utility function.
    Keywords: Density Distribution; Maximum Likelihood Estimation; Lottery experiments; Relative Utility Function.
    JEL: C51 C16 C46 C91 C13 C02 C21 C01 D87
    Date: 2010–04–28
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:22378&r=ecm
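    Code sketch: a profile-likelihood illustration of maximum likelihood estimation for the standard two-sided power (TP) density on [0,1]; for fixed theta the optimal power is n(theta) = -N / log H(theta), and theta is searched over the interior sample points. This is a generic sketch assuming observations strictly inside (0,1), not the GTP estimator derived in the paper.
      import numpy as np

      def tp_loglik(x, theta, n):
          """Log-likelihood of f(x) = n (x/theta)^(n-1) on [0, theta] and
          n ((1-x)/(1-theta))^(n-1) on [theta, 1]."""
          left, right = x[x <= theta], x[x > theta]
          logH = np.sum(np.log(left / theta)) + np.sum(np.log((1 - right) / (1 - theta)))
          return x.size * np.log(n) + (n - 1) * logH

      def tp_mle(x):
          x = np.sort(np.asarray(x, dtype=float))
          N = x.size
          best = (-np.inf, None, None)
          for theta in x[1:-1]:                          # candidate modes at interior sample points
              logH = (np.sum(np.log(x[x <= theta] / theta))
                      + np.sum(np.log((1 - x[x > theta]) / (1 - theta))))
              n_hat = -N / logH                          # profile MLE of the power given theta
              ll = tp_loglik(x, theta, n_hat)
              if ll > best[0]:
                  best = (ll, theta, n_hat)
          return {"loglik": best[0], "theta": best[1], "n": best[2]}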
  18. By: Ibrahim Ahamada (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I); Philippe Jolivaldt (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I)
    Abstract: In this article, we compare the performance of the Hodrick-Prescott and Baxter-King filters with a method of filtering based on the multi-resolution properties of wavelets. We show that overall the three methods remain comparable if the theoretical cyclical component is defined in the usual waveband, ranging between six and thirty-two quarters. However, the wavelet-based approach provides information about the business cycle, for example its stability over time, which the other two filters do not provide. In Monte Carlo simulation experiments and an application to American GDP growth rate data, our method shows that the estimate of the business cycle component is richer in information than that deduced from the level of GDP and includes additional information about the post-1980 period of great moderation.
    Keywords: Filters, HP, BK, wavelets, Monte Carlo simulation, break, business cycles.
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00476022_v1&r=ecm
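    Code sketch: the plain Hodrick-Prescott filter (lambda = 1600 for quarterly data), one of the two classical filters compared above with the wavelet-based approach; the dense linear solve is for clarity rather than efficiency.
      import numpy as np

      def hp_filter(y, lam=1600.0):
          """Trend minimises sum (y - tau)^2 + lam * sum (second differences of tau)^2;
          returns (trend, cycle)."""
          y = np.asarray(y, dtype=float)
          T = y.size
          D = np.zeros((T - 2, T))                     # second-difference operator
          for i in range(T - 2):
              D[i, i:i + 3] = [1.0, -2.0, 1.0]
          trend = np.linalg.solve(np.eye(T) + lam * D.T @ D, y)
          return trend, y - trend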
  19. By: Croux, C.; Dehon, C.; Yadine, A. (Tilburg University, Center for Economic Research)
    Abstract: The Sign Covariance Matrix is an orthogonal equivariant estimator of multivariate scale. It is often used as an easy-to-compute and highly robust estimator. In this paper we propose a k-step version of the Sign Covariance Matrix, which improves its efficiency while keeping the maximal breakdown point. If k tends to infinity, Tyler's M-estimator is obtained. It turns out that even for very low values of k, one gets almost the same efficiency as Tyler's M-estimator.
    JEL: C13 C14
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:201041&r=ecm
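    Code sketch: an illustrative k-step construction that starts from the spatial sign covariance matrix and applies k fixed-point steps of Tyler's M-estimator (the estimator obtained as k tends to infinity); the paper's exact definition may differ, and the coordinatewise-median location used here is an assumption.
      import numpy as np

      def k_step_sign_covariance(X, k=3):
          """Spatial sign covariance matrix of the centred data, refined by k Tyler
          fixed-point steps; the shape matrix is normalised to have trace p."""
          X = np.asarray(X, dtype=float)
          n, p = X.shape
          Z = X - np.median(X, axis=0)                 # centre at the coordinatewise median
          U = Z / np.linalg.norm(Z, axis=1, keepdims=True)
          V = U.T @ U / n                              # spatial sign covariance matrix
          V *= p / np.trace(V)
          for _ in range(k):                           # Tyler fixed-point steps
              d = np.einsum("ij,jk,ik->i", Z, np.linalg.inv(V), Z)
              V = (p / n) * (Z / d[:, None]).T @ Z
              V *= p / np.trace(V)
          return V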
  20. By: David de Antonio Liedo (Banco de España)
    Abstract: This paper proposes the use of dynamic factor models as an alternative to the VAR-based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture weak statistical restrictions that DSGE models impose on the data. Beyond the weak restrictions, which are given by the number of shocks and the number of state variables, the behavioural restrictions embedded in the utility and production functions of the model economy contribute to achieve further parsimony. Such parsimony reduces the number of parameters to be estimated, potentially helping the general equilibrium environment improve forecast accuracy. In turn, the DSGE model is considered to be misspecified when it is outperformed by the state-space representation that only incorporates the weak restrictions.
    Keywords: dynamic and static rank, factor models, DSGE models, forecasting
    JEL: E32 E37 C52
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:bde:wpaper:1012&r=ecm
  21. By: Croux, C.; Dehon, C.; Yadine, A. (Tilburg University, Center for Economic Research)
    Abstract: In this paper we maximize the efficiency of a multivariate S-estimator under a constraint on the breakdown point. In the linear regression model, it is known that the highest possible efficiency of a maximum breakdown S-estimator is bounded above by 33% for Gaussian errors. We prove the surprising result that in dimensions larger than one, the efficiency of a maximum breakdown S-estimator of location and scatter can get arbitrarily close to 100%, by an appropriate selection of the loss function.
    Keywords: Breakdown point; Multivariate Location and Scatter; Robustness; S-estimator.
    JEL: C13 C14
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:201039&r=ecm
  22. By: Michael J. Dueker; Zacharias Psaradakis; Martin Sola; Fabio Spagnolo
    Abstract: In this paper we consider extensions of smooth transition autoregressive (STAR) models to situations where the threshold is a time-varying function of variables that affect the separation of regimes of the time series under consideration. Our specification is motivated by the observation that unusually high/low values for an economic variable may sometimes be best thought of in relative terms. State-dependent logistic STAR and contemporaneous-threshold STAR models are introduced and discussed. These models are also used to investigate the dynamics of U.S. short-term interest rates, where the threshold is allowed to be a function of past output growth and inflation.
    Keywords: Nonlinear autoregressive models; Smooth transition; Threshold; Interest rates.
    JEL: C22 E43
    Date: 2010–04–22
    URL: http://d.repec.org/n?u=RePEc:aub:autbar:818.10&r=ecm
  23. By: Bill Russell; Anindya Banerjee; Issam Malki; Natalia Ponomareva
    Abstract: Phillips curves are often estimated without due attention to the underlying time series properties of the data. In particular, the consequences of inflation having discrete breaks in mean have not been studied adequately. We show by means of simulations and a detailed empirical example based on United States data that not taking account of breaks may lead to biased and therefore spurious estimates of Phillips curves. We suggest a method to account for the breaks in mean inflation and obtain meaningful and unbiased estimates of the short- and long-run Phillips curves in the United States.
    Keywords: Phillips curve, inflation, panel data, non-stationary data, breaks
    JEL: C22 C23 E31
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:bir:birmec:10-14&r=ecm
  24. By: Deuchert, Eva (University of St. Gallen); Wunsch, Conny (University of St. Gallen)
    Abstract: Nationwide health interventions are difficult to evaluate as contemporaneous control groups do not exist and before-after approaches are usually infeasible. We propose an alternative semi-parametric estimator that is based on the assumption that the intervention has no direct effect on the health outcome but influences the outcome only through its effect on individual behavior. We show that in this case the evaluation problem can be divided into two parts: (i) the effect of the intervention on behavior, for which a conditional before-after assumption is more plausible; and (ii) the effect of the behavior on the health outcome, where we exploit that a contemporaneous control group exists for behavior. The proposed estimator is used to evaluate one of Malawi's main malaria prevention campaigns, a nationwide insecticide-treated-net (ITN) distribution scheme, in terms of its effect on infant mortality. We exploit that the program affects child mortality only via bed net usage. We find that Malawi's ITN distribution campaign reduced child mortality by 1 percentage point, which corresponds to about 30% of the total reduction in infant mortality over the study period.
    Keywords: treatment effect, semi-parametric estimation, health intervention
    JEL: C14 C21 I18
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4896&r=ecm
  25. By: Philip Hans Franses; Michael McAleer (University of Canterbury); Rianne Legerstee
    Abstract: Macroeconomic forecasts are frequently produced, published, discussed and used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyse some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC) and the ECB, are based on econometric model forecasts as well as on human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes non-standard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model, the other forecast, and intuition; and (iii) the two forecasts are generated from two distinct combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the Federal Reserve Board and the FOMC on inflation, unemployment and real GDP growth.
    Keywords: Macroeconomic forecasts; econometric models; human intuition; biased forecasts; forecast performance; forecast evaluation; forecast comparison
    JEL: C22 C51 C52 C53 E27 E37
    Date: 2010–03–01
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:10/09&r=ecm
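    Code sketch: the standard Diebold-Mariano test of equal squared-error forecast accuracy, the conventional starting point relative to which the review develops alternative tools for biased, model-plus-intuition forecasts; the loss function and lag truncation are the usual defaults, not choices made in the paper.
      import numpy as np
      from scipy.stats import norm

      def diebold_mariano(e1, e2, h=1):
          """Test of equal predictive accuracy based on the loss differential
          d_t = e1_t^2 - e2_t^2, with a rectangular-kernel HAC variance up to lag h-1."""
          d = np.asarray(e1, dtype=float) ** 2 - np.asarray(e2, dtype=float) ** 2
          T = d.size
          dbar = d.mean()
          gam = [np.sum((d[k:] - dbar) * (d[:T - k] - dbar)) / T for k in range(h)]
          var_dbar = (gam[0] + 2 * sum(gam[1:])) / T
          stat = dbar / np.sqrt(var_dbar)
          return stat, 2 * (1 - norm.cdf(abs(stat)))   # statistic and two-sided p-value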
  26. By: Temurshoev, Umed (Groningen University)
    Abstract: We propose a new biproportional method specifically designed for joint projection of Supply and Use tables (SUTs). In contrast to standard input-output techniques, this method does not require the availability of total outputs by product for the projection year(s), a condition which is not often met in practice. The algorithm, called the SUT-RAS method, jointly estimates SUTs that are immediately consistent. It is applicable to different settings of SUTs, such as the frameworks with basic prices and purchasers' prices, and a setting in which Use tables are separated into domestic and imported uses. Our empirical evaluations show that the SUT-RAS method performs quite well compared to widely used short-cut methods.
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:dgr:rugggd:gd-116&r=ecm
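    Code sketch: the classical biproportional (RAS) update that the SUT-RAS method generalises to the joint estimation of Supply and Use tables; the toy matrix and margins are illustrative, and the row and column targets must share the same grand total.
      import numpy as np

      def ras(A0, row_targets, col_targets, tol=1e-10, max_iter=1000):
          """Iteratively scale rows and columns of a nonnegative matrix so that its
          margins match the prescribed row and column totals."""
          A = np.asarray(A0, dtype=float).copy()
          r = np.asarray(row_targets, dtype=float)
          c = np.asarray(col_targets, dtype=float)
          for _ in range(max_iter):
              A *= (r / A.sum(axis=1))[:, None]        # scale rows to the row targets
              A *= (c / A.sum(axis=0))[None, :]        # scale columns to the column targets
              if np.allclose(A.sum(axis=1), r, atol=tol):
                  break
          return A

      A0 = np.array([[4.0, 6.0], [5.0, 5.0]])
      print(ras(A0, row_targets=[12.0, 8.0], col_targets=[9.0, 11.0]))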
  27. By: R. Karina Gallardo; Jaebong Chang (School of Economic Sciences, Washington State University)
    Abstract: For over thirty years the multinomial logit model has been the standard in choice modeling. Developments in econometrics and computational algorithms have led to an increasing tendency to opt for more flexible models able to depict choice behavior more realistically. This study compares three discrete choice models, the standard multinomial logit, the error components logit, and the random parameters logit. Data were obtained from two choice experiments conducted to investigate consumers’ preferences for fresh pears receiving several postharvest treatments. Model comparisons consisted of in-sample and holdout sample evaluations. Results show that product characteristics, and hence datasets, influence model performance. We also found that the multinomial logit model outperformed the alternatives in at least one of three evaluations in both datasets. Overall, the findings signal the need for further studies controlling for context and dataset in order to draw more conclusive inferences about the capabilities of discrete choice models.
    Keywords: discrete choice models, validation, holdout sample
    JEL: C25 D12
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:wsu:wpaper:gallardo-6&r=ecm
  28. By: Marc Hallin; Christophe Ley
    Abstract: Skew-symmetric densities have recently received much attention in the literature, giving rise to increasingly general families of univariate and multivariate skewed densities. Most of those families, however, suffer from the major drawback of a potentially singular Fisher information in the vicinity of symmetry. All existing results indicate that Gaussian densities (possibly after restriction to some linear subspace) play a very special and somewhat mysterious role in that context. We totally dispel that widespread opinion by providing a full characterization of the information singularity phenomenon, highlighting its relation to a possible link between symmetric kernels and skewing functions—a link that can be interpreted as the mismatch of two densities.
    Keywords: Skewing function, Skew-normal distributions, skew-symmetric distributions, singular Fisher information, symmetric kernel.
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2010_014&r=ecm
  29. By: A. M. Monteiro (GEMF/Faculdade de Economia, Universidade de Coimbra, Portugal); R. H. Tütüncü (Goldman Sachs Asset Management); L. N. Vicente (CMUC, Department of Mathematics, University of Coimbra, Portugal)
    Abstract: Option price data is often used to infer risk-neutral densities for future prices of an underlying asset. Given the prices of a set of options on the same underlying asset with different strikes and maturities, we propose a nonparametric approach for estimating risk-neutral densities associated with several maturities. Our method uses bicubic splines in order to achieve the desired smoothness for the estimation and an optimization model to choose the spline functions that best fit the price data. Semidefinite programming is employed to guarantee the nonnegativity of the densities. We illustrate the process using synthetic option price data generated using log-normal and absolute diffusion processes as well as actual price data for options on the S&P500 index. We also used the risk-neutral densities that we computed to price exotic options and observed that this approach generates prices that closely approximate the market prices of these options.
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:gmf:wpaper:2010-06&r=ecm
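    Code sketch: a finite-difference version of the Breeden-Litzenberger relation q(K) = exp(r*tau) * d^2C/dK^2, the link between call prices and the risk-neutral density that underlies this estimation problem; the paper itself fits bicubic splines with a semidefinite programming constraint to enforce nonnegativity, which is not reproduced here.
      import numpy as np

      def breeden_litzenberger(strikes, call_prices, r, tau):
          """Approximate the risk-neutral density at one maturity by numerically
          differentiating the call price twice with respect to the strike."""
          K = np.asarray(strikes, dtype=float)
          C = np.asarray(call_prices, dtype=float)
          d2C = np.gradient(np.gradient(C, K), K)      # numerical second derivative
          return np.exp(r * tau) * d2C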

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.