nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒01‒03
thirty-one papers chosen by
Sune Karlsson
Orebro University

  1. Bandspectrum Cointegration By Andersson, Fredrik N. G.
  2. Model and distribution uncertainty in multivariate GARCH estimation: a Monte Carlo analysis By Rossi, Eduardo; Spazzini, Filippo
  3. Continuous Empirical Characteristic Function Estimation of Mixtures of Normal Parameters By Dinghai Xu; John Knight
  4. Out-of-sample comparison of copula specifications in multivariate density forecasts By Diks, C.G.H.; Dijk, D. van; Panchenko, V.
  5. Combining inflation density forecasts By Christian Kascha; Francesco Ravazzolo
  6. Exact inference in diagnosing value-at-risk estimates: A Monte Carlo device By Herwartz, Helmut
  7. BANDWIDTH SELECTION FOR SPATIAL HAC AND OTHER ROBUST COVARIANCE ESTIMATORS By Dayton M. Lambert; Raymond J.G.M. Florax; Seong-Hoon Cho
  8. Multivariate realised kernels: consistent positive semi-definite estimators of the covariation of equity prices with noise and non-synchronous trading By Ole E. Barndorff-Nielsen; Peter Reinhard Hansen; Asger Lunde; Neil Shephard
  9. Panel Error Correction Testing with Global Stochastic Trends By Gengenbach Christian; Urbain Jean-Pierre; Westerlund Joakim
  10. Simplified Implementation of the Heckman Estimator of the Dynamic Probit Model and a Comparison with Alternative Estimators By Arulampalam, Wiji; Stewart, Mark B.
  11. Asymmetric Stochastic Conditional Duration Model: A Mixture of Normals Approach By Dinghai Xu; John Knight; Tony S. Wirjanto
  12. A new method of projection-based inference in GMM with weakly identified nuisance parameters By Saraswata Chaudhuri; Eric Zivot
  13. Identifying Dynamic Games with Serially-Correlated Unobservables By Yingyao Hu; Matthew Shum
  14. Time-Deformation Modeling Of Stock Returns Directed By Duration Processes By Dingan Feng; Peter X.-K. Song; Tony S. Wirjanto
  15. Multiple imputation of right-censored wages in the German IAB Employment Sample considering heteroscedasticity By Büttner, Thomas; Rässler, Susanne
  16. Forecast Evaluation of Small Nested Model Sets By Kirstin Hubrich; Kenneth D. West
  17. Structural Dynamic Conditional Correlation By Enzo Weber
  18. On relative efficiency of Quasi-MLE and GMM estimators of covariance structure models By Artem Prokhorov
  19. Early estimates of euro area real GDP growth - a bottom up approach from the production side. By Elke Hahn; Frauke Skudelny
  20. An Empirical Characteristic Function Approach to VaR under a Mixture of Normal Distribution with Time-Varying Volatility By Dinghai Xu; Tony S. Wirjanto
  21. Extension of Random Matrix Theory to the L-moments for Robust Portfolio Allocation By Ghislain Yanou
  22. Estimating the output gap in real time: A factor model approach By Knut Are Aastveit; Tørres G. Trovik
  23. Interdependent Durations, Second Version By Bo E. Honore; Aureo de Paula
  24. Bounds on Revenue Distributions in Counterfactual Auctions with Reserve Prices By Xun Tang
  25. Solving Linear Rational Expectations Models with Predictable Structural Changes By Adam Cagliarini; Mariano Kulish
  26. Note on New Prospects on Vines By Dominique Guegan; Pierre-André Maugis
  27. A Parametric Control Function Approach to Estimating the Returns to Schooling in the Absence of Exclusion Restrictions: An Application to the NLSY By Francis Vella; Lídia Farré; Roger Klein
  28. Impact of Small Group Size on Neighborhood Influences in Multilevel Models By Theall, Katherine P.; Scribner, Richard; Lynch , Sara; Simonsen, Neal; Schonlau, Matthias; Carlin, Bradley; Cohen, Deborah
  29. Climbing the Drug Staircase: A Bayesian Analysis of the Initiation of Hard Drug Use By Bretteville-Jensen, Anne Line; Jacobi, Liana
  30. 3-Regime symmetric STAR modeling and exchange rate reversion By Mario Cerrato; Hyunsok Kim; Ronald MacDonald
  31. Importance sampling for backward SDEs By Thilo Moseler; Christian Bender

  1. By: Andersson, Fredrik N. G. (Department of Economics, Lund University)
    Abstract: Economic theory commonly distinguishes between different time horizons such as the short run and the long run, each with its own relationships and its own dynamics. Engle (1974) proposed a bandspectrum regression to estimate such models. This paper proposes a new estimator for non-stationary panel data models, a bandspectrum cointegration estimator (BSCE). The BSCE uses first-differenced data to avoid spurious results. Such estimates are, however, less efficient than estimates from a model with non-stationary data. Still, simulation results in the paper show that the BSCE is more efficient than common time domain estimators, for example the VECM and OLS levels estimators, if the data generating process contains more than one time horizon. The BSCE furthermore identifies all horizons in the data generating process and estimates an individual parameter vector for each, a property that neither time domain estimator possesses.
    Keywords: Cointegration; Bandspectrum Regression; Simulations; Wavelets; Frequency domain
    JEL: C14 C15 C23
    Date: 2008–12–02
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2008_018&r=ecm
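    Sketch: Engle's (1974) band-spectrum regression restricts OLS to a chosen frequency band, which can be implemented by band-pass filtering the series via the discrete Fourier transform and then regressing the filtered series. The rough Python illustration below shows that building block on hypothetical data; it is not the authors' panel BSCE estimator.

      import numpy as np

      def band_filter(x, low, high):
          # Keep only Fourier frequencies in [low, high), in cycles per sample
          fx = np.fft.rfft(x)
          freqs = np.fft.rfftfreq(len(x))
          fx[(freqs < low) | (freqs >= high)] = 0.0
          return np.fft.irfft(fx, len(x))

      def bandspectrum_slope(y, x, low, high):
          # OLS slope using only the chosen frequency band of both series
          yf, xf = band_filter(y, low, high), band_filter(x, low, high)
          return np.sum(xf * yf) / np.sum(xf * xf)

      rng = np.random.default_rng(0)
      n = 512
      x = rng.standard_normal(n).cumsum() * 0.1
      # Hypothetical DGP: x drives y with slope 2 at low frequencies only
      y = 2.0 * band_filter(x, 0.0, 0.1) + rng.standard_normal(n)
      print(bandspectrum_slope(y, x, 0.0, 0.1))  # close to 2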
  2. By: Rossi, Eduardo; Spazzini, Filippo
    Abstract: Multivariate GARCH models can in principle accommodate the features of dynamic conditional correlation processes, although with the drawback that the parameterizations entail too many parameters as the number of financial return series considered increases. In general, the interaction between the parameterization of the second conditional moment and the conditional density of asset returns adopted in the estimation determines how well such models fit the observed dynamics of the data. This paper evaluates the interactions between conditional second moment specifications and the probability distributions adopted in the likelihood computation when forecasting volatilities and covolatilities. We measure the relative performance of alternative conditional second moment and probability distribution specifications by means of Monte Carlo simulations, using both statistical and financial forecasting loss functions.
    Keywords: Multivariate GARCH models; Model uncertainty; Quasi-maximum likelihood; Monte Carlo methods
    JEL: C32 C52 C01
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:12260&r=ecm
  3. By: Dinghai Xu (Department of Economics, University of Waterloo); John Knight (Department of Economics, University of Western Ontario)
    Abstract: This paper develops an efficient method for estimating discrete mixtures of the normal family based on the continuous empirical characteristic function (CECF). An iterated estimation procedure based on the closed-form objective distance function is proposed to improve estimation efficiency. The results from a Monte Carlo simulation reveal that the CECF estimator produces good finite sample properties. In particular, it outperforms discrete-type methods when maximum likelihood estimation fails to converge. An empirical example is provided for illustrative purposes.
    Keywords: Empirical characteristic function; Mixtures of normals.
    JEL: C13 C15 C16
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:wat:wpaper:08006&r=ecm
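    Sketch: the CECF estimator matches the model characteristic function of a normal mixture, p*exp(i*t*mu1 - t^2*s1^2/2) + (1-p)*exp(i*t*mu2 - t^2*s2^2/2), to the empirical characteristic function under a weighted L2 distance. The paper exploits a closed-form objective; the rough Python sketch below instead approximates the integral on a grid, with a hypothetical weight function and simulated data.

      import numpy as np
      from scipy.optimize import minimize

      def mix_cf(t, p, mu1, mu2, s1, s2):
          # Characteristic function of a two-component normal mixture
          return (p * np.exp(1j * t * mu1 - 0.5 * (t * s1) ** 2)
                  + (1 - p) * np.exp(1j * t * mu2 - 0.5 * (t * s2) ** 2))

      def cecf_objective(theta, x, tgrid, w):
          a, mu1, mu2, ls1, ls2 = theta
          p = 1.0 / (1.0 + np.exp(-a))                        # keep p in (0, 1)
          ecf = np.exp(1j * np.outer(tgrid, x)).mean(axis=1)  # empirical CF on the grid
          diff = ecf - mix_cf(tgrid, p, mu1, mu2, np.exp(ls1), np.exp(ls2))
          return np.sum(w * np.abs(diff) ** 2)                # weighted L2 distance

      rng = np.random.default_rng(1)
      n = 2000
      x = np.where(rng.random(n) < 0.3, rng.normal(-2.0, 0.5, n), rng.normal(1.0, 1.0, n))
      tgrid = np.linspace(0.05, 4.0, 80)
      w = np.exp(-tgrid ** 2)                                 # one common weighting choice
      res = minimize(cecf_objective, x0=[0.0, -1.0, 0.5, 0.0, 0.0],
                     args=(x, tgrid, w), method="Nelder-Mead",
                     options={"maxiter": 10000, "fatol": 1e-10})
      print(res.x)  # (logit p, mu1, mu2, log s1, log s2)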
  4. By: Diks, C.G.H. (Universiteit van Amsterdam); Dijk, D. van (Erasmus Universiteit Rotterdam); Panchenko, V. (University of New South Wales)
    Abstract: We introduce a statistical test for comparing the predictive accuracy of competing copula specifications in multivariate density forecasts, based on the Kullback-Leibler Information Criterion (KLIC). The test is valid under general conditions: in particular it allows for parameter estimation uncertainty and for the copulas to be nested or non-nested. Monte Carlo simulations demonstrate that the proposed test has satisfactory size and power properties in finite samples. Applying the test to daily exchange rate returns of several major currencies against the US dollar we find that the Student's t copula is favored over Gaussian, Gumbel and Clayton copulas. This suggests that these exchange rate returns are characterized by symmetric tail dependence.
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:ams:ndfwpp:08-10&r=ecm
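    Sketch: the test works with out-of-sample log copula density differences, to which a Diebold-Mariano type statistic is applied; the paper's contribution is establishing validity under parameter estimation uncertainty and under nested or non-nested copulas. A bare-bones Python illustration comparing two Gaussian copula specifications on hypothetical PIT-transformed data (a real application would use estimated predictive copulas, e.g. Student's t vs. Gaussian):

      import numpy as np
      from scipy.stats import norm, multivariate_normal

      def gaussian_copula_logpdf(u, rho):
          # Log density of a bivariate Gaussian copula with correlation rho
          z = norm.ppf(u)                                 # map uniforms to normal scores
          cov = np.array([[1.0, rho], [rho, 1.0]])
          return multivariate_normal.logpdf(z, cov=cov) - norm.logpdf(z).sum(axis=1)

      def klic_dm_stat(logc1, logc2):
          # t-statistic on the mean log-score difference (Diebold-Mariano form)
          d = logc1 - logc2
          return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

      rng = np.random.default_rng(2)
      z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1000)
      u = norm.cdf(z)                                     # hypothetical PIT data
      t_stat = klic_dm_stat(gaussian_copula_logpdf(u, 0.6),
                            gaussian_copula_logpdf(u, 0.0))
      print(t_stat)   # large positive: the rho = 0.6 specification is favored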
  5. By: Christian Kascha (Norges Bank (Central Bank of Norway)); Francesco Ravazzolo (Norges Bank (Central Bank of Norway))
    Abstract: In this paper, we empirically evaluate competing approaches for combining inflation density forecasts in terms of Kullback-Leibler divergence. In particular, we apply a similar suite of models to four different data sets and aim at identifying combination methods that perform well throughout different series and variations of the model suite. We pool individual densities using linear and logarithmic combination methods. The suite consists of linear forecasting models with moving estimation windows to account for structural change. We find that combining densities is a much better strategy than selecting a particular model ex-ante. While combinations do not always perform better than the best individual model, combinations always yield accurate forecasts and, as we show analytically, provide insurance against selecting inappropriate models. Combining with equal weights often outperforms other weighting schemes. Also, logarithmic combinations can be advantageous, in particular if symmetric densities are preferred.
    Keywords: Forecast Combination, Logarithmic Combinations, Density Forecasts, Inflation Forecasting
    JEL: C53 E37
    Date: 2008–12–12
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2008_22&r=ecm
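    Sketch: the two combination schemes compared are the linear pool, f = sum_i w_i f_i, and the logarithmic pool, f proportional to prod_i f_i^w_i (renormalised); a log pool of normals stays normal, which connects to the paper's remark that log combinations can help when symmetric densities are preferred. A small Python illustration with equal weights and two hypothetical normal density forecasts:

      import numpy as np
      from scipy.stats import norm

      grid = np.linspace(-2.0, 6.0, 1001)
      step = grid[1] - grid[0]
      f1 = norm.pdf(grid, loc=1.5, scale=0.8)   # density forecast of model 1
      f2 = norm.pdf(grid, loc=2.5, scale=1.2)   # density forecast of model 2
      w = [0.5, 0.5]                            # equal weights

      lin_pool = w[0] * f1 + w[1] * f2          # linear opinion pool
      log_pool = f1 ** w[0] * f2 ** w[1]        # logarithmic pool ...
      log_pool /= log_pool.sum() * step         # ... renormalised to integrate to one

      y_real = 2.0                              # hypothetical realised inflation
      for name, f in (("linear", lin_pool), ("log", log_pool)):
          print(name, "log score:", np.log(np.interp(y_real, grid, f)))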
  6. By: Herwartz, Helmut
    Abstract: In this note a Monte Carlo approach is suggested to determine critical values for diagnostic tests of Value-at-Risk models that rely on binary random variables. Monte Carlo testing offers exact significance levels in finite samples. Conditional on exact critical values, the dynamic quantile test suggested by Engle and Manganelli (2004) turns out to be more powerful than a recently proposed Portmanteau-type test (Hurlin and Tokpavi 2006).
    Keywords: Value-at-Risk, Monte Carlo test
    JEL: C22 C52 G28
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:7411&r=ecm
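    Sketch: the device simulates the finite-sample null distribution of a backtest statistic directly, since under a correct VaR model the violation indicators are i.i.d. Bernoulli(alpha); the empirical quantile of the simulated statistics then serves as an exact critical value. The Python sketch below applies the idea to Kupiec's unconditional coverage LR statistic for concreteness (the note itself studies the dynamic quantile and Portmanteau tests):

      import numpy as np

      def lr_uc(hits, alpha):
          # Kupiec's unconditional coverage likelihood ratio statistic
          n, x = len(hits), int(hits.sum())
          pihat = max(x / n, 1e-10)
          ll0 = x * np.log(alpha) + (n - x) * np.log(1.0 - alpha)
          ll1 = x * np.log(pihat) + (n - x) * np.log(1.0 - pihat + 1e-10)
          return -2.0 * (ll0 - ll1)

      def mc_critical_value(stat, n, alpha, level=0.05, reps=9999, seed=0):
          # Simulate the statistic under H0 (i.i.d. Bernoulli(alpha) violations)
          # and take the empirical (1 - level) quantile as an exact critical value
          rng = np.random.default_rng(seed)
          draws = [stat(rng.random(n) < alpha, alpha) for _ in range(reps)]
          return np.quantile(draws, 1.0 - level)

      # 1% VaR over 250 trading days: the exact critical value can differ
      # noticeably from the asymptotic chi-squared(1) value of 3.84
      print(mc_critical_value(lr_uc, n=250, alpha=0.01))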
  7. By: Dayton M. Lambert (Department of Agricultural Economics, University of Tennessee); Raymond J.G.M. Florax (Department of Agricultural Economics, Purdue University); Seong-Hoon Cho (Department of Agricultural Economics, University of Tennessee)
    Abstract: This research note documents estimation procedures and results for an empirical investigation of the performance of the recently developed spatial heteroskedasticity and autocorrelation consistent (HAC) covariance estimator calibrated with different kernel bandwidths. The empirical example is concerned with a hedonic price model for residential property values. The first bandwidth approach varies an a priori determined plug-in bandwidth criterion. The second method is a data-driven cross-validation approach to determine the optimal neighborhood. The third approach uses a robust semivariogram to determine the range over which residuals are spatially correlated. Inference becomes more conservative as the plug-in bandwidth is increased. The data-driven approaches prove valuable because they are capable of identifying the optimal spatial range, which can subsequently be used to inform the choice of an appropriate bandwidth value. In our empirical example, pertaining to a standard spatial model and dataset, the results of the data-driven procedures can only be reconciled with relatively high plug-in values (n^0.65 or n^0.75). The results for the semivariogram and the cross-validation approaches are very similar, which, given its computational simplicity, gives the semivariogram approach an edge over the more flexible cross-validation approach.
    Keywords: spatial HAC, semivariogram, bandwidth, hedonic model
    JEL: C13 C31 R21
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:pae:wpaper:08-10&r=ecm
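    Sketch: a spatial HAC ("sandwich") covariance weights cross-products of OLS moment contributions by a kernel that declines with the distance between observations and reaches zero at the bandwidth; the note's question is how to pick that bandwidth (plug-in rules such as n^0.65, cross-validation, or the semivariogram range). A compact Python version with a Bartlett kernel and a fixed, hypothetical distance bandwidth:

      import numpy as np

      def spatial_hac(X, resid, coords, bandwidth):
          # Bartlett kernel in distance: weight 1 - d/b for d < b, else 0
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
          w = np.clip(1.0 - d / bandwidth, 0.0, None)
          score = X * resid[:, None]                  # moment contributions, n x k
          n = X.shape[0]
          meat = score.T @ w @ score / n
          bread = np.linalg.inv(X.T @ X / n)
          return bread @ meat @ bread / n             # covariance of the OLS coefficients

      rng = np.random.default_rng(3)
      n = 200
      coords = rng.uniform(0.0, 10.0, size=(n, 2))    # hypothetical locations
      X = np.column_stack([np.ones(n), rng.standard_normal(n)])
      y = X @ np.array([1.0, 0.5]) + rng.standard_normal(n)
      beta = np.linalg.lstsq(X, y, rcond=None)[0]
      V = spatial_hac(X, y - X @ beta, coords, bandwidth=2.0)
      print(np.sqrt(np.diag(V)))                      # spatial HAC standard errors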
  8. By: Ole E. Barndorff-Nielsen; Peter Reinhard Hansen; Asger Lunde; Neil Shephard (School of Economics and Management, University of Aarhus, Denmark)
    Abstract: We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show that this new consistent estimator is guaranteed to be positive semi-definite, is robust to measurement noise of certain types, and can handle non-synchronous trading. It is the first estimator with all three of these properties, each of which is essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work based on returns measured over 5- or 10-minute intervals, and show that the new estimator is substantially more precise.
    Keywords: HAC estimator, Long run variance estimator, Market frictions, Quadratic variation, Realised variance
    JEL: C13 C32
    Date: 2008–12–11
    URL: http://d.repec.org/n?u=RePEc:aah:create:2008-63&r=ecm
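    Sketch: a realised kernel replaces realised variance sum r_j^2 with a weighted sum of realised autocovariances, K = sum over h from -H to H of k(h/(H+1)) * gamma_h, which removes the bias from microstructure noise; the paper's multivariate estimator adds refresh-time synchronisation and uses a Parzen-type weight function to guarantee positive semi-definiteness. A univariate Python illustration on simulated noisy prices:

      import numpy as np

      def parzen(x):
          # Parzen weight function on [0, 1]
          x = abs(x)
          if x <= 0.5:
              return 1.0 - 6.0 * x ** 2 + 6.0 * x ** 3
          return 2.0 * (1.0 - x) ** 3 if x <= 1.0 else 0.0

      def realised_kernel(r, H):
          # K = sum_h k(h / (H + 1)) * gamma_h, gamma_h = sum_j r_j r_{j-|h|}
          n = len(r)
          return sum(parzen(h / (H + 1)) * np.dot(r[abs(h):], r[:n - abs(h)])
                     for h in range(-H, H + 1))

      rng = np.random.default_rng(4)
      n = 23400                                        # one trade per second, one day
      efficient = np.cumsum(rng.normal(0.0, 1e-4, n))  # efficient log-price
      price = efficient + rng.normal(0.0, 5e-4, n)     # add i.i.d. microstructure noise
      r = np.diff(price)
      print("realised variance:", np.sum(r ** 2))      # badly biased upwards by noise
      print("realised kernel  :", realised_kernel(r, H=30))
      print("true integrated variance:", 1e-8 * n)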
  9. By: Gengenbach Christian; Urbain Jean-Pierre; Westerlund Joakim (METEOR)
    Abstract: This paper considers a cointegrated panel data model with common factors. Starting from the triangular representation of the model as used by Bai et al. (2008), a Granger-type representation theorem is derived. The conditional error correction representation is obtained, which is used as a basis for developing two new tests for the null hypothesis of no error correction. The asymptotic distributions of the tests are shown to be free of nuisance parameters, depending only on the number of non-stationary variables. However, the tests are not cross-sectionally independent, which makes pooling difficult. Nevertheless, the averages of the tests converge in distribution. This makes pooling possible in spite of the cross-sectional dependence. We investigate the finite sample performance of the proposed tests in a Monte Carlo experiment and compare them to the tests proposed by Westerlund (2007). We also present two empirical applications of the new tests.
    Keywords: econometrics
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:dgr:umamet:2008051&r=ecm
  10. By: Arulampalam, Wiji (University of Warwick & IZA); Stewart, Mark B. (University of Warwick)
    Abstract: This paper presents a convenient shortcut method for implementing the Heckman estimator of the dynamic random effects probit model and other dynamic nonlinear panel data models using standard software. It then compares the estimators proposed by Heckman, Orme and Wooldridge, based on three alternative approximations, first in an empirical model for the probability of unemployment and then in a set of simulation experiments. The results indicate that none of the three estimators dominates the other two in all cases. In most cases all three estimators display satisfactory performance, except when the number of time periods is very small.
    Keywords: Dynamic discrete choice models ; initial conditions ; dynamic probit ; panel data ; dynamic nonlinear panel data models
    JEL: C23 C25 C13 C51
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:884&r=ecm
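    Sketch: one of the three estimators compared, Wooldridge's, handles the initial conditions problem by letting the individual effect depend on the initial observation (and regressor means), after which a standard random-effects probit is estimated; the effect is integrated out by one-dimensional Gauss-Hermite quadrature. A condensed Python sketch on simulated data, with one regressor and no means for brevity:

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize

      def neg_loglik(params, y, ylag, y0, x, nodes, weights):
          # RE dynamic probit with y0 as a regressor (Wooldridge's device);
          # the random effect is integrated out by Gauss-Hermite quadrature
          rho, b0, bx, by0, lsig = params
          idx = b0 + rho * ylag + bx * x + by0 * y0[:, None]  # N x T linear index
          q = 2 * y - 1
          lik = np.zeros(y.shape[0])
          for a, w in zip(nodes, weights):
              u = np.sqrt(2.0) * np.exp(lsig) * a
              lik += w * norm.cdf(q * (idx + u)).prod(axis=1)
          return -np.sum(np.log(lik / np.sqrt(np.pi) + 1e-300))

      rng = np.random.default_rng(5)
      N, T = 500, 6
      alpha = rng.standard_normal(N)                   # individual effect
      x = rng.standard_normal((N, T + 1))
      y = np.zeros((N, T + 1), dtype=int)
      y[:, 0] = 0.5 * alpha + x[:, 0] + rng.standard_normal(N) > 0
      for t in range(1, T + 1):
          z = 0.5 * y[:, t - 1] + 0.8 * x[:, t] + alpha + rng.standard_normal(N)
          y[:, t] = z > 0

      nodes, weights = np.polynomial.hermite.hermgauss(12)
      res = minimize(neg_loglik, x0=np.zeros(5), method="BFGS",
                     args=(y[:, 1:], y[:, :-1], y[:, 0], x[:, 1:], nodes, weights))
      print(res.x)  # rho (state dependence), const, beta_x, beta_y0, log sigma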
  11. By: Dinghai Xu (Department of Economics, University of Waterloo); John Knight (Department of Economics, University of Western Ontario); Tony S. Wirjanto (Department of Economics, University of Waterloo)
    Abstract: This paper extends the stochastic conditional duration model by imposing mixtures of bivariate normal distributions on the innovations of the observation and latent equations of the duration process. This extension allows the model not only to capture the asymmetric behavior of the expected duration but also to easily accommodate a richer dependence structure between the two innovations. In addition, it proposes a novel estimation methodology based on the empirical characteristic function. A set of Monte Carlo experiments as well as empirical applications based on the IBM and Boeing transaction data are provided to assess and illustrate the performance of the proposed model and the estimation method. One main empirical finding of this paper is that there is a significantly positive "leverage effect" under both the contemporaneous and lagged inter-temporal dependence structures for the IBM and Boeing duration data.
    Keywords: Stochastic Conditional Duration model; Leverage Effect; Discrete Mixtures of Normal; Empirical Characteristic Function
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:wat:wpaper:08007&r=ecm
  12. By: Saraswata Chaudhuri (Department of Economics, University of North Carolina Chapel Hill); Eric Zivot (Department of Economic, University of Washington)
    Abstract: Projection-based methods of inference on subsets of parameters are useful for obtaining tests that do not over-reject the true parameter values. However, they are also often criticized for being conservative. We show that the usual method of projection can be modified to obtain tests that are as powerful as the conventional tests for subsets of parameters. Like the usual projection-based methods, the new method always admits an upper bound on the rate at which it over-rejects the true value of the parameters of interest. The new method is described in the context of GMM with possibly weakly identified parameters.
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2008-26&r=ecm
  13. By: Yingyao Hu; Matthew Shum
    Abstract: In this paper we consider the nonparametric identification of Markov dynamic games models in which each firm has its own unobserved state variable, which is persistent over time. This class of models includes most models in the Ericson and Pakes (1995) and Pakes and McGuire (1994) framework. We provide conditions under which the joint Markov equilibrium process of the firms' observed and unobserved variables can be nonparametrically identified from data. For stationary continuous action games, we show that only three observations of the observed component are required to identify the equilibrium Markov process of the dynamic game. When agents' choice variables are discrete, but the unobserved state variables are continuous, four observations are required.
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:jhu:papers:546&r=ecm
  14. By: Dingan Feng (CIBC, Toronto); Peter X.-K. Song (Department of Biostatistics, University of Michigan School of Public Health); Tony S. Wirjanto (Department of Economics, University of Waterloo)
    Abstract: This paper presents a new class of time-deformation (or stochastic volatility) models for stock returns sampled in transaction time and directed by a generalized duration process. Stochastic volatility in this model is driven by an observed duration process and a latent autoregressive process. Parameter estimation in the model is carried out by the method of simulated moments (MSM) due to its analytical feasibility and numerical stability for the proposed model. Simulations are conducted to validate the choices of the moments used in the formulation of the MSM. Both the simulation and empirical results obtained in this paper indicate that this approach works well for the proposed model. The main empirical findings for the IBM transaction return data can be summarized as follows: (i) the return distribution conditional on the duration process is not Gaussian, even though the duration process itself can marginally function as a directing process; (ii) the return process is highly leveraged; (iii) a longer trade duration tends to be associated with a higher return volatility; and (iv) the proposed model is capable of reproducing returns whose marginal density function is close to that of the empirical returns.
    Keywords: Duration process; Ergodicity; Method of simulated moments; Return process; Stationarity.
    JEL: G10 C51 C32
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:wat:wpaper:08010&r=ecm
  15. By: Büttner, Thomas (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany]); Rässler, Susanne
    Abstract: "In many large data sets of economic interest, some variables, as wages, are top-coded or right-censored. In order to analyze wages with the German IAB employment sample we first have to solve the problem of censored wages at the upper limit of the social security system. We treat this problem as a missing data problem and derive new multiple imputation approaches to impute the censored wages by draws of a random variable from a truncated distribution based on Markov chain Monte Carlo techniques. In general, the variation of income is smaller in lower wage categories than in higher categories and the assumption of homoscedasticity in an imputation model is highly questionable. Therefore, we suggest a new multiple imputation method which does not presume homoscedasticity of the residuals. Finally, in a simulation study, different imputation approaches are compared under different situations and the necessity as well as the validity of the new approach is confirmed." (author's abstract, IAB-Doku) ((en))
    Keywords: Lohnhöhe, Daten, Datenaufbereitung - Methode, angewandte Statistik, mathematische Statistik, Schätzung, Markov-Ketten, Monte-Carlo-Methode, IAB-Beschäftigtenstichprobe, Westdeutschland, Bundesrepublik Deutschland
    JEL: C24 C15
    Date: 2008–12–18
    URL: http://d.repec.org/n?u=RePEc:iab:iabdpa:200844&r=ecm
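    Sketch: the core step is to replace each censored wage with a draw from the imputation model's normal distribution truncated below at the censoring point, with an observation-specific variance so that heteroscedasticity is respected; in a full multiple-imputation run the model parameters are themselves redrawn (via MCMC) before each of the m imputations. A minimal Python version of the draw step on hypothetical data, with the true parameters plugged in:

      import numpy as np
      from scipy.stats import truncnorm

      def impute_right_censored(y, X, censored, beta, sigma, rng):
          # Draw censored wages from N(X beta, sigma^2) truncated below at the
          # observed censoring value; sigma varies by observation
          mu = X @ beta
          a = (y[censored] - mu[censored]) / sigma[censored]  # standardised lower bound
          y_imp = y.copy()
          y_imp[censored] = truncnorm.rvs(a, np.inf, loc=mu[censored],
                                          scale=sigma[censored], random_state=rng)
          return y_imp

      rng = np.random.default_rng(6)
      n = 1000
      X = np.column_stack([np.ones(n), rng.standard_normal(n)])
      sigma = np.where(X[:, 1] > 0, 0.2, 0.35)        # heteroscedastic residual scale
      y_star = X @ np.array([3.0, 0.4]) + sigma * rng.standard_normal(n)
      c = 3.4                                         # hypothetical censoring limit
      y, cens = np.minimum(y_star, c), y_star > c
      y_complete = impute_right_censored(y, X, cens, np.array([3.0, 0.4]), sigma, rng)
      print(cens.mean(), y_complete[cens][:5])        # imputed values all exceed c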
  16. By: Kirstin Hubrich; Kenneth D. West
    Abstract: We propose two new procedures for comparing the mean squared prediction error (MSPE) of a benchmark model to the MSPEs of a small set of alternative models that nest the benchmark. Our procedures compare the benchmark to all the alternative models simultaneously rather than sequentially, and do not require reestimation of models as part of a bootstrap procedure. Both procedures adjust MSPE differences in accordance with Clark and West (2007); one procedure then examines the maximum t-statistic, while the other computes a chi-squared statistic. Our simulations examine the proposed procedures and two existing procedures that do not adjust the MSPE differences: a chi-squared statistic and White's (2000) reality check. In these simulations, the two statistics that adjust MSPE differences have the most accurate size, and the procedure based on the maximum t-statistic has the best power. We illustrate our procedures by comparing forecasts of different models for U.S. inflation.
    JEL: C32 C53 E37
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14601&r=ecm
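    Sketch: for nested models the alternative's MSPE is inflated by estimation noise, so Clark and West (2007) subtract the term (f_bench - f_alt)^2 before testing; with several alternatives the paper then looks at the maximum of the resulting t-statistics (or a chi-squared combination), using critical values that account for the search over the set. Computing the adjusted t-statistics in Python on hypothetical forecasts:

      import numpy as np

      def clark_west_t(y, f_bench, f_alt):
          # MSPE difference adjusted for the noise in the larger model's forecast
          adj = (y - f_bench) ** 2 - ((y - f_alt) ** 2 - (f_bench - f_alt) ** 2)
          return adj.mean() / (adj.std(ddof=1) / np.sqrt(len(adj)))

      rng = np.random.default_rng(7)
      T = 200
      y = rng.standard_normal(T)              # under H0 the zero benchmark is correct
      f_bench = np.zeros(T)                   # nested benchmark forecast
      alternatives = [0.1 * np.roll(y, 1), 0.3 * np.roll(y, 1)]  # two nesting models
      t_stats = [clark_west_t(y[1:], f_bench[1:], f[1:]) for f in alternatives]
      print("max-t:", max(t_stats))           # compare to set-adjusted critical values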
  17. By: Enzo Weber
    Abstract: In the literature on identification through autoregressive conditional heteroscedasticity, Weber (2008) developed the structural constant conditional correlation (SCCC) model. Besides determining linear simultaneous influences between several variables, this model considers interaction in the structural innovations. Even though this allows for common fundamental driving forces, these cannot explain time variation in the correlations of observed variables, which still has to rely on causal transmission effects. In this context, the present paper extends the analysis to structural dynamic conditional correlation (SDCC). The additional flexibility is shown to make an important contribution in the estimation of empirical real-data examples.
    Keywords: Simultaneity, Identification, EGARCH, DCC
    JEL: C32 G10
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2008-069&r=ecm
  18. By: Artem Prokhorov (Concordia University)
    Abstract: Optimal GMM is known to dominate Gaussian QMLE in terms of asymptotic efficiency (Chamberlain, 1984). I derive a new condition under which QMLE is as efficient as GMM for a general class of covariance structure models. The condition trivially holds for normal data but also identifies non-normal cases for which Gaussian QMLE is efficient.
    Date: 2008–05
    URL: http://d.repec.org/n?u=RePEc:crd:wpaper:08004&r=ecm
  19. By: Elke Hahn (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Frauke Skudelny (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.)
    Abstract: This paper derives forecasts for euro area real GDP growth based on a bottom-up approach from the production side. That is, GDP is forecast via forecasts of value added across the different branches of activity, which is quite new in the literature. Linear regression models in the form of bridge equations are applied. In these models, earlier available monthly indicators are used to bridge the gap of missing GDP data. The process of selecting the best performing equations is accomplished as a pseudo real-time forecasting exercise, i.e. due account is taken of the pattern of available monthly variables over the forecast cycle. Moreover, by applying a very systematic procedure, the best performing equations are selected from a pool of thousands of test bridge equations. Our modelling approach, finally, includes a further novelty which should be of particular interest to practitioners. In practice, forecasts for a particular quarter of GDP generally spread over a prolonged period of several months. We explore whether over this forecast cycle, where GDP is repeatedly forecast, the same set of equations or different ones should be used. Changing the set of bridge equations over the forecast cycle could be superior to keeping the same set of equations, as the relative merit of the included monthly indicators may shift over time owing to differences in their data characteristics. Overall, the models derived in this forecast exercise clearly outperform the benchmark models. The variables selected in the best equations for different situations over the forecast cycle vary substantially, and the achieved results confirm the conjecture that allowing the variables in the bridge equations to differ over the forecast cycle can lead to substantial improvements in forecast accuracy.
    Keywords: Forecasting, bridge equations, euro area, GDP, bottom up approach.
    JEL: C22 C52 C53 E27
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20080975&r=ecm
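    Sketch: a bridge equation regresses quarterly growth on monthly indicators aggregated to quarterly frequency; within the forecast cycle, months not yet observed are filled with auxiliary forecasts before aggregation. A toy Python version of a single equation on hypothetical data (the paper selects from thousands of candidate equations, one set per branch of activity):

      import numpy as np

      rng = np.random.default_rng(8)
      monthly = rng.standard_normal(120)                    # 120 months of one indicator
      quarterly_ind = monthly.reshape(40, 3).mean(axis=1)   # bridge: aggregate to quarters
      gdp = 0.5 * quarterly_ind + 0.2 * rng.standard_normal(40)

      X = np.column_stack([np.ones(40), quarterly_ind])
      beta, *_ = np.linalg.lstsq(X, gdp, rcond=None)
      print(beta)                                           # roughly [0, 0.5]
      # Mid-quarter only one or two months are released; these gaps are filled
      # with indicator forecasts before aggregating, and the best-performing
      # equation may change across the forecast cycle as monthly data arrive.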
  20. By: Dinghai Xu (Department of Economics, University of Waterloo); Tony S. Wirjanto (Department of Economics, University of Waterloo)
    Abstract: This paper considers Value at Risk measures constructed under a discrete mixture of normal distributions on the innovations with time-varying volatility, or MN-GARCH, model. We adopt an approach based on the continuous empirical characteristic function to estimate the parameters of the model, using daily return data on several foreign exchange rates. This approach has several advantages as a method for estimating the MN-GARCH model. In particular, under certain weighting measures, a closed-form objective distance function for estimation is obtained. This reduces the computational burden considerably. In addition, the characteristic function, unlike its likelihood function counterpart, is always uniformly bounded over the parameter space due to the Fourier transformation. To evaluate the VaR estimates obtained from alternative specifications, we construct several measures, such as the number of violations, the average size of violations, the sum of squared violations and the expected size of violations. Based on these measures, we find that the VaR measures obtained from the MN-GARCH model outperform those obtained from other competing models.
    Keywords: Value at Risk; Mixture of Normals; GARCH; Characteristic Function.
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:wat:wpaper:08008&r=ecm
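    Sketch: the evaluation measures used to rank the VaR models are all functions of the violations, i.e. returns falling below the reported quantile. A small Python helper computing three of them, applied to a normal-based 1% VaR on hypothetical fat-tailed returns:

      import numpy as np

      def var_evaluation(returns, var_forecast):
          # VaR reported as a lower quantile, so a violation is a return below it
          exceed = returns < var_forecast
          shortfall = var_forecast[exceed] - returns[exceed]  # positive violation sizes
          return {"n_violations": int(exceed.sum()),
                  "avg_size": float(shortfall.mean()) if exceed.any() else 0.0,
                  "sum_sq": float(np.sum(shortfall ** 2))}

      rng = np.random.default_rng(9)
      ret = rng.standard_t(4, size=1000) / np.sqrt(2.0)  # variance-one t(4) returns
      var_norm = np.full(1000, -2.326)                   # N(0,1) 1% quantile
      print(var_evaluation(ret, var_norm))               # normal VaR understates risk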
  21. By: Ghislain Yanou (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I)
    Abstract: In this paper, we propose a methodology for building an estimator of the covariance matrix. We use a robust measure of moments called L-moments (see Hosking, 1986) and their extension to a multivariate framework (see Serfling and Xiao, 2007). Random matrix theory (see Edelman, 1989) allows us to extract the factors which contain real information. An empirical study of the American market shows that the Global Minimum L-variance Portfolio (GMLP) obtained from our estimator outperforms the Global Minimum Variance Portfolio (GMVP) obtained from the empirical estimator of the covariance matrix.
    Keywords: Covariance Matrix, L-variance-covariance, L-correlation, concomitance, random matrix theory.
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:hal:journl:halshs-00349205_v1&r=ecm
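    Sketch: the random-matrix step keeps only eigenvalues above the Marchenko-Pastur upper edge (1 + sqrt(N/T))^2, treating the rest as noise; the paper's novelty is applying this idea to an L-moment-based (L-correlation) matrix rather than the ordinary one. A generic Python version of the classical filter on a simulated one-factor panel:

      import numpy as np

      def rmt_filter(corr, n_obs):
          # Eigenvalues below the Marchenko-Pastur edge are treated as noise and
          # replaced by their average, preserving the trace of the matrix
          N = corr.shape[0]
          lam_max = (1.0 + np.sqrt(N / n_obs)) ** 2
          vals, vecs = np.linalg.eigh(corr)
          noise = vals < lam_max
          vals_f = vals.copy()
          if noise.any():
              vals_f[noise] = vals[noise].mean()
          filtered = vecs @ np.diag(vals_f) @ vecs.T
          d = np.sqrt(np.diag(filtered))
          return filtered / np.outer(d, d)        # renormalise to unit diagonal

      rng = np.random.default_rng(10)
      T, N = 500, 100
      R = rng.standard_normal((T, N))
      R += 0.3 * rng.standard_normal((T, 1))      # one common factor across assets
      corr = np.corrcoef(R, rowvar=False)
      print(np.linalg.eigvalsh(rmt_filter(corr, T))[-3:])  # one eigenvalue survives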
  22. By: Knut Are Aastveit (Norges Bank (Central Bank of Norway) and The University of Oslo); Tørres G. Trovik (Norges Bank (Central Bank of Norway) and The World Bank)
    Abstract: An approximate dynamic factor model can substantially improve the reliability of real time output gap estimates. The model extracts a common component from macroeconomic indicators, which reduces errors in the gap due to data revisions. The model's ability to handle the unbalanced arrival of data also yields favorable nowcasting properties, and thus good starting conditions for the filtering of data into trend and deviations from trend. Combined with the method of augmenting data with forecasts prior to filtering, this greatly reduces the end-of-sample imprecision in the gap estimate. The increased precision has economic significance for real time policy decisions.
    Keywords: Output gap, Real time analysis, Monetary policy, Forecasting, Factor model
    JEL: C33 C53 E52 E58
    Date: 2008–12–12
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2008_23&r=ecm
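    Sketch: the first step of an approximate dynamic factor model is extracting a common component from a standardized panel by principal components; smoothing real-time data through this component is what reduces revision noise before the trend/gap decomposition. A compact Python illustration on a simulated panel driven by one cyclical factor:

      import numpy as np

      def common_component(panel, n_factors=1):
          # Principal-components estimate of the common part of each series
          mean, std = panel.mean(0), panel.std(0)
          Z = (panel - mean) / std
          U, s, Vt = np.linalg.svd(Z, full_matrices=False)
          F = U[:, :n_factors] * s[:n_factors]      # estimated factors
          L = Vt[:n_factors]                        # estimated loadings
          return (F @ L) * std + mean               # fitted common component

      rng = np.random.default_rng(11)
      T, N = 120, 20
      cycle = np.sin(np.linspace(0.0, 6.0 * np.pi, T))   # hypothetical business cycle
      panel = np.outer(cycle, rng.uniform(0.5, 1.5, N)) + 0.5 * rng.standard_normal((T, N))
      chat = common_component(panel, n_factors=1)
      print(np.corrcoef(chat[:, 0], cycle)[0, 1])        # close to +/- 1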
  23. By: Bo E. Honore (Department of Economics, Princeton University); Aureo de Paula (Department of Economics, University of Pennsylvania)
    Abstract: This paper studies the identification of a simultaneous equation model involving duration measures. It proposes a game theoretic model in which durations are determined by strategic agents. In the absence of strategic motives, the model delivers a version of the generalized accelerated failure time model. In its most general form, the system resembles a classical simultaneous equation model in which endogenous variables interact with observable and unobservable exogenous components to characterize a certain economic environment. In this paper, the endogenous variables are the individually chosen equilibrium durations. Even though a unique solution to the game is not always attainable in this context, the structural elements of the economic system are shown to be semiparametrically point identified. We also present a brief discussion of estimation ideas and a set of simulation studies on the model.
    Keywords: duration, empirical games, identification
    JEL: C10 C30 C41
    Date: 2007–04–24
    URL: http://d.repec.org/n?u=RePEc:pen:papers:08-044&r=ecm
  24. By: Xun Tang (Department of Economics, University of Pennsylvania)
    Abstract: In first-price auctions with interdependent bidder values, the distributions of private signals and values cannot be uniquely recovered from bids in Bayesian Nash equilibria. Non-identification invalidates structural analyses that rely on the exact knowledge of model primitives. In this paper I introduce tight, informative bounds on the distribution of revenues in counterfactual first-price and second-price auctions with binding reserve prices. These robust bounds are identified from distributions of equilibrium bids in first-price auctions under minimal restrictions, allowing for affiliated signals and both private- and common-value paradigms. The bounds can be used to compare auction formats and to select optimal reserve prices. I propose consistent nonparametric estimators of the bounds. I extend the approach to account for observed heterogeneity across auctions, as well as binding reserve prices in the data. I use a recent dataset of 6,721 first-price auctions of U.S. municipal bonds to estimate bounds on counterfactual revenue distributions. I then bound optimal reserve prices for sellers with various risk attitudes.
    Keywords: Empirical auctions, interdependent values, affiliated signals, partial identification, bounds, counterfactual revenues, nonparametric estimation, municipal bonds
    JEL: C14 C51 C81 D44
    Date: 2008–09–02
    URL: http://d.repec.org/n?u=RePEc:pen:papers:08-042&r=ecm
  25. By: Adam Cagliarini (Reserve Bank of Australia); Mariano Kulish (Reserve Bank of Australia)
    Abstract: Standard solution methods for linear stochastic models with rational expectations presuppose a time-invariant structure as well as an environment in which shocks are unanticipated. Consequently, credible announcements that entail future changes of the structure cannot be handled by standard solution methods. This paper develops the solution for linear stochastic rational expectations models in the face of a finite sequence of anticipated structural changes. These events encompass anticipated changes to the structural parameters and anticipated additive shocks. We apply the solution technique to some examples of practical relevance to monetary policy.
    Keywords: structural change; anticipated shocks; rational expectations
    JEL: C63 E17 E47
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:rba:rbardp:rdp2008-10&r=ecm
  26. By: Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Pierre-André Maugis (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I)
    Abstract: We present a new way of building vine copulas that allows us to create a vast number of new vine copulas, allowing for more precise modeling in high dimensions. To deal with this great number of copulas, we present a new efficient selection methodology using a lattice structure on the vine set. Our model allows for many degrees of freedom, but further improvements face numerous problems caused by the statistical and computational complexity of vines as estimators; we expose these problems in this paper. Robust n-variate models would be a great breakthrough for asset risk management in banks and insurance companies.
    Keywords: Vines, multivariate copulas, model selection.
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:hal:journl:halshs-00348884_v1&r=ecm
  27. By: Francis Vella (Georgetown University); Lídia Farré (Universidad de Alicante); Roger Klein (Rutgers University)
    Abstract: We estimate the return to education using a sample drawn from the National Longitudinal Survey of Youth 1979 (NLSY79). Rather than accounting for the endogeneity of schooling through the use of instrumental variables we employ a parametric version of the Klein and Vella (2006a) estimator. This estimator bypasses the need for instruments by exploiting features of the conditional second moments of the errors. As the Klein and Vella (2006a) procedure is semi-parametric it is computationally demanding. We illustrate how to greatly reduce the required computation by parameterizing the second moments. Accounting for endogeneity increases the estimate of the return to education by 5 percentage points, from 7.6% to 12.7%.
    Keywords: return to education, heteroskedasticity, endogeneity
    JEL: J31 C31
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:ivi:wpasad:2008-16&r=ecm
  28. By: Theall, Katherine P.; Scribner, Richard; Lynch, Sara; Simonsen, Neal; Schonlau, Matthias; Carlin, Bradley; Cohen, Deborah
    Abstract: Objective: Although there is a growing body of literature on sample size in multilevel or hierarchical modeling, few studies have examined the impact of group sizes of less than five. Design: We examined the impact of a group size of less than five on both a continuous and a dichotomous outcome in a simple two-level multilevel model, utilizing data from two studies. Setting: Models with balanced and unbalanced data of group sizes 2 to 5 were compared to models with complete data. The impact on both fixed and random components was examined. Results: Random components, particularly group-level variance estimates, were more affected by small group size than were fixed components. Both fixed and random standard error estimates were inflated with small group size. Studies with a large number of groups that are all of very small size may fail to find, or even to consider, a group-level effect when one exists, and may also be under-powered to detect fixed effects. Conclusions: Researchers working with multilevel study designs should be aware of the potential impact of small group size when a large proportion of groups has very small (< 5) sample sizes.
    Keywords: Multilevel; Neighborhood; Body Weight; Obesity; Sample Size
    JEL: C10 I12 I18
    Date: 2008–07–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:11648&r=ecm
  29. By: Bretteville-Jensen, Anne Line (Norwegian Institute for Alcohol and Drug Research (SIRUS)); Jacobi, Liana (University of Melbourne)
    Abstract: Empirical studies have found that cannabis commonly precedes consumption of drugs like amphetamine, ecstasy, cocaine and heroin. As a result a causal linkage between cannabis and subsequent hard drug use has been hypothesized. Despite mixed empirical evidence and a limited understanding of possible transmission mechanisms, the causal gateway hypothesis has been influential in formulating a strict drug policy in many western countries. Individual differences in proneness and accessibility, however, provide alternative, non-causal explanations for the observed "staircase" pattern and yield potentially different policy implications. We propose a Bayesian estimation and predictive framework to analyze the effects and relative importance of previous cannabis use, proneness and accessibility factors on hard drug initiation and to explore potential policy implications, using data from a unique recent survey of young adults in Norway. Motivated by the gateway transmission channels proposed in the literature, our model allows for a constant and a heterogeneous effect of previous cannabis use on hard drug initiation and, also, a more flexible correlation pattern for the unobservables. We find that proneness, accessibility and previous cannabis use contribute to the observed higher drug use pattern among cannabis users. The latter has the largest effect and is driven by various transmission channels.
    Keywords: accessibility, Bayesian prior-posterior analysis, Bayesian predictive analysis, cannabis gateway, cannabis use, hard drug use, Markov Chain Monte Carlo, policy, proneness
    JEL: C11 C35 D12 I19
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp3879&r=ecm
  30. By: Mario Cerrato; Hyunsok Kim; Ronald MacDonald
    Abstract: The breakdown of the Bretton Woods system and the adoption of generalised floating exchange rates ushered in a new era of exchange rate volatility and uncertainty. This increased volatility led economists to search for economic models able to describe observed exchange rate behavior. In the present paper we propose more general STAR transition functions which encompass both threshold nonlinearity and asymmetric effects. Our framework allows for a gradual adjustment from one regime to another, and considers threshold effects by encompassing other existing models, such as TAR models. We apply our methodology to two different exchange rate data-sets: one of official nominal exchange rates for developing countries, and the second of black market exchange rates for emerging market economies.
    Keywords: unit root tests, threshold autoregressive models, purchasing power parity.
    JEL: C22 F31
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:gla:glaewp:2008_33&r=ecm
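    Sketch: in the STAR family the speed of adjustment depends on the size of the deviation through a smooth transition function; the classical symmetric (exponential) case uses G(y) = 1 - exp(-gamma (y - c)^2), near zero inside a band around c (random-walk-like inner regime) and near one far outside (mean-reverting outer regime). The paper generalises this transition; the Python sketch below simulates the classical case only:

      import numpy as np

      def estar_transition(y_lag, gamma, c):
          # Symmetric exponential transition: 0 near c, approaching 1 far from c
          return 1.0 - np.exp(-gamma * (y_lag - c) ** 2)

      def star_step(y_lag, rho_out, gamma, c, eps):
          # Unit-root behavior inside the band, mean reversion outside
          G = estar_transition(y_lag, gamma, c)
          return y_lag + G * rho_out * (y_lag - c) + eps

      rng = np.random.default_rng(12)
      y = np.zeros(1000)                      # hypothetical real exchange rate deviation
      for t in range(1, 1000):
          y[t] = star_step(y[t - 1], rho_out=-0.2, gamma=0.5, c=0.0,
                           eps=0.1 * rng.standard_normal())
      print(y.min(), y.max())                 # deviations stay bounded despite inner unit root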
  31. By: Thilo Moseler (Universität Konstanz); Christian Bender
    Abstract: In this paper we explain how the importance sampling technique can be generalized from simulating expectations to computing the initial value of backward SDEs with Lipschitz continuous driver. By means of a measure transformation we introduce a variance-reduced version of the forward approximation scheme of Bender and Denk [4] for simulating backward SDEs. A fully implementable algorithm using the least-squares Monte Carlo approach is developed and its convergence is proved. The success of the generalized importance sampling is illustrated by numerical examples in the context of Asian option pricing under different interest rates for borrowing and lending.
    Date: 2008–09–01
    URL: http://d.repec.org/n?u=RePEc:knz:cofedp:0811&r=ecm

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.