nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒05‒02
twenty-two papers chosen by
Sune Karlsson
Orebro University

  1. Bias Reduction of Long Memory Parameter Estimators via the Pre-filtered Sieve Bootstrap By D.S. Poskitt; Gael M. Martin; Simone D. Grose
  2. Consistent Estimation of the “True” Fixed-effects Stochastic Frontier Model By Federico Belotti; Giuseppe Ilardi
  3. Goodness-of-fit testing for fractional diffusions By Mark Podolskij; Katrin Wasmuth
  4. A Class of Adaptive Importance Sampling Weighted EM Algorithms for Efficient and Robust Posterior and Predictive Simulation By Lennart Hoogerheide; Anne Opschoor; Herman K. van Dijk
  5. Higher Order Improvements of the Sieve Bootstrap for Fractionally Integrated Processes By D.S. Poskitt; Simone D. Grose; Gael M. Martin
  6. Unit roots, nonlinearities and structural breaks By Niels Haldrup; Robinson Kruse; Timo Teräsvirta; Rasmus T. Varneskov
  7. Parametric Inference and Dynamic State Recovery from Option Panels By Torben G. Andersen; Nicola Fusari; Viktor Todorov
  8. Estimating a Semiparametric Asymmetric Stochastic Volatility Model with a Dirichlet Process Mixture By Mark J Jensen; John M Maheu
  9. Econometric analysis of multivariate realised QML: efficient positive semi-definite estimators of the covariation of equity prices By Neil Shephard; Dacheng Xiu
  10. Model Selection in Kernel Ridge Regression By Peter Exterkate
  11. Nonparametric prediction of stock returns guided by prior knowledge By Michael Scholz; Jens Perch Nielsen; Stefan Sperlich
  12. VAR Modeling and Business Cycle Analysis: A Taxonomy of Errors By D.S. Poskitt; Wenying Yao
  13. Alternative Methodology for Turning-Point Detection in Business Cycle: A Wavelet Approach By Peter Martey Addo; Monica Billio; Dominique Guegan
  14. Measuring Test Measurement Error: A General Approach By Donald Boyd; Hamilton Lankford; Susanna Loeb; James Wyckoff
  15. Forecasting by factors, by variables, or both? By Jennifer L. Castle; David F. Hendry; Michael P. Clements
  16. Econometric modeling of exchange rate volatility and jumps By Deniz Erdemlioglu; Sébastien Laurent; Christopher J. Neely
  17. Inference of Bidders’ Risk Attitudes in Ascending Auctions with Endogenous Entry, Second Version By Hanming Fang; Xun Tang
  18. Record Statistics for Multiple Random Walks By Gregor Wergen; Satya N. Majumdar; Gregory Schehr
  19. Modelling electricity day–ahead prices by multivariate Lévy semistationary processes By Almut E. D. Veraart; Luitgard A. M. Veraart
  20. Inferred vs Stated Attribute Non-Attendance in Choice Experiments: A Study of Doctors' Prescription Behaviour By Arne Risa Hole; Julie Riise Kolstad; Dorte Gyrd-Hansen
  21. Ziliak and McCloskey’s Criticisms of Significance Tests: A Damage Assessment By Thomas Mayer
  22. Model uncertainty, state uncertainty, and state-space models By Yulei Luo; Jun Nie; Eric R. Young

  1. By: D.S. Poskitt; Gael M. Martin; Simone D. Grose
    Abstract: This paper investigates the use of bootstrap-based bias correction of semi-parametric estimators of the long memory parameter in fractionally integrated processes. The re-sampling method involves the application of the sieve bootstrap to data pre-filtered by a preliminary semi-parametric estimate of the long memory parameter. Theoretical justification for using the bootstrap techniques to bias adjust log periodogram and semi-parametric local Whittle estimators of the memory parameter is provided. Simulation evidence comparing the performance of the bootstrap bias correction with analytical bias correction techniques is also presented. The bootstrap method is shown to produce notable bias reductions, in particular when applied to an estimator for which analytical adjustments have already been used. For reasonably large sample sizes, the empirical coverage of confidence intervals based on the bias-adjusted estimators is very close to the nominal level, more so than for the comparable analytically adjusted estimators. The precision of inferences (as measured by interval length) is also greater when the bootstrap, rather than an analytical adjustment, is used to bias correct.
    Keywords: Analytical bias correction, bootstrap bias correction, confidence interval, coverage, precision, log periodogram estimator, local Whittle estimator.
    JEL: C18 C22 C52
    Date: 2012–04
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2012-8&r=ecm
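As a rough illustration of the idea in item 1, the sketch below bias-corrects a log-periodogram (GPH) estimate of the long memory parameter with a pre-filtered AR-sieve bootstrap. It is a minimal sketch of the general strategy, not the authors' procedure: the bandwidth, sieve order, filter truncation and number of bootstrap replications are arbitrary illustrative choices, and the helper functions are hypothetical.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the long memory parameter d."""
    n = len(x)
    m = m or int(np.sqrt(n))                       # illustrative bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n      # first m Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    z = -2 * np.log(2 * np.sin(lam / 2))           # regressor; slope equals d
    return np.polyfit(z, np.log(I), 1)[0]

def frac_diff(x, d, trunc=200):
    """Truncated fractional differencing filter (1 - L)^d applied to x."""
    w = np.ones(trunc)
    for k in range(1, trunc):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.array([w[:min(t + 1, trunc)] @ x[t::-1][:min(t + 1, trunc)]
                     for t in range(len(x))])

def prefiltered_sieve_bias_correction(x, p=5, B=199, seed=0):
    """Bootstrap bias correction of the GPH estimator using an AR(p) sieve
    fitted to data pre-filtered with the first-stage estimate of d."""
    rng = np.random.default_rng(seed)
    d_hat = gph_estimate(x)
    u = frac_diff(x, d_hat)                        # approximately short memory
    Y, X = u[p:], np.column_stack([u[p - j:-j] for j in range(1, p + 1)])
    phi = np.linalg.lstsq(X, Y, rcond=None)[0]
    eps = Y - X @ phi
    d_star = np.empty(B)
    for b in range(B):
        e = rng.choice(eps - eps.mean(), size=len(u), replace=True)
        u_star = np.zeros(len(u))
        for t in range(p, len(u)):                 # AR sieve bootstrap recursion
            u_star[t] = u_star[t - p:t][::-1] @ phi + e[t]
        x_star = frac_diff(u_star, -d_hat)         # re-integrate to long memory
        d_star[b] = gph_estimate(x_star)
    return d_hat - (d_star.mean() - d_hat)         # bias-corrected estimate
```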
  2. By: Federico Belotti (Faculty of Economics, University of Rome "Tor Vergata"); Giuseppe Ilardi (Economic and Financial Statistics Department, Bank of Italy.)
    Abstract: The classic stochastic frontier panel data models provide no mechanism to disentangle individual time-invariant unobserved heterogeneity from inefficiency. Greene (2005a,b) proposed a fixed-effects model specification that distinguishes these two latent components and allows a time-varying inefficiency distribution. However, the maximum likelihood estimator proposed by Greene leads to biased inefficiency estimates due to the incidental parameters problem. In this paper, we propose two alternative estimation procedures that, by relying on a first-difference data transformation, achieve consistency as n goes to infinity with T fixed. Evidence from Monte Carlo simulations shows that both approaches perform well in finite samples, even when the samples are small.
    Keywords: Stochastic frontiers, Fixed-effects, Panel data, Marginal simulated likelihood, Pairwise differencing
    JEL: C13 C16 C23
    Date: 2012–04–18
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:231&r=ecm
  3. By: Mark Podolskij (Heidelberg University and CREATES); Katrin Wasmuth (Heidelberg University)
    Abstract: This paper presents a goodness-of-fit test for the volatility function of an SDE driven by a Gaussian process with stationary and centered increments. Under rather weak assumptions on the Gaussian process, we provide a procedure for testing whether the unknown volatility function lies in a given linear functional space or not. This testing problem is highly non-trivial, because the volatility function is not identifiable in our model. The underlying fractional diffusion is assumed to be observed at high frequency on a fixed time interval, and the test statistic is based on weighted power variations. Our test statistic is consistent against any fixed alternative.
    Keywords: central limit theorem, goodness-of-fit tests, high frequency observations, fractional diffusions, stable convergence.
    JEL: C10 C13 C14
    Date: 2012–04–16
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-13&r=ecm
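The test statistic in item 3 is built from weighted power variations of high-frequency increments. The snippet below computes a plain weighted power variation as the basic building block; the weights, power and normalisation are illustrative placeholders, not the paper's statistic.

```python
import numpy as np

def weighted_power_variation(x, p=2.0, weights=None):
    """Weighted power variation sum_i w_i |x_i - x_{i-1}|^p of an observed path."""
    dx = np.diff(x)
    w = np.ones_like(dx) if weights is None else np.asarray(weights)
    return np.sum(w * np.abs(dx) ** p)

# example on a simulated Brownian path observed at n high-frequency points
rng = np.random.default_rng(1)
n = 10_000
path = np.cumsum(rng.standard_normal(n) / np.sqrt(n))
print(weighted_power_variation(path, p=2.0))   # close to 1, the quadratic variation
```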
  4. By: Lennart Hoogerheide (VU University Amsterdam); Anne Opschoor (Erasmus University Amsterdam); Herman K. van Dijk (Erasmus University Rotterdam, and VU University Amsterdam)
    Abstract: A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that approximates accurately the target distribution - typically a posterior distribution, of which we only require a kernel - in the sense that the Kullback-Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling and Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis-Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner. Second, we introduce a permutation-augmented MitISEM approach. Third, we propose a partial MitISEM approach, which aims at approximating the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models like DCC or mixture GARCH models and a mixture instrumental variables model.
    Keywords: mixture of Student-t distributions; importance sampling; Kullback-Leibler divergence; Expectation Maximization; Metropolis-Hastings algorithm; predictive likelihood; DCC GARCH; mixture GARCH; instrumental variables
    JEL: C11 C22 C26
    Date: 2012–03–23
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20120026&r=ecm
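The core of the MitISEM approach in item 4 is an importance-sampling-weighted EM step that adapts a mixture candidate toward a posterior kernel, after which the candidate is used for IS or MH. The sketch below applies the same idea with Gaussian rather than Student-t components and a toy bimodal target; the target, component count, draw sizes and ridge term are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def log_target(theta):
    """Toy bimodal posterior kernel standing in for a real model's posterior."""
    return np.logaddexp(mvn.logpdf(theta, mean=[-2.0, 0.0]),
                        mvn.logpdf(theta, mean=[2.0, 0.0]))

def is_weighted_em(log_target, d=2, K=2, n=2_000, iters=15, seed=0):
    """Adapt a K-component Gaussian mixture candidate to a target kernel by
    importance-weighted EM (MitISEM itself uses Student-t components)."""
    rng = np.random.default_rng(seed)
    means = rng.normal(scale=3.0, size=(K, d))
    covs = np.stack([4.0 * np.eye(d)] * K)
    probs = np.full(K, 1.0 / K)
    for _ in range(iters):
        comp = rng.choice(K, size=n, p=probs)                 # draw from candidate
        draws = np.array([rng.multivariate_normal(means[k], covs[k]) for k in comp])
        comp_logpdf = np.array([np.log(probs[k]) + mvn.logpdf(draws, means[k], covs[k])
                                for k in range(K)])
        log_q = np.logaddexp.reduce(comp_logpdf, axis=0)
        w = np.exp(log_target(draws) - log_q)                 # importance weights
        w /= w.sum()
        resp = np.exp(comp_logpdf - log_q)                    # responsibilities
        resp /= resp.sum(axis=0)
        for k in range(K):                                    # weighted M-step
            wk = w * resp[k]
            probs[k] = wk.sum()
            means[k] = (wk[:, None] * draws).sum(0) / wk.sum()
            dev = draws - means[k]
            covs[k] = ((wk[:, None, None] * dev[:, :, None] * dev[:, None, :]).sum(0)
                       / wk.sum() + 1e-6 * np.eye(d))         # small ridge for stability
        probs /= probs.sum()
    return probs, means, covs                                 # candidate for IS or MH

probs, means, _ = is_weighted_em(log_target)
print("mixture weights:", probs, "\ncomponent means:\n", means)
```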
  5. By: D.S. Poskitt; Simone D. Grose; Gael M. Martin
    Abstract: This paper investigates the accuracy of bootstrap-based inference in the case of long memory fractionally integrated processes. The re-sampling method is based on the semi-parametric sieve approach, whereby the dynamics in the process used to produce the bootstrap draws are captured by an autoregressive approximation. Application of the sieve method to data pre-filtered by a semi-parametric estimate of the long memory parameter is also explored. Higher-order improvements yielded by both forms of re-sampling are demonstrated using Edgeworth expansions for a broad class of linear statistics. The methods are then applied to the problem of estimating the sampling distribution of the sample mean under long memory, in an experimental setting. The pre-filtered version of the bootstrap is shown to avoid the distinct underestimation of the sampling variance of the mean which the raw sieve method demonstrates in finite samples, higher order accuracy of the latter notwithstanding.
    Keywords: Bias, bootstrap-based inference, Edgeworth expansion, pre-filtered sieve bootstrap, sampling distribution.
    JEL: C18 C22 C52
    Date: 2012–04
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2012-9&r=ecm
  6. By: Niels Haldrup (Aarhus University and CREATES); Robinson Kruse (Leibniz Universität Hannover and CREATES); Timo Teräsvirta (Aarhus University and CREATES); Rasmus T. Varneskov (Aarhus University and CREATES)
    Abstract: One of the most influential research fields in econometrics over the past decades concerns unit root testing in economic time series. In macroeconomics, much of the interest in the area originates from the fact that when unit roots are present, shocks to the time series processes have a persistent effect, with resulting policy implications. From a statistical perspective, on the other hand, the presence of unit roots has dramatic implications for econometric model building, estimation, and inference if the so-called spurious regression problem is to be avoided. The present paper provides a selective review of contributions to the field of unit root testing over the past three decades. We discuss the nature of stochastic and deterministic trend processes, including break processes, that are likely to affect unit root inference. A range of the most popular unit root tests are presented and their modifications to situations with breaks are discussed. We also review some results on unit root testing within the framework of non-linear processes.
    Keywords: Unit roots, nonlinearity, structural breaks.
    JEL: C2 C22
    Date: 2012–04–18
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-14&r=ecm
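As a minimal illustration of the unit root testing reviewed in item 6, the snippet below runs an augmented Dickey-Fuller test on a simulated random walk and a stationary AR(1), using the adfuller routine from statsmodels; the simulation settings are arbitrary.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
eps = rng.standard_normal(500)
random_walk = np.cumsum(eps)                 # unit root: shocks are persistent
ar1 = np.zeros(500)
for t in range(1, 500):                      # stationary AR(1) with phi = 0.5
    ar1[t] = 0.5 * ar1[t - 1] + eps[t]

for name, y in [("random walk", random_walk), ("AR(1)", ar1)]:
    stat, pvalue, *_ = adfuller(y, regression="c", autolag="AIC")
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```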
  7. By: Torben G. Andersen (Northwestern University, NBER, and CREATES); Nicola Fusari (Northwestern University); Viktor Todorov (Northwestern University)
    Abstract: We develop a new parametric estimation procedure for option panels observed with error which relies on asymptotic approximations assuming an ever increasing set of observed option prices in the moneyness-maturity (cross-sectional) dimension, but with a fixed time span. We develop consistent estimators of the parameter vector and the dynamic realization of the state vector that governs the option price dynamics. The estimators converge stably to a mixed-Gaussian law and we develop feasible estimators for the limiting variance. We provide semiparametric tests for the option price dynamics based on the distance between the spot volatility extracted from the options and the one obtained nonparametrically from high-frequency data on the underlying asset. We further construct new formal tests of the model fit for specific regions of the volatility surface and for the stability of the risk-neutral dynamics over a given period of time. A large-scale Monte Carlo study indicates that the inference procedures work well for empirically realistic model specifications and sample sizes. In an empirical application to S&P 500 index options we extend the popular double-jump stochastic volatility model to allow for time-varying risk premia of extreme events, i.e., jumps, as well as a more flexible relation between the risk premia and the level of risk. We show that both extensions provide a significantly improved characterization, both statistically and economically, of observed option prices.
    Keywords: Option Pricing, Inference, Risk Premia, Jumps, Latent State Vector, Stochastic Volatility, Specification Testing, Stable Convergence.
    JEL: C51 C52 G12
    Date: 2011–05–29
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-11&r=ecm
  8. By: Mark J Jensen; John M Maheu
    Abstract: In this paper we extend the parametric asymmetric stochastic volatility model (ASV), where returns are correlated with volatility, by flexibly modeling the bivariate distribution of the return and volatility innovations nonparametrically. Its novelty is in modeling the joint conditional return-volatility distribution with an infinite mixture of bivariate Normal distributions with mean zero vectors but unknown mixture weights and covariance matrices. This semiparametric ASV model nests stochastic volatility models whose innovations are distributed as either Normal or Student-t distributions, and the response of volatility to unexpected return shocks is more general than the fixed asymmetric response of the ASV model. The unknown mixture parameters are modeled with a Dirichlet Process prior. This prior ensures a parsimonious, finite posterior mixture that best represents the distribution of the innovations and a straightforward sampler of the conditional posteriors. We develop a Bayesian Markov chain Monte Carlo sampler to fully characterize the parametric and distributional uncertainty. Nested model comparisons and out-of-sample predictions, based on cumulative marginal likelihoods and one-day-ahead predictive log-Bayes factors between the semiparametric and parametric versions of the ASV model, show that the semiparametric model forecasts empirical market returns more accurately. A major reason is how volatility responds to an unexpected market movement. When the market is tranquil, expected volatility reacts to a negative (positive) price shock by rising (initially declining, but then rising when the positive shock is large). However, when the market is volatile, the degree of asymmetry and the size of the response in expected volatility are muted. In other words, when times are good, no news is good news, but when times are bad, neither good nor bad news matters with regards to volatility.
    Keywords: Bayesian nonparametrics, cumulative Bayes factor, Dirichlet process mixture, infinite mixture model, leverage effect, marginal likelihood, MCMC, non-normal, stochastic volatility, volatility-return relationship
    JEL: C11 C14 C53 C58
    Date: 2012–04–20
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-453&r=ecm
  9. By: Neil Shephard; Dacheng Xiu
    Abstract: Estimating the covariance and correlation between assets using high frequency data is challenging due to market microstructure effects and Epps effects. In this paper we extend Xiu’s univariate QML approach to the multivariate case, carrying out inference as if the observations arise from an asynchronously observed vector scaled Brownian model observed with error. Under stochastic volatility the resulting QML estimator is positive semi-definite, uses all available data, and is consistent and asymptotically mixed normal. The quasi-likelihood is computed using a Kalman filter and optimised using a relatively simple EM algorithm which scales well with the number of assets. We derive the theoretical properties of the estimator and prove that it achieves the efficient rate of convergence. We show how to make it achieve the non-parametric efficiency bound for this problem. The estimator is also analysed using Monte Carlo methods and applied to equity data that are distinct in their levels of liquidity.
    Keywords: EM algorithm, Kalman filter, Market microstructure noise, Non-synchronous data, Portfolio optimisation, Quadratic variation, Quasi-likelihood, Semimartingale, Volatility
    JEL: C14 C58 D53 D81
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:604&r=ecm
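Item 9 computes the quasi-likelihood with a Kalman filter, treating the efficient price as a latent random walk observed with noise. Below is a univariate sketch of that idea (close in spirit to the univariate approach the paper extends); the sample size, variances and optimizer are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.optimize import minimize

def kalman_quasi_loglik(params, y):
    """Gaussian quasi-log-likelihood of y_i = x_i + u_i, where the efficient
    log-price x is a random walk with step variance q and u is i.i.d. noise
    with variance r (a local-level state-space model)."""
    q, r = np.exp(params)                      # work with log-variances
    x, P, ll = y[0], r, 0.0
    for obs in y[1:]:
        x_pred, P_pred = x, P + q              # prediction step
        F = P_pred + r                         # innovation variance
        v = obs - x_pred                       # innovation
        ll += -0.5 * (np.log(2 * np.pi * F) + v ** 2 / F)
        K = P_pred / F                         # Kalman gain
        x, P = x_pred + K * v, (1 - K) * P_pred
    return ll

# simulate one "day" of noisy high-frequency prices and maximise the quasi-likelihood
rng = np.random.default_rng(0)
n, q_true, r_true = 10_000, 1e-8, 1e-7
x = np.cumsum(np.sqrt(q_true) * rng.standard_normal(n))
y = x + np.sqrt(r_true) * rng.standard_normal(n)

res = minimize(lambda p: -kalman_quasi_loglik(p, y),
               x0=[np.log(1e-7), np.log(1e-7)], method="Nelder-Mead")
q_hat, r_hat = np.exp(res.x)
print("integrated variance estimate:", n * q_hat, "  noise variance estimate:", r_hat)
```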
  10. By: Peter Exterkate (Department of Economics and CREATES Aarhus University)
    Abstract: Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated with all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely applicable, and we recommend their use instead of the popular polynomial kernels in general settings in which no information on the data-generating process is available.
    Keywords: Nonlinear forecasting, shrinkage estimation, kernel methods, high dimensionality
    JEL: C51 C53 C63
    Date: 2012–02–28
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-10&r=ecm
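Item 10 is about choosing the kernel and its tuning parameters by cross-validation over small grids. The sketch below implements Gaussian-kernel ridge regression from scratch on toy data; the grid values and the data-generating process are illustrative, not the paper's recommended rules of thumb.

```python
import numpy as np

def gaussian_kernel(X1, X2, length_scale):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

def krr_fit_predict(X_train, y_train, X_test, length_scale, ridge):
    K = gaussian_kernel(X_train, X_train, length_scale)
    alpha = np.linalg.solve(K + ridge * np.eye(len(K)), y_train)
    return gaussian_kernel(X_test, X_train, length_scale) @ alpha

# toy nonlinear regression data
rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.3 * rng.standard_normal(300)
X_tr, y_tr, X_te, y_te = X[:200], y[:200], X[200:], y[200:]

# cross-validation over a small grid of tuning parameters (grid values are illustrative)
best = None
for ls in (0.5, 1.0, 2.0):
    for lam in (1e-3, 1e-2, 1e-1):
        cv_mse = 0.0
        for fold in np.array_split(np.arange(200), 5):
            mask = np.ones(200, dtype=bool)
            mask[fold] = False
            pred = krr_fit_predict(X_tr[mask], y_tr[mask], X_tr[fold], ls, lam)
            cv_mse += np.mean((y_tr[fold] - pred) ** 2)
        if best is None or cv_mse < best[0]:
            best = (cv_mse, ls, lam)

_, ls, lam = best
pred = krr_fit_predict(X_tr, y_tr, X_te, ls, lam)
print(f"selected length scale {ls}, ridge {lam}, test MSE {np.mean((y_te - pred) ** 2):.3f}")
```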
  11. By: Michael Scholz (Karl-Franzens University Graz); Jens Perch Nielsen (Cass Business School); Stefan Sperlich (Université de Genève)
    Abstract: One of the most studied questions in economics and finance is whether equity returns or premiums can be predicted by empirical models. While many authors favor the historical mean or other simple parametric methods, this article focuses on nonlinear relationships. A straightforward bootstrap test confirms that non- and semiparametric techniques help to obtain better forecasts. It is demonstrated how economic theory can directly guide a model in an innovative way. For American data, the inclusion of prior knowledge enables a further notable improvement of 35% in the prediction of excess stock returns compared to the fully nonparametric model, as measured both by the validated R2 and by classical out-of-sample validation. Statistically, a bias and dimension reduction method is proposed to impose more structure on the estimation process as an adequate way to circumvent the curse of dimensionality.
    Keywords: Prediction of Stock Returns, Cross-Validation, Prior Knowledge, Bias Reduction, Dimension Reduction
    JEL: C14 C53 G17
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:grz:wpaper:2012-02&r=ecm
  12. By: D.S. Poskitt; Wenying Yao
    Abstract: In this article we investigate the theoretical behaviour of finite lag VAR(n) models fitted to time series that in truth come from an infinite order VAR(∞) data generating mechanism. We show that overall error can be broken down into two basic components, an estimation error that stems from the difference between the parameter estimates and their population ensemble VAR(n) counterparts, and an approximation error that stems from the difference between the VAR(n) and the true VAR(∞). The two sources of error are shown to be present in other performance indicators previously employed in the literature to characterize so-called truncation effects. Our theoretical analysis indicates that the magnitude of the estimation error exceeds that of the approximation error, but experimental results based upon a prototypical real business cycle model indicate that in practice the approximation error approaches its asymptotic position far more slowly than does the estimation error, their relative orders of magnitude notwithstanding. The experimental results suggest that with sample sizes and lag lengths like those commonly employed in practice, VAR(n) models are likely to exhibit serious errors of both types when attempting to replicate the dynamics of the true underlying process, and that inferences based on VAR(n) models can be very untrustworthy.
    Keywords: VAR, estimation error, approximation error, RBC model
    JEL: C18 C32 C52 C54 E37
    Date: 2012–04–19
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2012-11&r=ecm
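A crude way to see the two error components discussed in item 12 is to fit finite-order autoregressions to data whose true autoregressive representation is infinite. The univariate sketch below uses an ARMA(1,1) data generating process (whose AR(∞) weights statsmodels can compute) as a stand-in for the paper's VAR(∞); the two printed proxies are illustrative, not the paper's exact decomposition into estimation and approximation error.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess, arma2ar
from statsmodels.tsa.ar_model import AutoReg

# ARMA(1,1) with phi = 0.5, theta = 0.4, which has an infinite-order AR representation
ar_poly, ma_poly = np.array([1.0, -0.5]), np.array([1.0, 0.4])
true_weights = -arma2ar(ar_poly, ma_poly, 51)[1:]         # first 50 AR(infinity) weights

np.random.seed(7)
y = ArmaProcess(ar_poly, ma_poly).generate_sample(nsample=200)

for p in (1, 2, 4, 8):
    est = AutoReg(y, lags=p, trend="c").fit().params[1:]  # fitted AR(p) coefficients
    tail = np.abs(true_weights[p:]).sum()                 # ignored AR(infinity) tail
    dist = np.abs(est - true_weights[:p]).sum()           # distance from truncated truth
    print(f"p={p}:  approximation proxy = {tail:.3f}   estimation proxy = {dist:.3f}")
```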
  13. By: Peter Martey Addo (Centre d'Economie de la Sorbonne); Monica Billio (Ca' Foscari university - Department of Economics); Dominique Guegan (Centre d'Economie de la Sorbonne - Paris School of Economics)
    Abstract: We provide a signal modality analysis to characterize and detect nonlinearity schemes in the US Industrial Production Index time series. The analysis is achieved by using the recently proposed "delay vector variance" (DVV) method, which examines local predictability of a signal in the phase space to detect the presence of determinism and nonlinearity in a time series. Optimal embedding parameters used in the DVV analysis are obtained via a differential entropy based method using wavelet-based surrogates. A complex Morlet wavelet is employed to detect and characterize the US business cycle. A comprehensive analysis of the feasibility of this approach is provided. Our results coincide with the business cycle peak and trough dates published by the National Bureau of Economic Research (NBER).
    Keywords: Nonlinearity analysis, surrogates, Delay Vector Variance (DVV) method, wavelets, business cycle, embedding parameters.
    JEL: C14 C22 C40 E32
    Date: 2012–04
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:12023&r=ecm
  14. By: Donald Boyd; Hamilton Lankford; Susanna Loeb; James Wyckoff
    Abstract: Test-based accountability, including value-added assessments, and experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet we know little regarding important properties of these tests, one example being the extent of test measurement error and its implications for educational policy and practice. While test vendors provide estimates of split-test reliability, these measures do not account for potentially important day-to-day differences in student performance. We show there is a credible, low-cost approach for estimating the total test measurement error that can be applied when one or more cohorts of students take three or more tests in the subject of interest (e.g., state assessments in three consecutive grades). Our method generalizes the test-retest framework by allowing for either growth or decay in knowledge and skills between tests as well as variation in the degree of measurement error across tests. The approach maintains relatively unrestrictive, testable assumptions regarding the structure of student achievement growth. Estimation only requires descriptive statistics (e.g., correlations) for the tests. When student-level test-score data are available, the extent and pattern of measurement error heteroskedasticity can also be estimated. Utilizing math and ELA test data from New York City, we estimate that the overall extent of test measurement error is more than twice as large as that reported by the test vendor, and we demonstrate how using estimates of the total measurement error and the degree of heteroskedasticity along with observed scores can yield meaningful improvements in the precision of student achievement and achievement-gain estimates.
    JEL: I21
    Date: 2012–04
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:18010&r=ecm
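The approach in item 14 needs only descriptive statistics on three or more tests. The snippet below shows the simplest textbook calculation in that spirit: under a first-order Markov (quasi-simplex) model for true achievement with mutually independent test errors, the middle test's reliability, and hence its measurement error variance, follows from three correlations. The numbers and the specific formula are an illustration of the general idea, not the authors' estimator.

```python
import numpy as np

def middle_test_measurement_error(corr12, corr23, corr13, var2):
    """Quasi-simplex (three-test) measurement error variance for the middle test,
    assuming true achievement follows a first-order Markov growth process and
    test errors are mutually independent (a textbook special case, not the
    authors' more general estimator)."""
    reliability2 = corr12 * corr23 / corr13      # share of score variance that is "true"
    return (1.0 - reliability2) * var2           # measurement error variance of test 2

# illustrative (made-up) descriptive statistics for three consecutive-grade tests
print(middle_test_measurement_error(corr12=0.78, corr23=0.80, corr13=0.70, var2=1.0))
```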
  15. By: Jennifer L. Castle; David F. Hendry; Michael P. Clements
    Abstract: We consider forecasting with factors, variables and both, modeling in-sample using Autometrics so all principal components and variables can be included jointly, while tackling multiple breaks by impulse-indicator saturation. A forecast-error taxonomy for factor models highlights the impacts of location shifts on forecast-error biases. Forecasting US GDP over 1-, 4- and 8-step horizons using the dataset from Stock and Watson (2009) updated to 2011:2 shows factor models are more useful for nowcasting or short-term forecasting, but their relative performance declines as the forecast horizon increases. Forecasts for GDP levels highlight the need for robust strategies such as intercept corrections or differencing when location shifts occur, as in the recent financial crisis.
    Keywords: Model selection, Factor models, Forecasting, Impulse-indicator saturation, Autometrics
    JEL: C51 C22
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:600&r=ecm
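Item 15 compares forecasts from factors, from variables, and from both. The sketch below builds only the factor side: a stripped-down diffusion-index forecast from the first principal components of a simulated panel. The panel, loadings and factor count are made-up illustrations, and the paper's Autometrics-based joint selection of factors and variables is not implemented here.

```python
import numpy as np

def pc_factor_forecast(X, y, h=1, n_factors=3):
    """Forecast y_{T+h} from the first n_factors principal components of a large
    predictor panel X (T x N) plus the current value of y."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    F = Xs @ Vt[:n_factors].T                              # estimated factors, T x n_factors
    # regress y_{t+h} on (1, F_t, y_t) over the available sample
    Z = np.column_stack([np.ones(len(y) - h), F[:-h], y[:-h]])
    beta = np.linalg.lstsq(Z, y[h:], rcond=None)[0]
    z_T = np.concatenate(([1.0], F[-1], [y[-1]]))          # latest observations
    return z_T @ beta

# toy panel driven by two common factors, and a target that loads on them
rng = np.random.default_rng(5)
T, N = 200, 50
factors = rng.standard_normal((T, 2)).cumsum(axis=0) * 0.1
X = factors @ rng.standard_normal((2, N)) + rng.standard_normal((T, N))
y = factors @ np.array([1.0, -0.5]) + 0.5 * rng.standard_normal(T)
print("1-step-ahead forecast of y:", pc_factor_forecast(X, y, h=1))
```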
  16. By: Deniz Erdemlioglu; Sébastien Laurent; Christopher J. Neely
    Abstract: This chapter reviews the rapid advances in foreign exchange volatility modeling made in the last three decades. Academic researchers have sought to fit the three major characteristics of foreign exchange volatility: intraday periodicity, autocorrelation and discontinuities in prices. Early research modeled the autocorrelation in daily and weekly squared foreign exchange returns with ARCH/GARCH models. Increased computing power and availability of high-frequency data allowed later researchers to improve volatility and jumps estimates. Researchers also found it useful to incorporate information about periodic volatility patterns and macroeconomic announcements in their calculations. This article details these volatility and jump estimation methods, compares those methods empirically and provides some suggestions for further research.
    Keywords: Foreign exchange ; Time-series analysis
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2012-008&r=ecm
  17. By: Hanming Fang (Department of Economics, University of Pennsylvania); Xun Tang (Department of Economics, University of Pennsylvania)
    Abstract: Bidders’ risk attitudes have important implications for sellers seeking to maximize expected revenues. In ascending auctions, auction theory predicts that bid distributions in Bayesian Nash equilibrium do not convey any information about bidders' risk preferences. We propose a new approach for inference of bidders’ risk attitudes when they make endogenous participation decisions. Our approach is based on the idea that the risk premium required for entry into the auction - the difference between ex ante expected profits from entry and the certainty equivalent - is strictly positive if and only if bidders are risk averse. We show that bidders' expected profits from entry into auctions are nonparametrically recoverable if a researcher observes the distribution of transaction prices, bidders' entry decisions and some noisy measures of entry costs. We propose a nonparametric test which attains the correct level asymptotically under the null of risk-neutrality and is consistent under fixed alternatives. We provide Monte Carlo evidence of the finite sample performance of the test. We also establish identification of risk attitudes in more general auction models, where in the entry stage bidders receive signals that are correlated with private values to be drawn in the bidding stage.
    Keywords: Ascending auctions, Risk attitudes, Endogenous entry, Nonparametric Test, Bootstrap
    JEL: D44 C12 C14
    Date: 2011–05–28
    URL: http://d.repec.org/n?u=RePEc:pen:papers:12-016&r=ecm
  18. By: Gregor Wergen; Satya N. Majumdar; Gregory Schehr
    Abstract: We study the statistics of the number of records R_{n,N} for N identical and independent symmetric discrete-time random walks of n steps in one dimension, all starting at the origin at step 0. At each time step, each walker jumps by a random length drawn independently from a symmetric and continuous distribution. We consider two cases: (I) when the variance \sigma^2 of the jump distribution is finite and (II) when \sigma^2 is divergent as in the case of L\'evy flights with index 0 < \mu < 2. In both cases we find that the mean record number <R_{n,N}> grows universally as \sim \alpha_N \sqrt{n} for large n, but with a very different behavior of the amplitude \alpha_N for N > 1 in the two cases. We find that for large N, \alpha_N \approx 2 \sqrt{\log N} independently of \sigma^2 in case I. In contrast, in case II, the amplitude approaches an N-independent constant for large N, \alpha_N \approx 4/\sqrt{\pi}, independently of 0<\mu<2. For finite \sigma^2 we argue, and this is confirmed by our numerical simulations, that the full distribution of (R_{n,N}/\sqrt{n} - 2 \sqrt{\log N}) \sqrt{\log N} converges to a Gumbel law as n \to \infty and N \to \infty. In case II, our numerical simulations indicate that the distribution of R_{n,N}/\sqrt{n} converges, for n \to \infty and N \to \infty, to a universal nontrivial distribution, independently of \mu. We discuss the applications of our results to the study of the record statistics of 366 daily stock prices from the Standard & Poor's 500 index.
    Date: 2012–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1204.5039&r=ecm
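A short Monte Carlo check of the finite-variance result in item 18: the mean record number for N Gaussian random walks should grow roughly like 2*sqrt(log N)*sqrt(n) for large N. The counting convention and the simulation sizes below are illustrative choices, and agreement with the large-N asymptotics is only approximate at moderate N.

```python
import numpy as np

def mean_record_number(n_steps, n_walkers, n_sims=100, seed=0):
    """Monte Carlo estimate of <R_{n,N}>: the number of times the running
    maximum over all walkers and all earlier steps is broken, counting the
    common starting point at the origin as the first record."""
    rng = np.random.default_rng(seed)
    counts = np.empty(n_sims)
    for s in range(n_sims):
        paths = np.cumsum(rng.standard_normal((n_steps, n_walkers)), axis=0)
        best = paths.max(axis=1)                            # max over walkers at each step
        prev = np.maximum.accumulate(np.concatenate(([0.0], best)))[:-1]
        counts[s] = 1 + np.sum(best > prev)                 # records beyond the initial one
    return counts.mean()

n, N = 5_000, 50
print(f"simulated <R_n,N> = {mean_record_number(n, N):.1f}")
print(f"2*sqrt(n*log N)   = {2 * np.sqrt(n * np.log(N)):.1f}")
```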
  19. By: Almut E. D. Veraart (Imperial College London and CREATES); Luitgard A. M. Veraart (London School of Economics)
    Abstract: This paper presents a new modelling framework for day–ahead electricity prices based on multivariate Lévy semistationary (MLSS) processes. Day–ahead prices specify the prices for electricity delivered over certain time windows on the next day and are determined in a daily auction. Since there are several delivery periods per day, we use a multivariate model to describe the different day–ahead prices for the different delivery periods on the next day. We extend the work by Barndorff-Nielsen et al. (2010) on univariate Lévy semistationary processes to a multivariate setting and discuss the probabilistic properties of the new class of stochastic processes. Furthermore, we provide a detailed empirical study using data from the European Energy Exchange (EEX) and give new insights into the intra–daily correlation structure of electricity day–ahead prices in the EEX market. The flexible structure of MLSS processes is able to reproduce the stylized facts of such data rather well. Furthermore, these processes can be used to model negative prices in electricity markets which started to occur recently and cannot be described by many classical models.
    Keywords: Electricity market, day–ahead prices, multivariate Lévy semistationary process, stochastic volatility, correlation, panel structure.
    JEL: C0 C1 C5 G1
    Date: 2012–03–30
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-12&r=ecm
  20. By: Arne Risa Hole (Department of Economics, The University of Sheffield); Julie Riise Kolstad (UNI Rokkan Centre, University of Bergen); Dorte Gyrd-Hansen (Health Economics Research Unit, University of Southern Denmark)
    Abstract: It is increasingly recognised that respondents to choice experiments employ heuristics such as attribute non-attendance (ANA) to simplify the choice tasks. This paper develops an econometric model which incorporates preference heterogeneity among respondents with different attribute processing strategies and allows the ANA probabilities to depend on the respondents' stated non-attendance. We find evidence that stated ANA is a useful indicator of the prevalence of non-attendance in the data. Contrary to previous papers in the literature, we find that willingness-to-pay estimates derived from models which account for ANA are similar to the standard logit estimates.
    Keywords: choice experiment; attribute non-attendance
    JEL: C25 I10
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:shf:wpaper:2012010&r=ecm
  21. By: Thomas Mayer (Department of Economics, University of California Davis)
    Abstract: D. N. McCloskey and Stephen Ziliak have criticized economists and others for confounding statistical and substantive significance, and for committing the logical error of the transposed conditional. In doing so they sometimes misinterpret the function of significance tests. Nonetheless, economists sometimes make both of these errors, though not nearly as often as Ziliak and McCloskey claim. They also argue, incorrectly, that the existence of an effect, which is what significance tests are about, is not a scientific question. Their complaint that in testing significance economists often do not take the loss function into account is unfounded. But they are right in arguing that confidence intervals should be presented more frequently.
    Keywords: Significance tests, ts, confidence intervals, Ziliak, McCloskey, oomph
    JEL: C12 B4
    Date: 2012–04–20
    URL: http://d.repec.org/n?u=RePEc:cda:wpaper:12-6&r=ecm
  22. By: Yulei Luo; Jun Nie; Eric R. Young
    Abstract: This technical paper considers ways to capture uncertainty in the context of so-called "state-space" models. State-space models are powerful tools commonly used in macroeconomics, international economics, and finance. State-space models can generate estimates of an underlying, ultimately unobserved variable—such as the natural rate of unemployment—based on the movements of other variables that are observed and have some relationship to the unobserved variable. The paper shows how several macroeconomic models can be mapped to the state-space framework, thus helping quantify uncertainty about the true model (model uncertainty) or about the amount of information available when decisions are made (state uncertainty).
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp12-02&r=ecm
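As a concrete illustration of the state-space framework described in item 22, the snippet below recovers a latent "natural rate"-style level from a noisy observed series with a local-level model. The simulated series and the use of statsmodels' UnobservedComponents are illustrative assumptions, not the models considered in the paper.

```python
import numpy as np
import statsmodels.api as sm

# simulate an observed series as an unobserved random-walk "natural rate" plus
# transitory noise, then recover the latent state with a local-level model
rng = np.random.default_rng(11)
T = 300
natural_rate = 5.0 + np.cumsum(0.05 * rng.standard_normal(T))
observed = natural_rate + 0.5 * rng.standard_normal(T)

model = sm.tsa.UnobservedComponents(observed, level="local level")
res = model.fit(disp=False)
smoothed_level = res.smoothed_state[0]          # estimate of the latent level
print("RMSE of recovered natural rate:",
      np.sqrt(np.mean((smoothed_level - natural_rate) ** 2)))
```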

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.