nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒03‒23
twenty-two papers chosen by
Sune Karlsson
Örebro University

  1. Semiparametric Specification Testing of Nonlinear Models By Miguel A. Delgado; Thanasis Stengos
  2. Fractional cointegration rank estimation By Katarzyna Lasak; Carlos Velasco
  3. Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns By Sílvia Gonçalves; Ulrich Hounyo; Nour Meddahi
  4. Testing Independence for a Large Number of High Dimensional Random Vectors By Guangming Pan; Jiti Gao; Yanrong Yang
  5. Testing for common cycles in non-stationary VARs with varied frequency data By Hecq A.W.; Urbain J.R.Y.J.; Götz T.B.
  6. Structural-break models under mis-specification: implications for forecasting By Boonsoo Koo; Myung Hwan Seo
  7. A Simple Approach to Treatment Effects on Durations When the Treatment Timing is Chosen By Lee, Myoung-jae; Johansson, Per
  8. A New Asymmetric GARCH Model: Testing, Estimation and Application By Hatemi-J, Abdulnasser
  9. Multidimensional Poverty Frontiers: Parametric Aggregators Based on Nonparametric Distributions By Esfandiar Maasoumi; Jeffrey S. Racine
  10. Evaluating the accuracy of forecasts from vector autoregressions By Todd E. Clark; Michael W. McCracken
  11. Semi-parametric Analysis of Shape-Invariant Engel Curves with Control Function Approach By Nam H Kim; Patrick W Saart; Jiti Gao
  12. Exact Statistics of the Gap and Time Interval Between the First Two Maxima of Random Walks By Satya N. Majumdar; Philippe Mounaix; Gregory Schehr
  13. Conditional Forecast Selection from Many Forecasts: An Application to the Yen/Dollar Exchange Rate By Kei Kawakami
  14. Summarizing large spatial datasets: Spatial principal components and spatial canonical correlation By Bhupathiraju, Samyukta; Verspagen, Bart; Ziesemer, Thomas
  15. The EU-SILC sample design variables: critical review and recommendations By Tim Goedemé
  16. Nonlinear Dynamics and Recurrence Plots for Detecting Financial Crisis. By Peter Martey Addo; Monica Billio; Dominique Guegan
  17. Understanding Exchange Rates Dynamics. By Peter Martey Addo; Monica Billio; Dominique Guegan
  18. Estimating Bayesian Decision Problems with Heterogeneous Priors By Stephen Hansen; Michael McMahon
  19. Star Wars: The Empirics Strike Back By Brodeur, Abel; Lé, Mathias; Sangnier, Marc; Zylberberg, Yanos
  20. Evaluation of Probabilistic Forecasts: Proper Scoring Rules and Moments By Tsyplakov, Alexander
  21. The Identification of Thresholds and Time Delay in a Self-Exciting Threshold Model by Wavelet By Song-Yon Kim; Mun-Chol Kim
  22. The β Family of Inequality Measures By Luis José Imedio Olmedo; Encarnación M. Parrado Gallardo; Elena Bárcena Martín

  1. By: Miguel A. Delgado; Thanasis Stengos
    Abstract: We propose a specification test of a parametrically specified nonlinear model against a weakly specified alternative. We generalize a similar test procedure proposed by Delgado and Stengos (1990) to test the specification of a linear model. We estimate the alternative model by using K nonparametric nearest neighbors (K-NN) in the context of an artificial regression. We derive the asymptotic distribution of the test statistic under the null hypothesis and under a series of local alternatives. Monte Carlo simulations suggest that the test has good power and size characteristics.
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:783&r=ecm
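    Illustration: a minimal sketch of the residual-based K-NN testing idea (illustrative only, not the authors' exact statistic; all variable names and tuning values are hypothetical). Fit the parametric null model, smooth the residuals on the regressors with a leave-one-out K-NN average, and check whether the smooth retains any explanatory power.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n, k = 500, 25
      x = rng.uniform(-2.0, 2.0, n)
      y = np.sin(2.0 * x) + rng.normal(scale=0.3, size=n)  # true DGP is nonlinear

      # Fit the parametric null model (here: linear in x) and keep the residuals.
      X = np.column_stack([np.ones(n), x])
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      resid = y - X @ beta

      # Leave-one-out K-NN smooth of the residuals on x (artificial regression).
      d = np.abs(x[:, None] - x[None, :])
      np.fill_diagonal(d, np.inf)              # exclude the own observation
      idx = np.argsort(d, axis=1)[:, :k]
      fitted = resid[idx].mean(axis=1)

      # Under a correct null the smooth has no explanatory power; a significant
      # slope in a regression of residuals on the smooth flags mis-specification.
      slope, _, _, p_value, _ = stats.linregress(fitted, resid)
      print(f"slope = {slope:.3f}, p-value = {p_value:.4f}")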
  2. By: Katarzyna Lasak (VU University Amsterdam and CREATES); Carlos Velasco (Universidad Carlos III de Madrid)
    Abstract: We consider cointegration rank estimation for a p-dimensional Fractional Vector Error Correction Model. We propose a new two-step procedure which allows testing for further long-run equilibrium relations with possibly different persistence levels. The first step consists in estimating the parameters of the model under the null hypothesis of the cointegration rank r = 1, 2, ..., p-1. This step provides consistent estimates of the cointegration degree, the cointegration vectors, the speed-of-adjustment parameters and the common trends. In the second step we carry out a sup-likelihood ratio test of no-cointegration on the estimated p - r common trends that are not cointegrated under the null. The cointegration degree is re-estimated in the second step to allow for new cointegration relationships with different memory. We augment the error correction model in the second step to control for stochastic trend estimation effects from the first step. The critical values of the tests proposed depend only on the number of common trends under the null, p - r, and on the interval of the cointegration degrees b allowed, but not on the true cointegration degree b0. Hence, no additional simulations are required to approximate the critical values and this procedure can be convenient for practical purposes. In a Monte Carlo study we analyze the finite sample properties of different specifications of the correction terms and compare our procedure with alternative methods.
    Keywords: Error correction model, Gaussian VAR model, Likelihood ratio tests, Maximum likelihood estimation.
    JEL: C12 C15 C32
    Date: 2013–03–15
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-08&r=ecm
  3. By: Sílvia Gonçalves (Université de Montréal, Département de sciences économiques, CIREQ and CIRANO); Ulrich Hounyo (University of Oxford and CREATES); Nour Meddahi (Toulouse School of Economics)
    Abstract: The main contribution of this paper is to propose bootstrap methods for realized volatility-like estimators defined on pre-averaged returns. In particular, we focus on the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). This statistic can be written (up to a bias correction term) as the (scaled) sum of squared pre-averaged returns, where the pre-averaging is done over all possible non-overlapping blocks of consecutive observations. Pre-averaging reduces the influence of the noise and allows for realized volatility estimation on the pre-averaged returns. The non-overlapping nature of the pre-averaged returns implies that these are asymptotically independent, but possibly heteroskedastic. This motivates the application of the wild bootstrap in this context. We provide a proof of the first order asymptotic validity of this method for percentile and percentile-t intervals. Our Monte Carlo simulations show that the wild bootstrap can improve the finite sample properties of the existing first order asymptotic theory provided we choose the external random variable appropriately. An empirical application illustrates its use in practice.
    Keywords: High frequency data, realized volatility, pre-averaging, market microstructure noise, wild bootstrap.
    JEL: C01 C58
    Date: 2013–02–22
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-07&r=ecm
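    Illustration: a stripped-down version of the wild bootstrap step (scaling constants and the bias-correction term of Podolskij and Vetter (2009) are deliberately omitted; the simulated prices and all names are hypothetical). Because the pre-averaged returns over non-overlapping blocks are asymptotically independent, each one is multiplied by an independent external draw with unit second moment.
      import numpy as np

      rng = np.random.default_rng(1)
      n, kn, B = 23400, 30, 999                # returns, block size, bootstrap draws

      # Efficient price plus i.i.d. market-microstructure noise.
      eff = np.cumsum(rng.normal(scale=0.01 / np.sqrt(n), size=n))
      obs = eff + rng.normal(scale=0.0005, size=n)
      ret = np.diff(obs)

      # Pre-average within non-overlapping blocks, triangular weights g(x) = min(x, 1-x).
      g = np.minimum(np.arange(1, kn + 1) / kn, 1.0 - np.arange(1, kn + 1) / kn)
      m = len(ret) // kn
      pre = ret[: m * kn].reshape(m, kn) @ g

      prv = np.sum(pre ** 2)                   # (unscaled) pre-averaged realized volatility

      # Wild bootstrap: v has E[v^2] = 1 (a standard normal works; the paper
      # discusses better choices of the external random variable).
      v = rng.standard_normal((B, m))
      prv_star = np.sum((pre * v) ** 2, axis=1)
      lo, hi = np.percentile(prv_star, [2.5, 97.5])
      print(f"PRV = {prv:.3e}, 95% percentile interval = [{lo:.3e}, {hi:.3e}]")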
  4. By: Guangming Pan; Jiti Gao; Yanrong Yang
    Abstract: Capturing dependence among a large number of high dimensional random vectors is a very important and challenging problem. By arranging n random vectors of length p in the form of a matrix, we develop a linear spectral statistic of the constructed matrix to test whether the n random vectors are independent or not. The proposed statistic can also be applied to n random vectors, each of whose elements can be written as either a linear stationary process or a linear combination of a random vector with independent elements. The asymptotic distribution of the proposed test statistic is established in the case where both p and n go to infinity at the same order. In order to avoid estimating the spectrum of each random vector, a modified test statistic, which is based on splitting the original n vectors into two equal parts and eliminating the term that contains the inner structure of each random vector or time series, is constructed. Since the limiting distribution is normal and the inner structure of each investigated random vector need not be known, the constructed test statistic is simple to implement. Simulation results demonstrate that the proposed test is powerful against many common dependent cases. An empirical application to detecting dependence among the closing prices of several stocks in the S&P 500 also illustrates the applicability and effectiveness of the proposed test.
    Keywords: Central limit theorem, Covariance stationary time series, Empirical spectral distribution, Independence test, Large dimensional sample covariance matrix, Linear spectral statistics.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2013-9&r=ecm
  5. By: Hecq A.W.; Urbain J.R.Y.J.; Götz T.B. (GSBE)
    Abstract: This paper proposes a new way for detecting the presence of common cyclical features when several time series are observed/sampled at different frequencies, hence generalizing the common-frequency approach introduced by Engle and Kozicki (1993) and Vahid and Engle (1993). We start with the mixed-frequency VAR representation investigated in Ghysels (2012) for stationary time series. For non-stationary time series in levels, we show that one has to account for the presence of two sets of long-run relationships. The first set is implied by identities stemming from the fact that the differences of the high-frequency I(1) regressors are stationary. The second set comes from possible additional long-run relationships between one of the high-frequency series and the low-frequency variables. Our transformed VECM representations extend the results of Ghysels (2012) and are very important for determining the correct set of variables to be used in a subsequent common cycle investigation. This has empirical implications both for the behavior of the test statistics and for forecasting. Empirical analyses with the quarterly real GNP and monthly industrial production indices for, respectively, the U.S. and Germany illustrate our new approach. This is also investigated in a Monte Carlo study, where we compare our proposed mixed-frequency models with models stemming from classical temporal aggregation methods.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:dgr:umagsb:2013002&r=ecm
  6. By: Boonsoo Koo; Myung Hwan Seo
    Abstract: This paper shows that in the presence of model mis-specification, the conventional inference procedures for structural-break models are invalid. In doing so, we establish new distribution theory for structural-break models under the relaxed assumption that our structural-break model is the best linear approximation of the true but unknown data generating process. Our distribution theory involves cube-root asymptotics, and it is used to shed light on forecasting practice. We show that the conventional forecasting methods do not necessarily produce the best forecasts in our setting. We also propose a new forecasting strategy that incorporates our new distribution theory, apply it to numerous macroeconomic series, and compare the performance of various contemporary forecasting methods with that of our own.
    Keywords: structural breaks, forecasting, mis-specification, cube-root asymptotics, bagging
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2013-8&r=ecm
  7. By: Lee, Myoung-jae (Korea University); Johansson, Per (IFAU)
    Abstract: When a treatment unambiguously defines the treatment and control groups at a given time point, its effects are usually found by comparing the two groups' mean responses. But there are many cases where the treatment timing is chosen, for which the conventional approach fails. This paper sets up an ideal causal framework for such cases and proposes a simple gamma-mixed proportional-hazard approach with three durations: the waiting time until treatment, the untreated duration from the baseline, and the treated duration from the treatment timing. To implement the proposal, we use semiparametric piecewise-constant hazards as well as Weibull hazards, with a multiplicative gamma unobserved heterogeneity affecting all three durations. Although the three durations are interwoven in complex ways, surprisingly simple closed-form likelihoods are obtained whose maximization converges well. The estimators are applied to the same data as used by Fredriksson and Johansson (2008) for employment subsidy effects on unemployment duration, yielding an estimated reduction of about 11.1 months.
    Keywords: treatment effect, duration, treatment timing, proportional hazard, gamma heterogeneity
    JEL: C21 C41
    Date: 2013–02
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp7249&r=ecm
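    For reference, the single-duration building block behind such closed forms is the textbook gamma-mixed proportional hazard (the paper shares the gamma heterogeneity across three durations; that extension is not reproduced here):
      \lambda(t \mid x, v) = v\,\lambda_0(t)\,e^{x'\beta}, \qquad
      v \sim \mathrm{Gamma}(\text{mean } 1,\ \text{variance } \theta),
    so that integrating out the heterogeneity gives the marginal survival function
      S(t \mid x) = \mathbb{E}_v\!\left[e^{-v\,\Lambda(t \mid x)}\right]
                  = \bigl(1 + \theta\,\Lambda(t \mid x)\bigr)^{-1/\theta}, \qquad
      \Lambda(t \mid x) = e^{x'\beta}\int_0^t \lambda_0(s)\,ds,
    which is available in closed form for Weibull or piecewise-constant baseline hazards \lambda_0.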
  8. By: Hatemi-J, Abdulnasser
    Abstract: Since the seminal work by Engle (1982), the autoregressive conditional heteroscedasticity (ARCH) model has been an important tool for estimating time-varying volatility as a measure of risk. Numerous extensions of this model have been put forward in the literature. The current paper offers an alternative approach for dealing with asymmetry in the underlying volatility model. Unlike previous papers that have dealt with asymmetry, this paper suggests explicitly separating the positive shocks from the negative ones in the ARCH modeling approach. A test statistic is suggested for testing the null hypothesis of no asymmetric ARCH effects. If the null hypothesis is rejected, the model can be estimated by the maximum likelihood method. The suggested asymmetric volatility approach is applied to modeling time-varying volatility separately in rising and falling markets, using changes in the world market stock price index.
    Keywords: GARCH, Asymmetry, Modelling volatility, Hypothesis testing, World stock price index.
    JEL: C12 C32 G10
    Date: 2013–03–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:45170&r=ecm
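    Illustration: a toy version of the separation idea (an illustrative parameterisation, not necessarily the paper's exact specification): the squared positive and negative parts of the lagged shock carry separate coefficients, nesting the symmetric ARCH(1) when the two coincide.
      import numpy as np

      rng = np.random.default_rng(2)
      T = 2000
      omega, a_pos, a_neg = 0.05, 0.05, 0.25   # negative shocks raise volatility more

      eps = np.zeros(T)
      sig2 = np.full(T, omega / (1.0 - 0.5 * (a_pos + a_neg)))  # stationary level
      for t in range(1, T):
          sig2[t] = (omega
                     + a_pos * max(eps[t - 1], 0.0) ** 2
                     + a_neg * min(eps[t - 1], 0.0) ** 2)
          eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()

      # Crude check in the spirit of testing H0: a_pos = a_neg: regress eps_t^2
      # on the lagged positive and negative squared parts and compare the slopes.
      X = np.column_stack([np.ones(T - 1),
                           np.maximum(eps[:-1], 0.0) ** 2,
                           np.minimum(eps[:-1], 0.0) ** 2])
      b, *_ = np.linalg.lstsq(X, eps[1:] ** 2, rcond=None)
      print(f"estimated a_pos = {b[1]:.3f}, a_neg = {b[2]:.3f}")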
  9. By: Esfandiar Maasoumi; Jeffrey S. Racine
    Abstract: We propose a new technique for the estimation of multidimensional evaluation functions. Technical advances allow nonparametric inference on the joint distribution of continuous and discrete indicators of well-being, such as income and health, conditional on joint values of other continuous and discrete attributes, such as education and geographical groupings. In a multiattribute setting, "quantiles" are "frontiers" that define equivalent sets of covariate values. We first identify these frontiers nonparametrically. Then we suggest "parametrically equivalent" characterizations of these frontiers that reveal likely, but different, weights for and substitutions between different attributes for different groups, and at different quantiles. These estimated parametric functionals are "ideal" in a certain sense which we make clear. They correspond directly to measures of aggregate well-being popularized in the earliest multidimensional inequality measures in Maasoumi (1986). This new approach resolves a classic problem of assigning weights to dimensions of well-being, as well as empirically incorporating the key component in multidimensional analysis, the relationship between the attributes. It introduces a new approach to robust estimation of "quantile frontiers", allowing "complete" assessments, such as multidimensional poverty measurements. We discover massive heterogeneity in individual evaluation functions. This leads us to perform robust, weak uniform rankings as afforded by nonparametric tests for stochastic dominance. A demonstration is provided based on the Indonesian data analyzed for multidimensional poverty in Maasoumi & Lugo (2008).
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:mcm:deptwp:2013-07&r=ecm
  10. By: Todd E. Clark; Michael W. McCracken
    Abstract: This paper surveys recent developments in the evaluation of point and density forecasts in the context of forecasts made by Vector Autoregressions. Specific emphasis is placed on highlighting those parts of the existing literature that are applicable to direct multi-step forecasts and those parts that are applicable to iterated multi-step forecasts. This literature includes advancements in the evaluation of forecasts in population (based on true, unknown model coefficients) and the evaluation of forecasts in the finite sample (based on estimated model coefficients). The paper then examines in Monte Carlo experiments the finite-sample properties of some tests of equal forecast accuracy, focusing on the comparison of VAR forecasts to AR forecasts. These experiments show that the tests behave as the theory predicts. For example, using critical values obtained by bootstrap methods, tests of equal accuracy in population have empirical size about equal to nominal size.
    Keywords: Economic forecasting ; Vector autoregression
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2013-010&r=ecm
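    Illustration: the workhorse behind many of the surveyed comparisons is a t-test on the loss differential of two forecast sequences (Diebold-Mariano/West style). A minimal sketch with hypothetical forecast errors; note that for nested comparisons such as VAR versus AR the paper relies on bootstrap critical values rather than the plain normal limit used below.
      import numpy as np
      from scipy import stats

      def equal_mse_test(e1, e2, h=1):
          """t-test of equal mean squared error; simple HAC variance with h-1 lags."""
          d = e1 ** 2 - e2 ** 2                # loss differential, squared-error loss
          n = len(d)
          dc = d - d.mean()
          var = dc @ dc / n
          for lag in range(1, h):              # add autocovariances for h-step forecasts
              var += 2.0 * (dc[lag:] @ dc[:-lag]) / n
          t = d.mean() / np.sqrt(var / n)
          return t, 2.0 * (1.0 - stats.norm.cdf(abs(t)))

      rng = np.random.default_rng(3)
      e_ar = rng.normal(scale=1.00, size=200)  # hypothetical AR forecast errors
      e_var = rng.normal(scale=0.95, size=200) # hypothetical VAR forecast errors
      t, p = equal_mse_test(e_ar, e_var)
      print(f"t = {t:.2f}, p = {p:.3f}")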
  11. By: Nam H Kim; Patrick W Saart; Jiti Gao
    Abstract: An extended generalised partially linear single-index (EGPLSI) model combines the flexibility of a partially linear model and a single-index model. Furthermore, it also allows for the analysis of the shape-invariant specification. Nonetheless, the model's practicality in empirical studies has been hampered by the lack of an appropriate estimation procedure and of a method to deal with endogeneity. In the current paper, we establish an alternative control function approach to address the endogeneity issue in the estimation of the EGPLSI model. We also show that all attractive features of the EGPLSI model discussed in the literature remain available under the proposed estimation procedure. The economic literature suggests that semiparametric techniques are an important tool for empirical analysis of Engel curves, which often involves endogeneity of total expenditure. We show that our newly developed method is applicable and able to address the endogeneity issue involved in semiparametric analysis of empirical Engel curves.
    Keywords: Control function approach, endogeneity, generalised partially linear single-index, semiparametric regression.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2013-10&r=ecm
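    Illustration: the control function idea in its simplest two-step linear form (the paper develops the semiparametric single-index version; this sketch only conveys the mechanism, and all numbers are hypothetical). The endogenous regressor is projected on the instrument, and the first-stage residual enters the outcome equation as an extra regressor.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 5000
      z = rng.standard_normal(n)               # instrument
      u = rng.standard_normal(n)               # unobservable driving endogeneity
      x = 0.8 * z + u + 0.3 * rng.standard_normal(n)
      y = 1.0 + 2.0 * x + u + rng.standard_normal(n)

      def ols(X, y):
          return np.linalg.lstsq(X, y, rcond=None)[0]

      b_naive = ols(np.column_stack([np.ones(n), x]), y)    # biased: cov(x, u) != 0

      gamma = ols(np.column_stack([np.ones(n), z]), x)      # first stage
      v_hat = x - np.column_stack([np.ones(n), z]) @ gamma  # control function
      b_cf = ols(np.column_stack([np.ones(n), x, v_hat]), y)

      print(f"true slope 2.0; naive = {b_naive[1]:.3f}, control function = {b_cf[1]:.3f}")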
  12. By: Satya N. Majumdar; Philippe Mounaix; Gregory Schehr
    Abstract: We investigate the statistics of the gap, G_n, between the two rightmost positions of a Markovian one-dimensional random walker (RW) after n time steps and of the duration, L_n, which separates the occurrence of these two extremal positions. The distribution of the jumps \eta_i's of the RW, f(\eta), is symmetric and its Fourier transform has the small k behavior 1-\hat{f}(k) \sim |k|^\mu with 0 < \mu \leq 2. We compute the joint probability density function (pdf) P_n(g,l) of G_n and L_n and show that, when n \to \infty, it approaches a limiting pdf p(g,l). The corresponding marginal pdf of the gap, p_{\rm gap}(g), is found to behave like p_{\rm gap}(g) \sim g^{-1 - \mu} for g \gg 1 and 0<\mu < 2. We show that the limiting marginal distribution of L_n, p_{\rm time}(l), has an algebraic tail p_{\rm time}(l) \sim l^{-\gamma(\mu)} for l \gg 1 with \gamma(1<\mu \leq 2) = 1 + 1/\mu, and \gamma(0<\mu<1) = 2. For l, g \gg 1 with fixed l g^{-\mu}, p(g,l) takes the scaling form p(g,l) \sim g^{-1-2\mu} \tilde p_\mu(l g^{-\mu}) where \tilde p_\mu(y) is a (\mu-dependent) scaling function. We also present numerical simulations which verify our analytic results.
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1303.4607&r=ecm
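    The two statistics are easy to explore by simulation; a minimal Monte Carlo for Gaussian jumps (the \mu = 2 case), against which the limiting densities derived in the paper can be checked:
      import numpy as np

      rng = np.random.default_rng(5)
      n_steps, n_walks = 1000, 5000

      # Random walks with Gaussian jumps.
      paths = np.cumsum(rng.standard_normal((n_walks, n_steps)), axis=1)

      order = np.argsort(paths, axis=1)
      i1 = order[:, -1]                        # time of the maximum
      i2 = order[:, -2]                        # time of the second-highest position
      rows = np.arange(n_walks)
      gap = paths[rows, i1] - paths[rows, i2]  # G_n
      interval = np.abs(i1 - i2)               # L_n

      # Medians are reported since the tail exponent gamma(2) = 3/2 makes the
      # mean of L_n grow with n.
      print(f"median gap G_n: {np.median(gap):.4f}")
      print(f"median interval L_n: {np.median(interval):.0f}")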
  13. By: Kei Kawakami
    Abstract: This paper proposes a new method for forecast selection from a pool of many forecasts. The method uses conditional information as proposed by Giacomini and White (2006). It also extends their pairwise switching method to a situation with many forecasts. I apply the method to the monthly yen/dollar exchange rate and show empirically that my method of switching forecasting models reduces forecast errors compared with a single model.
    Keywords: Conditional predictive ability; Exchange rate; Forecasting; Forecast combinations; Model selection
    JEL: C52 C53 F31 F37
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:mlb:wpaper:1167&r=ecm
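    Illustration: the conditional switching logic, reduced to a two-model toy with hypothetical losses and a hypothetical conditioning variable: regress the realized loss differential on information available at forecast time and pick, each period, the model the fitted differential favours.
      import numpy as np

      rng = np.random.default_rng(6)
      T, burn = 400, 100
      state = rng.standard_normal(T)           # conditioning variable, known in advance

      # Hypothetical losses: model B wins when the state is high, A when it is low.
      loss_a = 1.0 + 0.5 * state + 0.1 * rng.standard_normal(T)
      loss_b = 1.0 - 0.5 * state + 0.1 * rng.standard_normal(T)
      d = loss_a - loss_b                      # positive => model B preferred

      chosen = []
      for t in range(burn, T):
          X = np.column_stack([np.ones(t), state[:t]])
          b, *_ = np.linalg.lstsq(X, d[:t], rcond=None)
          d_hat = b[0] + b[1] * state[t]       # predicted loss differential
          chosen.append(loss_b[t] if d_hat > 0 else loss_a[t])

      print(f"mean loss: A = {loss_a[burn:].mean():.3f}, "
            f"B = {loss_b[burn:].mean():.3f}, switching = {np.mean(chosen):.3f}")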
  14. By: Bhupathiraju, Samyukta (UNU-MERIT/MGSoG); Verspagen, Bart (UNU-MERIT/MGSoG, and Maastricht University); Ziesemer, Thomas (UNU-MERIT/MGSoG, and Maastricht University)
    Abstract: We propose a method for spatial principal components analysis that has two important advantages over the method that Wartenberg (1985) proposed. The first advantage is that, contrary to Wartenberg's method, our method has a clear and exact interpretation: it produces a summary measure (component) that itself has maximum spatial correlation. Second, an easy and intuitive link can be made to canonical correlation analysis. Our spatial canonical correlation analysis produces summary measures of two datasets (e.g., each measuring a different phenomenon), and these summary measures maximize the spatial correlation between themselves. This provides an alternative weighting scheme as compared to spatial principal components analysis. We provide example applications and show that spatial canonical correlation analysis may produce rather different results from spatial principal components analysis, whether the latter is computed with Wartenberg's method or with ours.
    Keywords: spatial principal components analysis, spatial canonical correlation analysis, spatial econometrics, Moran coefficients, spatial concentration
    JEL: R10 R15 C10
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:dgr:unumer:2013011&r=ecm
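    Illustration: one concrete reading of "a component that itself has maximum spatial correlation" is to choose weights a maximising a Moran-type ratio of the score Xa, which leads to a generalised eigenproblem (whether this matches the authors' estimator in every detail is an assumption; scale factors of the Moran coefficient are omitted, and the geography is a toy example).
      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(7)
      n, p = 100, 5
      X = rng.standard_normal((n, p))
      X -= X.mean(axis=0)                      # centre the indicators

      # Row-standardised contiguity matrix for units arranged on a line.
      W = np.zeros((n, n))
      i = np.arange(n - 1)
      W[i, i + 1] = W[i + 1, i] = 1.0
      W /= W.sum(axis=1, keepdims=True)
      M = 0.5 * (W + W.T)                      # symmetrised spatial weights

      # max_a (a' X' M X a) / (a' X' X a): a generalised symmetric eigenproblem.
      vals, vecs = eigh(X.T @ M @ X, X.T @ X)
      a = vecs[:, -1]                          # weights of the most spatially
      score = X @ a                            # autocorrelated component

      moran = (score @ M @ score) / (score @ score)
      print(f"spatial autocorrelation of the component: {moran:.3f}")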
  15. By: Tim Goedemé
    Abstract: The EU Statistics on Income and Living Conditions (EU-SILC) are the principal data source for analysing the social situation in Europe. Given that EU-SILC is based on a representative sample in each participating country, estimates based on EU-SILC are subject to sampling variance. One of the principal determinants of the sampling variance is the sample design used for drawing the sample. Therefore, standard errors, significance tests and confidence intervals should be computed taking the sample design into account as much as possible. To do so, good sample design variables are an indispensable starting point. In this paper, I review the quality of sample design information in the EU-SILC dataset and formulate recommendations for data producers about how to improve the quality of sample design variables, and for data users about how to make optimal use of the information already available in the EU-SILC UDB.
    Keywords: EU-SILC, sample design, sample design variables, sampling variance, standard error
    JEL: D31 O52
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:hdl:wpaper:1302&r=ecm
  16. By: Peter Martey Addo (Centre d'Economie de la Sorbonne et Università di Venezia - Dipartimento di Economia); Monica Billio (Università di Venezia - Dipartimento di Economia); Dominique Guegan (Centre d'Economie de la Sorbonne)
    Abstract: Identification of financial bubbles and crises is a topic of major concern, since it is important to prevent collapses that can severely impact nations and economies. Our analysis deals with the use of the recently proposed "delay vector variance" (DVV) method, which examines local predictability of a signal in the phase space to detect the presence of determinism and nonlinearity in a time series. Optimal embedding parameters used in the DVV analysis are obtained via a differential entropy based method using wavelet-based surrogates. We exploit the concept of recurrence plots to study the stock market, to locate hidden patterns and non-stationarity, and to examine the nature of these plots in events of financial crisis. In particular, the recurrence plots are employed to detect and characterize financial cycles. A comprehensive analysis of the feasibility of this approach is provided. We show that our methodology is useful in the diagnosis and detection of the financial bubbles that have significantly contributed to economic upheavals in the past few decades.
    Keywords: Nonlinearity analysis, surrogates, Delay vector variance (DVV) method, wavelets, financial bubbles, embedding parameters, recurrence plots.
    JEL: C14 C40 E32 G01
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:13024&r=ecm
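    Illustration: a recurrence plot itself is simple to compute: delay-embed the series, then mark the pairs of times whose embedded states lie within a tolerance eps of each other (embedding dimension, lag and eps below are illustrative choices, not the paper's calibrated values).
      import numpy as np

      def recurrence_plot(x, dim=3, lag=1, eps=None):
          """Boolean recurrence matrix of a 1-D series after delay embedding."""
          n = len(x) - (dim - 1) * lag
          emb = np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])
          dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
          if eps is None:
              eps = 0.1 * dist.max()           # rule of thumb: fraction of max distance
          return dist <= eps

      rng = np.random.default_rng(8)
      t = np.linspace(0.0, 20.0 * np.pi, 500)
      rp_cycle = recurrence_plot(np.sin(t))                 # periodic: diagonal lines
      rp_noise = recurrence_plot(rng.standard_normal(500))  # noise: unstructured dots
      print(f"recurrence rates: {rp_cycle.mean():.3f} (cycle), {rp_noise.mean():.3f} (noise)")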
  17. By: Peter Martey Addo (Centre d'Economie de la Sorbonne et Università di Venezia - Dipartimento di Economia); Monica Billio (Università di Venezia - Dipartimento di Economia); Dominique Guegan (Centre d'Economie de la Sorbonne)
    Abstract: Since the emergence of chaos theory and the method of surrogate data, nonlinear approaches employed in analysing time series have typically suffered from high computational complexity and a lack of straightforward interpretation. Methods capable of characterizing time series in terms of their linear, nonlinear, deterministic and stochastic nature are therefore preferable. In this paper, we provide a signal modality analysis of a variety of exchange rates. The analysis is achieved by using the recently proposed "delay vector variance" (DVV) method, which examines local predictability of a signal in the phase space to detect the presence of determinism and nonlinearity in a time series. Optimal embedding parameters used in the DVV analysis are obtained via a differential entropy based method using wavelet-based surrogates. A comprehensive analysis of the feasibility of this approach is provided. The empirical results show that the DVV method can serve as an alternative way of understanding exchange rate dynamics.
    Keywords: Nonlinearity analysis, exchange rates, surrogates, Delay vector variance (DVV) method, wavelets.
    JEL: C14 C22 C40 F31
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:13023&r=ecm
  18. By: Stephen Hansen; Michael McMahon
    Abstract: In many areas of economics there is a growing interest in how expertise and preferences drive individual and group decision making under uncertainty. Increasingly, we wish to estimate such models to quantify which of these drive decision making. In this paper we propose a new channel through which we can empirically identify expertise and preference parameters by using variation in decisions over heterogeneous priors. Relative to existing estimation approaches, our "Prior-Based Identification" extends the possible environments which can be estimated, and also substantially improves the accuracy and precision of estimates in those environments which can be estimated using existing methods.
    Keywords: Bayesian decision making, expertise, preferences, estimation
    JEL: D72 D81 C13
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:683&r=ecm
  19. By: Brodeur, Abel (Paris School of Economics); Lé, Mathias (Paris School of Economics); Sangnier, Marc (University of Aix-Marseille II); Zylberberg, Yanos (CREI and Universitat Pompeu Fabra)
    Abstract: Journals favor rejection of the null hypothesis. This selection of tests may distort the behavior of researchers. Using 50,000 tests published between 2005 and 2011 in the AER, JPE, and QJE, we identify a residual in the distribution of tests that cannot be explained by selection. The distribution of p-values exhibits a camel shape with abundant p-values above 0.25, a valley between 0.25 and 0.10, and a bump slightly below 0.05. The missing tests (with p-values between 0.25 and 0.10) can be retrieved just after the 0.05 threshold and represent 10% to 20% of marginally rejected tests. Our interpretation is that researchers might be tempted to inflate the value of those almost-rejected tests by choosing a "significant" specification. We propose a method to measure inflation and decompose it along articles' and authors' characteristics.
    Keywords: hypothesis testing, distorting incentives, selection bias, research in economics
    JEL: A11 B41 C13 C44
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp7268&r=ecm
  20. By: Tsyplakov, Alexander
    Abstract: The paper provides an overview of probabilistic forecasting and discusses a theoretical framework for the evaluation of probabilistic forecasts based on proper scoring rules and moments. An artificial example of predicting a second-order autoregression and an example of predicting the RTSI stock index are used as illustrations.
    Keywords: probabilistic forecast; forecast calibration; probability integral transform; scoring rule; moment condition
    JEL: C52 C53
    Date: 2013–03–18
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:45186&r=ecm
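    Illustration: two of the proper scoring rules such evaluations rest on are easy to state for a Gaussian predictive distribution: the logarithmic score and the CRPS, which has a known closed form under normality (the forecast parameters below are hypothetical).
      import numpy as np
      from scipy import stats

      def log_score(mu, sigma, y):
          """Negative log predictive density (smaller is better)."""
          return -stats.norm.logpdf(y, loc=mu, scale=sigma)

      def crps_normal(mu, sigma, y):
          """Closed-form CRPS of a N(mu, sigma^2) forecast (smaller is better)."""
          z = (y - mu) / sigma
          return sigma * (z * (2.0 * stats.norm.cdf(z) - 1.0)
                          + 2.0 * stats.norm.pdf(z) - 1.0 / np.sqrt(np.pi))

      # A correctly calibrated sharp forecast beats an over-dispersed one on average.
      rng = np.random.default_rng(9)
      y = rng.normal(size=10000)
      print(f"log score: {log_score(0, 1, y).mean():.3f} vs {log_score(0, 3, y).mean():.3f}")
      print(f"CRPS:      {crps_normal(0, 1, y).mean():.3f} vs {crps_normal(0, 3, y).mean():.3f}")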
  21. By: Song-Yon Kim; Mun-Chol Kim
    Abstract: In this paper, we study the wavelet identification of the thresholds and time delay in a more general case, without the constraint that the time delay be smaller than the order of the model. We construct an empirical wavelet from the SETAR (Self-Exciting Threshold Autoregressive) model and use it to identify the thresholds and time delay of the model.
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1303.4867&r=ecm
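    For comparison with the wavelet-based identification, the classical benchmark is a least-squares grid search over candidate thresholds and delays; a minimal sketch for a two-regime SETAR with AR(1) regimes (all tuning values are illustrative):
      import numpy as np

      rng = np.random.default_rng(10)
      T, true_r, true_d = 1500, 0.5, 2

      # Simulate a two-regime SETAR(1) whose threshold variable is y[t - d].
      y = np.zeros(T)
      for t in range(2, T):
          phi = 0.7 if y[t - true_d] <= true_r else -0.5
          y[t] = phi * y[t - 1] + 0.5 * rng.standard_normal()

      def ssr(y, r, d, s=3):
          """SSR of regime-wise AR(1) fits for threshold r and delay d (d <= s)."""
          yt, yl, th = y[s:], y[s - 1 : -1], y[s - d : len(y) - d]
          total = 0.0
          for mask in (th <= r, th > r):
              X = np.column_stack([np.ones(mask.sum()), yl[mask]])
              b, *_ = np.linalg.lstsq(X, yt[mask], rcond=None)
              total += np.sum((yt[mask] - X @ b) ** 2)
          return total

      # Search interior quantiles of y as thresholds, and delays 1..3.
      cand = np.quantile(y, np.linspace(0.15, 0.85, 71))
      best = min((ssr(y, r, d), r, d) for d in (1, 2, 3) for r in cand)
      print(f"estimated threshold = {best[1]:.2f}, delay = {best[2]}")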
  22. By: Luis José Imedio Olmedo (Dpto. Estadistica y Econometria (68). Facultad de Ciencias Economicas y Empresariales. Campus El Ejido s/n. Universidad de Malaga, 29013 Malaga.); Encarnación M. Parrado Gallardo (Dpto. Estadistica y Econometria (68). Facultad de Ciencias Economicas y Empresariales. Campus El Ejido s/n. Universidad de Malaga, 29013 Malaga); Elena Bárcena Martín (Dpto. Estadistica y Econometria (68). Facultad de Ciencias Economicas y Empresariales. Campus El Ejido s/n. Universidad de Malaga, 29013 Malaga. Telf. +34 952 131 191)
    Abstract: This paper introduces and analyses, both normatively and statistically, a new class of inequality measures. This class generalizes and comprises several well-known families of inequality measures as particular cases. The elements of the new class are obtained by weighting local inequality, evaluated through the Bonferroni curve, where the weights are the density functions of beta distributions on [0,1] and are therefore not necessarily monotonic. This allows us to choose inequality measures that are more or less sensitive to changes in any part of the distribution. As a consequence of the different weighting schemes attached to the indexes, the elements of the class embody very dissimilar value judgements in the measurement of inequality and welfare. The possibility of choosing an index that focuses on a specific percentile, rather than necessarily on the extremes of the distribution, is one of the advantages of our proposal.
    Keywords: Lorenz curve, Bonferroni curve, preference distributions, inequality aversion, beta distribution
    JEL: C10 D31 I38
    Date: 2013–02
    URL: http://d.repec.org/n?u=RePEc:inq:inqwps:ecineq2013-289&r=ecm
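    In symbols (the exact normalisation the authors adopt is an assumption here): with L(p) the Lorenz curve, the Bonferroni curve is B(p) = L(p)/p, and each member of the family weights local inequality 1 - B(p) by a beta density,
      I_{\alpha,\beta} = \int_0^1 \bigl(1 - B(p)\bigr)\,
          \frac{p^{\alpha-1}(1-p)^{\beta-1}}{\mathrm{B}(\alpha,\beta)}\,dp,
      \qquad \alpha, \beta > 0,
    where \mathrm{B}(\alpha,\beta) is the beta function. Uniform weights (\alpha = \beta = 1) return the classical Bonferroni index, while other choices concentrate the index's sensitivity on any chosen part of the distribution.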

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.