nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒12‒15
23 papers chosen by
Sune Karlsson
Orebro University

  1. Comparing variable selection techniques for linear regression: LASSO and Autometrics. By Camila Epprecht; Dominique Guegan; Álvaro Veiga
  2. Rasch Mixture Models for DIF Detection: A Comparison of Old and New Score Specifications By Hannah Frick; Carolin Strobl; Achim Zeileis
  3. Advantages of Non-Normality in Testing Cointegration Rank By Felix Chan
  4. System GMM estimation of panel data models with time varying slope coefficients By Sato, Yoshihiro; Söderbom, Måns
  5. Estimation and Inference under Weak Identification and Persistence: An Application on Forecast-Based Monetary Policy Reaction Function By Jui-Chung Yang; Ke-Li Xu
  6. Properties of Instrumental Variables Estimation in Logit-Based Demand Models: Finite Sample Results By Andrews , Rick L.; Ebbes , Peter
  7. Robust Cointegration Testing in the Presence of Weak Trends, with an Application to the Human Origin of Global Warming By Guillaume Chevillon
  8. Econometric Modelling of Social Bads By William H Greene; Mark N Harris; Preety Srivastava; Xueyan Zhao
  9. Consistent factor estimation in dynamic factor models with structural instability By Brandon J. Bates; Mikkel Plagborg-Møller; James H. Stock; Mark W. Watson
  10. Sieve bootstrapping in the Lee-Carter model By Heinemann A.
  11. DSGE models in the frequency domain By Luca Sala
  12. Long- versus medium-run identification in fractionally integrated VAR models By Tschernig, Rolf; Weber, Enzo; Weigand, Roland
  13. Time Series under Present-Value-Model Short- and Long-run Co-movement Restrictions By Osmani Teixeira de Carvalho Guillén; Alain Hecq; João Victor Issler; Diogo Saraiva
  14. The Role of Hypothesis Testing in the Molding of Econometric Models By Kevin D. Hoover
  15. Resampling in DEA By Kaoru Tone
  16. Model uncertainty and expected return proxies By Jäckel, Christoph
  17. Risk, Return and Volatility Feedback: A Bayesian Nonparametric Analysis By Jensen, Mark J; Maheu, John M
  18. Nonparametric estimation of a compensating variation: the cost of disability By Hancock, Ruth; Morciano, Marcello; Pudney, Stephen
  19. Multidimensional micro-level competitiveness measurement: a SEM-based approach By Rosa Bernardini Papalia; Annalisa Donno
  20. “The use of flexible quantile-based measures in risk assessment” By Jaume Belles-Sampera; Montserrat Guillén; Miguel Santolino
  21. Empirical evidence on inflation expectations in the new Keynesian Phillips curve By Sophocles Mavroeidis; Mikkel Plagborg-Møller; James H. Stock
  22. On the Reception of Haavelmo’s Econometric Thought By Kevin D. Hoover
  23. Identification in continuous triangular systems with unrestricted heterogeneity By Kasy, Maximilian

  1. By: Camila Epprecht (Centre d'Economie de la Sorbonne and Department of Electrical Engineering-Pontifical Catholic University of Rio de Janeiro); Dominique Guegan (Centre d'Economie de la Sorbonne - Paris School of Economics); Álvaro Veiga (Department of Electrical Engineering-Pontifical Catholic University of Rio de Janeiro)
    Abstract: In this paper, we compare two different variable selection approaches for linear regression models: Autometrics (automatic general-to-specific selection) and LASSO (ℓ1-norm regularization). In a simulation study, we compare the performance of the methods in terms of predictive power (out-of-sample forecasting) and correct model selection and estimation (in-sample). The case where the number of candidate variables exceeds the number of observations is considered as well. We also analyze the properties of the estimators by comparing them to the oracle estimator. Finally, we compare both methods in an application to GDP forecasting.
    Keywords: Model selection, variable selection, GETS, Autometrics, LASSO, adaptive LASSO, sparse models, oracle property, time series, GDP forecasting.
    JEL: C51 C52 C53
    Date: 2013–11
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:13080&r=ecm
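The comparison above pits a penalized estimator against automatic general-to-specific search. As a rough illustration of the LASSO side only (Autometrics is part of OxMetrics and is not reproduced here), the sketch below simulates a sparse linear model and selects variables with cross-validated LASSO; the simulation settings are illustrative assumptions, not the authors' design.

```python
# Illustrative sketch only: cross-validated LASSO selection on a sparse
# simulated regression, loosely in the spirit of the comparison above.
# The data-generating process below is an assumption, not the paper's design.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p, k = 100, 50, 5                      # observations, candidates, true regressors
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 1.0                            # sparse truth: only the first k matter
y = X @ beta + rng.standard_normal(n)

lasso = LassoCV(cv=5).fit(X, y)           # penalty chosen by cross-validation
selected = np.flatnonzero(lasso.coef_ != 0)
print("selected variables:", selected)
print("true variables:    ", np.arange(k))
```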
  2. By: Hannah Frick; Carolin Strobl; Achim Zeileis
    Abstract: Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest DIF tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a score distribution for the Rasch mixture model is introduced here which is restricted to be equal across latent classes. This causes the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study and its application is illustrated in a study of verbal aggression.
    Keywords: mixed Rasch model, Rasch mixture model, DIF detection, score distribution
    JEL: C38 C52 C87
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:inn:wpaper:2013-36&r=ecm
  3. By: Felix Chan (School of Economics and Finance, Curtin University)
    Abstract: Since the seminal work of Engle and Granger (1987) and Johansen (1988), testing for cointegration has become standard practice in analysing economic and financial time series data. Many of the techniques in cointegration analysis require the assumption of normality, which may not always hold. Although there is evidence that these techniques are robust to non-normality, most existing techniques do not seek additional information from non-normality. This is important in at least two cases. First, since the number of observations is typically small for macroeconomic time series data, the fact that the underlying distribution may not be normal provides important information that can potentially be useful in testing for cointegrating relationships. Second, high frequency financial time series data often show evidence of non-normal random variables with time-varying second moments, and it is unclear how these characteristics affect standard tests of cointegration, such as Johansen's trace and max tests. This paper proposes a new framework derived from Independent Component Analysis (ICA) to test for cointegration. The framework explicitly exploits the non-normal distributions of the processes and their independence. Monte Carlo simulation shows that the new test is comparable to Johansen's trace and max tests when the number of observations is large and has a slight advantage over Johansen's tests when the number of observations is limited. Moreover, the computational requirement for this method is relatively mild, which makes it practical for empirical research.
    Keywords: Blind Source Separation, Independent Component Analysis, Cointegration Rank
    JEL: C13 C32 C53
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:ozl:bcecwp:wp1306&r=ecm
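The abstract does not spell out the ICA-based test itself, so the sketch below only illustrates the benchmark it is compared against: Johansen's trace test applied to a simulated bivariate cointegrated system with heavy-tailed (non-normal) innovations. The function comes from statsmodels; the data-generating process is an assumption.

```python
# Benchmark illustration only: Johansen trace test on simulated cointegrated
# data with heavy-tailed shocks. The ICA-based test of the paper is not
# reproduced here; this merely shows the comparator mentioned in the abstract.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(1)
T = 200
common = np.cumsum(rng.standard_t(df=4, size=T))   # one non-normal stochastic trend
y1 = common + rng.standard_t(df=4, size=T)
y2 = 0.5 * common + rng.standard_t(df=4, size=T)   # cointegrated with y1
data = np.column_stack([y1, y2])

res = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", res.lr1)                # one statistic per null rank r = 0, 1
print("95% critical values:", res.cvt[:, 1])
```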
  4. By: Sato, Yoshihiro (Department of Economics, School of Business, Economics and Law, Göteborg University); Söderbom, Måns (Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: We highlight the fact that the Sargan-Hansen test for GMM estimators applied to panel data is a joint test of valid orthogonality conditions and coefficient stability over time. A possible reason why the null hypothesis of valid orthogonality conditions is rejected is therefore that the slope coefficients vary over time. One solution is to estimate an empirical model in which the coefficients are time specific. We apply this solution to system GMM estimation of Cobb-Douglas production functions for a selection of Swedish industries, and find that relaxing the assumption that slope coefficients are constant over time results in considerably more satisfactory outcomes of the Sargan-Hansen test.
    Keywords: panel data; system GMM estimation; time-varying coefficients; overidentifying restrictions
    JEL: C13 C33 C36
    Date: 2013–12–10
    URL: http://d.repec.org/n?u=RePEc:hhs:gunwpe:0577&r=ecm
  5. By: Jui-Chung Yang; Ke-Li Xu
    Abstract: The reaction coefficients of expected inflation and output gaps in the forecast-based monetary policy reaction function may be only weakly identified when the smoothing coefficient is close to unity and the nominal interest rates are highly persistent. In this paper we modify the method of Andrews and Cheng (2012, Econometrica) on inference under weak / semi-strong identification to accommodate the persistence issue. Our modification employs asymptotic theory for near unit root processes and novel drifting-sequence approaches. Large sample properties with a desired smooth transition with respect to the true values of the parameters are developed for the nonlinear least squares (NLS) estimator and its corresponding t / Wald statistics for a general class of models. Although the weakly-identified parameters of interest cannot be estimated consistently, conservative confidence sets for them can be obtained by inverting the t / Wald tests. We show that the null-imposed least-favorable confidence sets have correct asymptotic size, while the projection-based method may lead to asymptotic over-coverage. Our empirical application suggests that the NLS estimates of the reaction coefficients in the U.S. forecast-based monetary policy reaction function for 1987:3–2007:4 are not sufficiently accurate to rule out the possibility of indeterminacy.
    JEL: C12 C22 E58
    Date: 2013–12–08
    URL: http://d.repec.org/n?u=RePEc:jmp:jm2013:pya307&r=ecm
  6. By: Andrews , Rick L.; Ebbes , Peter
    Abstract: Endogeneity problems in demand models occur when certain factors, unobserved by the researcher, affect both demand and the values of a marketing mix variable set by managers. For example, unobserved factors such as style, prestige, or reputation might result in higher prices for a product and higher demand for that product. If not addressed properly, endogeneity can bias the elasticities of the endogenous variable and subsequent optimization of the marketing mix. In practice, instrumental variables estimation techniques are often used to remedy an endogeneity problem. It is well known that, for linear regression models, the use of instrumental variables techniques with poor quality instruments can produce very poor parameter estimates, in some circumstances even worse than those that result from ignoring the endogeneity problem altogether. The literature has not addressed the consequences of using poor quality instruments to remedy endogeneity problems in nonlinear models, such as logit-based demand models. Using simulation methods, we investigate the effects of using poor quality instruments to remedy endogeneity in logit-based demand models applied to finite-sample datasets. The results show that, even when the conditions for lack of parameter identification due to poor quality instruments do not hold exactly, estimates of price elasticities can still be quite poor. That being the case, we investigate the relative performance of several nonlinear instrumental variables estimation procedures utilizing readily available instruments in finite samples. Our study highlights the attractiveness of the control function approach (Petrin and Train 2010) and readily-available instruments, which together reduce the mean squared elasticity errors substantially for experimental conditions in which the theory-backed instruments are poor in quality. We find important effects for sample size, in particular for the number of brands, for which it is shown that endogeneity problems are exacerbated with increases in the number of brands, especially when poor quality instruments are used. In addition, the number of stores is found to be important for likelihood ratio testing. The results of the simulation are shown to generalize to situations under Nash pricing in oligopolistic markets, to conditions in which cross-sectional preference heterogeneity exists, and to nested logit and probit-based demand specifications as well. Based on the results of the simulation, we suggest a procedure for managing a potential endogeneity problem in logit-based demand models.
    Keywords: Choice Models; Endogeneity; Econometric Models; Instrumental Variables
    JEL: C20 C50 M30
    Date: 2013–12–10
    URL: http://d.repec.org/n?u=RePEc:ebg:heccah:1006&r=ecm
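The control function remedy highlighted above (Petrin and Train 2010) has a simple two-step structure: regress the endogenous price on the instruments, then add the first-stage residual as an extra regressor in the choice model. The sketch below shows that structure for a binary logit with simulated data; the data-generating process and variable names are illustrative assumptions, not the authors' simulation design.

```python
# Minimal control-function sketch (two steps) on simulated binary-choice data.
# Assumed DGP: price is correlated with an unobserved shock that also raises demand.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
z = rng.standard_normal(n)                 # cost-side instrument
xi = rng.standard_normal(n)                # unobserved quality shock
price = 1.0 + 0.8 * z + 0.7 * xi + 0.3 * rng.standard_normal(n)
u = -1.0 * price + 1.5 * xi + rng.logistic(size=n)
y = (u > 0).astype(int)

# Step 1: first-stage regression of price on the instrument.
first = sm.OLS(price, sm.add_constant(z)).fit()
v_hat = first.resid                        # control function term

# Step 2: logit of choice on price plus the first-stage residual.
X_cf = sm.add_constant(np.column_stack([price, v_hat]))
cf_fit = sm.Logit(y, X_cf).fit(disp=0)
naive = sm.Logit(y, sm.add_constant(price)).fit(disp=0)
print("naive price coefficient:  ", naive.params[1])
print("control-function estimate:", cf_fit.params[1])
```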
  7. By: Guillaume Chevillon (ESSEC Business School - ESSEC Business School)
    Abstract: Standard tests for the rank of cointegration of a vector autoregressive process have distributions that are affected by the presence of deterministic trends. We consider the recent approach of Demetrescu et al. (2009), who recommend testing a composite null. We assess this methodology in the presence of trends (linear or broken) whose magnitude is small enough not to be detectable at conventional significance levels. We model them using local asymptotics and derive the properties of the test statistics. We show that whether the trend is orthogonal to the cointegrating vector has a major impact on the distributions, but that the test combination approach remains valid. We apply the methodology to the study of cointegration properties between global temperatures and the radiative forcing of human gas emissions. We find new evidence of Granger causality.
    Keywords: Cointegration ; Deterministic Trend ; Global Warming ; Likelihood Ratio ; Local Trends
    Date: 2013–11
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-00914830&r=ecm
  8. By: William H Greene (Economics Department, Stern School of Business, New York University (NYU)); Mark N Harris (School of Economics and Finance, Curtin University); Preety Srivastava (Department of Econometrics and Business Statistics, Monash University); Xueyan Zhao (Department of Econometrics and Business Statistics, Monash University)
    Abstract: When modelling 'social bads', such as illegal drug consumption, researchers are often faced with a dependent variable characterised by an excessive amount of zero observations. Building on the recent literature on hurdle and double-hurdle models, we propose a double-inflated modelling framework, where the zero observations are allowed to come from: non-participants; participant misreporters (who have larger loss functions associated with a truthful response); and infrequent consumers. Due to our empirical application, the model is derived for the case of an ordered discrete dependent variable. However, it is similarly possible to augment other such zero-inflated models (zero-inflated count models and double-hurdle models for continuous variables, for example). The model is then applied to a consumer choice problem of cannabis consumption. As expected, we find that misreporting has a significant (estimated) effect on the recorded incidence of marijuana use. Specifically, we find that 14% of the zeros reported in the survey are estimated to come from individuals who have misreported their participation.
    Keywords: Ordered outcomes, discrete data, cannabis consumption, zero-inflated responses
    JEL: C3 D1 I1
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:ozl:bcecwp:wp1305&r=ecm
  9. By: Brandon J. Bates; Mikkel Plagborg-Møller; James H. Stock; Mark W. Watson
    Abstract: This paper considers the estimation of approximate dynamic factor models when there is temporal instability in the factor loadings. We characterize the type and magnitude of instabilities under which the principal components estimator of the factors is consistent and find that these instabilities can be larger than earlier theoretical calculations suggest. We also discuss implications of our results for the robustness of regressions based on the estimated factors and of estimates of the number of factors in the presence of parameter instability. Simulations calibrated to an empirical application indicate that instability in the factor loadings has a limited impact on estimation of the factor space and diffusion index forecasting, whereas estimation of the number of factors is more substantially affected.
    URL: http://d.repec.org/n?u=RePEc:qsh:wpaper:84631&r=ecm
  10. By: Heinemann A. (GSBE)
    Abstract: This paper studies an alternative approach to constructing confidence intervals for parameter estimates of the Lee-Carter model. First, the procedure for obtaining confidence intervals using the regular nonparametric i.i.d. bootstrap is specified. Empirical evidence seems to invalidate this approach, as it indicates strong autocorrelation and cross correlation in the residuals. A more general approach is introduced, relying on the sieve bootstrap method, which includes the nonparametric i.i.d. method as a special case. Second, this paper examines the performance of the nonparametric i.i.d. and the sieve bootstrap approaches. In an application to a Dutch data set, the sieve bootstrap method returns much wider confidence intervals than the nonparametric i.i.d. approach. By neglecting the residuals' dependency structure, the nonparametric i.i.d. bootstrap method appears to construct confidence intervals that are too narrow. Third, the paper discusses an intuitive explanation for the source of autocorrelation and cross correlation within stochastic mortality models.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:dgr:umagsb:2013069&r=ecm
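The contrast drawn above is between an i.i.d. residual bootstrap and a sieve bootstrap that first fits an autoregression to the residuals and then resamples its innovations. A minimal univariate sketch of that sieve step is given below; the toy residual series and the AR order are assumptions, and the Lee-Carter fit itself is omitted.

```python
# Minimal sieve-bootstrap sketch for a single residual series: fit an AR(p)
# "sieve", resample its innovations i.i.d., and rebuild bootstrap residuals.
# The Lee-Carter fit that would produce `resid` is omitted; this is a toy series.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(3)
resid = rng.standard_normal(80)
resid = np.convolve(resid, [1.0, 0.6], mode="same")   # toy autocorrelated residuals

p = 2                                        # sieve order (assumed; usually data-driven)
ar_fit = AutoReg(resid, lags=p).fit()
innov = ar_fit.resid - ar_fit.resid.mean()   # centred innovations

def sieve_draw(ar_fit, innov, n, p, rng):
    """One sieve-bootstrap replicate of the residual series."""
    eps = rng.choice(innov, size=n + 50, replace=True)   # i.i.d. resample of innovations
    x = np.zeros(n + 50)
    for t in range(p, n + 50):
        x[t] = ar_fit.params[0] + ar_fit.params[1:] @ x[t - p:t][::-1] + eps[t]
    return x[-n:]                             # drop burn-in

boot = np.array([sieve_draw(ar_fit, innov, len(resid), p, rng) for _ in range(500)])
print("bootstrap residual std (mean over draws):", boot.std(axis=1).mean())
```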
  11. By: Luca Sala
    Abstract: We use frequency domain techniques to estimate a medium-scale DSGE model on different frequency bands. We show that goodness of fit, forecasting performance and parameter estimates vary substantially with the frequency bands over which the model is estimated. Estimates obtained using subsets of frequencies are characterized by significantly different parameters, an indication that the model cannot match all frequencies with one set of parameters. In particular, we find that: i) the low frequency properties of the data strongly affect parameter estimates obtained in the time domain; ii) the importance of economic frictions in the model changes when different subsets of frequencies are used in estimation. This is particularly true for the investment cost friction and habit persistence: when low frequencies are present in the estimation, the investment cost friction and habit persistence are estimated to be higher than when low frequencies are absent.
    Keywords: DSGE models, frequency domain, band maximum likelihood
    JEL: C11 C32 E32
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:igi:igierp:504&r=ecm
  12. By: Tschernig, Rolf; Weber, Enzo; Weigand, Roland
    Abstract: We state that long-run restrictions that identify structural shocks in VAR models with unit roots lose their original interpretation if the fractional integration order of the affected variable is below one. For such fractionally integrated models we consider a medium-run approach that employs restrictions on variance contributions over finite horizons. We show for alternative identification schemes that letting the horizon tend to infinity is equivalent to imposing the restriction of Blanchard and Quah (1989) introduced for the unit-root case.
    Keywords: Structural vector autoregression; long-run restriction; finite-horizon identification; fractional integration; impulse response function
    JEL: C32 C50
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:bay:rdwiwi:29162&r=ecm
  13. By: Osmani Teixeira de Carvalho Guillén; Alain Hecq; João Victor Issler; Diogo Saraiva
    Abstract: This paper makes two original contributions. First, we show that PV relationships entail a weak-form SCCF restriction, as in Hecq et al. (2006) and Athanasopoulos et al. (2011), and imply a polynomial serial correlation common feature relationship (Cubadda and Hecq, 2001). These represent short-run restrictions on dynamic multivariate systems, something that has not been discussed before. Our second contribution relates to forecasting multivariate time series that are subject to PVM restrictions, which has wide application in macroeconomics and finance. We build on previous work showing the benefits for forecasting when the short-run dynamics of the system are constrained. Appropriate common-cycle restrictions improve forecasting because they identify linear combinations of the first differences of the data that cannot be forecast from past information. This embeds natural exclusion restrictions preventing the estimation of useless parameters, which would otherwise increase forecast variance with no expected reduction in bias. We apply the techniques discussed in this paper to data known to be subject to PV restrictions: the online series maintained and updated by Robert J. Shiller at http://www.econ.yale.edu/~shiller/data.htm. We focus on three data sets. The first includes the levels of interest rates with long and short maturities, the second the level of real prices and dividends for the S&P composite index, and the third the logarithms of prices and dividends. Our exhaustive investigation of six different multivariate models reveals that better forecasts can be achieved when restrictions are applied to them, specifically cointegration restrictions, cointegration combined with weak-form SCCF rank restrictions, and the full set of theoretical restrictions embedded in the PVM.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:bcb:wpaper:330&r=ecm
  14. By: Kevin D. Hoover
    Abstract: The paper is a keynote lecture from the Tilburg-Madrid Conference on Hypothesis Tests: Foundations and Applications at the Universidad Nacional de Educación a Distancia (UNED) Madrid, Spain, 15-16 December 2011. It addresses the role of tests of statistical hypotheses (specification tests) in selection of a statistically admissible model in which to evaluate economic hypotheses. The issue is formulated in the context of recent philosophical accounts on the nature of models and related to some results in the literature on specification search.
    Keywords: statistical testing, hypothesis tests, models, general-to-specific specification search, optional stopping, severe tests, costs of search, costs of inference, extreme-bounds analysis, LSE econometric methodology
    JEL: B41 C18 C12 C50
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:hec:heccee:2012-3&r=ecm
  15. By: Kaoru Tone (National Graduate Institute for Policy Studies)
    Abstract: In this paper, we propose new resampling models in data envelopment analysis (DEA). Input/output values are subject to change for several reasons, e.g., measurement errors, hysteretic factors, arbitrariness and so on. Furthermore, these variations differ across input/output items and across decision-making units (DMUs). Hence, DEA efficiency scores need to be examined by considering these factors. Resampling based on these variations is necessary for gauging the confidence interval of DEA scores. We propose three resampling models. The first assumes downside and upside measurement error rates for each input/output, common to all DMUs, and resamples data from a triangular distribution around the observed value with support determined by the downside and upside errors. The second model utilizes historical data, e.g., past-present, for estimating data variations, imposing chronological order weights supplied by the Lucas series (a variant of the Fibonacci series). The last one deals with future prospects: this model aims at forecasting the future efficiency score and its confidence interval for each DMU.
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:ngi:dpaper:13-23&r=ecm
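The first of the three resampling schemes described above redraws each input/output around its observed value from a triangular distribution defined by the assumed downside and upside error rates. A minimal sketch of that draw is shown below; the error rates and data are assumptions, and the DEA scoring of each resampled data set (which would require a DEA solver) is omitted.

```python
# Sketch of the first resampling scheme only: redraw each input/output from a
# triangular distribution around the observed value, using common downside and
# upside error rates. The rates and data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
data = np.array([[10.0, 5.0],        # rows: DMUs, columns: input/output items
                 [12.0, 7.0],
                 [ 8.0, 4.0]])
down, up = 0.05, 0.10                # 5% downside, 10% upside error rates (assumed)

def resample(data, down, up, rng):
    """One resampled data set: triangular draws around each observed value."""
    left = data * (1.0 - down)
    right = data * (1.0 + up)
    return rng.triangular(left, data, right)

draws = np.stack([resample(data, down, up, rng) for _ in range(1000)])
# A DEA efficiency score would be computed for each draw here; as a placeholder,
# summarise the sampling variation of the raw data instead.
print("95% interval for DMU 1, item 1:", np.percentile(draws[:, 0, 0], [2.5, 97.5]))
```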
  16. By: Jäckel, Christoph
    Abstract: Over the last two decades, alternative expected return proxies have been proposed with substantially lower variation than realized returns. This helped to reduce parameter uncertainty and to identify many seemingly robust relations between expected returns and variables of interest, which would have gone unnoticed with the use of realized returns. In this study, I argue that these findings could be spurious because model uncertainty is ignored: since a researcher does not know which of the many proposed proxies is measured with the least error, any inference conditional on only one proxy can lead to overconfident decisions. As a solution, I introduce a Bayesian model averaging (BMA) framework to directly incorporate model uncertainty into the statistical analysis. I apply this approach to three examples from the implied cost of capital (ICC) literature and show that incorporating model uncertainty can severely widen the coverage regions, thereby leveling the playing field between realized returns and alternative expected return proxies.
    Keywords: Time-varying expected returns, implied cost of capital, asset pricing, model averaging, model selection
    JEL: C11 G12
    Date: 2013–12–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:51978&r=ecm
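The BMA idea in the abstract, averaging inference over several candidate expected-return proxies rather than conditioning on a single one, can be caricatured with simple posterior-style model weights. The sketch below runs the same regression with each proxy and averages the slope of interest using BIC-based weights; the proxies, weighting rule and data are illustrative assumptions, not the paper's implementation.

```python
# Caricature of model averaging across expected-return proxies: run the same
# regression with each proxy, weight the estimates by exp(-BIC/2), and report
# the averaged slope. Proxies, DGP and weights are assumptions for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 200
true_er = rng.standard_normal(n)                   # unobserved expected return
proxies = {f"proxy_{i}": true_er + s * rng.standard_normal(n)
           for i, s in enumerate([0.2, 0.5, 1.0])} # proxies with different noise
x = rng.standard_normal(n)                         # characteristic of interest
y = 0.5 * true_er + 0.3 * x + 0.5 * rng.standard_normal(n)

fits = {name: sm.OLS(y, sm.add_constant(np.column_stack([p, x]))).fit()
        for name, p in proxies.items()}
bics = np.array([f.bic for f in fits.values()])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()

slopes = np.array([f.params[2] for f in fits.values()])   # coefficient on x
print("per-proxy slopes:", dict(zip(fits, np.round(slopes, 3))))
print("BMA-weighted slope:", float(weights @ slopes))
```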
  17. By: Jensen, Mark J; Maheu, John M
    Abstract: The relationship between risk and return is one of the most studied topics in finance. The majority of the literature is based on a linear, parametric relationship between expected returns and conditional volatility. However, there is no theoretical justification for the relationship to be linear. This paper models the contemporaneous relationship between market excess returns and log-realized variances nonparametrically with an infinite mixture representation of their joint distribution. With this nonparametric representation, the conditional distribution of excess returns given log-realized variance also has an infinite mixture representation, but with probabilities and arguments depending on the value of realized variance. Our nonparametric approach allows for deviations from Gaussianity through non-zero higher-order moments. It also allows for a smooth nonlinear relationship between the conditional mean of excess returns and log-realized variance. Parsimony of our nonparametric approach is guaranteed by the almost surely discrete Dirichlet process prior used for the mixture weights and arguments. We find strong robust evidence of volatility feedback in monthly data. Once volatility feedback is accounted for, there is an unambiguous positive relationship between expected excess returns and expected log-realized variance. This relationship is nonlinear. Volatility feedback impacts the whole distribution and not just the conditional mean.
    Keywords: Dirichlet process prior, MCMC, realized variance
    JEL: C11 C3 C32 G1 G12
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:52132&r=ecm
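The infinite mixture in the abstract is a Dirichlet process mixture for the joint distribution of excess returns and log-realized variance. A rough stand-in is scikit-learn's truncated variational Dirichlet-process Gaussian mixture, sketched below on simulated data; this is not the authors' MCMC sampler, and the data-generating process is an assumption.

```python
# Rough stand-in for the paper's Dirichlet process mixture: a truncated
# variational DP Gaussian mixture on the joint of excess returns and
# log-realized variance. Simulated data; not the authors' MCMC implementation.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(6)
n = 1000
log_rv = rng.normal(-1.0, 0.8, size=n)                 # log realized variance
ret = 0.05 + 0.1 * log_rv + np.exp(0.5 * log_rv) * rng.standard_normal(n)
data = np.column_stack([ret, log_rv])

dpm = BayesianGaussianMixture(
    n_components=20,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(data)
active = dpm.weights_ > 0.01
print("components with non-negligible weight:", int(active.sum()))
```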
  18. By: Hancock, Ruth; Morciano, Marcello; Pudney, Stephen
    Abstract: We propose a nonparametric matching approach to the estimation of implicit costs based on the compensating variation (CV) principle. We apply the method to estimate the additional personal costs experienced by disabled older people in Great Britain, finding that those costs are substantial, averaging in the range of £48–61 a week, compared with the mean level of state disability benefit (£28) or total public support (£47) received. Estimated costs rise strongly with the severity of disability. We compare the nonparametric approach with the standard parametric method, finding that the latter tends to generate large overestimates unless conditions are ideal. The nonparametric approach has much to recommend it.
    Date: 2013–12–04
    URL: http://d.repec.org/n?u=RePEc:ese:iserwp:2013-26&r=ecm
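The matching idea above amounts to comparing the incomes of disabled and non-disabled people who report the same standard of living: the income gap needed to equalize wellbeing is the estimated extra cost. A toy nearest-neighbour version on simulated data is sketched below; the variables, scales and matching rule are assumptions, not the authors' procedure.

```python
# Toy nearest-neighbour version of the matching idea: for each disabled person,
# find the non-disabled person with the closest wellbeing score and take the
# income difference as the implied extra cost. Simulated data, assumed scales.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
n = 2000
disabled = rng.integers(0, 2, size=n).astype(bool)
income = rng.normal(300.0, 60.0, size=n)               # weekly income (toy units)
true_cost = 50.0                                       # assumed extra cost of disability
wellbeing = 0.01 * (income - true_cost * disabled) + rng.normal(0, 0.1, size=n)

nn = NearestNeighbors(n_neighbors=1).fit(wellbeing[~disabled].reshape(-1, 1))
_, idx = nn.kneighbors(wellbeing[disabled].reshape(-1, 1))
gap = income[disabled] - income[~disabled][idx.ravel()]
print("estimated weekly extra cost:", round(float(gap.mean()), 1))
```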
  19. By: Rosa Bernardini Papalia (Università di Bologna); Annalisa Donno (Università di Bologna)
    Abstract: The concept of competitiveness, long regarded as strictly connected to economic and financial performance, has evolved in recent years toward broader interpretations that reveal its multidimensional nature. This shift has prompted an intense debate involving both theoretical reflection on the features that characterize competitiveness and methodological questions about its assessment and measurement. The present research has a twofold objective: to study in depth the tangible and intangible aspects characterizing multidimensional competitive phenomena from a micro-level point of view, and to measure competitiveness through a model-based approach. Specifically, we propose a non-parametric approach to Structural Equation Model techniques for the computation of multidimensional composite measures. Structural Equation Model tools are used in an empirical application to the Italian case: a model-based micro-level competitiveness indicator is constructed for a large sample of Italian small and medium enterprises.
    Keywords: Micro-level competitiveness, model-based composite indicators, Structural Equation Models, intangible assets
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:bot:quadip:123&r=ecm
  20. By: Jaume Belles-Sampera (Faculty of Economics, University of Barcelona); Montserrat Guillén (Faculty of Economics, University of Barcelona); Miguel Santolino (Faculty of Economics, University of Barcelona)
    Abstract: A new family of distortion risk measures -GlueVaR- is proposed in Belles-Sampera et al. (2013) to procure a risk assessment lying between those provided by common quantile-based risk measures. GlueVaR risk measures may be expressed as a combination of these standard risk measures. We show here that this relationship may be used to obtain approximations of GlueVaR measures for general skewed distribution functions using the Cornish-Fisher expansion. A subfamily of GlueVaR measures satisfies the tail-subadditivity property. An example of risk measurement based on real insurance claim data is presented, in which the implications of tail-subadditivity for the aggregation of risks are illustrated.
    Keywords: quantiles, subadditivity, tails, risk management, Value-at-Risk
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:ira:wpaper:201323&r=ecm
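GlueVaR measures are linear combinations of VaR and TVaR at two confidence levels, and the abstract notes that they can be approximated for skewed distributions via the Cornish-Fisher expansion. The sketch below computes a Cornish-Fisher quantile from sample skewness and kurtosis and plugs it into a GlueVaR-style combination; the weights, confidence levels and loss sample are illustrative assumptions, not those of Belles-Sampera et al. (2013).

```python
# Cornish-Fisher quantile sketch plus a GlueVaR-style combination of standard
# risk measures. Weights, confidence levels and the loss sample are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
losses = rng.lognormal(mean=0.0, sigma=0.6, size=5000)   # skewed toy losses

def cornish_fisher_var(x, alpha):
    """Approximate VaR at level alpha via the Cornish-Fisher expansion."""
    z = stats.norm.ppf(alpha)
    s = stats.skew(x)
    k = stats.kurtosis(x)                    # excess kurtosis
    z_cf = (z + (z**2 - 1) * s / 6
              + (z**3 - 3 * z) * k / 24
              - (2 * z**3 - 5 * z) * s**2 / 36)
    return x.mean() + x.std(ddof=1) * z_cf

def tvar(x, alpha):
    """Empirical tail value-at-risk (expected loss beyond the alpha-quantile)."""
    q = np.quantile(x, alpha)
    return x[x >= q].mean()

alpha, beta = 0.95, 0.99
h1, h2 = 0.3, 0.2                            # assumed GlueVaR weights
glue = (h1 * tvar(losses, beta) + h2 * tvar(losses, alpha)
        + (1 - h1 - h2) * cornish_fisher_var(losses, alpha))
print("VaR_95 (Cornish-Fisher):", round(cornish_fisher_var(losses, alpha), 3))
print("GlueVaR (assumed weights):", round(glue, 3))
```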
  21. By: Sophocles Mavroeidis; Mikkel Plagborg-Møller; James H. Stock
    Abstract: We review the main identification strategies and empirical evidence on the role of expectations in the new Keynesian Phillips curve, paying particular attention to the issue of weak identification. Our goal is to provide a clear understanding of the role of expectations that integrates across the different papers and specifications in the literature. We discuss the properties of the various limited information econometric methods used in the literature and provide explanations of why they produce conflicting results. Using a common data set and a flexible empirical approach, we find that researchers are faced with substantial specification uncertainty, as different combinations of various a priori reasonable specification choices give rise to a vast set of point estimates. Moreover, given a specification, estimation is subject to considerable sampling uncertainty due to weak identification. We highlight the assumptions that seem to matter most for identification and the configuration of point estimates. We conclude that the literature has reached a limit on how much can be learned about the new Keynesian Phillips curve from aggregate macroeconomic time series. New identification approaches and new data sets are needed to reach an empirical consensus.
    URL: http://d.repec.org/n?u=RePEc:qsh:wpaper:84656&r=ecm
  22. By: Kevin D. Hoover
    Abstract: Trygve Haavelmo’s The Probability Approach in Econometrics (1944) has been widely regarded as the foundation document of modern econometrics. Nevertheless, its significance has been interpreted in widely different ways. Some modern economists regard it as a blueprint for a provocative, but ultimately unsuccessful, program dominated by the need for a priori theoretical identification of econometric models. They call for new techniques that better acknowledge the interrelationship of theory and data. Others credit Haavelmo with an approach that focuses on statistical adequacy rather than theoretical identification. They see many of Haavelmo’s deepest insights as having been unduly neglected. The current paper uses bibliometric techniques and a close reading of econometrics articles and textbooks to trace the way in which the economics profession received, interpreted, and transmitted Haavelmo’s ideas. A key irony is that the first group calls for a reform of econometric thinking that goes several steps beyond Haavelmo’s initial vision; while the second group argues that essentially what the first group advocates was already in Haavelmo’s Probability Approach from the beginning.
    Keywords: Trygve Haavelmo, econometrics, history of econometrics, the probability approach, econometric methodology, Cowles Commission
    JEL: B23 B40 C10
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:hec:heccee:2012-4&r=ecm
  23. By: Kasy, Maximilian
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:qsh:wpaper:33257&r=ecm

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.