
on Econometrics 
By:  Camila Epprecht (Centre d'Economie de la Sorbonne and Department of Electrical Engineering, Pontifical Catholic University of Rio de Janeiro); Dominique Guegan (Centre d'Economie de la Sorbonne and Paris School of Economics); Álvaro Veiga (Department of Electrical Engineering, Pontifical Catholic University of Rio de Janeiro) 
Abstract:  In this paper, we compare two variable selection approaches for linear regression models: Autometrics (automatic general-to-specific selection) and LASSO (ℓ1-norm regularization). In a simulation study, we assess the performance of the methods in terms of predictive power (out-of-sample forecasting) and selection of the correct model and estimation (in-sample). The case where the number of candidate variables exceeds the number of observations is considered as well. We also analyze the properties of the estimators in comparison with the oracle estimator. Finally, we compare both methods in an application to GDP forecasting. 
Keywords:  Model selection, variable selection, GETS, Autometrics, LASSO, adaptive LASSO, sparse models, oracle property, time series, GDP forecasting. 
JEL:  C51 C52 C53 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:mse:cesdoc:13080&r=ecm 
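The LASSO half of this comparison is easy to sketch. The following is a minimal, hypothetical illustration (not the authors' code), assuming only numpy: ℓ1-penalized least squares solved by cyclic coordinate descent with soft-thresholding, applied to simulated data where only the first two of ten candidate variables matter.

```python
import numpy as np

def soft_threshold(rho, lam):
    """Soft-thresholding operator used in the LASSO coordinate-descent update."""
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimise 0.5*||y - X b||^2 + lam*||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            resid = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ resid
            beta[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j])
    return beta

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.standard_normal(n)  # sparse true model

beta = lasso_cd(X, y, lam=10.0)
print(np.nonzero(beta)[0])   # indices of the selected variables
```

With the penalty at this level, the eight irrelevant candidates are set exactly to zero while the two true coefficients survive (slightly shrunk toward zero, which is the usual LASSO bias the adaptive LASSO corrects).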
By:  Hannah Frick; Carolin Strobl; Achim Zeileis 
Abstract:  Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest DIF tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a score distribution for the Rasch mixture model is introduced here which is restricted to be equal across latent classes. This causes the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study and its application is illustrated in a study of verbal aggression. 
Keywords:  mixed Rasch model, Rasch mixture model, DIF detection, score distribution 
JEL:  C38 C52 C87 
Date:  2013–12 
URL:  http://d.repec.org/n?u=RePEc:inn:wpaper:201336&r=ecm 
By:  Felix Chan (School of Economics and Finance, Curtin University) 
Abstract:  Since the seminal work of Engle and Granger (1987) and Johansen (1988), testing for cointegration has become standard practice in analysing economic and financial time series data. Many of the techniques in cointegration analysis require the assumption of normality, which may not always hold. Although there is evidence that these techniques are robust to non-normality, most existing techniques do not seek additional information from non-normality. This is important in at least two cases. First, the number of observations is typically small for macroeconomic time series data; the fact that the underlying distribution may not be normal provides important information that can potentially be useful in testing for cointegrating relationships. Second, high frequency financial time series data often show evidence of non-normal random variables with time-varying second moments, and it is unclear how these characteristics affect standard tests of cointegration, such as Johansen's trace and max tests. This paper proposes a new framework derived from Independent Component Analysis (ICA) to test for cointegration. The framework explicitly exploits processes with non-normal distributions and their independence. Monte Carlo simulation shows that the new test is comparable to Johansen's trace and max tests when the number of observations is large, and has a slight advantage over Johansen's tests when the number of observations is limited. Moreover, the computational requirement of this method is relatively mild, which makes it practical for empirical research. 
Keywords:  Blind Source Separation, Independent Component Analysis, Cointegration Rank 
JEL:  C13 C32 C53 
Date:  2013–07 
URL:  http://d.repec.org/n?u=RePEc:ozl:bcecwp:wp1306&r=ecm 
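The paper's cointegration test itself is not reproduced here, but the ingredient it exploits can be sketched: ICA recovers independent non-Gaussian components from a linear mixture, something Gaussian-based methods cannot do. A minimal symmetric FastICA sketch (hypothetical simulated sources, numpy only) with the tanh contrast:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
s = np.vstack([rng.uniform(-np.sqrt(3), np.sqrt(3), n),   # sub-Gaussian source
               rng.laplace(0.0, 1 / np.sqrt(2), n)])      # super-Gaussian source
A = np.array([[2.0, 1.0], [1.0, 2.0]])                    # mixing matrix
x = A @ s

# Whiten: rotate and scale the mixtures so their covariance is the identity.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

# Symmetric FastICA with the tanh nonlinearity (exploits non-normality).
W = rng.standard_normal((2, 2))
for _ in range(100):
    wz = W @ z
    W = (np.tanh(wz) @ z.T) / n - np.diag((1 - np.tanh(wz) ** 2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W)
    W = u @ vt                                            # symmetric decorrelation

recovered = W @ z
# Each recovered component should match one true source up to sign and order.
corr = np.corrcoef(np.vstack([recovered, s]))[:2, 2:]
print(np.abs(corr).max(axis=1))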
By:  Sato, Yoshihiro (Department of Economics, School of Business, Economics and Law, Göteborg University); Söderbom, Måns (Department of Economics, School of Business, Economics and Law, Göteborg University) 
Abstract:  We highlight the fact that the Sargan-Hansen test for GMM estimators applied to panel data is a joint test of valid orthogonality conditions and coefficient stability over time. A possible reason why the null hypothesis of valid orthogonality conditions is rejected is therefore that the slope coefficients vary over time. One solution is to estimate an empirical model where the coefficients are time specific. We apply this solution to the system GMM estimator of Cobb-Douglas production functions for a selection of Swedish industries, and find that relaxing the assumption that slope coefficients are constant over time results in considerably more satisfactory outcomes of the Sargan-Hansen test. 
Keywords:  panel data; system GMM estimation; time-varying coefficients; overidentifying restrictions 
JEL:  C13 C33 C36 
Date:  2013–12–10 
URL:  http://d.repec.org/n?u=RePEc:hhs:gunwpe:0577&r=ecm 
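The mechanics of the overidentification test are easy to sketch. Below is a hypothetical cross-sectional linear IV example (numpy only, simulated data, not the authors' panel setting): 2SLS with two instruments for one endogenous regressor, followed by the Hansen J statistic on the single overidentifying restriction. In the panel system GMM case, the same statistic also reacts to time variation in the slopes, which is the paper's point.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
z = rng.standard_normal((n, 2))                # two valid instruments
u = rng.standard_normal(n)                     # structural error
x = z @ np.array([1.0, 0.5]) + 0.8 * u + rng.standard_normal(n)  # endogenous regressor
y = 2.0 * x + u

# 2SLS: project x on the instruments, then use the fitted values.
x_hat = z @ np.linalg.lstsq(z, x, rcond=None)[0]
beta = (x_hat @ y) / (x_hat @ x)

# Hansen J-test of the overidentifying restriction (one extra instrument).
e = y - beta * x
g = z * e[:, None]                             # moment contributions z_i * e_i
gbar = g.mean(axis=0)
S = np.cov(g.T)                                # estimated moment covariance
J = n * gbar @ np.linalg.solve(S, gbar)
print(round(J, 2))                             # ~ chi2(1) under valid moments
```

With valid instruments and stable coefficients, J behaves like a chi-squared variate with one degree of freedom; invalid moments, or (in the panel case) time-varying slopes, inflate it.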
By:  Jui-Chung Yang; Ke-Li Xu 
Abstract:  The reaction coefficients of expected inflation and output gaps in the forecast-based monetary policy reaction function may be only weakly identified when the smoothing coefficient is close to unity and the nominal interest rates are highly persistent. In this paper we modify the method of Andrews and Cheng (2012, Econometrica) on inference under weak/semi-strong identification to accommodate the persistence issue. Our modification employs asymptotic theories for near-unit-root processes and novel drifting sequence approaches. Large sample properties with a desired smooth transition with respect to the true values of parameters are developed for the nonlinear least squares (NLS) estimator and its corresponding t/Wald statistics for a general class of models. Despite the lack of consistent estimability, conservative confidence sets for the weakly identified parameters of interest can be obtained by inverting the t/Wald tests. We show that the null-imposed least-favorable confidence sets have correct asymptotic sizes, while the projection-based method may lead to asymptotic over-coverage. Our empirical application suggests that the NLS estimates of the reaction coefficients in the U.S. forecast-based monetary policy reaction function for 1987:3–2007:4 are not sufficiently accurate to rule out the possibility of indeterminacy. 
JEL:  C12 C22 E58 
Date:  2013–12–08 
URL:  http://d.repec.org/n?u=RePEc:jmp:jm2013:pya307&r=ecm 
By:  Andrews, Rick L.; Ebbes, Peter 
Abstract:  Endogeneity problems in demand models occur when certain factors, unobserved by the researcher, affect both demand and the values of a marketing mix variable set by managers. For example, unobserved factors such as style, prestige, or reputation might result in higher prices for a product and higher demand for that product. If not addressed properly, endogeneity can bias the elasticities of the endogenous variable and subsequent optimization of the marketing mix. In practice, instrumental variables estimation techniques are often used to remedy an endogeneity problem. It is well known that, for linear regression models, the use of instrumental variables techniques with poor quality instruments can produce very poor parameter estimates, in some circumstances even worse than those that result from ignoring the endogeneity problem altogether. The literature has not addressed the consequences of using poor quality instruments to remedy endogeneity problems in nonlinear models, such as logit-based demand models. Using simulation methods, we investigate the effects of using poor quality instruments to remedy endogeneity in logit-based demand models applied to finite-sample datasets. The results show that, even when the conditions for lack of parameter identification due to poor quality instruments do not hold exactly, estimates of price elasticities can still be quite poor. That being the case, we investigate the relative performance of several nonlinear instrumental variables estimation procedures utilizing readily available instruments in finite samples. Our study highlights the attractiveness of the control function approach (Petrin and Train 2010) and readily available instruments, which together reduce the mean squared elasticity errors substantially for experimental conditions in which the theory-backed instruments are poor in quality. 
We find important effects for sample size, in particular for the number of brands, for which it is shown that endogeneity problems are exacerbated with increases in the number of brands, especially when poor quality instruments are used. In addition, the number of stores is found to be important for likelihood ratio testing. The results of the simulation are shown to generalize to situations under Nash pricing in oligopolistic markets, to conditions in which cross-sectional preference heterogeneity exists, and to nested logit and probit-based demand specifications as well. Based on the results of the simulation, we suggest a procedure for managing a potential endogeneity problem in logit-based demand models. 
Keywords:  Choice Models; Endogeneity; Econometric Models; Instrumental Variables 
JEL:  C20 C50 M30 
Date:  2013–12–10 
URL:  http://d.repec.org/n?u=RePEc:ebg:heccah:1006&r=ecm 
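The control function idea the abstract recommends is a two-step procedure: regress the endogenous price on its instrument, then include the first-stage residual as an extra regressor in the choice model. A hypothetical binary-logit sketch (simulated data, numpy only; the paper's setting is richer) shows the residual soaking up the unobserved quality that biases the naive price coefficient:

```python
import numpy as np

def fit_logit(X, y, n_iter=50):
    """Binary logit maximum likelihood via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)
        beta = beta + np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - p))
    return beta

rng = np.random.default_rng(3)
n = 5000
z = rng.standard_normal(n)                     # cost-shifter instrument
xi = rng.standard_normal(n)                    # unobserved quality (endogeneity source)
price = 1.0 + 0.7 * z + 0.7 * xi + 0.3 * rng.standard_normal(n)
y = (1.0 - price + xi + rng.logistic(size=n) > 0).astype(float)  # true price coef: -1

ones = np.ones(n)
naive = fit_logit(np.column_stack([ones, price]), y)

# Control function: the first-stage residual proxies for the unobservable.
Z1 = np.column_stack([ones, z])
v = price - Z1 @ np.linalg.lstsq(Z1, price, rcond=None)[0]
cf = fit_logit(np.column_stack([ones, price, v]), y)
print(naive[1], cf[1])                         # naive is biased toward zero
```

The naive estimate is badly attenuated because price and unobserved quality move together; adding the residual largely restores the true (negative) price response, up to the usual logit scale approximation.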
By:  Guillaume Chevillon (ESSEC Business School) 
Abstract:  Standard tests for the cointegration rank of a vector autoregressive process have distributions that are affected by the presence of deterministic trends. We consider the recent approach of Demetrescu et al. (2009), who recommend testing a composite null. We assess this methodology in the presence of trends (linear or broken) whose magnitude is small enough not to be detectable at conventional significance levels. We model them using local asymptotics and derive the properties of the test statistics. We show that whether the trend is orthogonal to the cointegrating vector has a major impact on the distributions, but that the test combination approach remains valid. We apply the methodology to the study of cointegration between global temperatures and the radiative forcing of human gas emissions, and find new evidence of Granger causality. 
Keywords:  Cointegration ; Deterministic Trend ; Global Warming ; Likelihood Ratio ; Local Trends 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:hal:journl:hal00914830&r=ecm 
By:  William H Greene (Economics Department, Stern School of Business, New York University (NYU)); Mark N Harris (School of Economics and Finance, Curtin University); Preety Srivastava (Department of Econometrics and Business Statistics, Monash University); Xueyan Zhao (Department of Econometrics and Business Statistics, Monash University) 
Abstract:  When modelling 'social bads', such as illegal drug consumption, researchers are often faced with a dependent variable characterised by an excessive amount of zero observations. Building on the recent literature on hurdle and double-hurdle models, we propose a double-inflated modelling framework, where the zero observations are allowed to come from: non-participants; participant misreporters (who have larger loss functions associated with a truthful response); and infrequent consumers. Due to our empirical application, the model is derived for the case of an ordered discrete dependent variable. However, it is similarly possible to augment other such zero-inflated models (zero-inflated count models, and double-hurdle models for continuous variables, for example). The model is then applied to a consumer choice problem of cannabis consumption. As expected, we find that misreporting has a significant (estimated) effect on the recorded incidence of marijuana use. Specifically, we find that 14% of the zeros reported in the survey are estimated to come from individuals who have misreported their participation. 
Keywords:  Ordered outcomes, discrete data, cannabis consumption, zero-inflated responses 
JEL:  C3 D1 I1 
Date:  2013–07 
URL:  http://d.repec.org/n?u=RePEc:ozl:bcecwp:wp1305&r=ecm 
By:  Brandon J. Bates; Mikkel Plagborg-Møller; James H. Stock; Mark W. Watson 
Abstract:  This paper considers the estimation of approximate dynamic factor models when there is temporal instability in the factor loadings. We characterize the type and magnitude of instabilities under which the principal components estimator of the factors is consistent and find that these instabilities can be larger than earlier theoretical calculations suggest. We also discuss implications of our results for the robustness of regressions based on the estimated factors and of estimates of the number of factors in the presence of parameter instability. Simulations calibrated to an empirical application indicate that instability in the factor loadings has a limited impact on estimation of the factor space and diffusion index forecasting, whereas estimation of the number of factors is more substantially affected. 
URL:  http://d.repec.org/n?u=RePEc:qsh:wpaper:84631&r=ecm 
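The principal components estimator studied in the paper can be sketched in a few lines. The example below is a hypothetical stable-loadings baseline (numpy only, simulated data): a single common factor is extracted from a T×N panel as the first left singular vector of the centred data, and it tracks the true factor closely.

```python
import numpy as np

rng = np.random.default_rng(4)
T, N = 200, 50
f = rng.standard_normal(T)                           # single common factor
lam = rng.standard_normal(N)                         # factor loadings
X = np.outer(f, lam) + rng.standard_normal((T, N))   # common + idiosyncratic parts

# Principal-components factor estimate: first left singular vector of centred data.
X_c = X - X.mean(axis=0)
u, s_vals, vt = np.linalg.svd(X_c, full_matrices=False)
f_hat = u[:, 0] * s_vals[0]

# The estimated factor is identified only up to sign and scale.
print(abs(np.corrcoef(f, f_hat)[0, 1]))
```

The paper's question is how far the loadings `lam` can drift over time before this estimator of the factor space degrades; in this stable benchmark the absolute correlation is close to one.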
By:  Heinemann A. (GSBE) 
Abstract:  This paper studies an alternative approach to constructing confidence intervals for parameter estimates of the Lee-Carter model. First, the procedure for obtaining confidence intervals using the regular nonparametric i.i.d. bootstrap is specified. Empirical evidence seems to invalidate this approach, as it indicates strong autocorrelation and cross-correlation in the residuals. A more general approach is introduced, relying on the sieve bootstrap method, which includes the nonparametric i.i.d. method as a special case. Second, this paper examines the performance of the nonparametric i.i.d. and the sieve bootstrap approaches. In an application to a Dutch data set, the sieve bootstrap method returns much wider confidence intervals than the nonparametric i.i.d. approach. By neglecting the residuals' dependency structure, the nonparametric i.i.d. bootstrap method seems to construct confidence intervals that are too narrow. Third, the paper discusses an intuitive explanation for the source of autocorrelation and cross-correlation within stochastic mortality models. 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:dgr:umagsb:2013069&r=ecm 
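The AR-sieve idea is generic: fit an autoregression of order p, resample its centred residuals, and rebuild bootstrap series that preserve the estimated serial dependence, which the i.i.d. bootstrap destroys. A minimal univariate sketch (numpy only, simulated AR(1) data; the Lee-Carter application is multivariate):

```python
import numpy as np

def ar_sieve_bootstrap(x, p=2, n_boot=500, rng=None):
    """AR(p)-sieve bootstrap: fit an autoregression by OLS, resample the
    centred residuals, and rebuild series with the estimated dependence."""
    rng = rng or np.random.default_rng(0)
    n = len(x)
    lags = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(n - p), lags])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    resid = x[p:] - X @ coef
    resid = resid - resid.mean()
    boots = np.empty((n_boot, n))
    for b in range(n_boot):
        e = rng.choice(resid, size=n, replace=True)
        xb = list(x[:p])                       # keep the first p observations
        for t in range(p, n):
            xb.append(coef[0] + sum(coef[k + 1] * xb[t - k - 1] for k in range(p)) + e[t])
        boots[b] = xb
    return boots

rng = np.random.default_rng(5)
x = np.empty(300)
x[0] = 0.0
for t in range(1, 300):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()   # persistent series

boots = ar_sieve_bootstrap(x, p=2, n_boot=200, rng=rng)
lo_ci, hi_ci = np.percentile(boots.mean(axis=1), [2.5, 97.5])
print(lo_ci, hi_ci)
```

An i.i.d. resample of `x` itself would ignore the persistence and yield a misleadingly narrow interval, which mirrors the paper's finding for the Lee-Carter residuals.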
By:  Luca Sala 
Abstract:  We use frequency domain techniques to estimate a medium-scale DSGE model on different frequency bands. We show that goodness of fit, forecasting performance and parameter estimates vary substantially with the frequency bands over which the model is estimated. Estimates obtained using subsets of frequencies are characterized by significantly different parameters, an indication that the model cannot match all frequencies with one set of parameters. In particular, we find that: i) the low frequency properties of the data strongly affect parameter estimates obtained in the time domain; ii) the importance of economic frictions in the model changes when different subsets of frequencies are used in estimation. This is particularly true for the investment cost friction and habit persistence: when low frequencies are present in the estimation, the investment cost friction and habit persistence are estimated to be higher than when low frequencies are absent. 
Keywords:  DSGE models, frequency domain, band maximum likelihood 
JEL:  C11 C32 E32 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:igi:igierp:504&r=ecm 
By:  Tschernig, Rolf; Weber, Enzo; Weigand, Roland 
Abstract:  We state that long-run restrictions that identify structural shocks in VAR models with unit roots lose their original interpretation if the fractional integration order of the affected variable is below one. For such fractionally integrated models we consider a medium-run approach that employs restrictions on variance contributions over finite horizons. We show for alternative identification schemes that letting the horizon tend to infinity is equivalent to imposing the restriction of Blanchard and Quah (1989) introduced for the unit-root case. 
Keywords:  Structural vector autoregression; long-run restriction; finite-horizon identification; fractional integration; impulse response function 
JEL:  C32 C50 
Date:  2013–12 
URL:  http://d.repec.org/n?u=RePEc:bay:rdwiwi:29162&r=ecm 
By:  Osmani Teixeira de Carvalho Guillén; Alain Hecq; João Victor Issler; Diogo Saraiva 
Abstract:  This paper makes two original contributions. First, we show that PV relationships entail a weak-form SCCF restriction, as in Hecq et al. (2006) and Athanasopoulos et al. (2011), and imply a polynomial serial correlation common feature relationship (Cubadda and Hecq, 2001). These represent short-run restrictions on dynamic multivariate systems, something that has not been discussed before. Our second contribution relates to forecasting multivariate time series that are subject to PVM restrictions, which has wide application in macroeconomics and finance. We build on previous work showing the benefits for forecasting when the short-run dynamics of the system are constrained. Appropriate common-cycle restrictions improve forecasting because they identify linear combinations of the first differences of the data that cannot be forecast from past information. This embeds natural exclusion restrictions, preventing the estimation of useless parameters that would otherwise increase forecast variance with no expected reduction in bias. We apply the techniques discussed in this paper to data known to be subject to PV restrictions: the online series maintained and updated by Robert J. Shiller at http://www.econ.yale.edu/~shiller/data.htm. We focus on three data sets. The first includes the levels of interest rates with long and short maturities, the second includes the level of real price and dividend for the S&P composite index, and the third includes the logarithmic transformation of prices and dividends. Our exhaustive investigation of six different multivariate models reveals that better forecasts can be achieved when restrictions are applied to them. Specifically, cointegration restrictions, and cointegration and weak-form SCCF rank restrictions, as well as the full set of theoretical restrictions embedded in the PVM 
Date:  2013–10 
URL:  http://d.repec.org/n?u=RePEc:bcb:wpaper:330&r=ecm 
By:  Kevin D. Hoover 
Abstract:  The paper is a keynote lecture from the Tilburg-Madrid Conference on Hypothesis Tests: Foundations and Applications at the Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain, 15–16 December 2011. It addresses the role of tests of statistical hypotheses (specification tests) in the selection of a statistically admissible model in which to evaluate economic hypotheses. The issue is formulated in the context of recent philosophical accounts of the nature of models and related to some results in the literature on specification search. 
Keywords:  statistical testing, hypothesis tests, models, general-to-specific specification search, optional stopping, severe tests, costs of search, costs of inference, extreme-bounds analysis, LSE econometric methodology 
JEL:  B41 C18 C12 C50 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:hec:heccee:20123&r=ecm 
By:  Kaoru Tone (National Graduate Institute for Policy Studies) 
Abstract:  In this paper, we propose new resampling models in data envelopment analysis (DEA). Input/output values are subject to change for several reasons, e.g., measurement errors, hysteretic factors, arbitrariness and so on. Furthermore, these variations differ across input/output items and decision-making units (DMUs). Hence, DEA efficiency scores need to be examined by considering these factors. Resampling based on these variations is necessary for gauging the confidence interval of DEA scores. We propose three resampling models. The first assumes downside and upside measurement error rates for each input/output, common to all DMUs. We resample data following the triangular distribution that the downside and upside errors indicate around the observed data. The second model utilizes historical data, e.g., past–present, for estimating data variations, imposing chronological-order weights supplied by the Lucas series (a variant of the Fibonacci series). The last one deals with future prospects: this model aims at forecasting the future efficiency score and its confidence interval for each DMU. 
Date:  2013–12 
URL:  http://d.repec.org/n?u=RePEc:ngi:dpaper:1323&r=ecm 
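The first resampling model is straightforward to sketch. The example below (hypothetical data and error rates, numpy only) draws inputs and outputs from triangular distributions around the observed values and collects a percentile interval for each DMU's score; for brevity it scores DMUs by a simple best-ratio comparison rather than solving the DEA linear programs the paper uses.

```python
import numpy as np

rng = np.random.default_rng(6)
inputs = np.array([10.0, 12.0, 8.0, 15.0])     # one input per DMU (illustrative)
outputs = np.array([20.0, 21.0, 18.0, 24.0])   # one output per DMU (illustrative)
down, up = 0.10, 0.05                          # assumed down/upside error rates

def scores(inp, out):
    """Output/input ratio relative to the best DMU (simplified stand-in for DEA)."""
    ratio = out / inp
    return ratio / ratio.max()

draws = np.empty((1000, len(inputs)))
for b in range(1000):
    # Triangular resampling: mode at the observation, bounds set by the error rates.
    inp = rng.triangular(inputs * (1 - down), inputs, inputs * (1 + up))
    out = rng.triangular(outputs * (1 - down), outputs, outputs * (1 + up))
    draws[b] = scores(inp, out)

lo_s, hi_s = np.percentile(draws, [2.5, 97.5], axis=0)
print(np.round(lo_s, 3), np.round(hi_s, 3))    # 95% interval per DMU
```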
By:  Jäckel, Christoph 
Abstract:  Over the last two decades, alternative expected return proxies have been proposed with substantially lower variation than realized returns. This helped to reduce parameter uncertainty and to identify many seemingly robust relations between expected returns and variables of interest, which would have gone unnoticed with the use of realized returns. In this study, I argue that these findings could be spurious because model uncertainty is ignored: since a researcher does not know which of the many proposed proxies is measured with the least error, any inference conditional on only one proxy can lead to overconfident decisions. As a solution, I introduce a Bayesian model averaging (BMA) framework to directly incorporate model uncertainty into the statistical analysis. I apply this approach to three examples from the implied cost of capital (ICC) literature and show that incorporating model uncertainty can severely widen the coverage regions, thereby leveling the playing field between realized returns and alternative expected return proxies. 
Keywords:  Timevarying expected returns, implied cost of capital, asset pricing, model averaging, model selection 
JEL:  C11 G12 
Date:  2013–12–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:51978&r=ecm 
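The mechanics of model averaging can be sketched with the common BIC approximation to posterior model probabilities (a hypothetical illustration, numpy only, not the paper's specification): each candidate regression gets a weight proportional to exp(-BIC/2), and inference averages over models instead of conditioning on one.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y = 1.0 + 0.5 * x1 + 0.5 * rng.standard_normal(n)   # x2 is irrelevant

def bic(X, y):
    """BIC of an OLS fit (Gaussian likelihood, k estimated coefficients)."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    return len(y) * np.log(rss / len(y)) + X.shape[1] * np.log(len(y))

ones = np.ones(n)
models = {
    "x1": np.column_stack([ones, x1]),
    "x2": np.column_stack([ones, x2]),
    "x1+x2": np.column_stack([ones, x1, x2]),
}
bics = np.array([bic(X, y) for X in models.values()])
# BIC-based approximation to posterior model probabilities.
w = np.exp(-0.5 * (bics - bics.min()))
w = w / w.sum()
for name, wi in zip(models, w):
    print(name, round(wi, 3))
```

The correctly specified model dominates, but the non-zero weight on the larger model is exactly the extra uncertainty that conditioning on a single proxy would hide.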
By:  Jensen, Mark J; Maheu, John M 
Abstract:  The relationship between risk and return is one of the most studied topics in finance. The majority of the literature is based on a linear, parametric relationship between expected returns and conditional volatility. However, there is no theoretical justification for the relationship to be linear. This paper models the contemporaneous relationship between market excess returns and log-realized variances nonparametrically with an infinite mixture representation of their joint distribution. With this nonparametric representation, the conditional distribution of excess returns given log-realized variance will also have an infinite mixture representation, but with probabilities and arguments depending on the value of realized variance. Our nonparametric approach allows for deviations from Gaussianity by allowing for non-zero higher-order moments. It also allows for a smooth nonlinear relationship between the conditional mean of excess returns and log-realized variance. Parsimony of our nonparametric approach is guaranteed by the almost surely discrete Dirichlet process prior used for the mixture weights and arguments. We find strong robust evidence of volatility feedback in monthly data. Once volatility feedback is accounted for, there is an unambiguous positive relationship between expected excess returns and expected log-realized variance. This relationship is nonlinear. Volatility feedback impacts the whole distribution and not just the conditional mean. 
Keywords:  Dirichlet process prior, MCMC, realized variance 
JEL:  C11 C3 C32 G1 G12 
Date:  2013–12 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:52132&r=ecm 
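The "almost surely discrete" Dirichlet process prior behind the mixture weights has a compact constructive form, the stick-breaking representation. A minimal sketch (numpy only, illustrative parameters):

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Draw truncated Dirichlet process mixture weights via stick-breaking:
    w_k = v_k * prod_{j<k} (1 - v_j), with v_k ~ Beta(1, alpha)."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.cumprod(np.concatenate([[1.0], 1.0 - v[:-1]]))
    return w

rng = np.random.default_rng(8)
w = stick_breaking(alpha=2.0, n_atoms=50, rng=rng)
print(round(w.sum(), 4))   # close to 1 after 50 sticks
```

Smaller concentration `alpha` front-loads the weights onto few atoms, which is what makes the infinite mixture parsimonious in practice.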
By:  Hancock, Ruth; Morciano, Marcello; Pudney, Stephen 
Abstract:  We propose a nonparametric matching approach to estimation of implicit costs based on the compensating variation (CV) principle. We apply the method to estimate the additional personal costs experienced by disabled older people in Great Britain, finding that those costs are substantial, averaging in the range £48–£61 a week, compared with the mean level of state disability benefit (£28) or total public support (£47) received. Estimated costs rise strongly with the severity of disability. We compare the nonparametric approach with the standard parametric method, finding that the latter tends to generate large overestimates unless conditions are ideal. The nonparametric approach has much to recommend it. 
Date:  2013–12–04 
URL:  http://d.repec.org/n?u=RePEc:ese:iserwp:201326&r=ecm 
By:  Rosa Bernardini Papalia (Università di Bologna); Annalisa Donno (Università di Bologna) 
Abstract:  The concept of competitiveness, long considered strictly connected to economic and financial performance, has evolved, above all in recent years, toward new, wider interpretations disclosing its multidimensional nature. The shift to a multidimensional view of the phenomenon has sparked an intense debate involving theoretical reflections on its characterizing features, as well as methodological considerations on its assessment and measurement. The present research has a twofold objective: studying in depth the tangible and intangible aspects characterizing multidimensional competitive phenomena from a micro-level point of view, and measuring competitiveness through a model-based approach. Specifically, we propose a nonparametric approach to Structural Equation Model techniques for the computation of multidimensional composite measures. Structural Equation Model tools are used in an empirical application to the Italian case: a model-based micro-level competitiveness indicator is constructed for a large sample of Italian small and medium enterprises. 
Keywords:  Micro-level competitiveness, model-based composite indicators, Structural Equation Models, intangible assets 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:bot:quadip:123&r=ecm 
By:  Jaume BellesSampera (Faculty of Economics, University of Barcelona); Montserrat Guillén (Faculty of Economics, University of Barcelona); Miguel Santolino (Faculty of Economics, University of Barcelona) 
Abstract:  A new family of distortion risk measures, GlueVaR, is proposed in Belles-Sampera et al. (2013) to procure a risk assessment lying between those provided by common quantile-based risk measures. GlueVaR risk measures may be expressed as a combination of these standard risk measures. We show here that this relationship may be used to obtain approximations of GlueVaR measures for general skewed distribution functions using the Cornish-Fisher expansion. A subfamily of GlueVaR measures satisfies the tail-subadditivity property. An example of risk measurement based on real insurance claim data is presented, where the implications of tail-subadditivity in the aggregation of risks are illustrated. 
Keywords:  quantiles, subadditivity, tails, risk management, Value-at-Risk 
Date:  2013–12 
URL:  http://d.repec.org/n?u=RePEc:ira:wpaper:201323&r=ecm 
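The Cornish-Fisher device used for the approximations is a moment-based correction to the normal quantile. A minimal stdlib-only sketch of the standard fourth-order expansion (general formula, not the paper's GlueVaR-specific derivation):

```python
from statistics import NormalDist

def cornish_fisher_quantile(p, skew, ex_kurt):
    """Cornish-Fisher approximation to the p-quantile of a standardised
    distribution with given skewness and excess kurtosis."""
    z = NormalDist().inv_cdf(p)
    return (z
            + (z ** 2 - 1) * skew / 6
            + (z ** 3 - 3 * z) * ex_kurt / 24
            - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)

# With zero skewness and excess kurtosis it reduces to the normal quantile.
print(round(cornish_fisher_quantile(0.99, 0.0, 0.0), 3))   # ≈ 2.326
# Right skew and fat tails push the 99% quantile (VaR level) outwards.
print(round(cornish_fisher_quantile(0.99, 0.5, 1.0), 3))
```

Plugging such corrected quantiles into the combination formula for GlueVaR is what yields the closed-form approximations the abstract refers to.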
By:  Sophocles Mavroeidis; Mikkel Plagborg-Møller; James H. Stock 
Abstract:  We review the main identification strategies and empirical evidence on the role of expectations in the new Keynesian Phillips curve, paying particular attention to the issue of weak identification. Our goal is to provide a clear understanding of the role of expectations that integrates across the different papers and specifications in the literature. We discuss the properties of the various limited information econometric methods used in the literature and provide explanations of why they produce conflicting results. Using a common data set and a flexible empirical approach, we find that researchers are faced with substantial specification uncertainty, as different combinations of various a priori reasonable specification choices give rise to a vast set of point estimates. Moreover, given a specification, estimation is subject to considerable sampling uncertainty due to weak identification. We highlight the assumptions that seem to matter most for identification and the configuration of point estimates. We conclude that the literature has reached a limit on how much can be learned about the new Keynesian Phillips curve from aggregate macroeconomic time series. New identification approaches and new data sets are needed to reach an empirical consensus. 
URL:  http://d.repec.org/n?u=RePEc:qsh:wpaper:84656&r=ecm 
By:  Kevin D. Hoover 
Abstract:  Trygve Haavelmo’s The Probability Approach in Econometrics (1944) has been widely regarded as the foundation document of modern econometrics. Nevertheless, its significance has been interpreted in widely different ways. Some modern economists regard it as a blueprint for a provocative, but ultimately unsuccessful, program dominated by the need for a priori theoretical identification of econometric models. They call for new techniques that better acknowledge the interrelationship of theory and data. Others credit Haavelmo with an approach that focuses on statistical adequacy rather than theoretical identification. They see many of Haavelmo’s deepest insights as having been unduly neglected. The current paper uses bibliometric techniques and a close reading of econometrics articles and textbooks to trace the way in which the economics profession received, interpreted, and transmitted Haavelmo’s ideas. A key irony is that the first group calls for a reform of econometric thinking that goes several steps beyond Haavelmo’s initial vision; while the second group argues that essentially what the first group advocates was already in Haavelmo’s Probability Approach from the beginning. 
Keywords:  Trygve Haavelmo, econometrics, history of econometrics, the probability approach, econometric methodology, Cowles Commission 
JEL:  B23 B40 C10 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:hec:heccee:20124&r=ecm 
By:  Kasy, Maximilian 
Date:  2013–01 
URL:  http://d.repec.org/n?u=RePEc:qsh:wpaper:33257&r=ecm 