nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒06‒02
twenty-one papers chosen by
Sune Karlsson
Orebro University

  1. Testing for Cointegration with Temporally Aggregated and Mixed-frequency Time Series By Ghysels, Eric; Miller, J. Isaac
  2. The pairwise approach to model a large set of disaggregates with common trends By Guillermo Carlomagnol; Antoni Espasa
  3. On the Validity of Edgeworth Expansions and Moment Approximations for Three Indirect Inference Estimators By Stelios Arvanitis; Antonis Demos
  4. A fixed-T version of Breitung's panel data unit root test and its asymptotic local power By Yiannis Karavias; Elias Tzavalis
  5. Markov-Switching Mixed-Frequency VAR Models By Foroni, Claudia; Guérin, Pierre; Marcellino, Massimiliano
  6. Testing for Granger Causality with Mixed Frequency Data By Ghysels, Eric; Hill, Jonathan B.; Motegi, Kaiji
  7. Mixed Tempered Stable distribution By Edit Rroji; Lorenzo Mercuri
  8. Fixed-smoothing Asymptotics and Asymptotic F and t Tests in the Presence of Strong Autocorrelation By Sun, Yixiao
  9. Inference Based on SVAR Identified with Sign and Zero Restrictions: Theory and Applications By Arias, Jonas E.; Rubio-Ramírez, Juan Francisco; Waggoner, Daniel F.
  10. Analyzing business and financial cycles using multi-level factor models By Jörg Breitung; Sandra Eickmeier
  11. Optimal personalized treatment rules for marketing interventions: A review of methods, a new proposal, and an insurance case study By Leo Guelman; Montserrat Guillen; Ana M. Pérez-Marín
  12. Structural analysis with independent innovations By Herwartz, Helmut
  13. Structural FECM: Cointegration in large-scale structural FAVAR models By Banerjee, Anindya; Marcellino, Massimiliano; Masten, Igor
  14. Persistent and Transient Productive Inefficiency: A Maximum Simulated Likelihood Approach By Massimo Filippini; William Greene
  15. Gravity model estimation: Fixed effects vs. random intercept poisson pseudo maximum likelihood By Prehn, Sören; Brümmer, Bernhard; Glauben, Thomas
  16. The seeming unreliability of rank-ordered data as a consequence of model misspecification By Yan, Jin; Yoo, Hong Il
  17. Methods for Measuring Expectations and Uncertainty in Markov-Switching Models By Bianchi, Francesco
  18. No Arbitrage Priors, Drifting Volatilities, and the Term Structure of Interest Rates By Carriero, Andrea; Clark, Todd; Marcellino, Massimiliano
  19. On the stationarity of Dynamic Conditional Correlation models By Jean-David Fermanian; Hassan Malongo
  20. Autoregressive augmentation of MIDAS regressions By Cláudia Duarte
  21. Understanding Variation in Treatment Effects in Education Impact Evaluations: An Overview of Quantitative Methods. By Peter Z. Schochet; Mike Puma; John Deke

  1. By: Ghysels, Eric; Miller, J. Isaac
    Abstract: We examine the effects of mixed sampling frequencies and temporal aggregation on standard tests for cointegration. We find that the effects of aggregation on the size of the tests may be severe. Matching the sampling schemes of all series generally reduces size distortions, and the nominal size is obtained when all series are skip-sampled in the same way. When matching all schemes is not feasible but some high-frequency data are available, we show how to use mixed-frequency models to reduce the size distortion of the tests. We test stock prices and dividends for cointegration as an empirical demonstration.
    Keywords: cointegration; mixed sampling frequencies; residual-based cointegration test; temporal aggregation; trace test
    JEL: C12 C32
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:9654&r=ecm
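    A minimal sketch of the aggregation issue, under simulated data and hypothetical series names: skip-sampling both monthly series in the same way versus averaging one of them before running a residual-based cointegration test. statsmodels' Engle-Granger coint is used here as a generic stand-in for the tests studied in the paper.
      import numpy as np
      from statsmodels.tsa.stattools import coint

      rng = np.random.default_rng(0)
      n = 600                                  # months of simulated data
      trend = np.cumsum(rng.normal(size=n))    # common stochastic trend
      x_m = trend + rng.normal(size=n)         # monthly series 1
      y_m = 0.8 * trend + rng.normal(size=n)   # monthly series 2

      # Skip-sample both series the same way (every 3rd month -> quarterly)
      x_q = x_m[2::3]
      y_q = y_m[2::3]
      t_skip, p_skip, _ = coint(y_q, x_q)

      # Temporally aggregate one series (quarterly average), skip-sample the other
      y_avg = y_m.reshape(-1, 3).mean(axis=1)
      t_mix, p_mix, _ = coint(y_avg, x_q)

      print(f"both skip-sampled : p = {p_skip:.3f}")
      print(f"mixed aggregation : p = {p_mix:.3f}")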
  2. By: Guillermo Carlomagnol; Antoni Espasa
    Abstract: The objective of this paper is to model and forecast all the components of a macro or business variable. Our contribution concerns cases with a large number (hundreds) of components where multivariate approaches are not feasible. We extend in several directions the pairwise approach originally proposed by Espasa and Mayo-Burgos (2013) and study its statistical properties. The pairwise approach consists in performing common-features tests between the N(N-1)/2 pairs of series that exist in a group of N of them. Once this is done, groups of series that share common features can be formed. Next, all the components are forecast using single-equation models that include the restrictions derived from the common features. In this paper we focus on discovering groups of components that share single common trends. The asymptotic properties of the procedure are studied analytically. Monte Carlo evidence on its small-sample performance is provided and a small-sample correction procedure is designed. A comparison with a DFM alternative is also carried out, and results indicate that the pairwise approach dominates in many empirically relevant situations. A relevant advantage of the pairwise approach is that it does not need common features to be pervasive. A strategy for dealing with outliers and breaks in the context of the pairwise procedure is designed and its properties are studied by Monte Carlo. Results indicate that the treatment of these observations may considerably improve the procedure's performance when series are 'contaminated'.
    Keywords: Common trends, Cointegration, Factor Models, Disaggregation, Forecast model selection, Forecast Combination, Outliers treatment
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws141309&r=ecm
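    A rough illustration of the pairwise idea, not the authors' procedure: test each of the N(N-1)/2 pairs for cointegration and group series that appear to share a trend. The grouping rule below (connected components of the "cointegrated" graph) and the simulated panel are simplifying assumptions.
      import numpy as np
      from itertools import combinations
      from statsmodels.tsa.stattools import coint

      def pairwise_groups(series, alpha=0.05):
          """Group columns of `series` (T x N) that pairwise cointegrate."""
          N = series.shape[1]
          adj = {i: set() for i in range(N)}
          for i, j in combinations(range(N), 2):        # N(N-1)/2 tests
              _, pval, _ = coint(series[:, i], series[:, j])
              if pval < alpha:
                  adj[i].add(j); adj[j].add(i)
          groups, seen = [], set()
          for i in range(N):                            # connected components
              if i in seen:
                  continue
              stack, comp = [i], set()
              while stack:
                  k = stack.pop()
                  if k not in comp:
                      comp.add(k); stack.extend(adj[k] - comp)
              seen |= comp
              groups.append(sorted(comp))
          return groups

      rng = np.random.default_rng(1)
      trend = np.cumsum(rng.normal(size=300))
      X = np.column_stack([trend + rng.normal(size=300) for _ in range(4)] +
                          [np.cumsum(rng.normal(size=300)) for _ in range(3)])
      print(pairwise_groups(X))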
  3. By: Stelios Arvanitis; Antonis Demos (www.aueb.gr/users/demos)
    Abstract: This paper deals with higher order asymptotic properties for three indirect inference estimators. We provide conditions that ensure the validity of locally uniform Edgeworth approximations. When these are of sufficiently high order they also form integrability conditions that validate locally uniform moment approximations. We derive the relevant 2nd order bias and MSE approximations for the three estimators as functions of the respective approximations for the auxiliary estimator. We allow the possibility of stochastic weighting in any of the steps of the estimation procedure. We confirm that in the special case of deterministic weighting and affinity of the binding function, one of them is second order unbiased. The other two estimators do not have this property under the same conditions. Moreover, in this case, the second order approximate MSEs imply the superiority of the first estimator. We generalize to multistep procedures that provide recursive indirect estimators which are locally uniformly unbiased at any given order. Furthermore, in a particular case, we manage to validate locally uniform Edgeworth expansions for one of the estimators without any differentiability requirements for the estimating equations.
    Keywords: Locally Uniform Edgeworth Expansions, Locally Uniform Moment Approximations, Bias Approximation, MSE Approximation, Binding Function, Recursive Indirect Estimators
    JEL: C10 C13
    Date: 2014–05–20
    URL: http://d.repec.org/n?u=RePEc:aue:wpaper:1406&r=ecm
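    The paper studies higher-order properties of indirect inference estimators; as background only, here is a deliberately simple indirect inference routine for an MA(1) model with an AR(1) auxiliary model. This is not the paper's setting or one of its three estimators, just the binding-function idea in code.
      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(2)

      def simulate_ma1(theta, n, eps):
          return eps[1:] + theta * eps[:-1]          # MA(1) given the shocks

      def ar1_coef(y):
          return np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])   # auxiliary OLS

      n, theta_true = 500, 0.5
      y_obs = simulate_ma1(theta_true, n, rng.normal(size=n + 1))
      beta_hat = ar1_coef(y_obs)                     # auxiliary estimate on the data

      eps_sim = rng.normal(size=(10, n + 1))         # fixed simulation draws
      def distance(theta):
          betas = [ar1_coef(simulate_ma1(theta, n, e)) for e in eps_sim]
          return (np.mean(betas) - beta_hat) ** 2    # match the binding function

      res = minimize_scalar(distance, bounds=(-0.95, 0.95), method="bounded")
      print("indirect inference estimate:", round(res.x, 3))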
  4. By: Yiannis Karavias; Elias Tzavalis
    Abstract: We extend Breitung's (2000) large-T panel data unit root test to the case of a fixed time dimension while still allowing for heteroscedastic and serially correlated error terms. The analytic local power function of the new test is derived assuming that only the cross-section dimension of the panel grows large. It is found that if the errors are serially uncorrelated the test also has trivial power, but, if not, this is no longer the case. Monte Carlo experiments show that the suggested test is more powerful than its original large-T version when the number of cross-section units is moderate or large, regardless of the number of time series observations.
    Keywords: Panel unit root; local power function; serial correlation; incidental trends
    JEL: C22 C23
    URL: http://d.repec.org/n?u=RePEc:not:notgts:14/02&r=ecm
  5. By: Foroni, Claudia; Guérin, Pierre; Marcellino, Massimiliano
    Abstract: This paper introduces regime switching parameters in the Mixed-Frequency VAR model. We first discuss estimation and inference for Markov-switching Mixed-Frequency VAR (MSMF-VAR) models. Next, we assess the finite sample performance of the technique in Monte-Carlo experiments. Finally, the MSMF-VAR model is applied to predict GDP growth and business cycle turning points in the euro area. Its performance is compared with that of a number of competing models, including linear and regime switching mixed data sampling (MIDAS) models. The results suggest that MSMF-VAR models are particularly useful to estimate the status of economic activity.
    Keywords: Forecasting; Markov-switching; MIDAS; Mixed-frequency VAR; Nowcasting
    JEL: C53 E32 E37
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:9815&r=ecm
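    statsmodels does not ship an MSMF-VAR, but a crude single-equation stand-in, a two-regime Markov-switching regression of simulated quarterly growth on MIDAS-style monthly lags, conveys the flavour. Variable names, lag choices and the data-generating process are illustrative assumptions, not the paper's model.
      import numpy as np
      from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

      rng = np.random.default_rng(3)
      n_q = 160
      x_m = rng.normal(size=3 * n_q)                       # monthly indicator
      X = x_m.reshape(n_q, 3)                              # 3 monthly values per quarter
      regime = (np.sin(np.arange(n_q) / 8) > 0).astype(int)
      y = np.where(regime == 1, 0.6, -0.2) + X @ np.array([0.3, 0.2, 0.1]) \
          + rng.normal(scale=0.5, size=n_q)                # quarterly growth

      mod = MarkovRegression(y, k_regimes=2, exog=X, switching_variance=True)
      res = mod.fit()
      print(res.summary())
      probs = np.asarray(res.smoothed_marginal_probabilities)  # smoothed regime probabilities
      print(probs.shape)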
  6. By: Ghysels, Eric; Hill, Jonathan B.; Motegi, Kaiji
    Abstract: It is well known that temporal aggregation has adverse effects on Granger causality tests. Time series are often sampled at different frequencies. This is typically ignored, and data are merely aggregated to the common lowest frequency. We develop a set of Granger causality tests that explicitly take advantage of data sampled at different frequencies. We show that taking advantage of mixed frequency data allows us to better recover causal relationships when compared to the conventional common low frequency approach. We also show that the mixed frequency causality tests have higher local asymptotic power as well as more power in finite samples compared to conventional tests.
    Keywords: Granger causality; mixed data sampling (MIDAS); temporal aggregation; vector autoregression (VAR)
    JEL: C12 C32
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:9655&r=ecm
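    A back-of-the-envelope version of the idea (not the authors' test statistics): keep the monthly observations within each quarter as separate regressors and test their joint significance with an F test, instead of first aggregating them to a quarterly average. Data and names are simulated for illustration.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n_q = 200
      x_m = rng.normal(size=3 * n_q)               # monthly candidate cause
      X_hf = x_m.reshape(n_q, 3)                   # months 1..3 of each quarter
      y = 0.4 * X_hf[:, 2] + rng.normal(size=n_q)  # only month 3 matters
      y_lag = np.r_[0.0, y[:-1]]

      # Mixed-frequency regression: keep the three monthly values separate
      mf = sm.OLS(y, sm.add_constant(np.column_stack([y_lag, X_hf]))).fit()
      print("MF test :", mf.f_test("x2 = 0, x3 = 0, x4 = 0"))

      # Conventional low-frequency regression: aggregate to a quarterly mean first
      lf = sm.OLS(y, sm.add_constant(np.column_stack([y_lag, X_hf.mean(1)]))).fit()
      print("LF test :", lf.f_test("x2 = 0"))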
  7. By: Edit Rroji; Lorenzo Mercuri
    Abstract: In this paper we introduce a new parametric distribution, the Mixed Tempered Stable. It has the same structure as the Normal Variance Mean Mixtures, but the normality assumption gives way to a semi-heavy-tailed distribution. We show that, by choosing the parameters of the distribution appropriately and under a concrete specification of the mixing random variable, it is possible to obtain some well-known distributions as special cases. We employ the Mixed Tempered Stable distribution, which has many attractive features, for modeling univariate returns. Our results suggest that it is flexible enough to accommodate different density shapes. Furthermore, the analysis applied to statistical time series shows that our approach provides a better fit than competing distributions that are common in the practice of finance.
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1405.7603&r=ecm
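    A loose sketch of the variance-mean mixture structure referred to in the abstract. The standardized innovation is left as a Gaussian placeholder (in the Mixed Tempered Stable it would be a standardized tempered stable draw, which scipy does not provide), and the gamma mixing variable and parameter values are assumptions for illustration.
      import numpy as np

      rng = np.random.default_rng(5)

      def variance_mean_mixture(mu0, mu, shape, rate, size):
          """r = mu0 + mu*V + sqrt(V)*Z with V ~ Gamma and Z a standardized innovation."""
          V = rng.gamma(shape, 1.0 / rate, size=size)     # mixing random variable
          Z = rng.standard_normal(size)                   # placeholder; the MixedTS uses
          return mu0 + mu * V + np.sqrt(V) * Z            # a tempered stable draw here

      r = variance_mean_mixture(mu0=0.0, mu=0.1, shape=2.0, rate=2.0, size=100_000)
      z = (r - r.mean()) / r.std()
      print("skewness    :", round(float(np.mean(z**3)), 3))
      print("excess kurt :", round(float(np.mean(z**4) - 3), 3))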
  8. By: Sun, Yixiao
    Abstract: New asymptotic approximations are established for the Wald and t statistics in the presence of unknown but strong autocorrelation. The asymptotic theory extends the usual fixed-smoothing asymptotics under weak dependence to allow for near unit root and weak unit root processes. As the locality parameter that characterizes the neighborhood of the autoregressive root increases from zero to infinity, the new fixed-smoothing asymptotic distribution changes smoothly from the unit-root fixed-smoothing asymptotics to the usual fixed-smoothing asymptotics under weak dependence. Simulations show that the new approximation is more accurate than the usual fixed-smoothing approximation.
    Keywords: Social and Behavioral Sciences, Autocorrelation Robust Test, Fixed-smoothing Asymptotics, Local-to-Unity, Strong Autocorrelation, Weak Unit Root
    Date: 2014–05–27
    URL: http://d.repec.org/n?u=RePEc:cdl:ucsdec:qt8479f4s2&r=ecm
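    To make the fixed-smoothing idea concrete, here is a t-test for a mean that uses an orthonormal-series long-run variance estimator with a fixed number K of basis functions and compares the statistic with a Student-t(K) critical value. This illustrates the general fixed-smoothing framework, not the paper's new near-unit-root approximation; the AR(0.95) data are simulated.
      import numpy as np
      from scipy import stats

      def os_tstat(y, K=8):
          """t-statistic for the mean with an orthonormal-series LRV estimator."""
          T = len(y)
          ybar = y.mean()
          t_idx = (np.arange(1, T + 1) - 0.5) / T
          lam = []
          for j in range(1, K + 1):
              phi = np.sqrt(2.0) * np.cos(np.pi * j * t_idx)   # cosine basis function
              lam.append(np.sum(phi * (y - ybar)) / np.sqrt(T))
          lrv = np.mean(np.square(lam))                        # fixed-K LRV estimate
          return np.sqrt(T) * ybar / np.sqrt(lrv), K

      rng = np.random.default_rng(6)
      e = np.zeros(500)
      for t in range(1, 500):                                  # strongly autocorrelated series
          e[t] = 0.95 * e[t - 1] + rng.normal()
      tstat, K = os_tstat(e)
      print("t =", round(tstat, 2), " 5% critical value:", round(stats.t.ppf(0.975, K), 2))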
  9. By: Arias, Jonas E.; Rubio-Ramírez, Juan Francisco; Waggoner, Daniel F.
    Abstract: Are optimism shocks an important source of business cycle fluctuations? Are deficit-financed tax cuts better than deficit-financed spending to increase output? These questions have been previously studied using SVARs identified with sign and zero restrictions, and the answers have been positive and definite in both cases. While the identification of SVARs with sign and zero restrictions is theoretically attractive because it allows the researcher to remain agnostic with respect to the responses of the key variables of interest, we show that the current implementation of these techniques does not respect the agnosticism of the theory. These algorithms impose additional sign restrictions on seemingly unrestricted variables that bias the results and produce misleading confidence intervals. We provide an alternative and efficient algorithm that does not introduce any additional sign restriction, hence preserving the agnosticism of the theory. Without the additional restrictions, it is hard to support the claim that either optimism shocks are an important source of business cycle fluctuations or deficit-financed tax cuts work best at improving output. Our algorithm is not only correct but also faster than current ones.
    Keywords: Fiscal Shocks; Optimism; Sign and Zero Restrictions; SVARs
    JEL: C10
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:9796&r=ecm
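    The accept/reject step for pure sign restrictions, shown schematically (the zero-restriction algorithm developed in the paper is more involved and is not reproduced here): draw an orthogonal matrix via the QR decomposition of a Gaussian matrix, rotate the reduced-form impact matrix, and keep the draw if the impulse signs match. The covariance matrix and sign pattern are illustrative.
      import numpy as np

      rng = np.random.default_rng(7)
      Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])     # reduced-form covariance (illustrative)
      A0_chol = np.linalg.cholesky(Sigma)
      signs = np.array([[+1, 0],                     # shock 1 raises both variables on impact;
                        [+1, 0]])                    # 0 means unrestricted

      accepted = []
      while len(accepted) < 1000:
          Q, R = np.linalg.qr(rng.standard_normal((2, 2)))
          Q = Q @ np.diag(np.sign(np.diag(R)))       # normalization for a uniform draw
          impact = A0_chol @ Q                       # candidate impact matrix
          if np.all((signs == 0) | (np.sign(impact) == signs)):
              accepted.append(impact)
      print("kept", len(accepted), "impact matrices satisfying the sign restrictions")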
  10. By: Jörg Breitung; Sandra Eickmeier
    Abstract: This paper compares alternative estimation procedures for multi-level factor models which imply blocks of zero restrictions on the associated matrix of factor loadings. We suggest a sequential least squares algorithm for minimizing the total sum of squared residuals and a two-step approach based on canonical correlations; both are much simpler and faster than the Bayesian approaches previously employed in the literature. Monte Carlo simulations suggest that the estimators perform well in typical sample sizes encountered in the factor analysis of macroeconomic data sets. We apply the methodologies to study international co-movements of business and financial cycles as well as asymmetries over the business cycle in the US.
    Keywords: Factor models, canonical correlations, international business cycles, financial cycles, business cycle asymmetries
    JEL: C38
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2014-43&r=ecm
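    A compressed version of the sequential least-squares idea: a global factor and block-specific factors estimated by alternating principal components on a simulated two-block panel. The two-block design, one factor per level, and the fixed number of iterations are illustrative assumptions, not the paper's full algorithm.
      import numpy as np

      def pc_factor(X, k=1):
          """First k principal-component factors (T x k) of a T x N panel."""
          u, s, vt = np.linalg.svd(X, full_matrices=False)
          return u[:, :k] * s[:k]

      rng = np.random.default_rng(8)
      T, N = 200, 30
      g = rng.normal(size=(T, 1))                       # global factor
      f1, f2 = rng.normal(size=(T, 1)), rng.normal(size=(T, 1))
      X1 = g @ rng.normal(size=(1, N)) + f1 @ rng.normal(size=(1, N)) + rng.normal(size=(T, N))
      X2 = g @ rng.normal(size=(1, N)) + f2 @ rng.normal(size=(1, N)) + rng.normal(size=(T, N))
      X = np.hstack([X1, X2])

      G = pc_factor(X, 1)                               # initial global factor
      for _ in range(20):                               # alternate until it settles
          resid = [Xb - G @ np.linalg.lstsq(G, Xb, rcond=None)[0] for Xb in (X1, X2)]
          F = [pc_factor(r, 1) for r in resid]          # block factors on residuals
          resid_g = X - np.hstack([F[b] @ np.linalg.lstsq(F[b], Xb, rcond=None)[0]
                                   for b, Xb in enumerate((X1, X2))])
          G = pc_factor(resid_g, 1)                     # re-estimate the global factor
      print("correlation with true global factor:",
            round(abs(np.corrcoef(G[:, 0], g[:, 0])[0, 1]), 3))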
  11. By: Leo Guelman (Royal Bank of Canada, RBC Insurance); Montserrat Guillen (Department of Econometrics, Riskcenter-IREA, Universitat de Barcelona); Ana M. Pérez-Marín (Department of Econometrics, Riskcenter-IREA, Universitat de Barcelona)
    Abstract: In many important settings, subjects can show significant heterogeneity in response to a stimulus or "treatment". For instance, a treatment that works for the overall population might be highly ineffective, or even harmful, for a subgroup of subjects with specific characteristics. Similarly, a new treatment may not be better than an existing treatment in the overall population, but there is likely a subgroup of subjects who would benefit from it. The notion that "one size may not fit all" is becoming increasingly recognized in a wide variety of fields, ranging from economics to medicine. This has drawn significant attention to personalizing the choice of treatment so that it is optimal for each individual. An optimal personalized treatment is the one that maximizes the probability of a desirable outcome. We call the task of learning the optimal personalized treatment "personalized treatment learning". From the statistical learning perspective, this problem poses some challenges, primarily because the optimal treatment is unknown in a given training set. A number of statistical methods have been proposed recently to tackle this problem. However, to the best of our knowledge, there has been no attempt so far to provide a comprehensive view of these methods and to benchmark their performance. The purpose of this paper is twofold: i) to describe seven recently proposed methods for personalized treatment learning and compare their performance in an extensive numerical study, and ii) to propose a novel method labeled causal conditional inference trees and its natural extension to causal conditional inference forests. The results show that our new proposed method often outperforms the alternatives in the numerical settings described in this article. We also illustrate an application of the proposed method using data from a large Canadian insurer for the purpose of selecting the best targets for cross-selling an insurance product.
    Keywords: personalized treatment learning, causal inference, marketing interventions
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:bak:wpaper:201406&r=ecm
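    For readers new to the area, a bare-bones "two-model" uplift estimator is sketched below with scikit-learn; it is one of the simpler benchmark approaches in this literature and is not the causal conditional inference forest proposed in the paper. The data-generating process is made up for illustration.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(9)
      n = 5000
      X = rng.normal(size=(n, 4))
      treat = rng.integers(0, 2, size=n)                       # randomized assignment
      p = 0.3 + 0.2 * treat * (X[:, 0] > 0)                    # treatment helps only if x0 > 0
      y = rng.binomial(1, p)

      # Fit one outcome model per arm, then take the difference in predicted probabilities
      m1 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[treat == 1], y[treat == 1])
      m0 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[treat == 0], y[treat == 0])
      uplift = m1.predict_proba(X)[:, 1] - m0.predict_proba(X)[:, 1]

      # Recommend the treatment only where the predicted uplift is positive
      print("share recommended for treatment:", round(float((uplift > 0).mean()), 2))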
  12. By: Herwartz, Helmut
    Abstract: Structural innovations in multivariate dynamic systems are typically hidden and often identified by means of a-priori economic reasoning. Under multivariate Gaussian model innovations there is no loss measure available to distinguish between alternative orderings of variables or, put differently, between particular identifying restrictions and rotations thereof. Based on a non-Gaussian framework of independent innovations, a loss statistic is proposed in this paper that allows one to discriminate between alternative identifying assumptions on the basis of nonparametric density estimates. The merits of the proposed identification strategy are illustrated by means of a Monte Carlo study. Real data applications cover bivariate systems comprising US stock prices and total factor productivity, and four pairs of international breakeven inflation rates to investigate the monetary autonomy of the Bank of Canada and the Bank of England.
    Keywords: structural innovations, identifying assumptions, SVAR, Cholesky decomposition, news shocks, monetary independence
    JEL: C32 G15
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:zbw:cegedp:208&r=ecm
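    A quick illustration of identification through independent non-Gaussian innovations: fit a reduced-form VAR with statsmodels and unmix its residuals with FastICA. The paper's nonparametric-density loss statistic is not implemented here; ICA simply stands in for the independence idea, and the bivariate system is simulated.
      import numpy as np
      from statsmodels.tsa.api import VAR
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(10)
      T = 1000
      eps = rng.laplace(size=(T, 2))                 # independent non-Gaussian shocks
      B = np.array([[1.0, 0.0], [0.5, 1.0]])         # true impact matrix
      u = eps @ B.T                                  # reduced-form innovations
      A = np.array([[0.5, 0.1], [0.0, 0.4]])
      y = np.zeros((T, 2))
      for t in range(1, T):
          y[t] = y[t - 1] @ A.T + u[t]

      res = VAR(y).fit(1)                            # reduced-form VAR(1)
      ica = FastICA(n_components=2, random_state=0)
      shocks = ica.fit_transform(res.resid)          # candidate independent innovations
      print("estimated mixing matrix (up to column order and scale):")
      print(np.round(ica.mixing_, 2))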
  13. By: Banerjee, Anindya; Marcellino, Massimiliano; Masten, Igor
    Abstract: Starting from the dynamic factor model for non-stationary data we derive the factor-augmented error correction model (FECM) and, by generalizing the Granger representation theorem, its moving-average representation. The latter is used for the identification of structural shocks and their propagation mechanism. Besides discussing contemporaneous restrictions along the lines of Bernanke et al. (2005), we show how to implement classical identification schemes based on long-run restrictions in the case of large panels. The importance of the error-correction mechanism for impulse response analysis is analysed by means of both empirical examples and simulation experiments. Our results show that the bias in estimated impulse responses in a FAVAR model is positively related to the strength of the error-correction mechanism and the cross-section dimension of the panel. We observe empirically in a large panel of US data that these features have a substantial effect on the responses of several variables to the identified real shock.
    Keywords: Cointegration; Dynamic Factor Models; Factor-augmented Error Correction Models; FAVAR; Structural Analysis
    JEL: C32 E17
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:9858&r=ecm
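    A stripped-down FECM-style pipeline under many simplifying assumptions: extract principal-component factors from a simulated non-stationary panel and estimate a small VECM in the variables of interest augmented by those factors, using statsmodels' VECM class. The structural identification and the moving-average representation derived in the paper are not attempted here.
      import numpy as np
      from statsmodels.tsa.vector_ar.vecm import VECM

      rng = np.random.default_rng(11)
      T, N = 300, 50
      trend = np.cumsum(rng.normal(size=(T, 2)), axis=0)       # two common I(1) trends
      panel = trend @ rng.normal(size=(2, N)) + rng.normal(size=(T, N))

      u, s, vt = np.linalg.svd(panel - panel.mean(0), full_matrices=False)
      factors = u[:, :2] * s[:2]                               # estimated factors by PCA

      y_small = panel[:, :3]                                   # variables of interest
      endog = np.column_stack([y_small, factors])              # FECM-style small system
      res = VECM(endog, k_ar_diff=1, coint_rank=2, deterministic="co").fit()
      print(res.beta.shape)                                    # cointegrating vectors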
  14. By: Massimo Filippini (ETH Zurich, Switzerland); William Greene (Stern School of Business, New York University, USA)
    Abstract: The productive efficiency of a firm can be seen as composed of two parts, one persistent and one transient. The received empirical literature on the measurement of productive efficiency has paid relatively little attention to the difference between these two components. Ahn, Good and Sickles (2000) suggested some approaches that pointed in this direction. The possibility was also raised in Greene (2004), who expressed some pessimism over the possibility of distinguishing the two empirically. Recently, Colombi (2010) and Kumbhakar and Tsionas (2012), in a milestone extension of the stochastic frontier methodology, have proposed a tractable model based on panel data that promises to provide separate estimates of the two components of efficiency. The approach developed in the original presentation proved very cumbersome to implement in practice. Colombi (2010) notes that FIML estimation of the model is 'complex and time consuming.' In a sequence of papers, Colombi (2010), Colombi et al. (2011, 2014), Kumbhakar, Lien and Hardaker (2012) and Kumbhakar and Tsionas (2012) have suggested other strategies, including a four-step least squares method. The main point of this paper is that full maximum likelihood estimation of the model is neither complex nor time consuming. The extreme complexity of the log likelihood noted in Colombi (2010) and Colombi et al. (2011, 2014) is reduced by using simulation and exploiting the Butler and Moffitt (1982) formulation. In this paper, we develop a practical full information maximum simulated likelihood estimator for the model. The approach is very effective and strikingly simple to apply, and uses all of the sample distributional information to obtain the estimates. We also implement the panel data counterpart of the JLMS (1982) estimator for technical or cost inefficiency. The technique is applied in a study of the cost efficiency of Swiss railways.
    Keywords: productive efficiency; stochastic frontier analysis; panel data; transient and persistent efficiency.
    JEL: C1 C23 D2 D24
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:eth:wpswif:14-197&r=ecm
  15. By: Prehn, Sören; Brümmer, Bernhard; Glauben, Thomas
    Abstract: Since the work of FEENSTRA (2002), the standard ANDERSON & VAN WINCOOP (2003) Gravity Model has been estimated using a fixed effects approach. However, a fixed effects approach has a major drawback: it does not allow for the estimation of exporter- and importer-invariant variables. Thus, economically relevant variables such as exporter and importer gross domestic product are disregarded. Here, we propose a random intercept model to address this gap. This approach not only provides estimates identical to those of a fixed effects approach, but also allows for the estimation of exporter- and importer-invariant variables.
    Keywords: Gravity Model Estimation, Poisson Pseudo Maximum Likelihood, Fixed Effects Model, Random Intercept Model
    JEL: F1 C3
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:zbw:iamodp:148&r=ecm
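    The fixed-effects PPML benchmark discussed in the abstract can be written in a few lines with statsmodels (exporter and importer dummies plus a Poisson GLM). The random-intercept alternative the authors recommend would replace the dummies with exporter/importer random effects and is not shown here; the data and column names are made up for illustration.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(12)
      countries = [f"c{i}" for i in range(10)]
      rows = [(o, d) for o in countries for d in countries if o != d]
      df = pd.DataFrame(rows, columns=["orig", "dest"])
      df["ldist"] = rng.normal(size=len(df))                       # log distance (simulated)
      mu = np.exp(1.0 - 0.8 * df["ldist"] + rng.normal(scale=0.2, size=len(df)))
      df["trade"] = rng.poisson(mu)                                # trade flows (simulated)

      # Fixed-effects PPML: exporter and importer dummies absorb multilateral resistance
      ppml = smf.glm("trade ~ ldist + C(orig) + C(dest)", data=df,
                     family=sm.families.Poisson()).fit()
      print("distance elasticity:", round(ppml.params["ldist"], 3))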
  16. By: Yan, Jin; Yoo, Hong Il
    Abstract: The rank-ordered logit model's coefficients often vary significantly with the depth of rankings used in the estimation process. The common interpretation of the unstable coefficients across ranks is that survey respondents state their more and less preferred alternatives in an incoherent manner. We point out another source of the same empirical regularity: stochastic misspecification of the random utility function. An example is provided to show how the well-known symptoms of incoherent ranking behavior can result from stochastic misspecification, followed by Monte Carlo evidence. Our finding implies that the empirical regularity can be addressed by the development of robust estimation methods.
    Keywords: rank-ordered logit, exploded logit, ranking, qualitative response, stated preference
    JEL: C25 C52 C81
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:56285&r=ecm
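    For readers new to the model, a minimal rank-ordered (exploded) logit log-likelihood is written out below and maximized with scipy. The single attribute, the data-generating process, and the sample sizes are illustrative only; the paper's point about stochastic misspecification is not reproduced here.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(13)
      n, J = 1000, 4
      x = rng.normal(size=(n, J))                    # one attribute per alternative
      u = 1.0 * x + rng.gumbel(size=(n, J))          # random utilities, true beta = 1
      ranks = np.argsort(-u, axis=1)                 # alternatives listed from best to worst

      def neg_loglik(beta):
          v = beta[0] * x
          ll = 0.0
          for i in range(n):
              remaining = list(ranks[i])
              for _ in range(J - 1):                 # explode the ranking into sequential choices
                  chosen = remaining[0]
                  ll += v[i, chosen] - np.logaddexp.reduce(v[i, remaining])
                  remaining = remaining[1:]
          return -ll

      res = minimize(neg_loglik, x0=np.array([0.0]), method="BFGS")
      print("estimated beta:", round(res.x[0], 3))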
  17. By: Bianchi, Francesco
    Abstract: I develop a toolbox to analyze the properties of multivariate Markov-switching models. I first derive analytical formulas for the evolution of first and second moments, taking into account the possibility of regime changes. The formulas are then used to characterize the evolution of expectations and uncertainty, the propagation of the shocks, the contribution of the shocks to the overall volatility, and the welfare implications of regime changes in general equilibrium models. Then, I show how the methods can be used to capture the link between uncertainty and the state of the economy. Finally, I generalize Campbell's VAR implementation of Campbell and Shiller's present value decomposition to allow for parameter instability. The applications reveal the importance of taking into account the effects of regime changes on agents' expectations, welfare, and uncertainty. All results are derived analytically, do not require numerical integration, and are therefore suitable for structural estimation.
    Keywords: Bayesian Methods; DSGE; Impulse responses; Markov-switching; Uncertainty; VAR; Welfare
    JEL: C11 C32 E31 E52 G12
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:9705&r=ecm
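    One of the simplest objects in such a toolbox, the unconditional mean and variance of a regime-switching mean model, can be computed directly from the ergodic distribution of the transition matrix. The numbers below are illustrative and the sketch covers only the i.i.d.-within-regime case, not the full set of formulas in the paper.
      import numpy as np

      P = np.array([[0.95, 0.05],       # transition matrix: row i gives Pr(next regime | regime i)
                    [0.10, 0.90]])
      mu = np.array([2.0, -1.0])        # regime-specific means
      sigma2 = np.array([1.0, 4.0])     # regime-specific variances

      # Ergodic regime probabilities: left eigenvector of P associated with eigenvalue 1
      vals, vecs = np.linalg.eig(P.T)
      pi = np.real(vecs[:, np.argmax(np.real(vals))])
      pi = pi / pi.sum()

      ey = pi @ mu                                       # E[y]
      vary = pi @ (sigma2 + mu**2) - ey**2               # Var[y] by the law of total variance
      print("ergodic probs:", np.round(pi, 3), " E[y] =", round(ey, 3), " Var[y] =", round(vary, 3))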
  18. By: Carriero, Andrea; Clark, Todd; Marcellino, Massimiliano
    Abstract: We propose a method to produce density forecasts of the term structure of government bond yields that accounts for (i) the possible misspecification of an underlying Gaussian Affine Term Structure Model (GATSM) and (ii) the time-varying volatility of interest rates. For this, we derive a Bayesian prior from a GATSM and use it to estimate the coefficients of a BVAR for the term structure, specifying a common, multiplicative, time-varying volatility for the VAR disturbances. Results based on U.S. data show that this method significantly improves the precision of point and density forecasts of the term structure. While this paper focuses on term structure modelling, the proposed method can be applied to a wide range of alternative models, including DSGE models, and is a generalization of the method of Del Negro and Schorfheide (2004) to VARs featuring drifting volatilities. The method also generalizes the model of Giannone et al. (2012) by specifying hierarchically not only the prior variance but also the prior mean of the VAR coefficients. Our results show that both time variation in volatilities and a hierarchical specification for the prior means improve model fit and forecasting performance.
    Keywords: density forecasting; no arbitrage; stochastic volatility; Term structure
    JEL: C32 C53 G17
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:9848&r=ecm
  19. By: Jean-David Fermanian; Hassan Malongo
    Abstract: We provide conditions for the existence and uniqueness of strictly stationary solutions of the usual Dynamic Conditional Correlation GARCH models (DCC-GARCH). The proof is based on Tweedie's (1988) criteria, after rewriting the DCC-GARCH models as nonlinear Markov chains. Moreover, we study the existence of their finite moments.
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1405.6905&r=ecm
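    The nonlinear Markov-chain representation used in the proof is easiest to see from the correlation recursion itself. The short simulation below iterates a bivariate DCC recursion with a + b < 1 (illustrative parameter values) just to show the objects involved; it does not verify the paper's stationarity conditions.
      import numpy as np

      rng = np.random.default_rng(14)
      a, b = 0.05, 0.90                         # DCC parameters, a + b < 1
      S = np.array([[1.0, 0.4], [0.4, 1.0]])    # unconditional correlation target
      Q = S.copy()
      corrs = []
      for t in range(2000):
          d = np.sqrt(np.diag(Q))
          R = Q / np.outer(d, d)                # conditional correlation matrix
          eps = np.linalg.cholesky(R) @ rng.standard_normal(2)   # standardized shocks
          Q = (1 - a - b) * S + a * np.outer(eps, eps) + b * Q   # DCC recursion
          corrs.append(R[0, 1])
      print("mean conditional correlation:", round(float(np.mean(corrs)), 3))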
  20. By: Cláudia Duarte
    Abstract: Focusing on the MI(xed) DA(ta) S(ampling) regressions for handling different sampling frequencies and asynchronous releases of information, alternative techniques for the autoregressive augmentation of these regressions are presented and discussed. For forecasting quarterly euro area GDP growth using a small set of selected indicators, the results obtained suggest that no specific kind of MIDAS regression clearly dominates in terms of forecast accuracy. Nevertheless, alternatives to common-factor MIDAS regressions with autoregressive terms perform well and in some cases are the best performing regressions.
    JEL: C53 E37
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:ptu:wpaper:w201401&r=ecm
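    A self-contained sketch of a MIDAS regression with an exponential Almon lag polynomial and a simple autoregressive term, estimated by nonlinear least squares. This is only one of the autoregressive augmentation schemes compared in the paper, and all series, lag lengths and parameter values below are simulated assumptions.
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(15)
      n_q, m = 200, 3                                   # quarters, months per quarter
      x_m = rng.normal(size=n_q * m)
      X = x_m.reshape(n_q, m)[:, ::-1]                  # monthly lags, most recent first

      def almon_weights(theta, m):
          j = np.arange(1, m + 1)
          w = np.exp(theta[0] * j + theta[1] * j**2)    # exponential Almon polynomial
          return w / w.sum()

      w_true = almon_weights(np.array([0.2, -0.1]), m)
      y = np.zeros(n_q)
      for t in range(1, n_q):
          y[t] = 0.3 * y[t - 1] + X[t] @ w_true + rng.normal(scale=0.5)

      def resid(p):                                     # p = [const, rho, beta, theta1, theta2]
          w = almon_weights(p[3:], m)
          fit = p[0] + p[1] * y[:-1] + p[2] * (X[1:] @ w)
          return y[1:] - fit

      sol = least_squares(resid, x0=np.array([0.0, 0.2, 0.5, 0.0, 0.0]))
      print("estimated rho and beta:", np.round(sol.x[1:3], 2))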
  21. By: Peter Z. Schochet; Mike Puma; John Deke
    Abstract: Variation in treatment effects has important implications for education practice—and for facilitating the most efficient use of limited resources—by informing decisions about how best to target interventions and how to improve the design or implementation of interventions. Understanding variation in effects is also critical to assessing how study findings may be generalized to other education environments. A new report, co-authored by Mathematica's education experts, can help researchers understand which methods to use to examine how much treatment effects vary across students, educators, and sites. The report summarizes the complex research literature on quantitative methods for assessing variation in treatment effects and provides technical guidance about the use and interpretation of these methods.
    Keywords: Randomized Controlled Trials, Education Interventions, Quantitative Methods, Heterogeneity of Treatment Effects
    JEL: I
    Date: 2014–05–30
    URL: http://d.repec.org/n?u=RePEc:mpr:mprres:8128&r=ecm

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.