nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒01‒25
twenty-two papers chosen by
Sune Karlsson
Orebro University

  1. A test for a new modelling: The Univariate MT-STAR Model By Peter Martey Addo; Monica Billio; Dominique Guegan
  2. Estimation of treatment effects with high-dimensional controls By Alexandre Belloni; Victor Chernozhukov; Christian Hansen
  3. Conditional stochastic dominance testing By Miguel A. Delgado; Juan Carlos Escanciano
  4. On moment conditions for quasi-maximum likelihood estimation of multivariate ARCH models By Marco Avarucci; Eric Beutner; Paolo Zaffaroni
  5. Exact Asymptotic Goodness-of-Fit Testing For Discrete Circular Data, With Applications By David E. Giles
  6. "Mixed Effects Prediction under Benchmarking and Applications to Small Area Estimation" By Tatsuya Kubokawa
  7. Inference for high-dimensional sparse econometric models By Alexandre Belloni; Victor Chernozhukov; Christian Hansen
  8. A note on the estimation of long-run relationships in panel equations with cross-section linkages By Di Iorio, Francesca; Fachin, Stefano
  9. Bayesian Semi-parametric Expected Shortfall Forecasting in Financial Markets By Richard H. Gerlach; Cathy W.S. Chen; Liou-Yan Lin
  10. Identification, data combination and the risk of disclosure By Tatiana Komarova; Denis Nekipelov; Evgeny Yakovlev
  11. Prior Selection for Vector Autoregressions By Domenico Giannone; Michèle Lenza; Giorgio E. Primiceri
  12. Inference for extremal conditional quantile models, with an application to market and birthweight risks By Victor Chernozhukov; Iván Fernández-Val
  13. On the Applicability of the Sieve Bootstrap in Time series Panels By Smeekes Stephan; Urbain Jean-Pierre
  14. How are rescaled range analyses affected by different memory and distributional properties? A Monte Carlo study By Ladislav Kristoufek
  15. Analysis of interactive fixed effects dynamic linear panel regression with measurement error By Nayoung Lee; Hyungsik Roger Moon; Martin Weidner
  16. Local Constant and Local Bilinear Multiple-Output Quantile Regression By Marc Hallin; Zudi Lu; Davy Paindaveine; Miroslav Siman
  17. Using panel data to partially identify HIV prevalence when HIV status is not missing at random By Bruno Arpino; Elisabetta De Cao; Franco Peracchi
  18. Parameter Estimation using Empirical Likelihood combined with Market Information By Steven Kou; Tony Sit; Zhiliang Ying
  19. Mathematical Genesis of the Spatio-Temporal Covariance Functions By Fernández-Avilés, G; Montero, JM; Mateu, J
  20. Forecast combination for discrete choice models: predicting FOMC monetary policy decisions By Laurent Pauwels; Andrey Vasnev
  21. The Two-sided Weibull Distribution and Forecasting Financial Tail Risk By Richard Gerlach; Qian Chen
  22. From Correlation to Granger Causality By David Stern

  1. By: Peter Martey Addo (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon Sorbonne); Monica Billio (Università Ca' Foscari of Venice - Department of Economics); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon Sorbonne, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris)
    Abstract: As discussed in the literature, parameter estimates are usually quite difficult to obtain in ESTAR models. The problem of properly identifying the transition function under extreme parameter combinations often leads to strongly biased estimators. This paper proposes a new procedure to test for a unit root in a nonlinear framework, and contributes to the existing literature in three directions. First, we propose a new alternative model - the MT-STAR model - which has properties similar to the ESTAR model but reduces the effects of the identification problem and can also account for cases where the adjustment mechanism towards equilibrium is not symmetric. Second, we develop a testing procedure to detect the presence of a nonlinear stationary process by establishing the limiting non-standard asymptotic distributions of the proposed test statistics. Finally, we perform Monte Carlo simulations to assess the small-sample performance of the test and to highlight its power gains over existing unit-root tests. Two empirical applications are also presented.
    Keywords: Nonlinearity, smooth transition, unit root testing, Monte Carlo simulations.
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00659158&r=ecm
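    Sketch: The MT-STAR test itself is not reproduced here. As a hedged illustration of unit-root testing against a nonlinear STAR-type alternative in the same spirit, the Python snippet below implements the well-known Kapetanios-Shin-Snell (KSS) auxiliary regression; the data, seed, and fixed critical value are purely illustrative and are not taken from the paper.
      import numpy as np

      def kss_stat(y):
          """t-statistic on delta in: dy_t = delta * y_{t-1}**3 + e_t (KSS, 2003).
          Under the unit-root null the distribution is nonstandard; the asymptotic
          5% critical value for demeaned data is about -2.93."""
          y = np.asarray(y, float)
          dy = np.diff(y)
          x = y[:-1] ** 3
          delta = (x @ dy) / (x @ x)
          resid = dy - delta * x
          s2 = resid @ resid / (len(dy) - 1)
          return delta / np.sqrt(s2 / (x @ x))

      rng = np.random.default_rng(0)
      rw = np.cumsum(rng.standard_normal(500))   # random walk: unit-root null
      print(kss_stat(rw - rw.mean()))            # compare with -2.93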
  2. By: Alexandre Belloni; Victor Chernozhukov (Institute for Fiscal Studies and MIT); Christian Hansen (Institute for Fiscal Studies and Chicago GSB)
    Abstract: We propose methods for inference on the average effect of a treatment on a scalar outcome in the presence of very many controls. Our setting is a partially linear regression model containing the treatment/policy variable and a large number p of controls or series terms, where p is possibly much larger than the sample size n, but only s << n unknown controls or series terms are needed to approximate the regression function accurately. The latter sparsity condition makes it possible to estimate the entire regression function, as well as the average treatment effect, by selecting approximately the right set of controls using the Lasso and related methods. We develop estimation and inference methods for the average treatment effect in this setting, proposing a novel "post double selection" method that provides attractive inferential and estimation properties. In our analysis, in order to cover realistic applications, we expressly allow for imperfect selection of the controls and account for the impact of selection errors on estimation and inference. In order to cover typical applications in economics, we employ selection methods designed to deal with non-Gaussian and heteroscedastic disturbances. We illustrate the use of the new methods with numerical simulations and an application to the effect of abortion on crime rates.
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:42/11&r=ecm
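    Sketch: A minimal Python rendering of the post-double-selection idea: select controls that predict the outcome, select controls that predict the treatment, then run OLS of the outcome on the treatment plus the union of selected controls. Cross-validated Lasso is used here as a stand-in for the paper's heteroscedasticity-robust penalty choice, and the simulated data are made up for illustration.
      import numpy as np
      from sklearn.linear_model import LassoCV
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n, p = 200, 300                       # more controls than observations
      X = rng.standard_normal((n, p))
      d = X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n)       # treatment
      y = 1.0 * d + X[:, 0] - X[:, 2] + rng.standard_normal(n)   # true effect = 1

      # Step 1: Lasso of y on X.  Step 2: Lasso of d on X.
      keep_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
      keep_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)
      union = np.union1d(keep_y, keep_d)

      # Step 3: OLS of y on d and the union of selected controls.
      Z = sm.add_constant(np.column_stack([d, X[:, union]]))
      fit = sm.OLS(y, Z).fit()
      print(fit.params[1], fit.bse[1])      # estimate and s.e. of the effect of d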
  3. By: Miguel A. Delgado; Juan Carlos Escanciano
    Abstract: This article proposes bootstrap-based stochastic dominance tests for nonparametric conditional distributions and their moments. We exploit the fact that a conditional distribution dominates the other if and only if the difference between the marginal joint distributions is monotonic in the explanatory variable for each value of the dependent variable. The proposed test statistic compares restricted and unrestricted estimators of the difference between the joint distributions, and can be implemented under minimal smoothness requirements on the underlying nonparametric curves and without resorting to smooth estimation. The finite sample properties of the proposed tests are examined by means of a Monte Carlo study. We report an application to studying the impact on post-intervention earnings of the National Supported Work Demonstration, a randomized labor training program carried out in the 1970s.
    Keywords: Nonparametric testing, Conditional stochastic dominance, Conditional inequality restrictions, Least concave majorant, Treatment effects
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we1138&r=ecm
  4. By: Marco Avarucci (Maastricht University); Eric Beutner (Maastricht University); Paolo Zaffaroni (Imperial College London and Università di Roma "La Sapienza")
    Abstract: This paper questions whether it is possible to derive consistency and asymptotic normality of the Gaussian quasi-maximum likelihood estimator (QMLE) for possibly the simplest VEC-GARCH model, namely the multivariate ARCH(1) model of the BEKK form, under weak moment conditions similar to the univariate case. In contrast to the univariate specification, we show that the expectation of the log-likelihood function is unbounded, away from the true parameter value, if (and only if) the observable has an unbounded second moment. Despite this non-standard feature, consistency of the Gaussian QMLE is still warranted. The same moment condition proves to be necessary and sufficient for the stationarity of the score when evaluated at the true parameter value. This explains why high moment conditions, typically a bounded sixth moment and above, have hitherto been used in the literature to establish the asymptotic normality of the QMLE in the multivariate framework.
    Keywords: multivariate ARCH models; moment conditions; VEC-GARCH
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:sas:wpaper:20121&r=ecm
  5. By: David E. Giles (Department of Economics, University of Victoria)
    Abstract: We show that the full asymptotic null distribution of Watson's U_N^2 statistic, modified for discrete data, can be computed simply and exactly by standard methods. Previous approximate quantiles for the uniform multinomial case are found to be accurate. More extensive quantiles are presented for this distribution, as well as for the beta-binomial distribution and for the distributions associated with “Benford's Laws”. A simulation experiment compares the power of the modified U_N^2 test with that of Kuiper's V_N test. In addition, four empirical applications illustrate the usefulness of the U_N^2 test. (This paper supersedes EWP0607.)
    Keywords: Distributions on the circle; Goodness-of-fit; Watson's U_N^2; Discrete data; Benford's Law
    JEL: C12 C16 C46
    Date: 2012–01–12
    URL: http://d.repec.org/n?u=RePEc:vic:vicewp:1201&r=ecm
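    Sketch: A small Python implementation of the discrete-data version of Watson's U_N^2 in the form of Choulakian, Lockhart and Stephens (1994), applied to a Benford first-digit check. The digit counts are hypothetical, and the exact variant used in the paper may differ.
      import numpy as np

      def watson_u2_discrete(counts, probs):
          """Watson's U^2 adapted to discrete data:
          U^2 = (1/n) * sum_j p_j * (Z_j - Zbar)^2, where Z_j are cumulative
          observed-minus-expected counts and Zbar = sum_j p_j * Z_j."""
          counts = np.asarray(counts, float)
          probs = np.asarray(probs, float)
          n = counts.sum()
          Z = np.cumsum(counts - n * probs)
          Zbar = (Z * probs).sum()
          return ((Z - Zbar) ** 2 * probs).sum() / n

      benford = np.log10(1 + 1 / np.arange(1, 10))           # first digits 1..9
      counts = np.array([311, 172, 126, 95, 80, 67, 60, 53, 46])  # hypothetical
      print(watson_u2_discrete(counts, benford))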
  6. By: Tatsuya Kubokawa (Faculty of Economics, University of Tokyo)
    Abstract: The empirical best linear unbiased predictor (EBLUP) in the linear mixed model (LMM) is useful for small area estimation in the sense of increasing the precision of estimation of small area means. However, one potential difficulty with EBLUP is that, when aggregated, the overall estimate for a larger geographical area may be quite different from the corresponding direct estimate, such as the overall sample mean. One way to solve this problem is the benchmarking approach, and the constrained EBLUP is a feasible solution satisfying the constraints that the aggregated mean and variance be identical to the requested values. An interesting question is whether the constrained EBLUP may have a larger estimation error than the EBLUP. In this paper, we address this issue by deriving asymptotic approximations of the MSE of the constrained EBLUP. We also provide asymptotically unbiased estimators of the MSE of the constrained EBLUP based on the parametric bootstrap method, and establish their second-order justification. Finally, the performance of the suggested MSE estimators is numerically investigated.
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2012cf832&r=ecm
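    Sketch: One simple member of the benchmarking family discussed here is additive benchmarking, which shifts the EBLUPs just enough that their weighted aggregate matches the direct overall estimate. The Python sketch below uses made-up numbers and is not the paper's constrained EBLUP, which additionally constrains the aggregated variance.
      import numpy as np

      def benchmark(eblup, weights, target):
          """Additive benchmarking: shift the EBLUPs so that the weighted
          aggregate equals the direct overall estimate (target)."""
          w = np.asarray(weights, float)
          gap = target - np.dot(w, eblup)
          return np.asarray(eblup, float) + gap * w / np.dot(w, w)

      eblup = np.array([10.2, 9.7, 11.1, 8.9])   # hypothetical small-area EBLUPs
      w = np.array([0.3, 0.2, 0.25, 0.25])       # area weights summing to 1
      adj = benchmark(eblup, w, target=10.5)
      print(adj, np.dot(w, adj))                 # aggregates exactly to 10.5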
  7. By: Alexandre Belloni; Victor Chernozhukov (Institute for Fiscal Studies and MIT); Christian Hansen (Institute for Fiscal Studies and Chicago GSB)
    Abstract: This article is about estimation and inference methods for high dimensional sparse (HDS) regression models in econometrics. High dimensional sparse models arise in situations where many regressors (or series terms) are available and the regression function is well-approximated by a parsimonious, yet unknown set of regressors. The latter condition makes it possible to estimate the entire regression function effectively by searching for approximately the right set of regressors. We discuss methods for identifying this set of regressors and estimating their coefficients based on l1-penalization and describe key theoretical results. In order to capture realistic practical situations, we expressly allow for imperfect selection of regressors and study the impact of this imperfect selection on estimation and inference results. We focus the main part of the article on the use of HDS models and methods in the instrumental variables model and the partially linear model. We present a set of novel inference results for these models and illustrate their use with applications to returns to schooling and growth regression.
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:41/11&r=ecm
  8. By: Di Iorio, Francesca; Fachin, Stefano
    Abstract: We address the issue of estimation and inference in dependent non-stationary panels of small cross-section dimension. The main conclusion is that the best results are obtained by applying bootstrap inference to single-equation estimators, such as FM-OLS and DOLS. SUR estimators perform badly, or are even infeasible, when the time dimension is not very large compared to the cross-section dimension.
    Keywords: Panel cointegration, FM-OLS, FM-SUR, DOLS, DSUR
    JEL: C15 C23 C33
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:zbw:ifwedp:20121&r=ecm
  9. By: Richard H. Gerlach (The University of Sydney Business School); Cathy W.S. Chen (Feng Chia University, Taiwan); Liou-Yan Lin (Feng Chia University, Taiwan)
    Abstract: Bayesian semi-parametric estimation has proven effective for quantile estimation in general, and specifically in financial Value at Risk forecasting. Expected shortfall is a competing tail risk measure, involving a conditional expectation beyond a quantile, that has recently been semi-parametrically estimated via asymmetric least squares and so-called expectiles. An asymmetric Gaussian density is proposed, allowing a likelihood to be developed that leads to Bayesian semi-parametric estimation and forecasts of expectiles and expected shortfall. Further, the conditional autoregressive expectile class of model is generalised to two fully nonlinear families. Adaptive Markov chain Monte Carlo sampling schemes are employed for estimation in these families. The proposed models are clearly favoured in an empirical study forecasting eleven financial return series: clear evidence of more accurate expected shortfall forecasting, compared to a range of competing methods, is found. Further, the most favoured models are those estimated by Bayesian methods.
    Keywords: CARE model; Nonlinear; Asymmetric Gaussian distribution; Expected shortfall; semi-parametric.
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:01/2012&r=ecm
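    Sketch: The expectile building block behind CARE-type models can be computed by asymmetric least squares. The Python function below finds a scalar tau-expectile by iteratively reweighted least squares; it is a toy unconditional version, not the paper's Bayesian semi-parametric CARE estimation.
      import numpy as np

      def expectile(y, tau, tol=1e-10):
          """Scalar tau-expectile: minimizes sum_i |tau - 1(y_i <= m)| * (y_i - m)^2.
          Given the weights, the minimizer is a weighted mean, so iterate."""
          y = np.asarray(y, float)
          m = y.mean()
          for _ in range(200):
              w = np.where(y <= m, 1 - tau, tau)
              m_new = np.average(y, weights=w)
              if abs(m_new - m) < tol:
                  break
              m = m_new
          return m

      r = np.random.default_rng(2).standard_normal(10_000)  # toy "returns"
      print(expectile(r, 0.01))   # a deep-tail expectile, the quantity ES builds on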
  10. By: Tatiana Komarova; Denis Nekipelov (Institute for Fiscal Studies and Berkeley); Evgeny Yakovlev
    Abstract: Businesses routinely rely on econometric models to analyze and predict consumer behavior. Estimation of such models may require combining a firm's internal data with external datasets to take into account sample selection, missing observations, omitted variables and errors in measurement within the existing data source. In this paper we point out that these data problems can be addressed when estimating econometric models from combined data using data mining techniques under mild assumptions regarding the data distribution. However, data combination leads to serious threats to the security of consumer data: we demonstrate that point identification of an econometric model from combined data is incompatible with restrictions on the risk of individual disclosure. Consequently, if a consumer model is point identified, the firm would (implicitly or explicitly) reveal the identity of at least some consumers in its internal data. More importantly, we argue that unless the firm places a restriction on the individual disclosure risk when combining data, even if the raw combined dataset is not shared with a third party, an adversary or a competitor can gather confidential information regarding some individuals from the estimated model.
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:38/11&r=ecm
  11. By: Domenico Giannone; Michèle Lenza; Giorgio E. Primiceri
    Abstract: Vector autoregressions (VARs) are flexible time series models that can capture complex dynamic interrelationships among macroeconomic variables. However, their dense parameterization leads to unstable inference and inaccurate out-of-sample forecasts, particularly for models with many variables. A potential solution to this problem is to use informative priors, in order to shrink the richly parameterized unrestricted model towards a parsimonious naïve benchmark, and thus reduce estimation uncertainty. This paper studies the optimal choice of the informativeness of these priors, which we treat as additional parameters, in the spirit of hierarchical modeling. This approach is theoretically grounded, easy to implement, and greatly reduces the number and importance of subjective choices in the setting of the prior. Moreover, it performs very well both in terms of out-of-sample forecasting, and accuracy in the estimation of impulse response functions.
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/106648&r=ecm
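    Sketch: Treating prior informativeness as a parameter chosen by maximizing the marginal likelihood is loosely analogous to evidence maximization in ridge regression. The Python sketch below fits each equation of a toy VAR(2) with sklearn's BayesianRidge, which picks the shrinkage precision by evidence maximization; this is an analogy only, not the authors' hierarchical Minnesota-prior procedure.
      import numpy as np
      from sklearn.linear_model import BayesianRidge

      rng = np.random.default_rng(3)
      T, k, p = 120, 4, 2                        # sample length, variables, lags
      Y = rng.standard_normal((T, k)).cumsum(0)  # toy persistent data

      # Lagged regressor matrix for a VAR(p): [Y_{t-1}, ..., Y_{t-p}]
      X = np.hstack([Y[p - l:T - l] for l in range(1, p + 1)])
      for j in range(k):                         # one shrinkage level per equation,
          br = BayesianRidge().fit(X, Y[p:, j])  # set by evidence maximization
          print(j, br.alpha_, br.lambda_)        # noise and coefficient precisions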
  12. By: Victor Chernozhukov (Institute for Fiscal Studies and MIT); Iván Fernández-Val (Institute for Fiscal Studies and Boston University)
    Abstract: Quantile regression is an increasingly important empirical tool in economics and other sciences for analyzing the impact of a set of regressors on the conditional distribution of an outcome. Extremal quantile regression, or quantile regression applied to the tails, is of interest in many economic and financial applications, such as conditional value-at-risk, production efficiency, and adjustment bands in (S,s) models. In this paper we provide feasible inference tools for extremal conditional quantile models that rely upon extreme value approximations to the distribution of self-normalized quantile regression statistics. The methods are simple to implement and can be of independent interest even in the non-regression case. We illustrate the results with two empirical examples analyzing extreme fluctuations of a stock return and extremely low percentiles of live infants' birthweights in the range between 250 and 1500 grams.
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:40/11&r=ecm
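    Sketch: Fitting an extremal conditional quantile is routine; the paper's contribution is the extreme-value-based inference, which is not reproduced here. A minimal Python example with statsmodels and simulated heteroskedastic data:
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      x = rng.uniform(size=5000)
      y = 1 + 2 * x + (1 + x) * rng.standard_normal(5000)   # toy heteroskedastic data

      fit = sm.QuantReg(y, sm.add_constant(x)).fit(q=0.01)  # 1% conditional quantile
      print(fit.params)   # intercept and slope of the extremal quantile line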
  13. By: Smeekes Stephan; Urbain Jean-Pierre (METEOR)
    Abstract: In this paper we investigate the validity of the univariate autoregressive sieve bootstrap applied to time series panels characterized by general forms of cross-sectional dependence, including but not restricted to cointegration. Using the final equations approach we show that while it is possible to write such a panel as a collection of infinite order autoregressive equations, the innovations of these equations are not vector white noise. This causes the univariate autoregressive sieve bootstrap to be invalid in such panels. We illustrate this result with a small numerical example using a simple bivariate system for which the sieve bootstrap is invalid, and show that the extent of the invalidity depends on the value of specific parameters. We also show that Monte Carlo simulations in small samples can be misleading about the validity of the univariate autoregressive sieve bootstrap. The results in this paper serve as a warning about the practical use of the autoregressive sieve bootstrap in panels where cross-sectional dependence of a general form may be present.
    Keywords: econometrics
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:dgr:umamet:2011055&r=ecm
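    Sketch: For reference, the univariate autoregressive sieve bootstrap whose panel validity the paper questions can be coded in a few lines: fit an AR(p), resample centred residuals, and rebuild pseudo-series. A hedged Python sketch with a fixed lag order (in practice p grows with the sample):
      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg

      def ar_sieve_bootstrap(y, p=4, n_boot=200, seed=0):
          """Univariate AR(p) sieve bootstrap: resample centred residuals
          and rebuild pseudo-series from the fitted autoregression."""
          rng = np.random.default_rng(seed)
          fit = AutoReg(y, lags=p).fit()
          resid = fit.resid - fit.resid.mean()
          const, phi = fit.params[0], fit.params[1:]
          boots = []
          for _ in range(n_boot):
              e = rng.choice(resid, size=len(y))
              x = list(y[:p])                    # initial values from the data
              for t in range(p, len(y)):
                  x.append(const + phi @ np.array(x[-p:][::-1]) + e[t])
              boots.append(np.array(x))
          return boots

      y = np.random.default_rng(1).standard_normal(200)
      print(len(ar_sieve_bootstrap(y)))          # 200 bootstrap series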
  14. By: Ladislav Kristoufek
    Abstract: In this paper, we present the results of Monte Carlo simulations for two popular techniques for detecting long-range correlations - the classical and the modified rescaled range analyses. We focus on the effect of different distributional properties on the ability of the methods to efficiently distinguish between short-term and long-term memory. To do so, we analyze the behavior of the estimators for independent, short-range dependent, and long-range dependent processes with innovations from eight different distributions. We find that, apart from a combination of very high levels of kurtosis and skewness, both estimators are quite robust to distributional properties. Importantly, we show that R/S is biased upwards (though not strongly) for short-range dependent processes, while M-R/S is strongly biased downwards for long-range dependent processes regardless of the distribution of innovations.
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1201.3511&r=ecm
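    Sketch: A compact Python version of the classical rescaled range analysis studied in the paper: average R/S over non-overlapping windows of several sizes, then regress log(R/S) on log(window size) to estimate the Hurst exponent. The windowing scheme is one common choice, not necessarily the paper's.
      import numpy as np

      def rs_hurst(x, min_chunk=16):
          """Classical R/S estimate of the Hurst exponent via log-log regression."""
          x = np.asarray(x, float)
          sizes = np.unique((len(x) / 2 ** np.arange(8)).astype(int))
          sizes = sizes[sizes >= min_chunk]
          log_n, log_rs = [], []
          for n in sizes:
              rs = []
              for start in range(0, len(x) - n + 1, n):
                  w = x[start:start + n]
                  z = np.cumsum(w - w.mean())          # cumulative deviations
                  r, s = z.max() - z.min(), w.std(ddof=1)
                  if s > 0:
                      rs.append(r / s)
              log_n.append(np.log(n))
              log_rs.append(np.log(np.mean(rs)))
          return np.polyfit(log_n, log_rs, 1)[0]       # slope ~ Hurst exponent

      iid = np.random.default_rng(4).standard_normal(4096)
      print(rs_hurst(iid))   # near 0.5 for i.i.d. data (slight upward bias expected)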
  15. By: Nayoung Lee; Hyungsik Roger Moon; Martin Weidner (Institute for Fiscal Studies and UCL)
    Abstract: This paper studies a simple dynamic panel linear regression model with interactive fixed effects in which the variable of interest is measured with error. To estimate the dynamic coefficient, we consider the least-squares minimum distance (LS-MD) estimation method.
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:37/11&r=ecm
  16. By: Marc Hallin; Zudi Lu; Davy Paindaveine; Miroslav Siman
    Abstract: A new quantile regression concept, based on a directional version of Koenker and Bassett's traditional single-output one, was introduced in [Hallin, Paindaveine and Šiman, Annals of Statistics 2010, 635-703] for multiple-output regression problems. The polyhedral contours provided by the empirical counterpart of that concept, however, cannot adapt to nonlinear and/or heteroskedastic dependencies. This paper therefore introduces local constant and local linear versions of those contours, both of which allow one to asymptotically recover the conditional halfspace depth contours of the response. In the multiple-output context considered, the local linear construction is actually of a bilinear nature. Bahadur representation and asymptotic normality results are established. Illustrations are provided on both simulated and real data.
    Keywords: nonparametric regression; local bilinear regression; quantile regression; multivariate quantile; growth chart; halfspace depth
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/106956&r=ecm
  17. By: Bruno Arpino (Department of Decision Sciences and Dondena Centre for Research on Social Dynamics, Bocconi University.); Elisabetta De Cao (Dondena Centre for Research on Social Dynamics, Bocconi University.); Franco Peracchi (Tor Vergata University and EIEF)
    Abstract: Although population-based surveys are now considered the "gold standard" for estimating HIV prevalence, they are usually plagued by problems of nonignorable nonresponse. This paper uses the partial identification approach to assess the uncertainty caused by missing HIV status due to unit and item nonresponse. We show how to exploit the availability of panel data and the absorbing nature of HIV infection to narrow the worst-case bounds without imposing assumptions on the missing-data mechanism. Applied to longitudinal data from rural Malawi, our approach results in a substantial reduction of the width of the worst-case bounds. We also use plausible instrumental variable and monotone instrumental variable restrictions to further narrow the bounds.
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:eie:wpaper:1113&r=ecm
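    Sketch: The worst-case (Manski-type) bounds the paper starts from are easy to compute: treat all missing HIV statuses as negative for the lower bound and as positive for the upper bound, so the width of the bounds equals the missing share. The panel refinement (not shown) exploits the absorbing nature of infection: anyone observed negative at a later wave must be negative earlier. A toy Python example with made-up data:
      import numpy as np

      def worst_case_bounds(status):
          """Bounds on prevalence when status is missing (np.nan): missing cases
          are all negative (lower bound) or all positive (upper bound)."""
          status = np.asarray(status, float)
          obs = ~np.isnan(status)
          p_obs = obs.mean()
          p_pos = status[obs].mean()
          return p_pos * p_obs, p_pos * p_obs + (1 - p_obs)

      status = np.array([0, 1, 0, np.nan, 0, np.nan, 1, 0, 0, 0], float)
      print(worst_case_bounds(status))   # (0.2, 0.4): width = missing share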
  18. By: Steven Kou; Tony Sit; Zhiliang Ying
    Abstract: During the last decade, Lévy processes with jumps have become increasingly popular for modelling market behaviour for both derivative pricing and risk management purposes. Chan et al. (2009) introduced the use of empirical likelihood methods to estimate the parameters of various diffusion processes via their characteristic functions, which are readily available in most cases. Return series from the market are used for estimation. In addition to the return series, there are many derivatives actively traded in the market whose prices also contain information about the parameters of the underlying process. This observation motivates us, in this paper, to combine the return series with the associated derivative prices observed in the market, so as to provide an estimate more reflective of market movements and to achieve a gain in efficiency. The usual asymptotic properties, including consistency and asymptotic normality, are established under suitable regularity conditions. Simulation and case studies demonstrate the feasibility and effectiveness of the proposed method.
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1201.2899&r=ecm
  19. By: Fernández-Avilés, G; Montero, JM; Mateu, J
    Abstract: Obtaining new and flexible classes of nonseparable spatio-temporal covariances has become a key line of research in recent years within spatio-temporal Geostatistics. Approach: In general, the literature has focused on the problem of full symmetry, while the problem of anisotropy has been largely overlooked. Results: By exploring mathematical properties of positive definite functions and their close connection to covariance functions, we are able to develop new spatio-temporal covariance models that take the problem of spatial anisotropy into account. Conclusion/Recommendations: The resulting structures are shown to have several interesting mathematical properties, together with considerable applicability.
    Keywords: Spatial anisotropy; Bernstein and completely monotone functions; spatio-temporal geostatistics; positive definite functions; space-time modeling; spatio-temporal data
    JEL: C4
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:35874&r=ecm
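    Sketch: A standard example of the kind of construction discussed: Gneiting's nonseparable space-time covariance, built from a completely monotone function phi and a Bernstein-type function psi. The Python sketch below uses the common parameterization phi(t) = exp(-c*t^gamma) and psi(t) = (a*t^alpha + 1)^beta, and is isotropic in space; the paper's spatially anisotropic extensions are not reproduced here.
      import numpy as np

      def gneiting_cov(h, u, sigma2=1.0, a=1.0, alpha=1.0, beta=1.0,
                       c=1.0, gamma=0.5, d=2):
          """Gneiting (2002) class: C(h,u) = sigma2 / psi(u^2)^(d/2)
          * phi(||h||^2 / psi(u^2)), with phi completely monotone and
          psi of Bernstein type (h: spatial lag, u: temporal lag)."""
          psi = (a * np.abs(u) ** (2 * alpha) + 1) ** beta
          return sigma2 / psi ** (d / 2) * np.exp(
              -c * np.abs(h) ** (2 * gamma) / psi ** gamma)

      print(gneiting_cov(h=1.0, u=0.5))   # covariance at unit space lag, half time lag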
  20. By: Laurent Pauwels (The University of Sydney Business School); Andrey Vasnev (The University of Sydney Business School)
    Abstract: This paper provides a methodology for combining forecasts based on several discrete choice models. This is achieved primarily by combining the one-step-ahead probability forecasts associated with each model. The paper applies well-established scoring rules for qualitative response models in the context of forecast combination. Log-scores and quadratic-scores are both used to evaluate the forecasting accuracy of each model and to combine the probability forecasts. In addition to producing point forecasts, the effect of sampling variation is also assessed. This methodology is applied to forecast the US Federal Open Market Committee (FOMC) decisions in changing the federal funds target rate. Several of the economic fundamentals influencing the FOMC decisions are nonstationary over time and are modelled in a similar fashion to Hu and Phillips (2004a, JoE). The empirical results show that combining forecast probabilities using scores mostly outperforms both equal-weight combination and forecasts based on multivariate models.
    Keywords: Forecast combination, Probability forecast, Discrete choice models, Monetary policy decisions
    Date: 2011–06
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:11/2011&r=ecm
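    Sketch: One natural way to turn log scores into combination weights, in the spirit of the paper, is to weight each model by the exponential of its average log score. The Python snippet below does this for binary one-step-ahead probability forecasts; the paper's exact weighting scheme may differ, and the numbers are illustrative.
      import numpy as np

      def log_score_weights(probs, outcomes):
          """Weights from average log scores of each model's probability
          forecasts (probs: rows = time periods, cols = models)."""
          probs = np.clip(np.asarray(probs, float), 1e-12, 1 - 1e-12)
          y = np.asarray(outcomes, float)[:, None]
          ls = (y * np.log(probs) + (1 - y) * np.log(1 - probs)).mean(0)
          w = np.exp(ls - ls.max())         # subtract max for numerical stability
          return w / w.sum()

      probs = np.array([[0.7, 0.6], [0.2, 0.4], [0.8, 0.5]])  # two models, 3 periods
      w = log_score_weights(probs, [1, 0, 1])
      print(w, probs[-1] @ w)               # weights and combined probability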
  21. By: Richard Gerlach (Faculty of Economics and Business, The University of Sydney); Qian Chen
    Abstract: A two-sided Weibull is developed to model the conditional financial return distribution, for the purpose of forecasting Value at Risk (VaR) and conditional VaR. A range of conditional return distributions are combined with four volatility specifications to forecast tail risk in four international markets, two exchange rates and one individual asset series, over a four year forecast period that includes the recent global financial crisis. The two-sided Weibull performs at least as well as other distributions for VaR forecasting, but performs most favourably for conditional Value at Risk forecasting, prior to as well as during and after the recent crisis.
    Keywords: Two-sided Weibull, Value-at-Risk, Expected shortfall, Back-testing, global financial crisis, volatility.
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:01/2011&r=ecm
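    Sketch: Under one plausible parameterization of a two-sided Weibull (the return equals minus a Weibull variate with probability p_neg, and a separate Weibull variate otherwise), the VaR has a closed form, evaluated by the Python snippet below. The paper's exact specification, standardization and estimation are not reproduced; parameter values are illustrative.
      import numpy as np

      def tsw_var(alpha, p_neg, shape_n, scale_n):
          """VaR at level alpha < p_neg for the negative side R = -W,
          W ~ Weibull(shape_n, scale_n), mass p_neg on the negative side:
          P(R <= v) = p_neg * exp(-(-v / scale_n)**shape_n) = alpha."""
          assert alpha < p_neg, "alpha must fall in the negative tail"
          return -scale_n * (-np.log(alpha / p_neg)) ** (1 / shape_n)

      print(tsw_var(alpha=0.01, p_neg=0.5, shape_n=1.2, scale_n=0.9))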
  22. By: David Stern (The Australian National University)
    Abstract: The paper focuses on establishing causation in regression analysis in observational settings. Simple static regression analysis cannot establish causality in the absence of a priori theory on possible causal mechanisms or controlled and randomized experiments. However, two regression-based econometric techniques – instrumental variables and Granger causality – can be used to test for causality given some assumptions. The Granger causality technique is applied to a time series data set on energy and economic growth from Sweden spanning 150 years to determine whether increases in energy use and energy quality have driven economic growth. I show that the Granger causality technique is very sensitive to variable definition, the choice of additional variables in the model, and sample periods. Better results can be obtained by using multivariate models, defining variables to better reflect their theoretical definition, and using larger samples. The better specified models with larger samples are more likely to show that energy causes output growth, but it is also possible that the relationship between energy and growth has changed over time. Energy prices have a significant causal impact on both energy use and output, while there is no strong evidence that energy use causes carbon and sulfur emissions despite the obvious physical relationship.
    Keywords: causality, energy, economic growth
    JEL: C32 Q43
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:een:crwfrp:1113&r=ecm
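    Sketch: A minimal Python illustration of the Granger causality testing the paper scrutinizes, using statsmodels on simulated data where x Granger-causes y by construction (the Swedish data are not reproduced here). As the abstract stresses, results in practice are sensitive to lag length, variable definitions and sample period.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(5)
      e = rng.standard_normal((300, 2))
      x, y = np.zeros(300), np.zeros(300)
      for t in range(1, 300):                   # x Granger-causes y by design
          x[t] = 0.5 * x[t - 1] + e[t, 0]
          y[t] = 0.3 * y[t - 1] + 0.4 * x[t - 1] + e[t, 1]

      data = pd.DataFrame({'y': y, 'x': x})
      # H0: the second column ('x') does not Granger-cause the first ('y')
      res = grangercausalitytests(data[['y', 'x']], maxlag=2, verbose=False)
      print(res[1][0]['ssr_ftest'])             # F-stat, p-value, dfs at lag 1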

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.