nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒07‒10
23 papers chosen by
Sune Karlsson
Orebro University

  1. Variable Selection, Estimation and Inference for Multi-period Forecasting Problems By M. Hashem Pesaran; Andreas Pick; Allan Timmermann
  2. A Nonlinear IV Likelihood-Based Rank Test for Multivariate Time Series and Long Panels By J. Isaac Miller
  3. Latent Variables and Propensity Score Matching By Maciej Jakubowski
  4. The conditional autoregressive wishart model for multivariate stock market volatility By Golosnoy, Vasyl; Gribisch, Bastian; Liesenfeld, Roman
  5. Testing for covariate balance using quantile regression and resampling methods By Martin Huber
  6. Bartlett-type Correction of Distance Metric Test By Wanling Huang; Artem Prokhorov
  7. Evaluating alternative methods for testing asset pricing models with historical data By Rubio, Gonzalo; Lozano, Martin
  8. Dual P-Values, Evidential Tension and Balanced Tests By D.S. Poskitt; Arivalzahan Sengarapillai
  9. Identification of average treatment effects in social experiments under different forms of attrition By Martin Huber
  10. Inference of Signs of Interaction Effects in Simultaneous Games with Incomplete Information, Second Version By Aureo de Paula; Xun Tang
  11. Identifying VARs through Heterogeneity: An Application to Bank Runs By De Graeve, Ferre; Karas, Alexei
  12. Testing for rational bubbles in a co-explosive vector autoregression By Tom Engsted; Bent Nielsen
  13. The econometric modeling of social Preferences By Andrea Conte; Peter G. Moffatt
  14. Peer effects and measurement error: the impact of sampling variation in school survey data By John Micklewright; Sylke V. Schnepf; Pedro N. Silva
  15. Bounds on functionals of the distribution of treatment effects By Firpo, Sergio; Ridder, Geert
  16. Bubbles or Volatility: A Markov-Switching Unit Root Test with Regime-Varying Error Variance By Shu-Ping Shi
  17. Estimating correlation and covariance matrices by weighting of market similarity By Michael C. M\"unnix; Rudi Sch\"afer; Oliver Grothe
  18. Combining Matching and Nonparametric IV Estimation: Theory and an Application to the Evaluation of Active Labour Market Policies By Michael Lechner; Markus Froelich
  19. The Credibility Revolution in Empirical Economics: How Better Research Design is taking the Con out of Econometrics By Joshua D. Angrist; Jörn-Steffen Pischke
  20. A Dynamical Model for Forecasting Operational Losses By Marco Bardoscia; Roberto Bellotti
  21. Testing for Contagion: a Time-Scale Decomposition By Andrea Cipollini; Iolanda Lo Cascio
  22. Calling Recessions in Real Time By James D. Hamilton
  23. Measuring Industry Relatedness and Corporate Coherence By Giulio Bottazzi; Federico Tamagni

  1. By: M. Hashem Pesaran; Andreas Pick; Allan Timmermann
    Abstract: This paper conducts a broad-based comparison of iterated and direct multi-period forecasting approaches applied to both univariate and multivariate models in the form of parsimonious factor-augmented vector autoregressions. To account for serial correlation in the residuals of the multi-period direct forecasting models we propose a new SURE-based estimation method and modified Akaike information criteria for model selection. Empirical analysis of the 170 variables studied by Marcellino, Stock and Watson (2006) shows that information in factors helps improve forecasting performance for most types of economic variables although it can also lead to larger biases. It also shows that finite-sample modifications to the Akaike information criterion can modestly improve the performance of the direct multi-period forecasts.
    Keywords: Multi-period forecasts; direct and iterated methods; factor augmented VARs
    JEL: C22 C32 C52 C53
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:dnb:dnbwpp:250&r=ecm
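As an illustration of the iterated-versus-direct comparison in item 1 above, here is a minimal sketch on a simulated AR(1); the data-generating process and plain OLS are assumptions for illustration only, not the factor-augmented VARs or SURE-based estimation of the paper.
```python
# Minimal sketch: iterated vs. direct h-step forecasts from an AR(1),
# estimated by OLS on simulated data (illustration only).
import numpy as np

rng = np.random.default_rng(0)
T, h = 500, 4
y = np.zeros(T)
for t in range(1, T):                      # simulate y_t = 0.7*y_{t-1} + e_t
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()

train = y[:-h]

# Iterated approach: fit a one-step AR(1), then iterate the forecast h times.
X1, y1 = train[:-1], train[1:]
phi1 = np.linalg.lstsq(X1[:, None], y1, rcond=None)[0][0]
f_iter = train[-1]
for _ in range(h):
    f_iter = phi1 * f_iter

# Direct approach: regress y_{t+h} directly on y_t and forecast in one step.
Xh, yh = train[:-h], train[h:]
phih = np.linalg.lstsq(Xh[:, None], yh, rcond=None)[0][0]
f_direct = phih * train[-1]

print(f"iterated {h}-step forecast: {f_iter:.3f}")
print(f"direct   {h}-step forecast: {f_direct:.3f}")
print(f"realized value:             {y[-1]:.3f}")
```
In this simple, correctly specified setting the two forecasts are nearly identical; the paper's point is that with serial correlation, misspecification and estimated factors the ranking of the two approaches becomes an empirical question.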
  2. By: J. Isaac Miller (Department of Economics, University of Missouri-Columbia)
    Abstract: A test for the rank of a vector error correction model (VECM) or panel VECM based on the well-known trace test is proposed. The proposed test employs instrumental variables (IVs) generated by a class of nonlinear functions of the estimated stochastic trends of the VECM under the null. The test improves on the standard trace test by replacing the non-standard critical values with chi-squared critical values. Extending the result to the panel VECM case, the test is robust to cross-sectional correlation of the disturbances. With this test, I extend earlier research using nonlinear IVs for unit root testing. However, the optimal instrument in the univariate case is not admissible in the more general multivariate case. The chi-squared result suggests that IV tests may be used to replace limits of other standard tests with integrated time series that are given by nonstandard stochastic integrals, even without a panel with which to pool tests.
    Keywords: VECM, panel VECM, cointegrating rank, trace test, nonlinear instruments
    JEL: C12 C32 C33
    Date: 2010–01–30
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:1001&r=ecm
  3. By: Maciej Jakubowski (Faculty of Economic Sciences, University of Warsaw)
    Abstract: This paper examines how including latent variables can benefit propensity score matching. A researcher can estimate, based on theoretical presumptions, the latent variable from the observed manifest variables and can use this estimate in propensity score matching. This paper demonstrates the benefits of such an approach and compares it with a method more common in econometrics, where the manifest variables are used directly in matching. We intuit that estimating the propensity score on the manifest variables introduces a measurement error that can be limited by estimating the propensity score on the estimated latent variable instead. We use Monte Carlo simulations to test how various matching methods behave under distinct circumstances found in practice, and we also apply the approach to real data. Using the estimated latent variable in propensity score matching increases the efficiency of treatment effect estimators. The benefits are larger for small samples, for non-linear processes, and when a large number of manifest variables is available, especially if they are highly correlated with the latent variable.
    Keywords: factor analysis, latent variables, propensity score matching
    JEL: C14 C15 C16 C31 C52
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2010-06&r=ecm
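A rough sketch of the idea in item 3 above, on simulated data: a latent factor drives several noisy manifest variables, the factor is recovered here by a first principal component (a simple stand-in for the factor-analysis step described in the abstract), and a propensity score estimated on that single estimate feeds one-to-one nearest-neighbour matching. The data-generating process and all tuning choices are illustrative assumptions.
```python
# Sketch: propensity score matching on an estimated latent variable.
# The latent factor is proxied by the first principal component of the
# manifest variables (a stand-in for a formal factor model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, k = 2000, 6
latent = rng.standard_normal(n)                           # true confounder
manifest = latent[:, None] + rng.standard_normal((n, k))  # noisy indicators
treat = rng.binomial(1, 1 / (1 + np.exp(-latent)))        # selection on latent
outcome = 1.0 * treat + latent + rng.standard_normal(n)   # true effect = 1

# Estimate the latent variable: first principal component of the manifest set.
Z = manifest - manifest.mean(axis=0)
u, s, vt = np.linalg.svd(Z, full_matrices=False)
factor = Z @ vt[0]

# Propensity score on the estimated factor, then 1-NN matching on the score.
pscore = LogisticRegression().fit(factor[:, None], treat).predict_proba(factor[:, None])[:, 1]
t_idx, c_idx = np.where(treat == 1)[0], np.where(treat == 0)[0]
matches = c_idx[np.abs(pscore[t_idx][:, None] - pscore[c_idx][None, :]).argmin(axis=1)]
att = (outcome[t_idx] - outcome[matches]).mean()
print(f"matched ATT estimate: {att:.3f} (true effect 1.0)")
```
In line with the abstract, the gain over matching directly on the manifest variables comes from averaging out their idiosyncratic measurement error before the propensity score is estimated.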
  4. By: Golosnoy, Vasyl; Gribisch, Bastian; Liesenfeld, Roman
    Abstract: We propose a Conditional Autoregressive Wishart (CAW) model for the analysis of realized covariance matrices of asset returns. Our model assumes a generalized linear autoregressive moving average structure for the scale matrix of the Wishart distribution, allowing it to accommodate complex dynamic interdependence between the variances and covariances of assets. In addition, it accounts for the symmetry and positive definiteness of covariance matrices without imposing parametric restrictions, and can easily be estimated by Maximum Likelihood. We also propose extensions of the CAW model obtained by including a Mixed Data Sampling (MIDAS) component and Heterogeneous Autoregressive (HAR) dynamics for long-run fluctuations. The CAW models are applied to time series of daily realized variances and covariances for five New York Stock Exchange (NYSE) stocks.
    Keywords: Component volatility models, Covariance matrix, Mixed data sampling, Observation-driven models, Realized volatility
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:201007&r=ecm
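For readers unfamiliar with observation-driven Wishart models (item 4 above), the sketch below simulates realized covariance matrices from a Wishart distribution whose scale matrix follows a BEKK-style autoregressive recursion. The exact parameterization and all parameter values are assumptions made for illustration; the paper's CAW specification and its MIDAS/HAR extensions are richer than this.
```python
# Sketch of an observation-driven Wishart recursion in the spirit of the
# CAW model: R_t | past ~ Wishart(nu, S_t / nu), with S_t updated from the
# previous realized matrix.  Parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import wishart

rs = np.random.RandomState(2)
k, nu, T = 3, 10, 200
C = 0.3 * np.eye(k)                 # intercept term
A = 0.4 * np.eye(k)                 # loading on the lagged realized matrix
B = 0.5 * np.eye(k)                 # loading on the lagged scale matrix

S = np.eye(k)
R = np.eye(k)
realized = []
for _ in range(T):
    S = C @ C.T + A @ R @ A.T + B @ S @ B.T                 # scale-matrix recursion
    R = wishart(df=nu, scale=S / nu).rvs(random_state=rs)   # realized covariance draw
    realized.append(R)

print("last simulated realized covariance matrix:\n", np.round(realized[-1], 3))
```
The quadratic-form recursion keeps every scale matrix symmetric and positive definite by construction, which is the property the abstract highlights.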
  5. By: Martin Huber
    Abstract: Consistency of propensity score matching estimators hinges on the propensity score's ability to balance the distributions of covariates in the pools of treated and nontreated units. Conventional balance tests merely check for differences in covariates' means, but cannot account for differences in higher moments. Specification tests constitute an alternative, but may reject propensity score models that are misspecified yet still balancing. This paper proposes balance tests based on (i) quantile regression to check for differences in the distributions of continuous covariates and (ii) resampling methods to estimate the distributions of the proposed Kolmogorov-Smirnov and Cramer-von-Mises-Smirnov test statistics. Simulations suggest that the tests capture imbalances related to higher moments when conventional balance tests fail to do so and correctly retain misspecified but balancing propensity scores when specification tests reject the null.
    Keywords: Balancing property, balance test, propensity score matching
    JEL: C12 C15 C21
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:usg:dp2010:2010-18&r=ecm
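To make the resampling idea in item 5 above concrete, the sketch below computes a Kolmogorov-Smirnov statistic for a single covariate across treated and non-treated units (e.g. within a matched sample) and obtains its p-value by permuting treatment labels. This is a bare-bones illustration, not the authors' exact procedure, which also covers quantile-regression and Cramer-von-Mises-type statistics.
```python
# Sketch: permutation-based Kolmogorov-Smirnov balance test for one covariate
# across treated and non-treated units.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
x_treated = rng.normal(0.0, 1.0, 300)      # covariate among treated
x_control = rng.normal(0.0, 1.3, 300)      # same mean, different spread

obs = ks_2samp(x_treated, x_control).statistic
pooled = np.concatenate([x_treated, x_control])
n_t = len(x_treated)

perm_stats = []
for _ in range(999):                        # permute treatment labels
    perm = rng.permutation(pooled)
    perm_stats.append(ks_2samp(perm[:n_t], perm[n_t:]).statistic)

p_value = (1 + np.sum(np.array(perm_stats) >= obs)) / (1 + len(perm_stats))
print(f"KS statistic: {obs:.3f}, permutation p-value: {p_value:.3f}")
```
Because the two samples differ only in spread, a comparison of means would signal balance here while the distributional statistic does not, which is exactly the higher-moment imbalance the paper targets.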
  6. By: Wanling Huang (Concordia University); Artem Prokhorov (Concordia University and CIREQ)
    Abstract: We derive a corrected distance metric (DM) test of general restrictions. The correction factor depends on the value of the uncorrected statistic and the new statistic is Bartlett-type. In the setting of covariance structure models, we show using simulations that the quality of the new approximation is good and often remarkably good. Especially at around the 95th percentile, the distribution of the corrected test statistic is strikingly close to the relevant asymptotic distribution. This is true for various sample sizes, distributions, and degrees of freedom of the model. As a by-product we provide an intuition for the well-known observation in labor economic applications that using longer panels results in a reversal of the original inference.
    Keywords: Distance Metric, GMM, Asymptotic expansion, Bartlett-type correction
    JEL: C12
    Date: 2010–06–29
    URL: http://d.repec.org/n?u=RePEc:crd:wpaper:10003&r=ecm
  7. By: Rubio, Gonzalo; Lozano, Martin
    Abstract: We follow the correct Jagannathan and Wang (2002) framework for comparing the estimates and specification tests of the classical Beta and Stochastic Discount Factor/Generalized Method of Moments (SDF/GMM) methods. We extend previous studies by considering not only single-factor but also multifactor models, and by taking into account some of the prescriptions for improving empirical tests suggested by Lewellen, Nagel and Shanken (2009). Our results reveal that SDF/GMM first-stage estimators lead to lower pricing errors than OLS, while SDF/GMM second-stage estimators display higher pricing errors than the classical Beta GLS method. While Jagannathan and Wang (2002) and Cochrane (2005) conclude that there are no differences when estimating and testing by the Beta and SDF/GMM methods for the CAPM, we show that their conclusion cannot be extended to multifactor models. Moreover, the Beta method (OLS and GLS) seems to dominate the SDF/GMM (first- and second-stage) procedure in terms of estimators' properties. These results are consistent across benchmark portfolios and sample periods.
    Keywords: Beta Pricing Models; Stochastic Discount Factor; Pricing Errors; Evaluation of Factor Models.
    JEL: C51 G12 C52
    Date: 2009–09–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:23613&r=ecm
  8. By: D.S. Poskitt; Arivalzahan Sengarapillai
    Abstract: In the classical approach to statistical hypothesis testing the role of the null hypothesis H0 and the alternative H1 is very asymmetric. Power, calculated from the distribution of the test statistic under H1, is treated as a theoretical construct that can be used to guide the choice of an appropriate test statistic or sample size, but power calculations do not explicitly enter the testing process in practice. In a significance test a decision to accept or reject H0 is driven solely by an examination of the strength of evidence against H0, summarized in the P-value calculated from the distribution of the test statistic under H0. A small P-value is taken to represent strong evidence against H0, but it need not necessarily indicate strong evidence in favour of H1. More recently, Moerkerke et al. (2006) have suggested that the special status of H0 is often unwarranted or inappropriate, and argue that evidence against H1 can be equally meaningful. They propose a balanced treatment of both H0 and H1 in which the classical P-value is supplemented by the P-value derived under H1. The alternative P-value is the dual of the null P-value and summarizes the evidence against a target alternative. Here we review how the dual P-values are used to assess the evidential tension between H0 and H1, and use decision theoretic arguments to explore a balanced hypothesis testing technique that exploits this evidential tension. The operational characteristics of balanced hypothesis tests are outlined and their relationship to conventional notions of optimal tests is laid bare. The use of balanced hypothesis tests as a conceptual tool is illustrated via model selection in linear regression and their practical implementation is demonstrated by application to the detection of cancer-specific protein markers in mass spectroscopy.
    Keywords: Balanced test, P-value, dual P-values, evidential tension, null hypothesis, alternative hypothesis, operating characteristics, false detection rate
    JEL: C12 C44 C52
    Date: 2010–06–23
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2010-15&r=ecm
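One simple way to read the dual P-value construction in item 8 above is in a one-sided normal-mean test: the null P-value measures evidence against H0: mu = 0, while the alternative P-value measures evidence against a target alternative H1: mu = mu1. The sketch below computes both for a single z-statistic; the specific definitions used here are assumptions for illustration, and the paper's decision-theoretic treatment goes well beyond this.
```python
# Sketch: a null P-value and a "dual" P-value computed under a target
# alternative, for a one-sided z-test of H0: mu = 0 against H1: mu = mu1 > 0.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n, mu1 = 50, 0.5
x = rng.normal(0.3, 1.0, n)                 # data from an intermediate mean
z = np.sqrt(n) * x.mean()                   # z-statistic (known unit variance)

p_null = norm.sf(z)                         # evidence against H0 (large z unlikely under H0)
p_alt = norm.cdf(z - np.sqrt(n) * mu1)      # evidence against H1 (small z unlikely under H1)

print(f"z = {z:.2f}, null P-value = {p_null:.3f}, alternative P-value = {p_alt:.3f}")
# Small values of both P-values signal evidential tension: the data sit
# awkwardly between the null and the target alternative.
```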
  9. By: Martin Huber
    Abstract: Like any empirical method used for causal analysis, social experiments are prone to attrition, which may undermine the validity of the results. This paper considers the problem of partially missing outcomes in experiments. Firstly, it systematically reveals under which forms of attrition - in terms of its relation to observable and/or unobservable factors - experiments do (not) yield causal parameters. Secondly, it shows how the various forms of attrition can be controlled for by different methods of inverse probability weighting (IPW) that are tailored to the specific missing data problem at hand. In particular, it discusses IPW methods that incorporate instrumental variables when attrition is related to unobservables, which has been widely ignored in the experimental literature.
    Keywords: experiments, attrition, inverse probability weighting
    JEL: C21 C93
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:usg:dp2010:2010-22&r=ecm
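A minimal sketch of inverse probability weighting for attrition on observables, in the spirit of item 9 above: the response probability is estimated from observed covariates and treatment status, and each observed outcome is weighted by the inverse of that probability before the experimental contrast is formed. The data-generating process and the use of logistic regression are assumptions; the paper's IV-based corrections for attrition related to unobservables are not shown.
```python
# Sketch: IPW correction for outcome attrition that depends on an observed
# covariate x and on treatment status, in a randomized experiment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 5000
x = rng.standard_normal(n)
d = rng.binomial(1, 0.5, n)                             # random assignment
y = 1.0 * d + x + rng.standard_normal(n)                # true effect = 1
p_resp = 1 / (1 + np.exp(-(0.5 + x * (2 * d - 1))))     # attrition depends on x and d
resp = rng.binomial(1, p_resp)

# Naive contrast among respondents is biased: responding treated units tend
# to have high x, responding controls low x.
naive = y[(resp == 1) & (d == 1)].mean() - y[(resp == 1) & (d == 0)].mean()

# IPW: model P(respond | x, d) and weight observed outcomes by its inverse.
feats = np.column_stack([x, d, x * d])
phat = LogisticRegression().fit(feats, resp).predict_proba(feats)[:, 1]
w = resp / phat
ipw = np.average(y, weights=w * d) - np.average(y, weights=w * (1 - d))
print(f"naive estimate: {naive:.3f}, IPW estimate: {ipw:.3f} (true effect 1.0)")
```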
  10. By: Aureo de Paula (Department of Economics, University of Pennsylvania); Xun Tang (Department of Economics, University of Pennsylvania)
    Abstract: This paper studies the inference of interaction effects, i.e., the impacts of players' actions on each other's payoffs, in discrete simultaneous games with incomplete information. We propose an easily implementable test for the signs of state-dependent interaction effects that does not require parametric specifications of players' payoffs, the distributions of their private signals or the equilibrium selection mechanism. The test relies on the commonly invoked assumption that players' private signals are independent conditional on observed states. The procedure is valid in the presence of multiple equilibria, and, as a by-product of our approach, we propose a formal test for multiple equilibria in the data-generating process. We provide Monte Carlo evidence of the test's good performance in finite samples. We also implement the test to infer the direction of interaction effects in couples' joint retirement decisions using data from the Health and Retirement Study.
    Keywords: identification, inference, multiple equilibria, incomplete information games
    JEL: C01 C72
    Date: 2010–04–08
    URL: http://d.repec.org/n?u=RePEc:pen:papers:10-021&r=ecm
  11. By: De Graeve, Ferre (Research Department, Central Bank of Sweden); Karas, Alexei (Roosevelt Academy)
    Abstract: We propose to incorporate cross-sectional heterogeneity into structural VARs. Heterogeneity provides an additional dimension along which one can identify structural shocks and perform hypothesis tests. We provide an application to bank runs, based on microeconomic deposit market data. We impose identification restrictions both in the cross-section (across insured and non-insured banks) and across variables (as in macro SVARs). We thus (i) identify bank runs, (ii) quantify the contribution of competing theories, and, (iii) evaluate policies such as deposit insurance. The application suggests substantial promise for the approach and has strong policy implications.
    Keywords: Identification; SVAR; panel-VAR; Heterogeneity; Bank run
    JEL: C30 E50 G21
    Date: 2010–07–01
    URL: http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0244&r=ecm
  12. By: Tom Engsted (Aarhus University and CREATES); Bent Nielsen (Nuffield College, Oxford UK)
    Abstract: We derive the parameter restrictions that a standard equity market model implies for a bivariate vector autoregression for stock prices and dividends, and we show how to test these restrictions using likelihood ratio tests. The restrictions, which imply that stock returns are unpredictable, are derived both for a model without bubbles and for a model with a rational bubble. In both cases we show how the restrictions can be tested through standard chi-squared inference. The analysis for the no-bubble case is done within the traditional Johansen model for I(1) variables, while the bubble model is analysed using a co-explosive framework. The methodology is illustrated using US stock prices and dividends for the period 1872-2000.
    Keywords: Rational bubbles, Explosiveness and co-explosiveness, Cointegration, Vector autoregression, Likelihood ratio tests
    JEL: C12 C32 G12
    Date: 2010–06–25
    URL: http://d.repec.org/n?u=RePEc:aah:create:2010-25&r=ecm
  13. By: Andrea Conte (Strategic Interaction Group, Max-Planck-Institut für Ökonomik, Jena, Germany); Peter G. Moffatt (School of Economics, University of East Anglia, Norwich, UK)
    Abstract: Experimental data on social preferences present a number of features that need to be incorporated in econometric modelling. We explore a variety of econometric modelling approaches to the analysis of such data. The approaches under consideration are: the random utility approach (in which it is assumed that each possible action yields a utility with a deterministic and a stochastic component, and that the individual selects the action yielding the highest utility); the random behavioural approach (which assumes that the individual computes the maximum of a deterministic utility function, and that computational error causes their observed behaviour to depart stochastically from this optimum); and the random preference approach (in which all variation in behaviour is attributed to stochastic variation in the parameters of the deterministic component of utility). These approaches are applied in various ways to an experiment on fairness conducted by Cappelen et al. (2007). At least two of the models that we estimate succeed in capturing the key features of the data set.
    Keywords: Econometric modelling and estimation, model evaluation, individual behaviour, fairness
    JEL: C51 C52 C91 D63
    Date: 2010–06–29
    URL: http://d.repec.org/n?u=RePEc:jrp:jrpwrp:2010-042&r=ecm
  14. By: John Micklewright (Depatment of Quantitative Social Science - Institute of Education, University of London.); Sylke V. Schnepf (School of Social Sciences, University of Southampton, UK.); Pedro N. Silva (Instituto Brasileiro de Geografia e Estatistica and Southampton Statistical Sciences Research Institute, University of Southampton.)
    Abstract: Investigation of peer effects on achievement with sample survey data on schools may mean that only a random sample of peers is observed for each individual. This generates classical measurement error in peer variables, resulting in the estimated peer group effects in a regression model being biased towards zero under OLS model fitting. We investigate the problem using survey data for England from the Programme for International Student Assessment (PISA) linked to administrative microdata recording information for each PISA sample member's entire year cohort. We calculate a peer group measure based on these complete data and compare its use with a variable based on peers in just the PISA sample. The estimated attenuation bias in peer effect estimates based on the PISA data alone is substantial.
    Keywords: peer effects, measurement error, school surveys, sampling variation
    JEL: C21 C81 I21
    Date: 2010–06–30
    URL: http://d.repec.org/n?u=RePEc:qss:dqsswp:1013&r=ecm
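The attenuation mechanism in item 14 above is easy to reproduce: when the peer-group mean is computed from a small random sample of peers rather than the full cohort, the sampling noise acts as classical measurement error and shrinks the OLS peer-effect coefficient toward zero. The sketch below simulates this; all magnitudes and sample sizes are illustrative assumptions.
```python
# Sketch: attenuation of a peer-effect coefficient when the peer mean is
# measured from a random sample of peers instead of the full cohort.
import numpy as np

rng = np.random.default_rng(6)
n_schools, cohort, sample = 400, 150, 10
beta_peer = 0.5

school_effect = 0.3 * rng.standard_normal(n_schools)          # between-school spread
peer_ability = school_effect[:, None] + rng.standard_normal((n_schools, cohort))
true_mean = peer_ability.mean(axis=1)                          # full-cohort peer mean
sampled_mean = np.array([rng.choice(row, sample, replace=False).mean()
                         for row in peer_ability])             # survey-sample peer mean
outcome = beta_peer * true_mean + 0.3 * rng.standard_normal(n_schools)

def ols_slope(x, y):
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

print(f"true peer effect:        {beta_peer}")
print(f"slope, full-cohort mean: {ols_slope(true_mean, outcome):.3f}")
print(f"slope, sampled peers:    {ols_slope(sampled_mean, outcome):.3f}  (attenuated)")
```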
  15. By: Firpo, Sergio; Ridder, Geert
    Abstract: Bounds on the distribution function of the sum of two random variables with known marginal distributions obtained by Makarov (1981) can be used to bound the cumulative distribution function (c.d.f.) of individual treatment effects. Identification of the distribution of individual treatment effects is important for policy purposes if we are interested in functionals of that distribution, such as the proportion of individuals who gain from the treatment and the expected gain from the treatment for these individuals. Makarov bounds on the c.d.f. of the individual treatment effect distribution are pointwise sharp, i.e. they cannot be improved in any single point of the distribution. We show that the Makarov bounds are not uniformly sharp. Specifically, we show that the Makarov bounds on the region that contains the c.d.f. of the treatment effect distribution in two (or more) points can be improved, and we derive the smallest set for the c.d.f. of the treatment effect distribution in two (or more) points. An implication is that the Makarov bounds on a functional of the c.d.f. of the individual treatment effect distribution are not best possible.
    Date: 2010–06–01
    URL: http://d.repec.org/n?u=RePEc:fgv:eesptd:201&r=ecm
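For readers who want the object item 15 above starts from, one common statement of the pointwise Makarov bounds on the distribution of an individual treatment effect with known marginals is given below; it is written here from standard sources rather than from the paper itself.
```latex
% Pointwise Makarov bounds on the c.d.f. of the treatment effect
% \Delta = Y_1 - Y_0, given the marginal c.d.f.s F_1 (treated) and F_0 (control).
\[
\sup_{y}\,\max\bigl\{F_1(y) - F_0(y - \delta),\, 0\bigr\}
\;\le\; F_{\Delta}(\delta) \;\le\;
1 + \inf_{y}\,\min\bigl\{F_1(y) - F_0(y - \delta),\, 0\bigr\}.
\]
```
The paper's contribution is to show that, while each of these bounds is sharp point by point, they cannot all be attained jointly at two or more points.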
  16. By: Shu-Ping Shi
    Abstract: We demonstrate that the constant variance assumption in the Markov-switching Augmented Dickey-Fuller (ADF) test proposed by Hall, Psaradakis and Sola (1999) may result in the misjudgement of bubbles. Upon relaxing this assumption to allow for regime-varying error variances in the Markov-switching ADF test (referred to as the MSADF-RV test), we revisit the integration properties of the money base, consumer price and exchange rate in Argentina from January 1983 to November 1989. Based on the MSADF-RV test, we observe volatility switches in the exchange rate and the consumer price rather than the bubbles found in these two series by Hall, Psaradakis and Sola (1999).
    JEL: C22
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:acb:cbeeco:2010-524&r=ecm
  17. By: Michael C. Münnix; Rudi Schäfer; Oliver Grothe
    Abstract: We discuss a weighted estimation of correlation and covariance matrices from historical financial data. To this end, we introduce a weighting scheme that accounts for similarity of previous market conditions to the present one. The resulting estimators are less biased and show lower variance than either unweighted or exponentially weighted estimators. The weighting scheme is based on a similarity measure which compares the current correlation structure of the market to the structures at past times. Similarity is then measured by the matrix 2-norm of the difference of probe correlation matrices estimated for two different times. The method is validated in a simulation study and tested empirically in the context of mean-variance portfolio optimization. In the latter case we find an enhanced realized portfolio return as well as a reduced portfolio volatility compared to alternative approaches based on different strategies and estimators.
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1006.5847&r=ecm
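A stripped-down version of the weighting scheme in item 17 above: local correlation matrices are estimated on short rolling windows, each past window is compared to the most recent one via the matrix 2-norm of the difference, and the covariance estimate weights past observations by a decreasing function of that distance. The exponential weight function and all tuning constants are assumptions for illustration; the authors' scheme differs in detail.
```python
# Sketch: covariance estimation with weights based on the similarity of past
# market states to the present one (matrix 2-norm of correlation differences).
import numpy as np

rng = np.random.default_rng(7)
T, k, win = 750, 4, 25
returns = rng.standard_normal((T, k)) * 0.01          # placeholder return series

# Local ("probe") correlation matrices on non-overlapping windows.
n_win = T // win
probes = [np.corrcoef(returns[i * win:(i + 1) * win].T) for i in range(n_win)]

# Distance of every past window to the most recent window; smaller = more similar.
current = probes[-1]
dist = np.array([np.linalg.norm(p - current, ord=2) for p in probes])

# Turn distances into weights (an exponential kernel is an illustrative choice)
# and spread each window's weight over its observations.
w_win = np.exp(-dist / dist.mean())
w_obs = np.repeat(w_win, win)
w_obs = w_obs / w_obs.sum()

demeaned = returns[:n_win * win] - np.average(returns[:n_win * win], axis=0, weights=w_obs)
weighted_cov = (demeaned * w_obs[:, None]).T @ demeaned
print("similarity-weighted covariance matrix (x 1e4):\n", np.round(weighted_cov * 1e4, 3))
```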
  18. By: Michael Lechner; Markus Froelich
    Abstract: In this paper, we show how instrumental variable and matching estimators can be combined in order to identify a broader array of treatment effects. Instrumental variable estimators are known to estimate effects only for the compliers, who often represent only a small subset of the entire population. By combining IV with matching, we can also estimate the treatment effects for the always- and never-takers. In our application to the active labour market programmes in Switzerland, we find large positive employment effects for at least 8 years after treatment for the compliers. The effects for the always- and never-participants, on the other hand, are small. In addition, when examining the potential outcomes separately, we find that the compliers have the worst employment outcomes without treatment. Hence, the assignment policy of the caseworkers was inefficient in that the always-participants were neither those with the highest treatment effect nor those with the largest need for assistance.
    Keywords: Local average treatment effect, conditional local IV, matching estimation, heterogeneous treatment effects, active labour market policy, state borders, geographic variation.
    JEL: C14 C2 J68
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:usg:dp2010:2010-21&r=ecm
  19. By: Joshua D. Angrist; Jörn-Steffen Pischke
    Abstract: This essay reviews progress in empirical economics since Leamer's (1983) critique. Leamer highlighted the benefits of sensitivity analysis, a procedure in which researchers show how their results change with changes in specification or functional form. Sensitivity analysis has had a salutary but not a revolutionary effect on econometric practice. As we see it, the credibility revolution in empirical work can be traced to the rise of a design-based approach that emphasizes the identification of causal effects. Design-based studies typically feature either real or natural experiments and are distinguished by their prima facie credibility and by the attention investigators devote to making the case for a causal interpretation of the findings their designs generate. Design-based studies are most often found in the microeconomic fields of Development, Education, Environment, Labor, Health, and Public Finance, but are still rare in Industrial Organization and Macroeconomics. We explain why IO and Macro would do well to embrace a design-based approach. Finally, we respond to the charge that the design-based revolution has overreached.
    Keywords: research design, natural experiment, quasi-experiment, structural models
    JEL: C01
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:cep:cepdps:dp0976&r=ecm
  20. By: Marco Bardoscia; Roberto Bellotti
    Abstract: A novel dynamical model for the study of operational risk in banks is proposed. The equation of motion takes into account the interactions among a bank's different processes, the spontaneous generation of losses via a noise term, and the efforts made by the bank to avoid their occurrence. A scheme for the estimation of some parameters of the model is illustrated, so that it can be tailored to the internal organizational structure of a specific bank. We focus on the case in which there are no causal loops in the matrix of couplings and exploit the exact solution to estimate the parameters of the noise as well. The estimation scheme is proved to be consistent and the model is shown to exhibit a remarkable capability in forecasting future cumulative losses.
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1007.0026&r=ecm
  21. By: Andrea Cipollini; Iolanda Lo Cascio
    Abstract: The aim of the paper is to test for financial contagion by estimating a simultaneous equation model subject to structural breaks. For this purpose, we use the Maximal Overlap Discrete Wavelet Transform (MODWT) to decompose the covariance matrix of four asset returns on a scale-by-scale basis. This decomposition enables us to identify the structural form model and to test for spillover effects between country-specific shocks during a crisis period. We distinguish between the case of the structural form model with a single dummy and the one with multiple dummies capturing shifts in the co-movement of asset returns occurring during periods of financial turmoil. The empirical results for four East Asian emerging stock markets show that, once we account for interdependence through an (unobservable) common factor, there is hardly any evidence of contagion during the 1997-1998 financial turbulence.
    Keywords: wavelets; simultaneous equations model; financial contagion
    JEL: C30 C51 G15
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:mod:recent:047&r=ecm
  22. By: James D. Hamilton
    Abstract: This paper surveys efforts to automate the dating of business cycle turning points. Doing this on a real-time, out-of-sample basis is a bigger challenge than many academics might presume due to factors such as data revisions and changes in economic relationships over time. The paper stresses the value of both simulated real-time analysis -- looking at what the inference of a proposed model would have been using data as they were actually released at the time -- and actual real-time analysis, in which a researcher stakes his or her reputation on publicly using the model to generate out-of-sample, real-time predictions. The immediate publication capabilities of the internet make the latter a realistic option for researchers today, and many are taking advantage of it. The paper reviews a number of approaches to dating business cycle turning points and emphasizes the fundamental trade-off between parsimony -- trying to keep the model as simple and robust as possible -- and making full use of available information. Different approaches have different advantages, and the paper concludes that there may be gains from combining the best features of several different approaches.
    JEL: E32
    Date: 2010–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:16162&r=ecm
  23. By: Giulio Bottazzi; Federico Tamagni
    Abstract: Since the seminal work of Teece et al. (1994), firm diversification has been found to be a non-random process. The hidden deterministic nature of diversification patterns is usually detected by comparing expected (under a null hypothesis) and actual values of some statistic. Nevertheless, the standard approach presents two big drawbacks, leaving several issues unanswered. First, using the observed value of a statistic provides noisy and non-homogeneous estimates and, second, the expected values are computed under a specific and privileged null hypothesis that implies spurious random effects. We show that using Monte Carlo p-scores as a measure of relatedness provides cleaner and homogeneous estimates. Using the NBER database on corporate patents, we investigate the effect of assuming different null hypotheses, from the least constrained to the fully constrained, revealing that new features in firm diversification patterns can be detected if random artifacts are ruled out.
    Keywords: corporate coherence; relatedness; null model analysis; patent data
    JEL: C1 D2 L2
    Date: 2010–07–01
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2010/10&r=ecm
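To illustrate the Monte Carlo p-score idea in item 23 above with a toy example: given a binary firm-by-activity matrix, the relatedness of two activities can be scored by how often their observed co-occurrence count is matched or exceeded in randomized reallocations of the activities across firms. The null model below (independent shuffling of each activity column, preserving how many firms hold it) is only one of the possible nulls the abstract discusses, and is an assumption for illustration.
```python
# Sketch: Monte Carlo p-scores for the relatedness of two activities, based on
# their co-occurrence across firms under a simple column-shuffling null model.
import numpy as np

rng = np.random.default_rng(8)
n_firms, n_acts = 500, 8
M = (rng.random((n_firms, n_acts)) < 0.2).astype(int)   # toy firm-activity matrix
M[:, 1] = np.where(rng.random(n_firms) < 0.6, M[:, 0], M[:, 1])  # make 0 and 1 related

def cooccurrence(mat, a, b):
    return int(np.sum(mat[:, a] * mat[:, b]))

obs = cooccurrence(M, 0, 1)
draws = []
for _ in range(2000):                                    # null: shuffle each column
    Mr = np.column_stack([rng.permutation(M[:, j])       # independently, preserving its
                          for j in range(n_acts)])       # prevalence across firms
    draws.append(cooccurrence(Mr, 0, 1))

p_score = np.mean(np.array(draws) >= obs)                # small p-score = strong relatedness
print(f"observed co-occurrence: {obs}, Monte Carlo p-score: {p_score:.4f}")
```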

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.