nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒12‒06
23 papers chosen by
Sune Karlsson
Orebro University

  1. Maximum Likelihood Estimation of the Panel Sample Selection Model By Hung-pin Lai; Wen-Jen Tsay
  2. Rank-Based Tests of the Cointegrating Rank in Semiparametric Error Correction Models By Marc Hallin; Ramon van den Akker; Bas Werker
  3. Multivariate Variance Targeting in the BEKK-GARCH Model By Rasmus Søndergaard Pedersen; Anders Rahbek
  4. A Flexible Sample Selection Model: A GTL-Copula Approach By Hasebe, Takuya; Vijverberg, Wim P.
  5. Valid Locally Uniform Edgeworth Expansions Under Weak Dependence and Sequences of Smooth Transformations By Stelios Arvanitis; Antonis Demos
  6. Multivariate wishart stochastic volatility and changes in regime By Gribisch, Bastian
  7. Instant Trend-Seasonal Decomposition of Time Series with Splines By Luis Francisco Rosales; Tatyana Krivobokova
  8. Multiple Fractional Response Variables with Continuous Endogenous Explanatory Variables. By Nam, Suhyeon
  9. An overview of the goodness-of-fit test problem for copulas By Jean-David Fermanian
  10. Asymptotic theory for Brownian semi-stationary processes with application to turbulence By José Manuel Corcuera; Emil Hedevang; Mikko S. Pakkanen; Mark Podolskij
  11. Econometrics on GPUs By Michael Creel; Sonik Mandal; Mohammad Zubair
  12. A unified framework for spline estimators By Katsiaryna Schwarz; Tatyana Krivobokova
  13. Nonlinear Policy Rules and the Identification and Estimation of Causal Effects in a Generalized Regression Kink Design By David Card; David Lee; Zhuan Pei; Andrea Weber
  14. Time-varying Combinations of Predictive Densities using Nonlinear Filtering By Monica Billio; Roberto Casarin; Francesco Ravazzolo; Herman K. van Dijk
  15. Nonlinear Kalman Filtering in Affine Term Structure Models By Peter Christoffersen; Christian Dorion; Kris Jacobs; Lotfi Karoui
  16. "Robust statistics: a functional approach". By Ruiz-Gazen, Anne
  17. Evaluating a Global Vector Autoregression for Forecasting By Neil R. Ericsson; Erica L. Reisman
  18. Forecasting with a noncausal VAR model By Nyberg , Henri; Saikkonen, Pentti
  19. Detecting asset price bubbles with time-series methods By Taipalus, Katja
  20. Matching vs Differencing when Estimating Treatment Effects with Panel Data: the Example of the Effect of Job Training Programs on Earnings By Chabé-Ferret, Sylvain
  21. A Note on the Impact of Portfolio Overlapping in Tests of the Fama and French Three-Factor Model By Wallmeier, Martin; Tauscher, Kathrin
  22. The Consequences of Measurement Error when Estimating the Impact of BMI on Labour Market Outcomes By Donal O'Neill; Olive Sweetman
  23. Experiments, Passive Observation and Scenario Analysis: Trygve Haavelmo and the Cointegrated Vector Autoregression By Kevin Hoover; Katarina Juselius

  1. By: Hung-pin Lai (Department of Economics, National Chung Cheng University, Taiwan); Wen-Jen Tsay (Institute of Economics, Academia Sinica, Taipei, Taiwan)
    Abstract: Heckman’s (1976, 1979) sample selection model has been employed in many studies of linear or nonlinear regression applications. It is well known that ignoring the sample selectivity problem may result in inconsistency of the estimator due to the correlation between the statistical errors in the selection and main equations. In this paper, we consider the problem of estimating a panel sample selection model. Since the panel data model contains individual effects, such as fixed or random effects, the likelihood function is quite complicated when the sample selection is taken into account. We therefore propose to solve the estimation problem by utilizing the maximum likelihood (ML) approach together with the closed skewed normal distribution. Finally, we conduct a Monte Carlo experiment to investigate the finite sample performance of the proposed estimator and find that our ML estimator provides reliable and quite satisfactory results.
    Keywords: Panel data, Sample selection, Maximum likelihood estimation, Closed skewed normal
    JEL: C33 C40
    Date: 2012–08
  2. By: Marc Hallin; Ramon van den Akker; Bas Werker
    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the fact that, depending on the alternatives considered, the model exhibits a non-standard Locally Asymptotically Brownian Functional (LABF) and Locally Asymptotically Mixed Normal (LAMN) local structure, a structure which we completely characterize. Our tests, which have the general form of Lagrange multiplier tests, depend on a reference density that can be chosen freely, and thus is not restricted to be Gaussian as in traditional quasi-likelihood procedures. Moreover, appropriate choices of the reference density achieve the semiparametric efficiency bounds. Simulations show that our asymptotic analysis provides an accurate approximation to finite-sample behavior. Our results are based on an extension, of independent interest, of two abstract results on the convergence of statistical experiments and the asymptotic linearity of statistics to the context of possibly non-stationary time series.
    Keywords: Cointegration model; Cointegration Rank; Elliptical densities; error correction model; Lagrange Multiplier test; local asymptotic Brownian Functional; Local asymptotic Mixed Normality; Local asymptotic Normality; Multivariate Ranks; non-Gaussian Quasi-Likelihood Procedures
    JEL: C14 C32
    Date: 2012–11
  3. By: Rasmus Søndergaard Pedersen (University of Copenhagen); Anders Rahbek (University of Copenhagen and CREATES)
    Abstract: This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition, the VT estimator is a two-step estimator, and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding to these two steps. Strong consistency is established under weak moment conditions, while sixth-order moment restrictions are imposed to establish asymptotic normality. Included simulations indicate that the multivariately induced higher-order moment constraints are indeed necessary.
    Keywords: Covariance targeting, Variance targeting, Multivariate GARCH, BEKK, Asymptotic theory, Time series.
    JEL: C32 C51 C58
    Date: 2012–11–14
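A univariate GARCH(1,1) analogue illustrates the two-step variance-targeting idea (the paper itself treats the multivariate BEKK case): a first step pins down the intercept via the sample variance, and a second step estimates the remaining dynamics by quasi-maximum likelihood. A minimal sketch, with an illustrative coarse grid search standing in for a proper optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a GARCH(1,1): h_t = omega + alpha*e_{t-1}^2 + beta*h_{t-1}.
T, omega, alpha, beta = 3000, 0.1, 0.1, 0.8
h, e = np.empty(T), np.empty(T)
h[0] = omega / (1 - alpha - beta)              # unconditional variance = 1
e[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, T):
    h[t] = omega + alpha * e[t - 1] ** 2 + beta * h[t - 1]
    e[t] = np.sqrt(h[t]) * rng.standard_normal()

# Step 1: the variance target is simply the sample variance.
sigma2_hat = e.var()

def neg_qll(a, b):
    """Gaussian quasi-log-likelihood with the intercept implied by the target."""
    w = sigma2_hat * (1 - a - b)
    hh = np.empty(T)
    hh[0] = sigma2_hat
    for t in range(1, T):
        hh[t] = w + a * e[t - 1] ** 2 + b * hh[t - 1]
    return 0.5 * np.sum(np.log(hh) + e ** 2 / hh)

# Step 2: quasi-ML for (alpha, beta) over a coarse illustrative grid.
grid_a = np.arange(0.02, 0.22, 0.02)
grid_b = np.arange(0.50, 0.95, 0.05)
alpha_hat, beta_hat = min(
    ((a, b) for a in grid_a for b in grid_b if a + b < 0.999),
    key=lambda ab: neg_qll(*ab),
)
```

The point of the targeting step is that the intercept never enters the numerical optimization, which is what makes the two-step expansion in the paper natural.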
  4. By: Hasebe, Takuya (CUNY Graduate Center); Vijverberg, Wim P. (CUNY Graduate Center)
    Abstract: In this paper, we propose a new approach to estimating sample selection models that combines Generalized Tukey Lambda (GTL) distributions with copulas. The GTL distribution is a versatile univariate distribution that permits a wide range of skewness and thick- or thin-tailed behavior in the data that it represents. Copulas help create versatile representations of bivariate distributions. The versatility arising from inserting GTL marginal distributions into copula-constructed bivariate distributions reduces the dependence of estimated parameters on distributional assumptions in applied research. A thorough Monte Carlo study illustrates that our proposed estimator performs well under normal and nonnormal settings, both with and without an instrument in the selection equation that fulfills the exclusion restriction that is often considered to be a requisite for implementation of sample selection models in empirical research. Five applications ranging from wages and health expenditures to speeding tickets and international disputes illustrate the value of the proposed GTL-copula estimator.
    Keywords: sample selection, copula, Generalized Tukey Lambda distribution
    JEL: C24 C35
    Date: 2012–11
  5. By: Stelios Arvanitis; Antonis Demos
    Abstract: In this paper we are concerned with the existence of locally uniform Edgeworth expansions for the distributions of parameterized random vectors. Our motivation resides in the fact that this could enable subsequent polynomial asymptotic expansions of moments, which could be useful for establishing asymptotic properties of estimators based on these moments. We derive sufficient conditions, either in the case of stochastic processes exhibiting weak dependence, or in the case of smooth transformations of such expansions.
    Keywords: Locally uniform Edgeworth expansion, formal Edgeworth distribution, weak dependence, smooth transformations, moment approximations, GMM estimators, Indirect estimators, GARCH model
    JEL: C10 C13
    Date: 2012–06–05
  6. By: Gribisch, Bastian
    Abstract: This paper generalizes the basic Wishart multivariate stochastic volatility model of Philipov and Glickman (2006) and Asai and McAleer (2009) to encompass regime switching behavior. The latent state variable is driven by a first-order Markov process. The model allows for state-dependent (co)variance and correlation levels and state-dependent volatility spillover effects. Parameter estimates are obtained using Bayesian Markov Chain Monte Carlo procedures and filtered estimates of the latent variances and covariances are generated by particle filter techniques. The model is applied to five European stock index return series. The results show that the proposed regime-switching specification substantially improves the in-sample fit and the VaR forecasting performance relative to the basic model.
    Keywords: Multivariate stochastic volatility, Dynamic correlations, Wishart distribution, Markov switching, Markov chain Monte Carlo
    JEL: C32 C58 G17
    Date: 2012
  7. By: Luis Francisco Rosales (Georg-August-University Göttingen); Tatyana Krivobokova (Georg-August-University Göttingen)
    Abstract: We present a nonparametric method to decompose a time series into trend, seasonal and remainder components. This fully data-driven technique is based on penalized splines and makes an explicit characterization of the varying seasonality and the correlation in the remainder. The procedure takes advantage of the mixed model representation of penalized splines, which allows for the simultaneous estimation of all model parameters from the corresponding likelihood. Simulation studies and three data examples illustrate the effectiveness of the approach.
    Keywords: Penalized splines; Mixed model; Varying coefficient; Correlated remainder
    Date: 2012–11–20
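The decomposition idea, joint penalized least squares with a smooth trend and a seasonal component, can be sketched with a crude truncated-power spline basis plus monthly dummies. This is not the authors' mixed-model estimator: the basis, knots and penalty weights below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
T, period = 240, 12
t = np.arange(T)
# Toy series: linear trend + monthly seasonal cycle + noise.
y = 0.02 * t + np.sin(2 * np.pi * t / period) + 0.2 * rng.standard_normal(T)

# Trend: intercept, linear term, and truncated-linear spline terms.
knots = np.linspace(0, T, 10)[1:-1]
trend_basis = np.column_stack(
    [np.ones(T), t] + [np.maximum(t - k, 0) for k in knots])
# Seasonality: one dummy per month.
season_basis = np.eye(period)[t % period]

X = np.column_stack([trend_basis, season_basis])
# Ridge penalties: heavy on the spline terms (smooth trend), light on the
# seasonal dummies (this also resolves the dummy/intercept collinearity).
lam = np.zeros(X.shape[1])
lam[2:trend_basis.shape[1]] = 100.0
lam[trend_basis.shape[1]:] = 0.1
coef = np.linalg.solve(X.T @ X + np.diag(lam), X.T @ y)

trend = trend_basis @ coef[:trend_basis.shape[1]]
seasonal = season_basis @ coef[trend_basis.shape[1]:]
remainder = y - trend - seasonal
```

In the paper the penalty weights are not fixed by hand as here but estimated from the likelihood via the mixed-model representation.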
  8. By: Nam, Suhyeon
    Abstract: Multiple fractional response variables have two features. Each response is between zero and one, and the sum of the responses is one. In this paper, I develop an estimation method that not only accounts for these two features, but also allows for endogeneity. It is a two-step estimation method employing a control function approach: the first step generates a control function using a linear regression, and the second step maximizes the multinomial log-likelihood function with the multinomial logit conditional mean, which depends on the control function generated in the first step. Monte Carlo simulations examine the performance of the estimation method when the conditional mean in the second step is misspecified. The simulation results provide evidence that the method's average partial effects (APEs) estimates approximate the true APEs well and that the method is preferable to an alternative linear method. I apply this method to the Michigan Educational Assessment Program data in order to estimate the effects of public school spending on fourth grade math test outcomes, which are categorized into one of four levels. The effects of spending on the top two levels are statistically significant, while those on the other levels mostly are not.
    Keywords: Multiple fractional responses; Endogeneity; Partial effects; Two step estimation; Control function approach; Misspecified conditional mean; Monte Carlo simulation
    JEL: I2 H75 C15 C1
    Date: 2012–10
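The two-step logic can be sketched in a simplified two-category (fractional logit) version: a linear first stage produces the control function, which then enters the quasi-likelihood of the second step. All simulated quantities and coefficient values below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
z = rng.standard_normal(n)                        # instrument
v = rng.standard_normal(n)                        # first-stage error
x = 0.8 * z + v                                   # endogenous regressor
m = 1 / (1 + np.exp(-(0.5 + 1.0 * x - 0.6 * v)))  # true conditional mean
y = np.clip(m + 0.03 * rng.standard_normal(n), 1e-4, 1 - 1e-4)

def frac_logit(X, y):
    """Bernoulli quasi-ML ('fractional logit') via Newton iterations."""
    b = np.zeros(X.shape[1])
    for _ in range(50):
        p = 1 / (1 + np.exp(-X @ b))
        W = p * (1 - p)
        b = b + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return b

# Step 1: linear first stage; its residual is the control function.
Z = np.column_stack([np.ones(n), z])
v_hat = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Step 2: quasi-ML with the control function as an extra regressor.
b_cf = frac_logit(np.column_stack([np.ones(n), x, v_hat]), y)
# Ignoring endogeneity biases the coefficient on x toward zero here.
b_naive = frac_logit(np.column_stack([np.ones(n), x]), y)
```

The paper's estimator replaces the binary-logit mean with a multinomial logit over all response categories, but the control-function mechanics are the same.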
  9. By: Jean-David Fermanian
    Abstract: We review the main "omnibus procedures" for goodness-of-fit testing for copulas: tests based on the empirical copula process, on probability integral transformations, on Kendall's dependence function, etc., and some corresponding dimension-reduction techniques. The problems of finding asymptotically distribution-free test statistics and of calculating reliable p-values are discussed. Some particular cases, like convenient tests for time-dependent copulas, or for Archimedean or extreme-value copulas, are dealt with. Finally, the practical performances of the proposed approaches are briefly summarized.
    Date: 2012–11
  10. By: José Manuel Corcuera (Universitat de Barcelona); Emil Hedevang (Aarhus University); Mikko S. Pakkanen (Aarhus University and CREATES); Mark Podolskij (Heidelberg University and CREATES)
    Abstract: This paper presents some asymptotic results for statistics of Brownian semi-stationary (BSS) processes. More precisely, we consider power variations of BSS processes, which are based on high frequency (possibly higher order) differences of the BSS model. We review the limit theory discussed in [Barndorff-Nielsen, O.E., J.M. Corcuera and M. Podolskij (2011): Multipower variation for Brownian semistationary processes. Bernoulli 17(4), 1159-1194; Barndorff-Nielsen, O.E., J.M. Corcuera and M. Podolskij (2012): Limit theorems for functionals of higher order differences of Brownian semi-stationary processes. In "Prokhorov and Contemporary Probability Theory", Springer.] and present some new connections to fractional diffusion models. We apply our probabilistic results to construct a family of estimators for the smoothness parameter of the BSS process. In this context we develop estimates with gaps, which allow us to obtain a valid central limit theorem for the critical region. Finally, we apply our statistical theory to turbulence data.
    Keywords: Brownian semi-stationary processes, high frequency data, limit theorems, stable convergence, turbulence
    JEL: C10 C13 C14
    Date: 2012–11–16
  11. By: Michael Creel; Sonik Mandal; Mohammad Zubair
    Abstract: A graphical processing unit (GPU) is a hardware device normally used to manipulate computer memory for the display of images. GPU computing is the practice of using a GPU device for scientific or general purpose computations that are not necessarily related to the display of images. Many problems in econometrics have a structure that allows for successful use of GPU computing. We explore two examples. The first is simple: repeated evaluation of a likelihood function at different parameter values. The second is a more complicated estimator that involves simulation and nonparametric fitting. We find speedups from 1.5 up to 55.4 times, compared to computations done on a single CPU core. These speedups can be obtained with very little expense, energy consumption, and time dedicated to system maintenance, compared to equivalent performance solutions using CPUs. Code for the examples is provided.
    Keywords: parallel computing; graphical processing unit; GPU; econometrics; simulation-based methods; Bayesian estimation.
    JEL: C13 C14 C15 C33
    Date: 2012–11–12
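The paper's first example, repeated likelihood evaluation at different parameter values, is embarrassingly parallel. A vectorized NumPy sketch shows the structure; GPU array libraries (e.g. CuPy) expose a near-identical interface, though the paper's own code may differ:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(loc=3.0, scale=1.0, size=10000)

# Candidate parameter values; each one is an independent likelihood
# evaluation, so the whole grid runs as a single broadcasted array op.
mus = np.linspace(0.0, 6.0, 601)

# Gaussian log-likelihood in mu (unit variance assumed), up to a constant.
loglik = -0.5 * ((data[:, None] - mus[None, :]) ** 2).sum(axis=0)

mu_hat = mus[np.argmax(loglik)]
```

On a GPU the (observations x parameters) array operation is exactly the kind of workload that yields the large speedups the paper reports.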
  12. By: Katsiaryna Schwarz (Georg-August-University Göttingen); Tatyana Krivobokova (Georg-August-University Göttingen)
    Abstract: This article develops a unified framework to study the (asymptotic) properties of (periodic) spline based estimators, that is, of regression, penalized and smoothing splines. We obtain an explicit form of the Demmler-Reinsch basis of general degree in terms of exponential splines and the corresponding eigenvalues by applying Fourier techniques to periodic smoothers. This allows us to derive exact expressions for the equivalent kernels of all spline estimators and to gain insight into the local and global asymptotic behavior of these estimators.
    Keywords: B-splines; Equivalent kernels; Euler-Frobenius polynomials; Exponential splines; Demmler-Reinsch basis
    Date: 2012–11–20
  13. By: David Card; David Lee; Zhuan Pei; Andrea Weber
    Abstract: We consider nonparametric identification and estimation in a nonseparable model where a continuous regressor of interest is a known, deterministic, but kinked function of an observed assignment variable. This design arises in many institutional settings where a policy variable (such as weekly unemployment benefits) is determined by an observed but potentially endogenous assignment variable (like previous earnings). We provide new results on identification and estimation for these settings, and apply our results to obtain estimates of the elasticity of joblessness with respect to UI benefit rates. We characterize a broad class of models in which a “Regression Kink Design” (RKD, or RK Design) provides valid inferences for the treatment-on-the-treated parameter (Florens et al. (2008)) that would be identified in an ideal randomized experiment. We show that the smooth density condition that is sufficient for identification rules out extreme sorting around the kink, but is compatible with less severe forms of endogeneity. It also places testable restrictions on the distribution of predetermined covariates around the kink point. We introduce a generalization of the RKD – the “fuzzy regression kink design” – that allows for omitted variables in the assignment rule, as well as certain types of measurement errors in the observed values of the assignment variable and the policy variable. We also show how standard local polynomial regression techniques can be adapted to obtain nonparametric estimates for the sharp and fuzzy RKD. We then use a fuzzy RKD approach to study the effect of unemployment insurance benefits on the duration of joblessness in Austria, where the benefit schedule has kinks at the minimum and maximum benefit level. Our estimates suggest that the elasticity of joblessness with respect to the benefit rate is on the order of 1.5.
    JEL: C13 C14 C31
    Date: 2012–11
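The sharp RKD estimand can be illustrated on simulated data: the change in the slope of the outcome's conditional expectation at the kink, divided by the known change in the slope of the policy rule, recovers the causal effect. A minimal sketch with illustrative numbers (not the authors' fuzzy-RKD estimator):

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau = 50000, 1.5
a = rng.uniform(-1, 1, n)                    # assignment variable
b = np.where(a < 0, 0.5 * a, 0.2 * a)        # policy rule with a kink at a = 0
# Outcome: causal effect tau of b, plus a smooth direct effect of a.
y = tau * b + 0.3 * a + 0.05 * a**2 + 0.05 * rng.standard_normal(n)

h = 0.2                                      # bandwidth around the kink
left = (a > -h) & (a < 0)
right = (a >= 0) & (a < h)

# Slopes of E[y|a] on each side of the kink, via local linear fits.
slope_y_left = np.polyfit(a[left], y[left], 1)[0]
slope_y_right = np.polyfit(a[right], y[right], 1)[0]
slope_b_left, slope_b_right = 0.5, 0.2       # known slopes of the policy rule

tau_hat = (slope_y_right - slope_y_left) / (slope_b_right - slope_b_left)
```

The smooth direct effect of the assignment variable cancels out of the slope difference, which is the identification idea behind the design.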
  14. By: Monica Billio (University of Venice, GRETA Assoc. and School for Advanced Studies in Venice); Roberto Casarin (University of Venice, GRETA Assoc. and School for Advanced Studies in Venice); Francesco Ravazzolo (Norges Bank and BI Norwegian Business School); Herman K. van Dijk (Erasmus University Rotterdam, VU University Amsterdam)
    Abstract: We propose a Bayesian combination approach for multivariate predictive densities which relies upon a distributional state space representation of the combination weights. Several specifications of multivariate time-varying weights are introduced with a particular focus on weight dynamics driven by the past performance of the predictive densities and the use of learning mechanisms. In the proposed approach the model set can be incomplete, meaning that all models can be individually misspecified. A Sequential Monte Carlo method is proposed to approximate the filtering and predictive densities. The combination approach is assessed using statistical and utility-based performance measures for evaluating density forecasts. Simulation results indicate that, for a set of linear autoregressive models, the combination strategy is successful in selecting, with probability close to one, the true model when the model set is complete and it is able to detect parameter instability when the model set includes the true model that has generated subsamples of data. For the macro series we find that incompleteness of the models is relatively large in the 1970s, at the beginning of the 1980s and during the recent financial crisis, and lower during the Great Moderation. With respect to returns of the S&P 500 series, we find that an investment strategy using a combination of predictions from professional forecasters and from a white noise model puts more weight on the white noise model at the beginning of the 1990s and switches to giving more weight to the professional forecasts over time.
    Keywords: Density Forecast Combination; Survey Forecast; Bayesian Filtering; Sequential Monte Carlo
    JEL: C11 C15 C53 E37
    Date: 2012–11–07
  15. By: Peter Christoffersen (University of Toronto - Rotman School of Management and CREATES); Christian Dorion (HEC Montreal); Kris Jacobs (University of Houston and Tilburg University); Lotfi Karoui (Goldman, Sachs & Co.)
    Abstract: When the relationship between security prices and state variables in dynamic term structure models is nonlinear, existing studies usually linearize this relationship because nonlinear filtering is computationally demanding. We conduct an extensive investigation of this linearization and analyze the potential of the unscented Kalman filter to properly capture nonlinearities. To illustrate the advantages of the unscented Kalman filter, we analyze the cross section of swap rates, which are relatively simple nonlinear instruments, and cap prices, which are highly nonlinear in the states. An extensive Monte Carlo experiment demonstrates that the unscented Kalman filter is much more accurate than its extended counterpart in filtering the states and forecasting swap rates and caps. Our findings suggest that the unscented Kalman filter may prove to be a good approach for a number of other problems in fixed income pricing with nonlinear relationships between the state vector and the observations, such as the estimation of term structure models using coupon bonds and the estimation of quadratic term structure models.
    Keywords: Kalman filtering, nonlinearity, term structure models, swaps, caps.
    JEL: G12
    Date: 2012–05–14
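The core difference between the extended and unscented filters is how a nonlinear transformation of a Gaussian state is approximated: linearization propagates only the mean, while the unscented transform propagates a small set of sigma points. A scalar sketch with y = exp(x), where the true mean is known in closed form:

```python
import math

# Scalar state x ~ N(mu, sigma^2) pushed through the nonlinearity y = exp(x).
mu, sigma = 0.0, 0.5
true_mean = math.exp(mu + sigma**2 / 2)   # lognormal mean, known exactly

# EKF-style linearization propagates only the mean: E[y] ~ exp(mu).
lin_mean = math.exp(mu)

# Unscented transform: 2n+1 sigma points (n = 1 here), scaling kappa = 2.
kappa, n = 2.0, 1
spread = math.sqrt((n + kappa) * sigma**2)
points = [mu, mu + spread, mu - spread]
weights = [kappa / (n + kappa), 1 / (2 * (n + kappa)), 1 / (2 * (n + kappa))]

ut_mean = sum(w * math.exp(p) for w, p in zip(weights, points))
```

The three sigma points capture the curvature of exp(), so the unscented mean lands much closer to the true lognormal mean than the linearized one, which is the mechanism behind the filtering gains the paper documents.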
  16. By: Ruiz-Gazen, Anne
    Date: 2012
  17. By: Neil R. Ericsson (Board of Governors of the Federal Reserve System); Erica L. Reisman (Board of Governors of the Federal Reserve System)
    Abstract: Global vector autoregressions (GVARs) have several attractive features: multiple potential channels for the international transmission of macroeconomic and financial shocks, a standardized economically appealing choice of variables for each country or region examined, systematic treatment of long-run properties through cointegration analysis, and flexible dynamic specification through vector error correction modeling. Pesaran, Schuermann, and Smith (2009) generate and evaluate forecasts from a paradigm GVAR with 26 countries, based on Dées, di Mauro, Pesaran, and Smith (2007). The current paper empirically assesses the GVAR in Dées, di Mauro, Pesaran, and Smith (2007) with impulse indicator saturation (IIS)—a new generic procedure for evaluating parameter constancy, which is a central element in model-based forecasting. The empirical results indicate substantial room for an improved, more robust specification of that GVAR. Some tests are suggestive of how to achieve such improvements.
    Keywords: cointegration, error correction, forecasting, GVAR, impulse indicator saturation, model design, model evaluation, model selection, parameter constancy, VAR
    JEL: C32 F41
    Date: 2012–11
  18. By: Nyberg, Henri (Department of Political and Economic Studies, and HECER, University of Helsinki); Saikkonen, Pentti (Department of Mathematics and Statistics, and HECER, University of Helsinki, and the Monetary Policy and Research Department of the Bank of Finland)
    Abstract: We propose simulation-based forecasting methods for the noncausal vector autoregressive model proposed by Lanne and Saikkonen (2012). Simulation or numerical methods are required because the prediction problem is generally nonlinear and, therefore, its analytical solution is not available. It turns out that different special cases of the model call for different simulation procedures. Simulation experiments demonstrate that gains in forecasting accuracy are achieved by using the correct noncausal VAR model instead of its conventional causal counterpart. In an empirical application, a noncausal VAR model comprised of U.S. inflation and marginal cost turns out to be superior to the best-fitting conventional causal VAR model in forecasting inflation.
    Keywords: noncausal vector autoregression; forecasting; simulation; importance sampling; inflation
    JEL: C32 C53 E31
    Date: 2012–11–09
  19. By: Taipalus, Katja (Bank of Finland Research)
    Abstract: To promote financial stability, there is a need for an early warning system to signal the formation of asset price misalignments. This research provides two novel methods to accomplish this task. The results show that conventional unit root tests, in modified forms, can be used to construct early warning indicators for bubbles in financial markets. More precisely, the conventional augmented Dickey-Fuller unit root test is shown to provide a basis for two novel bubble indicators. These new indicators are tested via Monte Carlo simulations to analyze their ability to signal emerging unit roots in time series and to compare their power with standard stability and unit root tests. Simulation results concerning these two new stability tests are promising: they seem to be more robust and to have more power in the presence of changing persistence than the standard stability and unit root tests. When these new tests are applied to real US stock market data starting from 1871, they are able to signal most of the consensus bubbles, defined as stock market booms for example by the IMF, and they also flash warning signals far ahead of a crash. Also encouraging are the results in practical applications using equity prices in the UK, Finland and China, as the methods seem to be able to signal most of the consensus bubbles in the data. Finally, these early warning indicators are applied to data for several housing markets. In most cases the indicators seem to work relatively well, indicating bubbles before the periods which, according to the consensus literature, are seen as periods of sizeable upward or downward movements. The scope of application of these early warning indicators could be wide. They could be used, e.g., to help determine the right timing for the start of a monetary tightening cycle or for an increase in countercyclical capital buffers.
    Keywords: asset prices; financial crises; bubbles; indicator; unit-root
    JEL: C15 G01 G12
    Date: 2012–11–15
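The flavor of such an indicator can be sketched with a rolling Dickey-Fuller t-statistic. This is a simplification of the paper's modified tests, with illustrative regimes and window length: under a random walk the statistic stays small, while a mildly explosive bubble regime pushes it sharply upward.

```python
import numpy as np

rng = np.random.default_rng(4)

# Random walk for 300 periods, then a mildly explosive "bubble" regime.
T = 400
e = rng.standard_normal(T)
y = np.empty(T)
y[0] = e[0]
for t in range(1, T):
    rho = 1.0 if t < 300 else 1.03
    y[t] = rho * y[t - 1] + e[t]

def df_tstat(w):
    """t-statistic on rho in the regression dy_t = rho * y_{t-1} + e_t."""
    dy, ylag = np.diff(w), w[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = resid @ resid / (len(dy) - 1)
    return rho / np.sqrt(s2 / (ylag @ ylag))

window = 100
stats = np.array([df_tstat(y[t - window:t]) for t in range(window, T + 1)])
# stats fluctuates near zero or below during the random-walk regime and
# spikes once the window starts covering the explosive regime.
```

An early warning rule then flags periods where the rolling statistic crosses a right-tail critical value.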
  20. By: Chabé-Ferret, Sylvain
    Abstract: This paper compares matching and Difference-In-Difference matching (DID) when estimating the effect of a program on a dynamic outcome. I detail the sources of bias of each estimator in a model of entry into a Job Training Program (JTP) and earnings dynamics that I use as a working example. I show that there are plausible settings in which DID is consistent while matching on past outcomes is not. Unfortunately, the consistency of both estimators relies on conditions that are at odds with properties of earnings dynamics. Using calibration and Monte-Carlo simulations, I show that deviations from the most favorable conditions severely bias both estimators. The behavior of matching is nevertheless less erratic: its bias generally decreases when controlling for more past outcomes and it generally provides a lower bound on the true treatment effect. I finally point to previously unnoticed empirical results that confirm that DID does well, and generally better than matching on past outcomes, at replicating the results of an experimental benchmark.
    Keywords: Matching; Difference-in-Differences; Evaluation of Job Training Programs
    JEL: C21 C23
    Date: 2012–10
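The key contrast can be reproduced in a few lines: when selection is on a permanent individual effect, the pre-program outcome is only a noisy proxy for it, so matching (here reduced to a linear control for the past outcome) remains biased while DID differences the effect out. Parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n, tau = 20000, 1.0
a = rng.standard_normal(n)                     # permanent individual effect
# Selection into training depends on the permanent effect.
D = (a + 0.5 * rng.standard_normal(n) < 0).astype(float)
y0 = a + rng.standard_normal(n)                # pre-program earnings
y1 = a + tau * D + rng.standard_normal(n)      # post-program earnings

# DID: the permanent effect differences out, so the estimate is consistent.
did = (y1 - y0)[D == 1].mean() - (y1 - y0)[D == 0].mean()

# "Matching" on the past outcome (a linear control for y0): y0 measures
# the permanent effect with error, so selection bias survives.
X = np.column_stack([np.ones(n), D, y0])
match = np.linalg.lstsq(X, y1, rcond=None)[0][1]
```

In this design the matching estimate falls well below the true effect, consistent with the paper's point that matching on past outcomes can be badly biased under selection on permanent characteristics; the paper's richer earnings-dynamics models also produce settings where the ranking is less clear-cut.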
  21. By: Wallmeier, Martin; Tauscher, Kathrin
    Abstract: In the three-factor model of Fama and French (1993), portfolio returns are explained by the factors Small Minus Big (SMB) and High Minus Low (HML) which capture returns related to firm capitalization (size) and the book-to-market ratio (B/M). In the standard approach of the model, both the test portfolios and the factor portfolios SMB and HML are formed on the basis of size and B/M. This gives rise to a potential overlapping bias in the time-series regressions. Based on a resampling method and the split sample approach already proposed by Fama and French (1993), we provide an in-depth analysis of the effect of overlapping for a broad sample of European stocks. We find that the overlapping bias is non-negligible, contrary to what seems to be general opinion. As a consequence, the standard approach of applying the three-factor model tends to overestimate the ability of the model to explain the cross-section of stock returns.
    Keywords: Asset pricing; three-factor model; portfolio overlapping; size effect; value premium
    JEL: G12 G14
    Date: 2012–11–16
  22. By: Donal O'Neill (Department of Economics Finance and Accounting, National University of Ireland, Maynooth); Olive Sweetman (Department of Economics Finance and Accounting, National University of Ireland, Maynooth)
    Abstract: This paper uses data on both self-reported and true measures of individual Body Mass Index (BMI) to examine the nature of measurement error in self-reported BMI and to look at the consequences of using self-reported measures when estimating the effect of BMI on economic outcomes. In keeping with previous studies we find that self-reported BMI is subject to significant measurement error and that this error is negatively correlated with the true measure of BMI. In our analysis this non-classical measurement error causes the traditional approach to overestimate the relationship between BMI and both income and education. Furthermore we show that popular alternative estimators that have been adopted to address problems of measurement error in BMI, such as the conditional expectation approach and the instrumental variables approach, also exhibit significant biases.
    Keywords: Obesity, Non-Classical Measurement Error, Auxiliary Data, Instrumental Variables
    JEL: C13 C26 I14
    Date: 2012
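The mechanics of such non-classical error can be sketched in a small simulation: when the reporting error is negatively correlated with true BMI (mean-reverting), the reported measure is effectively a rescaled version of the truth, and the estimated effect is inflated rather than attenuated as classical measurement error would predict. Magnitudes are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
n, beta = 50000, -0.5
bmi = 25 + 4 * rng.standard_normal(n)             # true BMI
# Mean-reverting reporting error: heavier people under-report,
# lighter people over-report (illustrative magnitudes).
reported = bmi - 0.3 * (bmi - 25) + 0.5 * rng.standard_normal(n)
income = 10 + beta * bmi + 2 * rng.standard_normal(n)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

b_true = slope(bmi, income)          # close to beta
b_reported = slope(reported, income) # larger in magnitude than beta
```

Because the reported measure compresses the true variation, the regression coefficient on it exceeds the true effect in magnitude, matching the overestimation direction the paper documents.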
  23. By: Kevin Hoover (Department of Economics and Department of Philosophy, Duke University); Katarina Juselius (Department of Economics, University of Copenhagen)
    Abstract: The paper provides a careful, analytical account of Trygve Haavelmo's unsystematic, but important, use of the analogy between controlled experiments common in the natural sciences and econometric techniques. The experimental analogy forms the linchpin of the methodology for passive observation that he develops in his famous monograph, The Probability Approach in Econometrics (1944). We show how, once the details of the analogy are systematically understood, the experimental analogy can be used to shed light on theory-consistent cointegrated vector autoregression (CVAR) scenario analysis. CVAR scenario analysis can be seen as a clear example of Haavelmo's 'experimental' approach; and, in turn, it can be shown to extend and develop Haavelmo's methodology and to address issues that Haavelmo regarded as unresolved.
    Keywords: Trygve Haavelmo, experiments, passive observation, CVAR, scenario analysis, probability approach, econometrics
    JEL: C30 C31 B41 B31 C50
    Date: 2012–11–05

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.