nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒01‒12
eleven papers chosen by
Sune Karlsson
Orebro University

  1. IV-Based Cointegration Testing in Dependent Panels with Time-Varying Variance By Hanck, Christoph; Demetrescu, Matei; Tarcolea, Adina
  2. Estimating dynamic causal effects with unobserved confounders: a latent class version of the inverse probability weighted estimator By Bartolucci, Francesco; Grilli, Leonardo; Pieroni, Luca
  3. On the estimation of marginal cost By Delis, Manthos D; Iosifidi, Maria; Tsionas, Efthymios
  4. A Mixed Integer Linear Programming Approach to Markov Chain Bootstrapping By Roy Cerqueti; Paolo Falbo; Cristian Pelizzari; Federica Ricca; Andrea Scozzari
  5. Programming identification criteria in simultaneous equation models By Halkos, George; Tsilika, Kyriaki
  6. Constructing weekly returns based on daily stock market data: A puzzle for empirical research? By Baumöhl, Eduard; Lyócsa, Štefan
  7. Wanna Get Away? RD Identification Away from the Cutoff By Joshua Angrist; Miikka Rokkanen
  8. Fast nonparametric classification based on data depth By Lange, Tatjana; Mosler, Karl; Mozharovskyi, Pavlo
  9. On the construction of two-country cointegrated VAR models with an application to the UK and US By Heinlein, Reinhold; Krolzig, Hans-Martin
  10. Nonparametric identification of dynamic treatment effects in competing risks models By Drepper, Bettina; Effraimidis, Georgios
  11. Predicting quarterly aggregates with monthly indicators By Winkelried, Diego

  1. By: Hanck, Christoph; Demetrescu, Matei; Tarcolea, Adina
    Abstract: While the limiting null distributions of cointegration tests are invariant to a certain amount of conditional heteroskedasticity as long as global homoskedasticity conditions are fulfilled, they are certainly affected when the innovations exhibit time-varying volatility. Worse yet, distortions from single units accumulate in panels, where one must anyway pay special attention to dependence among cross-sectional units, be it time-dependent or not. To obtain a panel cointegration test robust to both global heteroskedasticity and cross-unit dependence, we start by adapting the nonlinear instruments method proposed for the Dickey-Fuller test by Chang (Journal of Econometrics 110, 261–292) to an error-correction testing framework. We show that IV-based testing of the null of no error-correction in individual equations results in asymptotic standard normality of the test statistic as long as the t-type statistics are computed with White heteroskedasticity-consistent standard errors. Remarkably, the result holds even in the presence of endogenous regressors, irrespective of the number of integrated covariates, and for any variance profile. Furthermore, a test for the null of no cointegration (in effect, a joint test against no error correction in any equation of each unit) retains the nice properties of the univariate tests. In panels with fixed cross-sectional dimension, both types of test statistics from individual units are shown to be asymptotically independent even in the presence of correlation or cointegration across units, leading to a panel test statistic robust to cross-unit dependence and unconditional heteroskedasticity. The tests perform well in panels of usual dimensions with innovations exhibiting variance breaks and a factor structure.
    JEL: C12 C22 C23
    Date: 2012
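Chang's nonlinear-instrument idea can be sketched in a few lines for the simplest univariate Dickey-Fuller case: instrument the lagged level with an integrable transformation of itself and form a t-ratio with White standard errors. This is an illustrative toy under made-up data, not the authors' panel error-correction procedure; the instrument scale `c` chosen below is an assumption of the sketch.

```python
import numpy as np

# simulate a pure random walk, i.e. the null of a unit root / no error correction
rng = np.random.default_rng(0)
T = 500
y = np.cumsum(rng.standard_normal(T))

ylag, dy = y[:-1], np.diff(y)
# integrable instrument F(y) = y * exp(-c|y|); the sample-size-dependent
# scaling of c is an assumption of this sketch
c = 3.0 / (np.sqrt(len(ylag)) * ylag.std())
z = ylag * np.exp(-c * np.abs(ylag))

rho = (z @ dy) / (z @ ylag)                 # IV estimate in dy_t = rho*y_{t-1} + e_t
resid = dy - rho * ylag
se = np.sqrt(np.sum((z * resid) ** 2)) / abs(z @ ylag)  # White-type standard error
t_iv = rho / se                             # asymptotically N(0,1) under the null
```

The panel statistic in the paper then combines such asymptotically independent unit-level statistics.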
  2. By: Bartolucci, Francesco; Grilli, Leonardo; Pieroni, Luca
    Abstract: We consider estimation of the causal effect of a sequential binary treatment (typically corresponding to a policy or a subsidy in the economic context) on a final outcome, when the treatment assignment at a given occasion depends on the sequence of previous assignments as well as on time-varying confounders. In this case, a popular modeling strategy is represented by Marginal Structural Models; within this approach, the causal effect of the treatment is estimated by the Inverse Probability Weighting (IPW) estimator, which is consistent provided that all the confounders are observed (sequential ignorability). To alleviate this serious limitation, we propose a new estimator, called Latent Class Inverse Probability Weighting (LC-IPW), which is based on two steps: first, a finite mixture model is fitted in order to compute latent-class-specific weights; then, these weights are used to fit the Marginal Structural Model of interest. A simulation study shows that the LC-IPW estimator outperforms the IPW estimator for all the considered configurations, even in cases of no unobserved confounding. The proposed approach is applied to the estimation of the causal effect of wage subsidies on employment, using a dataset of Finnish firms observed for eight years. The LC-IPW estimate confirms the existence of a positive effect, but its magnitude is nearly halved with respect to the IPW estimate, pointing out the substantial role of unobserved confounding in this setting.
    Keywords: Causal inference; Longitudinal design; Mixture model; Potential outcomes; Sequential treatment
    JEL: C52 H25 C33
    Date: 2012–10–08
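The plain IPW step that LC-IPW builds on can be illustrated with a single binary treatment and a fully observed discrete confounder; the sequential, latent-class machinery that is the paper's contribution is omitted, and all numbers below are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.integers(0, 2, n)                          # observed binary confounder
t = (rng.random(n) < np.where(x == 1, 0.7, 0.3)).astype(int)  # confounded treatment
y = 2.0 * t + 1.5 * x + rng.standard_normal(n)     # true causal effect = 2.0

# step 1: propensity scores from within-stratum treatment frequencies
p1 = np.array([t[x == 0].mean(), t[x == 1].mean()])[x]
p_obs = np.where(t == 1, p1, 1.0 - p1)             # P(observed treatment | x)
w = 1.0 / p_obs                                    # inverse probability weights

# step 2: weighted difference in means (Hajek-normalized IPW estimator)
ate = (np.sum(w * t * y) / np.sum(w * t)
       - np.sum(w * (1 - t) * y) / np.sum(w * (1 - t)))
```

A naive unweighted difference in means would be biased upward here, since treated units disproportionately have x = 1.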
  3. By: Delis, Manthos D; Iosifidi, Maria; Tsionas, Efthymios
    Abstract: This article proposes a general empirical method for the estimation of marginal cost of individual firms. The new method employs the smooth coefficient model, which has a number of appealing features when applied to cost functions. The empirical analysis uses data from a unique sample from which we observe marginal cost. We compare the estimates from the proposed method with the true values of marginal cost, and the estimates of marginal cost that we obtain through conventional parametric methods. We show that the proposed method produces estimated values of marginal cost that very closely approximate the true values of marginal cost. In contrast, the results from conventional parametric methods are significantly biased and provide invalid inference.
    Keywords: Estimation of marginal cost; Parametric models; Smooth coefficient model; Actual and simulated data
    JEL: C14 C81 Q40 D24 G21
    Date: 2012–12–01
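A minimal smooth coefficient regression of the kind the abstract describes: the coefficient on output in a linear cost equation varies smoothly with a firm characteristic and is recovered by kernel-weighted least squares. This is a one-regressor toy on simulated data, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
z = rng.uniform(0, 1, n)            # firm characteristic driving marginal cost
q = rng.uniform(1, 5, n)            # output
beta = 1.0 + z ** 2                 # true (smooth) marginal cost schedule
cost = beta * q + 0.1 * rng.standard_normal(n)

def mc_hat(z0, h=0.1):
    """Kernel-weighted least squares estimate of the output coefficient at z0."""
    k = np.exp(-0.5 * ((z - z0) / h) ** 2)   # Gaussian kernel weights
    return np.sum(k * q * cost) / np.sum(k * q ** 2)

# estimated marginal cost for a firm with z = 0.5 (true value 1.25)
est = mc_hat(0.5)
```

Because the coefficient is estimated locally, no global functional form is imposed on how marginal cost varies with z.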
  4. By: Roy Cerqueti (University of Macerata); Paolo Falbo (University of Brescia); Cristian Pelizzari (University of Brescia); Federica Ricca (Sapienza University of Rome); Andrea Scozzari (University Niccolo' Cusano, Rome)
    Abstract: Bootstrapping time series is one of the most acknowledged tools to make forecasts and study the statistical properties of an evolutive phenomenon. The idea underlying this procedure is to replicate the phenomenon on the basis of an observed sample. One of the most important classes of bootstrap procedures is based on the assumption that the sampled phenomenon evolves according to a Markov chain. Such an assumption does not apply when the process takes values in a continuous set, as frequently happens for time series related to economic and financial variables. In this paper we apply Markov chain theory for bootstrapping continuous processes, relying on the idea of discretizing the support of the process and suggesting Markov chains of order k to model the evolution of the time series under study. The difficulty of this approach is that, even for small k, the number of rows of the transition probability matrix is too large, and this leads to a bootstrap procedure of high complexity. In many practical cases such complexity is not fully justified by the information really required to replicate a phenomenon satisfactorily. In this paper we propose a methodology to reduce the number of rows without losing "too much" information on the process evolution. This requires a clustering of the rows that preserves as much as possible the "law" that originally generated the process. The novel aspect of our work is the use of Mixed Integer Linear Programming for formulating and solving the problem of clustering similar rows in the original transition probability matrix. Even if it is well known that this problem is computationally hard, in our application medium size real-life instances were solved efficiently. Our empirical analysis, which is done on two time series of prices from the German and the Spanish electricity markets, shows that the use of the aggregated transition probability matrix does not affect the bootstrapping procedure, since the characteristic features of the original series are maintained in the resampled ones.
    Keywords: Continuous Markov processes; Time series bootstrapping; Mixed Integer Linear Programming; Markov chains
    Date: 2012–11
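The discretize-then-bootstrap idea can be sketched as follows for a first-order chain; the MILP clustering of transition-matrix rows, which is the paper's contribution, is omitted, and the bin count is an arbitrary choice for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(1000))        # continuous-valued series

# discretize the support into m states using quantile bins
m = 8
edges = np.quantile(x, np.linspace(0, 1, m + 1)[1:-1])
s = np.digitize(x, edges)                       # state index 0..m-1

# estimate the first-order transition probability matrix
P = np.full((m, m), 1e-12)                      # tiny prior mass avoids zero rows
for a, b in zip(s[:-1], s[1:]):
    P[a, b] += 1.0
P /= P.sum(axis=1, keepdims=True)

# bootstrap a new state path and map states back to within-bin means
means = np.array([x[s == j].mean() for j in range(m)])
path = [s[0]]
for _ in range(999):
    path.append(rng.choice(m, p=P[path[-1]]))
x_boot = means[np.array(path)]
```

With a chain of order k the state space grows to m**k rows, which is exactly the explosion the paper's row clustering is designed to tame.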
  5. By: Halkos, George; Tsilika, Kyriaki
    Abstract: Examining the identification problem in the context of a linear econometric model can be a tedious task. The order condition of identifiability is easy to compute, though difficult to remember. The application of the rank condition, due to its complicated definition and its computational demands, is time-consuming and carries a high risk of errors. Furthermore, possible miscalculations could lead to wrong identification results that cannot be revealed by other indications. Thus, a safe way to test identification criteria is to use computer software. Specialized econometric software can off-load some of the required computations, but forming and verifying the identification criteria is still up to the user. In our identification study we use the program editor of a free computer algebra system, Xcas. We present a routine that tests various identification conditions and classifies the equations under study as «under-identified», «just-identified», «over-identified» or «unidentified» in just one entry.
    Keywords: Simultaneous equation models; order condition of identifiability; rank condition of identifiability; computer algebra system Xcas
    JEL: C51 C10 C63 C30
    Date: 2012
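The order condition mentioned in the abstract is simple to automate. A sketch for a single equation (in Python rather than the Xcas system the authors use; the rank condition, which requires the structural coefficient matrices, is not covered here):

```python
def order_condition(K, k_i, m_i):
    """Classify equation i of a simultaneous equation system by the order condition.

    K   : predetermined (exogenous) variables in the whole system
    k_i : predetermined variables included in equation i
    m_i : endogenous variables included in equation i
    The condition compares excluded predetermined variables, K - k_i,
    with the m_i - 1 included endogenous regressors needing instruments.
    """
    excluded = K - k_i
    needed = m_i - 1
    if excluded < needed:
        return "under-identified"
    if excluded == needed:
        return "just-identified"
    return "over-identified"
```

For example, in a system with K = 6 predetermined variables, an equation including 3 endogenous and 4 predetermined variables is just-identified (2 exclusions for 2 endogenous regressors).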
  6. By: Baumöhl, Eduard; Lyócsa, Štefan
    Abstract: Weekly equity returns are commonly used in empirical research to avoid the non-synchronicity of daily data. An empirical analysis is used to show that the statistical properties of a weekly stock return series depend strongly on the method used to construct it. Three ways of constructing weekly returns are considered: (i) Wednesday-to-Wednesday, (ii) Friday-to-Friday, and (iii) averaging daily observations within the corresponding week. Considerable distinctions are found between these procedures using data on the S&P 500 and DAX stock market indices. Differences arise in unit-root tests, identified volatility breaks, unconditional correlations, and ARMA-GARCH and DCC MV-GARCH models. Our findings provide evidence that the method employed for constructing weekly stock returns can have a decisive effect on the outcomes of empirical studies.
    Keywords: stock markets; weekly returns; statistical properties
    JEL: C10 G10 C80
    Date: 2012–12–26
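The three construction methods can be made concrete on a toy daily price series. With artificial log-linear prices all three conventions coincide exactly, which is precisely what fails in real data; the dates and growth rate below are illustrative only.

```python
from datetime import date, timedelta
import math

# toy daily prices over four weeks, growing log-linearly (0.1% per trading day)
start = date(2012, 1, 2)                       # a Monday
days = [start + timedelta(d) for d in range(28)
        if (start + timedelta(d)).weekday() < 5]
prices = {d: 100.0 * math.exp(0.001 * i) for i, d in enumerate(days)}

def weekly_returns(weekday):
    """Log-returns between consecutive occurrences of a weekday (2=Wed, 4=Fri)."""
    obs = [prices[d] for d in days if d.weekday() == weekday]
    return [math.log(b / a) for a, b in zip(obs, obs[1:])]

wed = weekly_returns(2)   # (i)  Wednesday-to-Wednesday
fri = weekly_returns(4)   # (ii) Friday-to-Friday

# (iii) average the daily prices within each ISO week, then take log-differences
weeks = sorted({d.isocalendar()[1] for d in days})
avg = [sum(prices[d] for d in days if d.isocalendar()[1] == w) /
       sum(1 for d in days if d.isocalendar()[1] == w) for w in weeks]
avg_ret = [math.log(b / a) for a, b in zip(avg, avg[1:])]
```

On this artificial series every weekly return equals 0.005 under all three conventions; the paper's point is that on actual index data the resulting series differ materially.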
  7. By: Joshua Angrist; Miikka Rokkanen
    Abstract: In the canonical regression discontinuity (RD) design for applicants who face an award or admissions cutoff, causal effects are nonparametrically identified for those near the cutoff. The impact of treatment on inframarginal applicants is also of interest, but identification of such effects requires stronger assumptions than are required for identification at the cutoff. This paper discusses RD identification away from the cutoff. Our identification strategy exploits the availability of dependent variable predictors other than the running variable. Conditional on these predictors, the running variable is assumed to be ignorable. This identification strategy is illustrated with data on applicants to Boston exam schools. Functional-form-based extrapolation generates unsatisfying results in this context, either noisy or not very robust. By contrast, identification based on RD-specific conditional independence assumptions produces reasonably precise and surprisingly robust estimates of the effects of exam school attendance on inframarginal applicants. These estimates suggest that the causal effects of exam school attendance for 9th grade applicants with running variable values well away from admissions cutoffs differ little from those for applicants with values that put them on the margin of acceptance. An extension to fuzzy designs is shown to identify causal effects for compliers away from the cutoff.
    JEL: C26 C31 C36 I21 I24 I28 J24
    Date: 2012–12
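The conditional-independence idea can be sketched with simulated data: given a predictor of the outcome, compare treated and untreated units within strata of that predictor instead of extrapolating a functional form in the running variable. The coarse stratification estimator below is a stand-in for the paper's approach, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
x = rng.standard_normal(n)                       # outcome predictor (e.g. baseline score)
r = 0.8 * x + 0.6 * rng.standard_normal(n)       # running variable, correlated with x
d = (r >= 0).astype(int)                         # sharp assignment at the cutoff 0
y = 1.0 * d + 1.0 * x + rng.standard_normal(n)   # constant true effect of 1.0

# compare treated and untreated units within fine strata of x, invoking
# ignorability of the running variable conditional on x
edges = np.quantile(x, np.linspace(0, 1, 51)[1:-1])
bins = np.digitize(x, edges)
effects, weights = [], []
for b in np.unique(bins):
    m = bins == b
    if 0 < d[m].sum() < m.sum():                 # stratum must contain both groups
        effects.append(y[m][d[m] == 1].mean() - y[m][d[m] == 0].mean())
        weights.append(m.sum())
tau_hat = float(np.average(effects, weights=weights))
```

Strata far from the cutoff contain almost only treated (or only untreated) units, so in practice the overlap in x governs how far from the cutoff such estimates reach.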
  8. By: Lange, Tatjana; Mosler, Karl; Mozharovskyi, Pavlo
    Abstract: A new procedure, called DD-procedure, is developed to solve the problem of classifying d-dimensional objects into q ≥ 2 classes. The procedure is completely nonparametric; it uses q-dimensional depth plots and a very efficient algorithm for discrimination analysis in the depth space [0, 1]^q. Specifically, the depth is the zonoid depth, and the algorithm is the α-procedure. In case of more than two classes, several binary classifications are performed and a majority rule is applied. Special treatments are discussed for outsiders, that is, data having zero depth vector. The DD-classifier is applied to simulated as well as real data, and the results are compared with those of similar procedures that have been recently proposed. In most cases the new procedure has comparable error rates, but is much faster than other classification approaches, including the SVM.
    Keywords: Alpha-procedure; zonoid depth; DD-plot; pattern recognition; supervised learning; misclassification rate
    Date: 2012
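The depth-based classification idea is easy to sketch with a simpler depth notion than the zonoid depth used in the paper: below, Mahalanobis depth and a max-depth rule stand in for the zonoid depth and the α-procedure, so this is an illustration of the general DD idea rather than the authors' classifier.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
a = rng.standard_normal((n, 2))                        # class A around (0, 0)
b = rng.standard_normal((n, 2)) + np.array([3.0, 3.0]) # class B around (3, 3)

def mahalanobis_depth(pts, sample):
    """Depth of each point w.r.t. a sample: 1 / (1 + squared Mahalanobis distance)."""
    mu = sample.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(sample.T))
    diff = pts - mu
    return 1.0 / (1.0 + np.einsum('ij,jk,ik->i', diff, S_inv, diff))

def classify(pts):
    # max-depth rule in the DD-plot: assign each point to the class
    # with respect to which it lies deeper
    return (mahalanobis_depth(pts, b) > mahalanobis_depth(pts, a)).astype(int)

acc = np.concatenate([classify(a) == 0, classify(b) == 1]).mean()
```

The two depth values per point are exactly the coordinates of the DD-plot; the paper's α-procedure learns a separating rule in that low-dimensional depth space rather than in the original feature space.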
  9. By: Heinlein, Reinhold; Krolzig, Hans-Martin
    Abstract: In this paper we introduce a cointegrated VAR modelling approach for two-country macro dynamics. In order to tackle the curse of dimensionality resulting from the number of variables in multi-country models, we investigate the applicability of the approach by Aoki (1981) frequently used in economic theory. Aoki showed that for a system of linear differential equations, the assumption of country symmetry allows one to decouple the dynamics of country averages and country differences into two autonomous subsystems. While this approach cannot be applied straightforwardly to economic time series, we generalize Aoki's approach and demonstrate how it can be utilized for the determination of the long-run properties of the system. Symmetry is rejected for the short-run, thus for the given cointegration vectors the final modelling stage is based on the full two-country system. The econometric modelling approach is then enhanced by a general-to-specific model selection procedure, where the VAR based cointegration analysis is combined with a graph-theoretic search for instantaneous causal relations and an automatic general-to-specific reduction of the vector equilibrium correction model. As an application we build up a macro-econometric two-country model for the UK and the US. The empirical study focusses on the effects of monetary policy on the $/£ exchange rate. We find that interest rate shocks in the UK cause much stronger exchange rate effects than an unanticipated interest rate change by the Fed.
    JEL: C22 C32 C50
    Date: 2012
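Aoki's decoupling result is, at its core, a small matrix fact: if the coefficient matrix treats the two countries symmetrically, rotating the system into averages and differences block-diagonalizes the dynamics. A minimal numeric check for a two-variable VAR(1), with made-up coefficients:

```python
import numpy as np

# symmetric two-country coefficient matrix: own effect a, cross-country effect b
a, b = 0.6, 0.2
A = np.array([[a, b],
              [b, a]])

# rotate (y_country1, y_country2) into the average and the difference
T = np.array([[0.5, 0.5],
              [1.0, -1.0]])
A_tilde = T @ A @ np.linalg.inv(T)
# under symmetry A_tilde is diagonal: the average evolves with coefficient
# a + b, the difference with a - b, and the two subsystems are autonomous
```

The paper's point is that exact symmetry fails in the short-run dynamics of real data, so the decoupling is exploited only for the long-run (cointegration) properties.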
  10. By: Drepper, Bettina; Effraimidis, Georgios
    Abstract: We introduce a dynamic treatment to the mixed proportional hazard competing risks model and allow for selection on unobservables. Our model can, for example, be used to evaluate the effect of benefit sanctions on the transition rate out of unemployment when more than one exit risk is of interest. Imposing a benefit sanction will influence the transition rate to employment. However, the sanction can also affect the decision of an individual to exit the labor force. The latter effect is often ignored in empirical work. In this paper we present a general model which allows one to identify different effects of a treatment such as a sanction on several competing exit risks such as 'finding work' vs. 'exiting the labor force'. Our approach exploits the timing at which the individual enters into treatment by adding the hazard rate of the duration to treatment as an additional equation to the competing risks model. We present a new identification result for this model for single-spell duration data. Furthermore, we intend to include an empirical application in this paper to illustrate the estimation procedure.
    JEL: C41 C31 J64
    Date: 2012
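The timing-of-events structure the abstract describes can be illustrated by simulation: a shared frailty drives selection on unobservables, and a randomly timed treatment scales the two competing hazards by different factors. All distributional choices and parameter values below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50000
v = rng.gamma(1.0, 1.0, n)                  # unobserved frailty shared by all hazards

lam_work, lam_exit, lam_treat = 0.10, 0.05, 0.08   # constant baseline hazards
delta_work, delta_exit = 1.5, 1.3                  # hazard shifts caused by the sanction
t_treat = rng.exponential(1.0 / (lam_treat * v))   # timing of the sanction

def draw_duration(lam, delta, frailty, t_star):
    """Duration with hazard lam*frailty before t_star and lam*delta*frailty after,
    drawn by inverting the piecewise-linear integrated hazard."""
    u = rng.exponential(1.0, len(frailty))   # unit-exponential integrated hazard
    pre = lam * frailty * t_star             # hazard accumulated before treatment
    return np.where(u < pre, u / (lam * frailty),
                    t_star + (u - pre) / (lam * delta * frailty))

t_work = draw_duration(lam_work, delta_work, v, t_treat)
t_exit = draw_duration(lam_exit, delta_exit, v, t_treat)
share_work = (t_work < t_exit).mean()       # only the first exit risk is observed
```

Only the minimum of the two latent durations is observed per spell, which is exactly why separating the treatment's effect on each risk requires the identification argument of the paper.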
  11. By: Winkelried, Diego (Central Reserve Bank of Peru)
    Abstract: Many important macroeconomic variables measuring the state of the economy are sampled quarterly and with publication lags, although potentially useful predictors are observed at a higher frequency almost in real time. This situation poses the challenge of how to best use the available data to infer the state of the economy. This paper explores the merits of the so-called Mixed Data Sampling (MIDAS) approach that directly exploits the information content of monthly indicators to predict quarterly Peruvian macroeconomic aggregates. To this end, we propose a simple extension, based on the notion of smoothness priors in a distributed lag model, that weakens the restrictions the traditional MIDAS approach imposes on the data to achieve parsimony. We also discuss the workings of an averaging scheme that combines predictions coming from non-nested specifications. It is found that the MIDAS approach is able to timely identify, from monthly information, important signals of the dynamics of the quarterly aggregates. Thus, it can deliver significant gains in prediction accuracy, compared to the performance of competing models that use exclusively quarterly information.
    Keywords: Mixed-frequency data, MIDAS, model averaging, nowcasting, backcasting
    JEL: C22 C53 E27
    Date: 2012–12
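The core MIDAS ingredient is a parsimonious weight function over the high-frequency lags, so that twelve monthly coefficients are governed by two parameters. A sketch with exponential Almon weights (the parameter values are illustrative, and this is the standard MIDAS device rather than the paper's smoothness-prior extension):

```python
import math

def exp_almon_weights(K, theta1, theta2):
    """Exponential Almon lag weights for K high-frequency lags, normalized to sum to one."""
    raw = [math.exp(theta1 * k + theta2 * k ** 2) for k in range(1, K + 1)]
    s = sum(raw)
    return [r / s for r in raw]

# aggregate 12 monthly observations of an indicator into one quarterly predictor
w = exp_almon_weights(12, 0.1, -0.05)
monthly = [1.0] * 12                      # flat indicator, for illustration only
x_quarterly = sum(wi * mi for wi, mi in zip(w, monthly))
```

In estimation, theta1 and theta2 are fit jointly with the regression coefficients; the paper's smoothness-prior extension instead relaxes this tight functional form while keeping the parameter count low.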

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.