
on Econometrics 
By:  Hanck, Christoph; Demetrescu, Matei; Tarcolea, Adina 
Abstract:  While the limiting null distributions of cointegration tests are invariant to a certain amount of conditional heteroskedasticity as long as global homoskedasticity conditions are fulfilled, they are certainly affected when the innovations exhibit time-varying volatility. Worse yet, distortions from single units accumulate in panels, where one must anyway pay special attention to dependence among cross-sectional units, be it time-dependent or not. To obtain a panel cointegration test robust to both global heteroskedasticity and cross-unit dependence, we start by adapting the nonlinear instruments method proposed for the Dickey-Fuller test by Chang (Journal of Econometrics 110, 261-292) to an error-correction testing framework. We show that IV-based testing of the null of no error correction in individual equations results in asymptotic standard normality of the test statistic as long as the t-type statistics are computed with White heteroskedasticity-consistent standard errors. Remarkably, the result holds even in the presence of endogenous regressors, irrespective of the number of integrated covariates, and for any variance profile. Furthermore, a test for the null of no cointegration (in effect, a joint test against no error correction in any equation of each unit) retains the nice properties of the univariate tests. In panels with fixed cross-sectional dimension, both types of test statistics from individual units are shown to be asymptotically independent even in the presence of correlation or cointegration across units, leading to a panel test statistic robust to cross-unit dependence and unconditional heteroskedasticity. The tests perform well in panels of usual dimensions with innovations exhibiting variance breaks and a factor structure.
JEL:  C12 C22 C23 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:zbw:vfsc12:62072&r=ecm 
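The instrumenting step behind this kind of test can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: a Dickey-Fuller-type IV regression using Chang's integrable instrument F(x) = x·exp(-c|x|), where the tuning constant c, the sample size, and the simple White-type standard error are all illustrative choices.

```python
import numpy as np

def iv_t_stat(y, c=0.5):
    """t-statistic for rho in  dy_t = rho*y_{t-1} + u_t, instrumenting
    y_{t-1} with the integrable transform F(x) = x*exp(-c|x|) and using
    a White heteroskedasticity-consistent standard error."""
    dy = np.diff(y)
    ylag = y[:-1]
    z = ylag * np.exp(-c * np.abs(ylag))                 # nonlinear instrument
    rho = (z @ dy) / (z @ ylag)                          # just-identified IV estimate
    u = dy - rho * ylag                                  # residuals
    se = np.sqrt(np.sum((z * u) ** 2)) / abs(z @ ylag)   # White-type SE
    return rho / se

rng = np.random.default_rng(42)
y = np.cumsum(rng.standard_normal(500))                  # random walk: null is true
t_stat = iv_t_stat(y)
```

Under the null the statistic is approximately standard normal; under stationarity (strong error correction) it diverges to minus infinity.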
By:  Bartolucci, Francesco; Grilli, Leonardo; Pieroni, Luca 
Abstract:  We consider estimation of the causal effect of a sequential binary treatment (typically corresponding to a policy or a subsidy in the economic context) on a final outcome, when the treatment assignment at a given occasion depends on the sequence of previous assignments as well as on time-varying confounders. In this case, a popular modeling strategy is represented by Marginal Structural Models; within this approach, the causal effect of the treatment is estimated by the Inverse Probability Weighting (IPW) estimator, which is consistent provided that all the confounders are observed (sequential ignorability). To alleviate this serious limitation, we propose a new estimator, called Latent Class Inverse Probability Weighting (LC-IPW), which is based on two steps: first, a finite mixture model is fitted in order to compute latent-class-specific weights; then, these weights are used to fit the Marginal Structural Model of interest. A simulation study shows that the LC-IPW estimator outperforms the IPW estimator for all the considered configurations, even in cases of no unobserved confounding. The proposed approach is applied to the estimation of the causal effect of wage subsidies on employment, using a dataset of Finnish firms observed for eight years. The LC-IPW estimate confirms the existence of a positive effect, but its magnitude is nearly halved with respect to the IPW estimate, pointing out the substantial role of unobserved confounding in this setting.
Keywords:  Causal inference; Longitudinal design; Mixture model; Potential outcomes; Sequential treatment 
JEL:  C52 H25 C33 
Date:  2012–10–08 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:43430&r=ecm 
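The core IPW weighting step can be sketched as follows. This is a minimal illustration, not the authors' LC-IPW code: the propensities `p_hat` would in practice come from a fitted model (latent-class-specific in the LC-IPW case), and the toy values below are made up.

```python
import numpy as np

def ipw_weights(A, p_hat):
    """Inverse probability weights for a sequential binary treatment.
    A:     (n, T) array of 0/1 treatment indicators
    p_hat: (n, T) array of estimated P(A_it = 1 | past), from any model
    Returns the product over occasions of 1 / P(observed treatment)."""
    prob_obs = np.where(A == 1, p_hat, 1.0 - p_hat)
    return 1.0 / prob_obs.prod(axis=1)

# toy example: 4 units, 2 occasions, hypothetical fitted propensities
A = np.array([[1, 0], [0, 0], [1, 1], [0, 1]])
p = np.array([[0.6, 0.3], [0.4, 0.2], [0.7, 0.8], [0.3, 0.9]])
w = ipw_weights(A, p)
```

The weighted units then enter the Marginal Structural Model fit as pseudo-observations.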
By:  Delis, Manthos D; Iosifidi, Maria; Tsionas, Efthymios 
Abstract:  This article proposes a general empirical method for the estimation of the marginal cost of individual firms. The new method employs the smooth coefficient model, which has a number of appealing features when applied to cost functions. The empirical analysis uses data from a unique sample in which we observe marginal cost. We compare the estimates from the proposed method with the true values of marginal cost, and with the estimates of marginal cost that we obtain through conventional parametric methods. We show that the proposed method produces estimates that very closely approximate the true values of marginal cost. In contrast, the results from conventional parametric methods are significantly biased and provide invalid inference.
Keywords:  Estimation of marginal cost; Parametric models; Smooth coefficient model; Actual and simulated data 
JEL:  C14 C81 Q40 D24 G21 
Date:  2012–12–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:43514&r=ecm 
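A minimal sketch of a smooth (varying) coefficient estimator of the kind the abstract refers to: a local-constant, kernel-weighted least squares fit at a point z0. The bandwidth, kernel, and data below are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def smooth_coef(y, X, z, z0, h=0.25):
    """Kernel-weighted least squares estimate of beta(z0) in the smooth
    coefficient model  y_i = X_i' beta(z_i) + e_i  (local-constant fit)."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)   # Gaussian kernel weights
    Xw = X * w[:, None]                      # weighted regressors
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

# sanity check: with constant coefficients and no noise, any z0 recovers them
rng = np.random.default_rng(1)
n = 300
X = rng.standard_normal((n, 2))
z = rng.uniform(0.0, 1.0, n)
y = X @ np.array([1.0, 2.0])
b = smooth_coef(y, X, z, z0=0.5)
```

Repeating the fit over a grid of z0 values traces out the full coefficient curves beta(z).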
By:  Roy Cerqueti (University of Macerata); Paolo Falbo (University of Brescia); Cristian Pelizzari (University of Brescia); Federica Ricca (Sapienza University of Rome); Andrea Scozzari (University Niccolo' Cusano, Rome) 
Abstract:  Bootstrapping time series is one of the most acknowledged tools for making forecasts and studying the statistical properties of an evolving phenomenon. The idea underlying this procedure is to replicate the phenomenon on the basis of an observed sample. One of the most important classes of bootstrap procedures is based on the assumption that the sampled phenomenon evolves according to a Markov chain. Such an assumption does not apply when the process takes values in a continuous set, as frequently happens for time series related to economic and financial variables. In this paper we apply Markov chain theory to the bootstrapping of continuous processes, relying on the idea of discretizing the support of the process and suggesting Markov chains of order k to model the evolution of the time series under study. The difficulty of this approach is that, even for small k, the number of rows of the transition probability matrix is too large, and this leads to a bootstrap procedure of high complexity. In many practical cases such complexity is not fully justified by the information really required to replicate a phenomenon satisfactorily. In this paper we propose a methodology to reduce the number of rows without losing ``too much'' information on the process evolution. This requires a clustering of the rows that preserves as much as possible the ``law'' that originally generated the process. The novel aspect of our work is the use of Mixed Integer Linear Programming for formulating and solving the problem of clustering similar rows in the original transition probability matrix. Even though this problem is known to be computationally hard, in our application medium-size real-life instances were solved efficiently.
Our empirical analysis, conducted on two time series of prices from the German and Spanish electricity markets, shows that the use of the aggregated transition probability matrix does not affect the bootstrapping procedure, since the characteristic features of the original series are maintained in the resampled ones.
Keywords:  Continuous Markov processes, Time series bootstrapping, Mixed Integer Linear Programming, Markov chains
Date:  2012–11 
URL:  http://d.repec.org/n?u=RePEc:mcr:wpdief:wpaper00067&r=ecm 
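The basic discretize-and-resample step underlying this kind of Markov chain bootstrap can be sketched as follows (order 1 only; the paper's MILP-based clustering of transition-matrix rows is not shown, and the quantile binning and toy series are illustrative choices).

```python
import numpy as np

def markov_bootstrap(x, n_states=5, size=None, seed=0):
    """Bootstrap a continuous series via a discretized order-1 Markov chain:
    bin the support, estimate the transition matrix, then resample a state
    path and draw values from the observed values in each state."""
    rng = np.random.default_rng(seed)
    size = len(x) if size is None else size
    # discretize the support into quantile-based states
    edges = np.quantile(x, np.linspace(0, 1, n_states + 1))
    states = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_states - 1)
    # estimate transition probabilities from observed state pairs
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1)
    # resample a state path and map states back to observed values
    pool = [x[states == s] for s in range(n_states)]
    s = states[0]
    out = []
    for _ in range(size):
        out.append(rng.choice(pool[s]))
        s = rng.choice(n_states, p=P[s]) if P[s].sum() > 0 else s
    return np.array(out)

x = np.sin(np.linspace(0.0, 20.0, 200))   # toy continuous series
xb = markov_bootstrap(x, n_states=5, size=100)
```

An order-k version would index rows by k-tuples of states, which is exactly where the row count explodes and the paper's clustering becomes useful.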
By:  Halkos, George; Tsilika, Kyriaki 
Abstract:  Examining the identification problem in the context of a linear econometric model can be a tedious task. The order condition of identifiability is easy to compute, though difficult to remember. The application of the rank condition, due to its complicated definition and its computational demands, is time consuming and carries a high risk of errors. Furthermore, possible miscalculations could lead to wrong identification results, which cannot be revealed by other indications. Thus, a safe way to test identification criteria is to make use of computer software. Specialized econometric software can offload some of the required computations, but the formation and verification of the identification criteria are still up to the user. In our identification study we use the program editor of a free computer algebra system, Xcas. We present a routine that tests various identification conditions and classifies the equations under study as «under-identified», «just-identified», «over-identified» or «unidentified», in just one entry.
Keywords:  Simultaneous equation models; order condition of identifiability; rank condition of identifiability; computer algebra system Xcas 
JEL:  C51 C10 C63 C30 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:43467&r=ecm 
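The order condition itself is easy to mechanize. A hedged sketch of that single check (the paper's Xcas routine also verifies the rank condition, which is not shown here):

```python
def order_condition(K, k, m):
    """Order condition of identifiability for one equation of a
    simultaneous system.
    K: predetermined variables in the whole system
    k: predetermined variables included in the equation
    m: endogenous variables included in the equation
    The equation needs at least m - 1 excluded predetermined variables."""
    excluded = K - k
    needed = m - 1
    if excluded < needed:
        return "under-identified"
    if excluded == needed:
        return "just-identified"
    return "over-identified"
```

For example, in a system with K = 3 predetermined variables, an equation containing m = 2 endogenous and k = 2 predetermined variables excludes exactly one predetermined variable and is just-identified by the order condition.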
By:  Baumöhl, Eduard; Lyócsa, Štefan 
Abstract:  The weekly returns of equities are commonly used in empirical research to avoid the nonsynchronicity of daily data. An empirical analysis is used to show that the statistical properties of a weekly stock return series strongly depend on the method used to construct the series. Three types of weekly return construction are considered: (i) Wednesday-to-Wednesday, (ii) Friday-to-Friday, and (iii) averaging daily observations within the corresponding week. Considerable distinctions are found between these procedures using data from the S&P 500 and DAX stock market indices. Differences occur in unit-root tests, identified volatility breaks, unconditional correlations, and ARMA-GARCH and DCC MV-GARCH models as well. Our findings provide evidence that the method employed for constructing weekly stock returns can have a decisive effect on the outcomes of empirical studies.
Keywords:  stock markets; weekly returns; statistical properties 
JEL:  C10 G10 C80 
Date:  2012–12–26 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:43431&r=ecm 
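The three construction methods can be sketched directly; the price series below is an illustrative deterministic toy, not the S&P 500 or DAX data used in the paper.

```python
import numpy as np
import pandas as pd

def weekly_returns(prices, method="wed"):
    """Three ways to build weekly log returns from daily closing prices.
    prices: pd.Series indexed by business dates."""
    logp = np.log(prices)
    if method == "wed":        # (i) Wednesday-to-Wednesday
        p = logp[logp.index.dayofweek == 2]
        return p.diff().dropna()
    if method == "fri":        # (ii) Friday-to-Friday
        p = logp[logp.index.dayofweek == 4]
        return p.diff().dropna()
    if method == "avg":        # (iii) difference of weekly averages
        p = logp.resample("W").mean()
        return p.diff().dropna()
    raise ValueError(method)

# toy data: 12 weeks of business days with constant daily log growth 0.06/59
idx = pd.bdate_range("2012-01-02", periods=60)
prices = pd.Series(100 * np.exp(np.linspace(0.0, 0.06, 60)), index=idx)
r = weekly_returns(prices, "wed")
```

On real data the three series differ in length, timing, and volatility, which is exactly the point the paper makes.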
By:  Joshua Angrist; Miikka Rokkanen 
Abstract:  In the canonical regression discontinuity (RD) design for applicants who face an award or admissions cutoff, causal effects are nonparametrically identified for those near the cutoff. The impact of treatment on inframarginal applicants is also of interest, but identification of such effects requires stronger assumptions than are required for identification at the cutoff. This paper discusses RD identification away from the cutoff. Our identification strategy exploits the availability of dependent variable predictors other than the running variable. Conditional on these predictors, the running variable is assumed to be ignorable. This identification strategy is illustrated with data on applicants to Boston exam schools. Functional-form-based extrapolation generates unsatisfying results in this context, either noisy or not very robust. By contrast, identification based on RD-specific conditional independence assumptions produces reasonably precise and surprisingly robust estimates of the effects of exam school attendance on inframarginal applicants. These estimates suggest that the causal effects of exam school attendance for 9th grade applicants with running variable values well away from admissions cutoffs differ little from those for applicants with values that put them on the margin of acceptance. An extension to fuzzy designs is shown to identify causal effects for compliers away from the cutoff.
JEL:  C26 C31 C36 I21 I24 I28 J24 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:18662&r=ecm 
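The regression-adjustment idea behind a conditional independence assumption can be sketched in a simple linear toy model. The variable names and data-generating process are hypothetical, and the paper's actual RD-specific implementation is considerably richer.

```python
import numpy as np

def cia_effect(y, d, x):
    """Treatment effect under a conditional independence assumption:
    conditional on the predictor x, treatment d is taken as ignorable, so
    the coefficient on d in a regression of y on (1, x, d) identifies it."""
    X = np.column_stack([np.ones_like(x), x, d])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2]               # coefficient on the treatment dummy

rng = np.random.default_rng(7)
n = 400
x = rng.standard_normal(n)                    # e.g. a baseline test score
d = (rng.uniform(size=n) < 0.5).astype(float) # treatment indicator
y = 1.0 + 2.0 * x + 0.5 * d                   # noiseless: true effect is 0.5
effect = cia_effect(y, d, x)
```

In the RD setting, ignorability of the running variable given such predictors is what licenses extrapolating the effect away from the cutoff.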
By:  Lange, Tatjana; Mosler, Karl; Mozharovskyi, Pavlo 
Abstract:  A new procedure, called the DDα-procedure, is developed to solve the problem of classifying d-dimensional objects into q ≥ 2 classes. The procedure is completely nonparametric; it uses q-dimensional depth plots and a very efficient algorithm for discrimination analysis in the depth space [0,1]^q. Specifically, the depth is the zonoid depth, and the algorithm is the α-procedure. In the case of more than two classes, several binary classifications are performed and a majority rule is applied. Special treatments are discussed for outsiders, that is, data having a zero depth vector. The DDα-classifier is applied to simulated as well as real data, and the results are compared with those of similar procedures that have recently been proposed. In most cases the new procedure has comparable error rates, but it is much faster than other classification approaches, including the SVM.
Keywords:  Alpha-procedure, zonoid depth, DD-plot, pattern recognition, supervised learning, misclassification rate
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:zbw:ucdpse:112&r=ecm 
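A sketch of depth-based DD-plot classification. Since zonoid depth has no short closed form, the Mahalanobis depth is substituted here as a stand-in, and a naive deeper-class rule replaces the paper's α-procedure in depth space.

```python
import numpy as np

def mahalanobis_depth(pts, cloud):
    """Mahalanobis depth of rows of pts w.r.t. a data cloud; a simple
    closed-form stand-in for the zonoid depth used in the paper."""
    mu = cloud.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(cloud.T))
    a = pts - mu
    d2 = np.einsum("ij,jk,ik->i", a, Sinv, a)   # squared Mahalanobis distance
    return 1.0 / (1.0 + d2)

def dd_classify(pts, class0, class1):
    """DD-plot rule: map each point to its depth pair (d0, d1) and assign
    the deeper class (naive 45-degree rule instead of the alpha-procedure)."""
    d0 = mahalanobis_depth(pts, class0)
    d1 = mahalanobis_depth(pts, class1)
    return (d1 > d0).astype(int)

rng = np.random.default_rng(3)
class0 = rng.standard_normal((100, 2))          # training cloud around (0, 0)
class1 = rng.standard_normal((100, 2)) + 5.0    # training cloud around (5, 5)
labels = dd_classify(np.array([[0.0, 0.0], [5.0, 5.0]]), class0, class1)
```

Points with a zero depth vector (outsiders) would need the special treatment the abstract mentions; the depth used here is never exactly zero, which sidesteps that issue in the sketch.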
By:  Heinlein, Reinhold; Krolzig, HansMartin 
Abstract:  In this paper we introduce a cointegrated VAR modelling approach for two-country macro dynamics. In order to tackle the curse of dimensionality resulting from the number of variables in multi-country models, we investigate the applicability of the approach of Aoki (1981), frequently used in economic theory. Aoki showed that, for a system of linear differential equations, the assumption of country symmetry allows the dynamics of country averages and country differences to be decoupled into two autonomous subsystems. While this approach cannot be applied straightforwardly to economic time series, we generalize Aoki's approach and demonstrate how it can be utilized to determine the long-run properties of the system. Symmetry is rejected for the short run, so for the given cointegration vectors the final modelling stage is based on the full two-country system. The econometric modelling approach is then enhanced by a general-to-specific model selection procedure, in which the VAR-based cointegration analysis is combined with a graph-theoretic search for instantaneous causal relations and an automatic general-to-specific reduction of the vector equilibrium correction model. As an application we build a macroeconometric two-country model for the UK and the US. The empirical study focusses on the effects of monetary policy on the $/£ exchange rate. We find that interest rate shocks in the UK cause much stronger exchange rate effects than an unanticipated interest rate change by the Fed.
JEL:  C22 C32 C50 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:zbw:vfsc12:62310&r=ecm 
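Aoki's decoupling can be verified in two lines of linear algebra: rotating a symmetric two-country system into country averages and country differences diagonalizes the dynamics. The coefficient values below are illustrative.

```python
import numpy as np

# Symmetric two-country VAR(1): own-country effect a, cross-country effect b.
a, b = 0.8, 0.1
A = np.array([[a, b],
              [b, a]])

# Rotate the system into country averages and country differences.
T = np.array([[0.5, 0.5],
              [1.0, -1.0]])
A_rot = T @ A @ np.linalg.inv(T)
# Under symmetry the rotated dynamics decouple into two autonomous
# subsystems: averages evolve with coefficient a + b, differences with a - b.
```

The paper's point is that symmetry (and hence this block-diagonal structure) can be imposed and tested for the long run even when it fails for the short-run dynamics.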
By:  Drepper, Bettina; Effraimidis, Georgios 
Abstract:  We introduce a dynamic treatment into the mixed proportional hazard competing risks model and allow for selection on unobservables. Our model can, for example, be used to evaluate the effect of benefit sanctions on the transition rate out of unemployment when more than one exit risk is of interest. Imposing a benefit sanction will influence the transition rate to employment. However, the sanction can also affect the decision of an individual to exit the labor force. The latter effect is often ignored in empirical work. In this paper we present a general model which allows one to identify different effects of a treatment, such as a sanction, on several competing exit risks, such as 'finding work' vs. 'exiting the labor force'. Our approach exploits the timing at which the individual enters into treatment by adding the hazard rate of the duration to treatment as an additional equation to the competing risks model. We present a new identification result for this model for single-spell duration data. Furthermore, we intend to include an empirical application in this paper to illustrate the estimation procedure.
JEL:  C41 C31 J64 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:zbw:vfsc12:66060&r=ecm 
By:  Winkelried, Diego (Central Reserve Bank of Peru) 
Abstract:  Many important macroeconomic variables measuring the state of the economy are sampled quarterly and with publication lags, although potentially useful predictors are observed at a higher frequency, almost in real time. This situation poses the challenge of how best to use the available data to infer the state of the economy. This paper explores the merits of the so-called Mixed Data Sampling (MIDAS) approach, which directly exploits the information content of monthly indicators to predict quarterly Peruvian macroeconomic aggregates. To this end, we propose a simple extension, based on the notion of smoothness priors in a distributed lag model, that weakens the restrictions the traditional MIDAS approach imposes on the data to achieve parsimony. We also discuss the workings of an averaging scheme that combines predictions coming from non-nested specifications. It is found that the MIDAS approach is able to identify, in a timely manner, important signals of the dynamics of the quarterly aggregates from monthly information. Thus, it can deliver significant gains in prediction accuracy compared to the performance of competing models that use exclusively quarterly information.
Keywords:  Mixed-frequency data, MIDAS, model averaging, nowcasting, backcasting
JEL:  C22 C53 E27 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:rbp:wpaper:2012023&r=ecm 
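For reference, the tight parametric restriction that traditional MIDAS imposes (and which the proposed smoothness-prior extension weakens) is commonly the exponential Almon lag polynomial. A sketch with illustrative parameter values:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, K):
    """Exponential Almon lag polynomial used in standard MIDAS regressions:
    w_k proportional to exp(theta1*k + theta2*k^2), normalized to sum to 1."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_aggregate(x_monthly, theta1=0.1, theta2=-0.05, K=12):
    """Collapse the most recent K monthly observations into a single
    quarterly regressor (parameter values here are illustrative)."""
    return exp_almon_weights(theta1, theta2, K) @ x_monthly[-K:]

w = exp_almon_weights(0.1, -0.05, 12)
```

With just two parameters the whole lag profile is pinned down; the paper's smoothness-prior extension instead lets the weights vary freely while penalizing rough profiles.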