nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒11‒01
sixteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Asymptotically Honest Confidence Regions for High Dimensional Parameters by the Desparsified Conservative Lasso By Mehmet Caner; Anders Bredahl Kock
  2. Estimating the Spot Covariation of Asset Prices – Statistical Theory and Empirical Evidence By Markus Bibinger; Markus Reiss; Nikolaus Hautsch; Peter Malec
  3. Bagging Constrained Equity Premium Predictors By Tae-Hwy Lee; Eric Hillebrand; Marcelo Medeiros
  4. Unbalanced Fractional Cointegration and the No-Arbitrage Condition on Commodity Markets By Gilles De Truchis; Florent Dubois
  5. Evaluating Conditional Forecasts from Vector Autoregressions By Clark, Todd E.; McCracken, Michael W.
  6. A Note on Regressions with Interval Data on a Regressor By Daniel Cerquera; François Laisney; Hannes Ullrich
  7. Expectile Treatment Effects: An efficient alternative to compute the distribution of treatment effects By Stephan Stahlschmidt; Matthias Eckardt; Wolfgang K. Härdle
  8. Asymptotic theory for cointegration analysis when the cointegration rank is deficient By David Bernstein; Bent Nielsen
  9. Testing for no factor structures: on the use of average-type and Hausman-type statistics By Carolina Castagnetti; Eduardo Rossi; Lorenzo Trapani
  10. The Quantile Performance Of Statistical Treatment Rules Using Hypothesis Tests To Allocate A Population To Two Treatments By Charles F. Manski; Aleksey Tetenov
  11. A Simple and Successful Shrinkage Method for Weighting Estimators of Treatment Effects By Pohlmeier, Winfried; Seiberlich, Ruben; Uysal, Selver Derya
  12. Monte Carlo Approximate Tensor Moment Simulations By Juan C. Arismendi; Herbert Kimura
  13. Incompatibility of estimation and policy objectives. An example from small-area estimation By Nicholas Longford
  14. Realized Volatility as an Instrument to Official Intervention By João Barata R. B. Barroso
  15. Labor supply as a discrete choice among latent jobs: Unobserved heterogeneity and identification By John K. Dagsvik; Zhiyang Jia
  16. Dynamic Prediction Pools: An Investigation of Financial Frictions and Forecasting Performance By Marco Del Negro; Raiden B. Hasegawa; Frank Schorfheide

  1. By: Mehmet Caner (North Carolina State University); Anders Bredahl Kock (Aarhus University and CREATES)
    Abstract: While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However, in a recent paper van de Geer et al. (2014) showed how the Lasso can be desparsified in order to create asymptotically honest (uniform) confidence bands. In this paper we consider the conservative Lasso which penalizes more correctly than the Lasso and hence has a lower estimation error. In particular, we develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow for heteroskedastic non-subgaussian error terms and covariates. Next, we desparsify the conservative Lasso estimator and derive the asymptotic distribution of tests involving an increasing number of parameters. As a stepping stone towards this, we also provide a feasible uniformly consistent estimator of the asymptotic covariance matrix of an increasing number of parameters which is robust against conditional heteroskedasticity. To our knowledge we are the first to do so. Next, we show that our confidence bands are honest over sparse high-dimensional subvectors of the parameter space and that they contract at the optimal rate. All our results are valid in high-dimensional models. Our simulations reveal that the desparsified conservative Lasso estimates the parameters much more precisely than the desparsified Lasso, has much better size properties and produces confidence bands with markedly superior coverage rates.
    Keywords: conservative Lasso, honest inference, high-dimensional data, uniform inference, confidence intervals, tests.
    JEL: C12 C13 C21
    Date: 2014–10–15
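The debiasing idea in the abstract can be illustrated compactly. The sketch below is a low-dimensional caricature, not the paper's procedure: it fits a Lasso by coordinate descent and then applies the one-step correction b + M X'(y - Xb)/n, taking M to be the exact inverse of the sample Gram matrix (feasible only when p < n). The paper instead constructs M via nodewise conservative-Lasso regressions so that the step remains valid when p > n; all function names here are illustrative.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual excluding coordinate j
            rho = X[:, j] @ r_j / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return b

def desparsify(X, y, b_lasso, M):
    """One-step debiasing: b_lasso + M X'(y - X b_lasso)/n."""
    n = X.shape[0]
    return b_lasso + M @ X.T @ (y - X @ b_lasso) / n
```

When M is the exact inverse of X'X/n, the corrected estimator reduces algebraically to OLS, which conveys the intuition for how the correction undoes the Lasso's shrinkage bias and yields an asymptotically normal pivot.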
  2. By: Markus Bibinger; Markus Reiss; Nikolaus Hautsch; Peter Malec
    Abstract: We propose a new estimator for the spot covariance matrix of a multi-dimensional continuous semi-martingale log asset price process which is subject to noise and non-synchronous observations. The estimator is constructed based on a local average of block-wise parametric spectral covariance estimates. The latter originate from a local method of moments (LMM) which recently has been introduced by Bibinger et al. (2014). We extend the LMM estimator to allow for autocorrelated noise and propose a method to adaptively infer the autocorrelations from the data. We prove the consistency and asymptotic normality of the proposed spot covariance estimator. Based on extensive simulations we provide empirical guidance on the optimal implementation of the estimator and apply it to high-frequency data of a cross-section of NASDAQ blue chip stocks. Employing the estimator to estimate spot covariances, correlations and betas in normal but also extreme-event periods yields novel insights into intraday covariance and correlation dynamics. We show that intraday (co-)variations (i) follow underlying periodicity patterns, (ii) reveal substantial intraday variability associated with (co-)variation risk, (iii) are strongly serially correlated, and (iv) can increase strongly and nearly instantaneously if new information arrives.
    Keywords: local method of moments, spot covariance, smoothing, intraday (co-)variation risk
    JEL: C58 C14 C32
    Date: 2014–10
  3. By: Tae-Hwy Lee (Department of Economics, University of California Riverside); Eric Hillebrand (Aarhus University); Marcelo Medeiros (Pontifical Catholic University of Rio de Janeiro)
    Abstract: The literature on excess return prediction has considered a wide array of estimation schemes, among them unrestricted and restricted regression coefficients. We consider bootstrap aggregation (bagging) to smooth parameter restrictions. Two types of restrictions are considered: positivity of the regression coefficient and positivity of the forecast. Bagging constrained estimators can have smaller asymptotic mean-squared prediction errors than forecasts from a restricted model without bagging. Monte Carlo simulations show that forecast gains can be achieved in realistic sample sizes for the stock return problem. In an empirical application using the data set of Campbell, J., and S. Thompson (2008): "Predicting the Equity Premium Out of Sample: Can Anything Beat the Historical Average?", Review of Financial Studies 21, 1511-1531, we show that we can improve the forecast performance further by smoothing the restriction through bagging.
    Keywords: Constraints on predictive regression function; Bagging; Asymptotic MSE; Equity premium; Out-of-sample forecasting; Economic value functions.
    JEL: C5 E4 G1
    Date: 2014–09
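The smoothing mechanism behind bagging a constrained estimator is easy to see in a toy single-predictor version (our illustration, not the authors' implementation): each bootstrap slope is truncated at zero, and averaging across resamples turns the hard max(., 0) constraint into a smooth shrinkage.

```python
import numpy as np

def bagged_positive_slope(x, y, B=500, rng=None):
    """Bagging a sign-constrained predictive slope: estimate the OLS slope on
    each bootstrap resample, truncate it at zero, and average over resamples."""
    rng = np.random.default_rng(rng)
    n = len(x)
    slopes = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)            # bootstrap resample indices
        xb, yb = x[idx], y[idx]
        xc = xb - xb.mean()
        slope = xc @ (yb - yb.mean()) / (xc @ xc)
        slopes[b] = max(slope, 0.0)            # positivity restriction
    return slopes.mean()
```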
  4. By: Gilles De Truchis (AMSE - Aix-Marseille School of Economics - Aix-Marseille Univ. - Centre national de la recherche scientifique (CNRS) - École des Hautes Études en Sciences Sociales (EHESS) - Ecole Centrale Marseille (ECM)); Florent Dubois (AMSE - Aix-Marseille School of Economics - Aix-Marseille Univ. - Centre national de la recherche scientifique (CNRS) - École des Hautes Études en Sciences Sociales (EHESS) - Ecole Centrale Marseille (ECM))
    Abstract: Technical abstract: A necessary condition for two time series to be nontrivially cointegrated is the equality of their respective integration orders. Nonetheless, in some cases, the apparent unbalance of the integration orders of the observables can be misleading and the cointegration theory applies all the same. This situation is referred to as unbalanced cointegration, in the sense that a balanced long-run relationship can be recovered by an appropriate filtering of one of the time series. In this paper, we suggest a local Whittle estimator of bivariate unbalanced fractional cointegration systems. Focusing on a degenerating band around the origin, it jointly estimates the unbalance parameter, the long-run coefficient and the integration orders of the regressor and the cointegrating errors. Its consistency is demonstrated for the stationary regions of the parameter space and a finite sample analysis is conducted by means of Monte Carlo experiments. An application to the no-arbitrage condition between crude oil spot and futures prices is proposed to illustrate the empirical relevance of the developed estimator. Non-technical abstract: The no-arbitrage condition between spot and futures prices implies an analogous condition on their underlying volatilities. Interestingly, the long memory behavior of the volatility series also involves a long-run relationship that makes it possible to test for the no-arbitrage condition by means of cointegration techniques. Unfortunately, the persistent nature of the volatility can vary with the futures maturity, thereby leading to unbalanced integration orders between the spot and futures volatility series. Nonetheless, if a balanced long-run relationship can be recovered by an appropriate filtering of one of the time series, the cointegration theory applies all the same and unbalanced cointegration operates between the raw series. In this paper, we introduce a new estimator of unbalanced fractional cointegration systems that makes it possible to test for the no-arbitrage condition between the crude oil spot and futures volatilities.
    Keywords: unbalanced cointegration; fractional cointegration; no-arbitrage condition; local Whittle likelihood; commodity markets
    Date: 2014–09
  5. By: Clark, Todd E. (Federal Reserve Bank of Cleveland); McCracken, Michael W. (Federal Reserve Bank of St. Louis)
    Abstract: Many forecasts are conditional in nature. For example, a number of central banks routinely report forecasts conditional on particular paths of policy instruments. Even though conditional forecasting is common, there has been little work on methods for evaluating conditional forecasts. This paper provides analytical, Monte Carlo, and empirical evidence on tests of predictive ability for conditional forecasts from estimated models. In the empirical analysis, we consider forecasts of growth, unemployment, and inflation from a VAR, based on conditions on the short-term interest rate. Throughout the analysis, we focus on tests of bias, efficiency, and equal accuracy applied to conditional forecasts from VAR models.
    Keywords: Prediction; forecasting; out-of-sample
    JEL: C12 C32 C52 C53
    Date: 2014–09–01
  6. By: Daniel Cerquera; François Laisney; Hannes Ullrich
    Abstract: Motivated by Manski and Tamer (2002) and especially their partial identification analysis of the regression model where one covariate is only interval-measured, we present two extensions. Manski and Tamer (2002) propose two estimation approaches in this context, focusing on general results. The modified minimum distance (MMD) approach estimates the true identified set and the modified method of moments (MMM) a superset. Our first contribution is to characterize the true identified set and the superset. Second, we complete and extend the Monte Carlo study of Manski and Tamer (2002). We present benchmark results using the exact functional form for the expectation of the dependent variable conditional on observables to compare with results using its nonparametric estimate, and illustrate the superiority of MMD over MMM.
    Keywords: Partial identification, true identified set, superset, MMD, MMM
    JEL: C01 C13 C40
    Date: 2014
  7. By: Stephan Stahlschmidt; Matthias Eckardt; Wolfgang K. Härdle
    Abstract: The distribution of treatment effects extends the prevailing focus on average treatment effects to the tails of the outcome variable, and quantile treatment effects denote the predominant technique to compute those effects in the presence of a confounding mechanism. The underlying quantile regression is based on an L1 loss function, and we propose the technique of expectile treatment effects, which relies on expectile regression with its L2 loss function. It is shown that, apart from the extreme tail ends, expectile treatment effects provide more efficient estimates, and these theoretical results are broadened by a simulation and a subsequent analysis of the classic LaLonde data. Whereas quantile and expectile treatment effects perform comparably at extreme tail locations, the variance of the expectile variant amounts in our simulation, at all other locations, to less than 80% of its quantile equivalent, and under favourable conditions to less than 2/3. In the LaLonde data, expectile treatment effects reduce the variance by more than a quarter, while at the same time smoothing the treatment effects considerably.
    Keywords: distributional treatment effects, efficiency, expectile treatment effects, LaLonde data, quantile treatment effects
    JEL: C21 C31 C54 J64
    Date: 2014–09
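For readers unfamiliar with expectiles: the tau-expectile minimizes an asymmetric squared-error loss, the L2 analogue of the quantile's asymmetric absolute loss. A minimal univariate sketch (our own illustration, not the authors' treatment-effect estimator):

```python
import numpy as np

def expectile(y, tau, n_iter=100):
    """tau-expectile of a sample: minimizer of sum |tau - 1{y_i < m}| (y_i - m)^2,
    computed by iteratively reweighted means (weight tau above m, 1-tau below)."""
    m = y.mean()                       # the 0.5-expectile is the mean
    for _ in range(n_iter):
        w = np.where(y > m, tau, 1.0 - tau)
        m_new = (w * y).sum() / w.sum()
        if abs(m_new - m) < 1e-12:
            break
        m = m_new
    return m
```

Because the loss is differentiable, the solution is a fixed point of a weighted mean, which hints at why expectile-based estimates can be more efficient than quantile-based ones away from the extreme tails.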
  8. By: David Bernstein (Dept of Economics, University of Miami); Bent Nielsen (Dept of Economics and Nuffield College, Oxford University)
    Abstract: We consider cointegration tests in the situation where the cointegration rank is deficient. This situation is of interest in finite sample analysis and in relation to recent work on identification robust cointegration inference. We derive asymptotic theory for tests for cointegration rank and for hypotheses on the cointegrating vectors. The limiting distributions are tabulated. An application to US treasury yields series is given.
    Keywords: Cointegration, rank deficiency, weak identification.
    Date: 2014–10–01
  9. By: Carolina Castagnetti (Department of Economics and Management, University of Pavia); Eduardo Rossi (Department of Economics and Management, University of Pavia); Lorenzo Trapani (Faculty of Finance,Cass Business School, City University, London (UK))
    Abstract: Castagnetti, Rossi and Trapani (2014) propose two max-type statistics to test for the presence of a factor structure in a large stationary panel data model. We investigate the use of alternative approaches such as average-type and Hausman-type statistics. We show that neither approach can be used: the average-type statistics diverge under the null, while the tests based on the Hausman principle are inconsistent.
    Keywords: Large Panels, Testing for Factor Structure, Hausman-type test.
    JEL: C12 C33
    Date: 2014–10
  10. By: Charles F. Manski; Aleksey Tetenov
    Abstract: This paper modifies the Wald development of statistical decision theory to offer a new perspective on the performance of certain statistical treatment rules (STRs). We study the quantile performance of test rules, ones that use the outcomes of hypothesis tests to allocate a population to two treatments. Let lambda denote the quantile used to evaluate performance. Define a test rule to be lambda-quantile optimal if it maximizes lambda-quantile welfare in every state of nature. We show that a test rule is lambda-quantile optimal if and only if its error probabilities are less than lambda in all states where the two treatments yield different welfare. We give conditions under which lambda-quantile optimal test rules do and do not exist. We find that optimal rules exist when the state space is finite and the data enable sufficiently precise estimation of the true state. Optimal rules do not exist when the state space is connected and other regularity conditions hold. These nuanced findings differ sharply from measurement of mean performance, as mean optimal test rules generically do not exist. We present further analysis that holds when the data are real-valued and generated by a sampling distribution which satisfies the monotone-likelihood ratio (MLR) property with respect to the average treatment effect. We use the MLR property to characterize the stochastic-dominance admissibility of STRs when the data have a continuous distribution and then generate findings on the quantile admissibility of test rules.
    Keywords: treatment choice; quantiles; stochastic dominance; monotone likelihood ratio
    JEL: C12 C44
    Date: 2014
  11. By: Pohlmeier, Winfried (University of Konstanz); Seiberlich, Ruben (University of Konstanz); Uysal, Selver Derya (Institute for Advanced Studies, Vienna)
    Abstract: A simple shrinkage method is proposed to improve the performance of weighting estimators of the average treatment effect. As the weights in these estimators can become arbitrarily large for the propensity scores close to the boundaries, three different variants of a shrinkage method for the propensity scores are analyzed. The results of a comprehensive Monte Carlo study demonstrate that this simple method substantially reduces the mean squared error of the estimators in finite samples, and is superior to several popular trimming approaches over a wide range of settings.
    Keywords: Average treatment effect, Econometric evaluation, Penalizing, Propensity score, Shrinkage
    JEL: C14 C21
    Date: 2014–09
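The boundary problem the authors address is easy to see in code. The sketch below shows the generic idea, a linear shrinkage of estimated propensity scores toward 0.5 before inverse-probability weighting; the paper analyzes three specific shrinkage variants, and the factor lam here is an illustrative choice, not theirs.

```python
import numpy as np

def ipw_ate(y, d, pscore):
    """Inverse-probability-weighting estimate of the average treatment effect."""
    return np.mean(d * y / pscore - (1 - d) * y / (1 - pscore))

def shrink_pscore(pscore, lam=0.05):
    """Shrink propensity scores toward 0.5, pulling extreme scores away from
    the 0/1 boundaries where the IPW weights explode (lam is illustrative)."""
    return (1 - lam) * pscore + lam * 0.5
```

After shrinkage every score lies in [lam/2, 1 - lam/2], so each weight is bounded by 2/lam, whereas trimming approaches instead discard or cap extreme observations.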
  12. By: Juan C. Arismendi (ICMA Centre, Henley Business School, University of Reading); Herbert Kimura
    Abstract: An algorithm to generate samples with approximate first-, second-, and third-order moments is presented, extending the Cholesky matrix decomposition to a Cholesky tensor decomposition of an arbitrary order. The tensor decomposition of the first-, second-, and third-order objective moments generates a non-linear system of equations. The algorithm solves these equations by numerical methods. The results show that the optimisation algorithm delivers samples with an approximate error of 0.1%–4% between the components of the objective and the sample moments. An application for sensitivity analysis of portfolio risk assessment with Value-at-Risk (VaR) is provided. A comparison with previous methods available in the literature suggests that the proposed methodology reduces the error of the objective moments in the generated samples.
    Keywords: Monte Carlo Simulation, Higher-order Moments, Exact Moments Simulation, Stress-testing
    JEL: C14 C15 G32
    Date: 2014–08
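The second-order (covariance) step of such a scheme is the familiar Cholesky trick, sketched below as our own illustration; the paper's contribution is extending this matrix step to a third-order tensor so that skewness is matched as well, which this sketch does not attempt.

```python
import numpy as np

def match_covariance(z, target_cov):
    """Transform samples so the sample covariance equals target_cov exactly:
    whiten with the Cholesky factor of the empirical covariance, then colour
    with the Cholesky factor of the target."""
    zc = z - z.mean(axis=0)
    L_emp = np.linalg.cholesky(np.cov(zc, rowvar=False))
    L_tgt = np.linalg.cholesky(target_cov)
    return zc @ np.linalg.inv(L_emp).T @ L_tgt.T
```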
  13. By: Nicholas Longford
    Abstract: We show in an application to small-area statistics that efficient estimation is not always conducive to good policy decisions, because the established inferential procedures have no capacity to incorporate the priorities and preferences of the policy makers and the related consequences of incorrect decisions. A method that addresses these deficiencies is described. We argue that elicitation of the perspectives of the client (sponsor) and their quantification are essential elements of the analysis, because different estimators (decisions) are appropriate for different perspectives. An example of planning an intervention in a country’s districts with a high rate of illiteracy is described. In this problem, the established small-area estimators perform poorly because the minimum mean squared error is an inappropriate criterion.
    Keywords: Composition; empirical Bayes; expected loss; borrowing strength; exploiting similarity; shrinkage; small-area estimation.
    Date: 2014–10
  14. By: João Barata R. B. Barroso
    Abstract: This paper proposes a novel orthogonality condition based on realized volatility that allows instrumental variable estimation of the effects of spot intervention in foreign exchange markets. We consider parametric and nonparametric instrumental variable estimation, and propose a test based on the average treatment effect of intervention. We apply the method to a unique dataset for the BRL/USD market with full records of spot intervention and net order flow intermediated by the financial system. Overall, the average effect of a 1 billion USD sell or buy intervention is close to the 0.51% depreciation or appreciation, respectively, estimated in the linear framework, which is therefore robust to nonlinear interactions. The estimates are somewhat lower when controlling for derivative operations, which suggests the intervention policies (spot and swaps) are complementary.
    Date: 2014–09
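As background, the realized measure underlying the instrument is computed from intraday data as the sum of squared log returns; a one-line sketch (our illustration, not the paper's estimation procedure):

```python
import numpy as np

def realized_variance(prices):
    """Daily realized variance: sum of squared intraday log returns."""
    r = np.diff(np.log(prices))
    return (r ** 2).sum()
```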
  15. By: John K. Dagsvik; Zhiyang Jia (Statistics Norway)
    Abstract: This paper discusses aspects of a framework for modeling labor supply where the notion of job choice is fundamental. In this framework, workers are assumed to have preferences over latent job opportunities belonging to worker-specific choice sets from which they choose their preferred job. The observed hours of work and wage are interpreted as the job-specific hours and wage of the chosen job. The main contribution of this paper is an analysis of the identification problem of this framework under various conditions, when conventional cross-section micro-data are applied. The modeling framework is applied to analyze labor supply behavior for married/cohabiting couples using Norwegian micro data. Specifically, we estimate two model versions within the general framework. Based on the empirical results, we discuss further qualitative properties of the model versions. Finally, we apply the preferred model version to conduct a simulation experiment of a counterfactual policy reform.
    Keywords: labor supply; non-pecuniary job attributes; latent choice sets; random utility models; identification
    JEL: J22 C51
    Date: 2014–09
  16. By: Marco Del Negro (Federal Reserve Bank of New York); Raiden B. Hasegawa (Wharton School, University of Pennsylvania); Frank Schorfheide (Department of Economics, University of Pennsylvania)
    Abstract: We provide a novel methodology for estimating time-varying weights in linear prediction pools, which we call Dynamic Pools, and use it to investigate the relative forecasting performance of DSGE models with and without financial frictions for output growth and inflation from 1992 to 2011. We find strong evidence of time variation in the pool's weights, reflecting the fact that the DSGE model with financial frictions produces superior forecasts in periods of financial distress but does not perform as well in tranquil periods. The dynamic pool's weights react in a timely fashion to changes in the environment, leading to real-time forecast improvements relative to other methods of density forecast combination, such as Bayesian Model Averaging, optimal (static) pools, and equal weights. We show how a policymaker dealing with model uncertainty could have used a dynamic pool to perform a counterfactual exercise (responding to the gap in labor market conditions) in the immediate aftermath of the Lehman crisis.
    Keywords: Bayesian estimation, DSGE Models, Financial Frictions, Forecasting, Great Recession, Linear Prediction Pools
    JEL: C53 E31 E32 E37
    Date: 2014–10–03
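As a point of contrast with the dynamic pool, the optimal static pool mentioned in the abstract chooses a single time-invariant weight that maximizes the historical log predictive score. A grid-search sketch (illustrative; p1 and p2 stand for the two models' predictive densities evaluated at the realized outcomes):

```python
import numpy as np

def static_pool_weight(p1, p2, grid=1001):
    """Optimal static pool weight: choose w in [0, 1] maximizing the historical
    log predictive score of the combined density w*p1 + (1-w)*p2."""
    ws = np.linspace(0.0, 1.0, grid)
    scores = [np.log(w * p1 + (1 - w) * p2).sum() for w in ws]
    return ws[int(np.argmax(scores))]
```

A dynamic pool instead lets the weight evolve over time as a latent process, which is what allows it to shift toward the financial-frictions model during periods of distress.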

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.