nep-ecm New Economics Papers
on Econometrics
Issue of 2021‒07‒19
35 papers chosen by
Sune Karlsson
Örebro universitet

  1. Causal Inference with Corrupted Data: Measurement Error, Missing Values, Discretization, and Differential Privacy By Anish Agarwal; Rahul Singh
  2. Semiparametric Spatial Autoregressive Panel Data Model with Fixed Effects and Time-Varying Coefficients By Liang, Xuan; Gao, Jiti; Gong, Xiaodong
  3. Generalized Covariance Estimator By Christian Gourieroux; Joann Jasiak
  4. Common and Idiosyncratic Conditional Volatility Factors: Theory and Empirical Evidence By Francisco Blasques; Enzo D'Innocenzo; Siem Jan Koopman
  5. Locally- but not globally-identified SVARs By Emanuele Bacchiocchi; Toru Kitagawa
  6. Inference and forecasting for continuous-time integer-valued trawl processes and their use in financial economics By Mikkel Bennedsen; Asger Lunde; Neil Shephard; Almut E. D. Veraart
  7. Identification of Average Marginal Effects in Fixed Effects Dynamic Discrete Choice Models By Victor Aguirregabiria; Jesus Carro
  8. MinP Score Tests with an Inequality Constrained Parameter Space By Giuseppe Cavaliere; Zeng-Hua Lu; Anders Rahbek; Yuhong Yang
  9. Estimation and Inference in Factor Copula Models with Exogenous Covariates By Alexander Mayer; Dominik Wied
  10. Inference for Low-Rank Models By Victor Chernozhukov; Christian Hansen; Yuan Liao; Yinchu Zhu
  11. Vector Autoregressions with Dynamic Factor Coefficients and Conditionally Heteroskedastic Errors By Paolo Gorgi; Siem Jan Koopman; Julia Schaumburg
  12. State Dependence and Unobserved Heterogeneity in the Extensive Margin of Trade By Julian Hinz; Amrei Stammann; Joschka Wanner
  13. Econometric Modeling of Interdependent Discrete Choice with Applications to Market Structure By Andrew Chesher; Adam Rosen
  14. Identification in simple binary outcome panel data models By Bo E. Honoré; Áureo de Paula
  15. Determining the rank of cointegration with infinite variance By Matteo Barigozzi; Giuseppe Cavaliere; Lorenzo Trapani
  16. A new class of conditional Markov jump processes with regime switching and path dependence: properties and maximum likelihood estimation By Budhi Surya
  17. Time Series Estimation of the Dynamic Effects of Disaster-Type Shocks By Richard Davis; Serena Ng
  18. A Neural Frequency-Severity Model and Its Application to Insurance Claims By Dong-Young Lim
  19. Teaching statistical inference without normality By Hafner, Christian
  20. The Identification Region of the Potential Outcome Distributions under Instrument Independence By Toru Kitagawa
  21. True structure change, spurious treatment effect? A novel approach to disentangle treatment effects from structure changes By Hao, Shiming
  22. Difference-in-Differences with a Continuous Treatment By Brantly Callaway; Andrew Goodman-Bacon; Pedro H. C. Sant'Anna
  23. On Testing Equal Conditional Predictive Ability Under Measurement Error By Yannick Hoga; Timo Dimitriadis
  24. Testability of Reverse Causality without Exogenous Variation By Christoph Breunig; Patrick Burauel
  25. Heterogeneous Treatment Effects in Regression Discontinuity Designs By Ágoston Reguly
  26. Time Series Forecasting using a Mixture of Stationary and Nonstationary Predictors By Bodha Hannadige, Sium; Gao, Jiti; Silvapulle, Mervyn; Silvapulle, Param
  27. Concentration bounds for the empirical angular measure with statistical learning applications By Clémençon, Stéphan; Jalalzai, Hamid; Sabourin, Anne; Segers, Johan
  28. Gravity models of networks: integrating maximum-entropy and econometric approaches By Marzio Di Vece; Diego Garlaschelli; Tiziano Squartini
  29. Testing for more positive expectation dependence with application to model comparison By Denuit, Michel; Trufin, Julien; Verdebout, Thomas
  30. Permanent-Transitory decomposition of cointegrated time series via Dynamic Factor Models, with an application to commodity prices By Casoli, Chiara; Lucchetti, Riccardo (Jack)
  31. Multiplicative Error Models: 20 years on By Fabrizio Cipollini; Giampiero M. Gallo
  32. Discrete choice under risk with limited consideration By Levon Barseghyan; Francesca Molinari; Matthew Thirkettle
  33. When to adjust alpha during multiple testing: A consideration of disjunction, conjunction, and individual testing. By Rubin, Mark
  34. A Note on the GRS Test By Mark Kamstra; Ruoyao Shi
  35. Estimating discrete choice experiments: theoretical fundamentals By Benoit Chèze; Charles Collet; Anthony Paris

  1. By: Anish Agarwal; Rahul Singh
    Abstract: Even the most carefully curated economic data sets have variables that are noisy, missing, discretized, or privatized. The standard workflow for empirical research involves data cleaning followed by data analysis that typically ignores the bias and variance consequences of data cleaning. We formulate a semiparametric model for causal inference with corrupted data to encompass both data cleaning and data analysis. We propose a new end-to-end procedure for data cleaning, estimation, and inference with data cleaning-adjusted confidence intervals. We prove root-n consistency, Gaussian approximation, and semiparametric efficiency for our estimator of the causal parameter by finite sample arguments. Our key assumption is that the true covariates are approximately low rank. In our analysis, we provide nonasymptotic theoretical contributions to matrix completion, statistical learning, and semiparametric statistics. We verify the coverage of the data cleaning-adjusted confidence intervals in simulations.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.02780&r=
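    The clean-then-analyze pipeline described in the abstract can be illustrated with a toy simulation: approximately low-rank covariates are corrupted by measurement error and missing values, cleaned by a truncated SVD, and then used in a downstream regression. This is only a schematic Python sketch of the general idea; the paper's actual estimator additionally adjusts the confidence intervals for the cleaning step, which is omitted here, and all names and numbers below are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        # Simulate approximately low-rank true covariates (n units, p covariates, rank r).
        n, p, r = 500, 40, 3
        U, V = rng.normal(size=(n, r)), rng.normal(size=(r, p))
        X_true = U @ V
        beta = rng.normal(size=p)
        y = X_true @ beta + rng.normal(size=n)

        # Corrupt the covariates: measurement error plus values missing at random.
        X_obs = X_true + 0.5 * rng.normal(size=(n, p))
        miss = rng.random((n, p)) < 0.2
        X_obs[miss] = 0.0
        X_obs /= (1 - 0.2)                      # inverse-probability rescaling of filled zeros

        # "Clean" the data with a truncated SVD keeping the leading r singular values.
        Uh, s, Vh = np.linalg.svd(X_obs, full_matrices=False)
        X_clean = (Uh[:, :r] * s[:r]) @ Vh[:r, :]

        # Downstream analysis on the cleaned covariates.
        beta_hat = np.linalg.lstsq(X_clean, y, rcond=None)[0]
        print("estimation error:", np.linalg.norm(beta_hat - beta))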
  2. By: Liang, Xuan; Gao, Jiti; Gong, Xiaodong
    Abstract: This paper develops a time-varying coefficient spatial autoregressive panel data model with individual fixed effects to capture the nonlinear effects of the regressors, which vary over time. To estimate the model effectively, we propose a method that combines local linear estimation with concentrated quasi-maximum likelihood estimation to obtain consistent estimators for the spatial autoregressive coefficient, the variance of the error term, and the nonparametric time-varying coefficient function. The asymptotic properties of these estimators are derived as well, showing that the parametric estimators achieve the standard parametric rate of convergence while the time-varying component converges at the common nonparametric rate. Monte Carlo simulations are conducted to illustrate the finite sample performance of our proposed method. We then apply the method to Chinese labor productivity to identify the spatial influences and the time-varying spillover effects among 185 Chinese cities, with a comparison to the results for a subregion, East China.
    Keywords: Concentrated quasi-maximum likelihood estimation, local linear estimation, time-varying coefficient
    JEL: C1 C14 C3 C33
    Date: 2021–01–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:108497&r=
  3. By: Christian Gourieroux; Joann Jasiak
    Abstract: We consider a class of semi-parametric dynamic models with strong white noise errors. This class of processes includes the standard Vector Autoregressive (VAR) model, the nonfundamental structural VAR, the mixed causal-noncausal models, as well as nonlinear dynamic models such as the (multivariate) ARCH-M model. For estimation of processes in this class, we propose the Generalized Covariance (GCov) estimator, which is obtained by minimizing a residual-based multivariate portmanteau statistic as an alternative to the Generalized Method of Moments. We derive the asymptotic properties of the GCov estimator and of the associated residual-based portmanteau statistic. Moreover, we show that the GCov estimators are semi-parametrically efficient and the residual-based portmanteau statistics are asymptotically chi-square distributed. The finite sample performance of the GCov estimator is illustrated in a simulation study. The estimator is also applied to a dynamic model of cryptocurrency prices.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.06979&r=
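    The core GCov idea, choosing the parameter value that makes the residuals look most like white noise by minimizing a residual-based portmanteau statistic, can be sketched in Python for a simple AR(1). This is a deliberately stripped-down illustration: the full GCov estimator also uses autocovariances of nonlinear transformations of the residuals to exploit higher-order dependence, which the sketch omits.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(1)

        # Simulate an AR(1) with i.i.d. (strong white noise) errors.
        T, phi0 = 1000, 0.6
        eps = rng.standard_t(df=5, size=T)
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = phi0 * y[t - 1] + eps[t]

        def portmanteau(phi, y, H=5):
            """Sum of squared residual autocorrelations up to lag H."""
            e = y[1:] - phi * y[:-1]
            e = e - e.mean()
            gamma0 = e @ e
            return sum(((e[h:] @ e[:-h]) / gamma0) ** 2 for h in range(1, H + 1))

        # GCov-style estimate: the phi whose residuals are closest to white noise.
        res = minimize_scalar(portmanteau, bounds=(-0.99, 0.99), args=(y,), method="bounded")
        print("estimated phi:", res.x)          # close to 0.6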
  4. By: Francisco Blasques (Vrije Universiteit Amsterdam); Enzo D'Innocenzo (University of Bologna); Siem Jan Koopman (Vrije Universiteit Amsterdam)
    Abstract: We propose a multiplicative dynamic factor structure for the conditional modelling of the variances of an N-dimensional vector of financial returns. We identify common and idiosyncratic conditional volatility factors. The econometric framework is based on an observation-driven time series model that is simple and parsimonious. The common factor is modeled by a normal density and is robust to fat-tailed returns as it averages information over the cross-section of the observed N-dimensional vector of returns. The idiosyncratic factors are designed to capture the erratic shocks in returns and therefore rely on fat-tailed densities. Our model can be of high dimension while remaining parsimonious, and it does not necessarily suffer from the curse of dimensionality. The relatively simple structure of the model leads to simple computations for the estimation of parameters and signal extraction of factors. We derive the stochastic properties of our proposed dynamic factor model, including bounded moments, stationarity, ergodicity, and filter invertibility. We further establish consistency and asymptotic normality of the maximum likelihood estimator. The finite sample properties of the estimator and the reliability of our method in tracking the common conditional volatility factor are investigated by means of a Monte Carlo study. Finally, we illustrate our approach with two empirical studies. The first study is for a panel of financial returns from ten stocks of the S&P100. The second study is for the panel of returns from all S&P100 stocks.
    Keywords: Financial econometrics, observation-driven models, conditional volatility, common factor
    JEL: C32 C52 C58
    Date: 2021–06–28
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20210057&r=
  5. By: Emanuele Bacchiocchi (Institute for Fiscal Studies); Toru Kitagawa (Institute for Fiscal Studies and cemmap and University College London)
    Abstract: This paper analyzes Structural Vector Autoregressions (SVARs) where identification of structural parameters holds locally but not globally. In this case there exists a set of isolated structural parameter points that are observationally equivalent under the imposed restrictions. Although the data do not inform us which observationally equivalent point should be selected, the common frequentist practice is to obtain one as a maximum likelihood estimate and perform impulse response analysis accordingly. For Bayesians, the lack of global identification translates to nonvanishing sensitivity of the posterior to the prior, and the multi-modal likelihood gives rise to computational challenges as posterior sampling algorithms can fail to explore all the modes. This paper overcomes these challenges by proposing novel estimation and inference procedures. We characterize a class of identifying restrictions that deliver local but non-global identification, and the resulting number of observationally equivalent parameter values. We propose algorithms to exhaustively compute all admissible structural parameters given the reduced-form parameters and utilize them to sample from the multi-modal posterior. In addition, viewing the set of observationally equivalent parameter points as the identified set, we develop Bayesian and frequentist procedures for inference on the corresponding set of impulse responses. An empirical example illustrates our proposal.
    Date: 2020–07–27
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:40/20&r=
  6. By: Mikkel Bennedsen; Asger Lunde; Neil Shephard; Almut E. D. Veraart
    Abstract: This paper develops likelihood-based methods for estimation, inference, model selection, and forecasting of continuous-time integer-valued trawl processes. The full likelihood of integer-valued trawl processes is, in general, highly intractable, motivating the use of composite likelihood methods, where we consider the pairwise likelihood in lieu of the full likelihood. Maximizing the pairwise likelihood of the data yields an estimator of the parameter vector of the model, and we prove consistency and asymptotic normality of this estimator. The same methods allow us to develop probabilistic forecasting methods, which can be used to construct the predictive distribution of integer-valued time series. In a simulation study, we document good finite sample performance of the likelihood-based estimator and the associated model selection procedure. Lastly, the methods are illustrated in an application to modelling and forecasting financial bid-ask spread data, where we find that it is beneficial to carefully model both the marginal distribution and the autocorrelation structure of the data. We argue that integer-valued trawl processes are especially well-suited in such situations.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.03674&r=
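    The mechanics of the pairwise (composite) likelihood used above can be shown on a deliberately simple stand-in model. Bivariate distributions of integer-valued trawl processes are beyond a short sketch, so this Python toy instead maximizes the pairwise likelihood of a Gaussian AR(1), where the bivariate densities are available in closed form; only the composite-likelihood principle carries over, nothing here reproduces the paper's model.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(2)

        # Stand-in model: stationary Gaussian AR(1).
        T, phi0, sigma0 = 500, 0.5, 1.0
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = phi0 * y[t - 1] + sigma0 * rng.normal()

        def neg_pairwise_loglik(theta, y, max_lag=3):
            """Negative sum of log bivariate densities over all pairs within max_lag."""
            phi, log_sigma = theta
            var = np.exp(2 * log_sigma) / (1 - phi ** 2)   # stationary variance
            ll = 0.0
            for h in range(1, max_lag + 1):
                cov = var * phi ** h
                pairs = np.column_stack([y[:-h], y[h:]])
                ll += multivariate_normal.logpdf(pairs, mean=[0, 0],
                                                 cov=[[var, cov], [cov, var]]).sum()
            return -ll

        res = minimize(neg_pairwise_loglik, x0=[0.0, 0.0], args=(y,),
                       bounds=[(-0.95, 0.95), (-3.0, 3.0)], method="L-BFGS-B")
        print("phi, sigma:", res.x[0], np.exp(res.x[1]))   # roughly (0.5, 1.0)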
  7. By: Victor Aguirregabiria; Jesus Carro
    Abstract: In nonlinear panel data models, fixed effects methods are often criticized because they cannot identify average marginal effects (AMEs) in short panels. The common argument is that the identification of AMEs requires knowledge of the distribution of unobserved heterogeneity, but this distribution is not identified in a fixed effects model with a short panel. In this paper, we derive identification results that contradict this argument. In a panel data dynamic logit model, and for T as small as four, we prove the point identification of different AMEs, including causal effects of changes in the lagged dependent variable or in the duration in the last choice. Our proofs are constructive and provide simple closed-form expressions for the AMEs in terms of probabilities of choice histories. We illustrate our results using Monte Carlo experiments and with an empirical application of a dynamic structural model of consumer brand choice with state dependence.
    Keywords: Identification; Average marginal effects; Fixed effects models; Panel data; Dynamic discrete choice; State dependence; Dynamic demand of differentiated products
    JEL: C23 C25 C51
    Date: 2021–07–08
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-701&r=
  8. By: Giuseppe Cavaliere; Zeng-Hua Lu; Anders Rahbek; Yuhong Yang
    Abstract: Score tests have the advantage of requiring estimation only of the model restricted by the null hypothesis, which is often much simpler than the models defined under the alternative hypothesis. This is typically so when the alternative hypothesis involves inequality constraints. However, existing score tests address only the joint testing of all parameters of interest; a leading example is testing whether all ARCH parameters, or all variances of random coefficients, are zero. In such testing problems, rejection of the null hypothesis does not provide evidence on which specific elements of the parameter of interest are at odds with the null. This paper proposes a class of one-sided score tests for testing a model parameter that is subject to inequality constraints. The proposed tests are constructed from the minimum of a set of $p$-values. The minimand includes the $p$-values for testing individual elements of the parameter of interest using individual scores, and it may be extended to include the $p$-value of an existing score test. We show that our tests perform at least as well as existing score tests in terms of joint testing, with the added benefit of allowing simultaneous testing of individual elements of the parameter of interest. This benefit is appealing in the sense that it can identify a model without estimating it. We illustrate our tests in linear regression, ARCH, and random coefficient models. A detailed simulation study examines the finite sample performance of the proposed tests, and we find that they perform well, as expected.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.06089&r=
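    A toy numerical version of the MinP construction: compute one-sided p-values from the individual scores, take their minimum, and calibrate that minimum against its simulated null distribution so that the correlation between the scores is accounted for. The normal limit, the correlation value, and all numbers below are illustrative assumptions, not taken from the paper.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)

        # Assume two score statistics that are N(0, R) under H0, one-sided alternatives.
        R = np.array([[1.0, 0.3], [0.3, 1.0]])
        z_obs = np.array([2.1, 0.4])                  # observed individual score statistics

        p_indiv = 1 - norm.cdf(z_obs)                 # individual one-sided p-values
        minp_obs = p_indiv.min()

        # Calibrate the MinP statistic by simulating its distribution under H0.
        L = np.linalg.cholesky(R)
        z_sim = rng.normal(size=(100_000, 2)) @ L.T
        minp_sim = (1 - norm.cdf(z_sim)).min(axis=1)
        p_joint = (minp_sim <= minp_obs).mean()

        print("individual p-values:", p_indiv, " joint MinP p-value:", p_joint)

    The same individual p-values that enter the minimand also indicate which specific elements drive a joint rejection, which is the "identify a model without estimating it" feature mentioned in the abstract.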
  9. By: Alexander Mayer; Dominik Wied
    Abstract: A factor copula model is proposed in which factors are either simulable or estimable from exogenous information. Point estimation and inference are based on a simulated method of moments (SMM) approach with non-overlapping simulation draws. Consistency and limiting normality of the estimator are established, and the validity of bootstrap standard errors is shown. In doing so, previous results from the literature are verified under low-level conditions imposed on the individual components of the factor structure. Monte Carlo evidence confirms the accuracy of the asymptotic theory in finite samples and an empirical application illustrates the usefulness of the model to explain the cross-sectional dependence between stock returns.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.03366&r=
  10. By: Victor Chernozhukov; Christian Hansen; Yuan Liao; Yinchu Zhu
    Abstract: This paper studies inference in linear models whose parameter of interest is a high-dimensional matrix. We focus on the case where the high-dimensional matrix parameter is well-approximated by a "spiked low-rank matrix" whose rank grows slowly compared to its dimensions and whose nonzero singular values diverge to infinity. We show that this framework covers a broad class of latent-variable models which can accommodate matrix completion problems, factor models, varying coefficient models, principal components analysis with missing data, and heterogeneous treatment effects. For inference, we propose a new "rotation-debiasing" method for product parameters initially estimated using nuclear norm penalization. We present general high-level results under which our procedure provides asymptotically normal estimators. We then present low-level conditions under which we verify the high-level conditions in a treatment effects example.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.02602&r=
  11. By: Paolo Gorgi (Vrije Universiteit Amsterdam); Siem Jan Koopman (Vrije Universiteit Amsterdam); Julia Schaumburg (Vrije Universiteit Amsterdam)
    Abstract: We introduce a new and general methodology for analyzing vector autoregressive models with time-varying coefficient matrices and conditionally heteroskedastic disturbances. Our proposed method is able to jointly treat a dynamic latent factor model for the autoregressive coefficient matrices and a multivariate dynamic volatility model for the variance matrix of the disturbance vector. Since the likelihood function is available in closed form through a simple extension of the Kalman filter equations, all unknown parameters in this flexible model can be easily estimated by the method of maximum likelihood. The proposed approach is appealing since it is simple to implement and computationally fast. Furthermore, it presents an alternative to Bayesian methods which are regularly employed in the empirical literature. A simulation study shows the reliability and robustness of the method against potential misspecifications of the volatility in the disturbance vector. We further provide an empirical illustration in which we analyze possibly time-varying relationships between U.S. industrial production, inflation, and the bond spread. We empirically identify a time-varying linkage between economic and financial variables which is effectively described by a common dynamic factor. The impulse response analysis points towards substantial differences in the effects of financial shocks on output and inflation during crisis and non-crisis periods.
    Keywords: time-varying parameters, vector autoregressive model, dynamic factor model, Kalman filter, generalized autoregressive conditional heteroskedasticity, orthogonal impulse response function
    JEL: C32 E31
    Date: 2021–06–28
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20210056&r=
  12. By: Julian Hinz (Bielefeld University, Kiel Institute for the World Economy, Kiel Centre for Globalization); Amrei Stammann (Ruhr-University Bochum); Joschka Wanner (University of Potsdam, Kiel Institute for the World Economy)
    Abstract: We study the role and drivers of persistence in the extensive margin of bilateral trade. Motivated by a stylized heterogeneous firms model of international trade with market entry costs, we consider dynamic three-way fixed effects binary choice models and study the corresponding incidental parameter problem. The standard maximum likelihood estimator is consistent under asymptotics where all panel dimensions grow at a constant rate, but it has an asymptotic bias in its limiting distribution, invalidating inference even in situations where the bias appears to be small. Thus, we propose two different bias-corrected estimators. Monte Carlo simulations confirm their desirable statistical properties. We apply these estimators in a reassessment of the most commonly studied determinants of the extensive margin of trade. Both true state dependence and unobserved heterogeneity contribute considerably to trade persistence and taking this persistence into account matters significantly in identifying the effects of trade policies on the extensive margin.
    Keywords: dynamic binary choice, extensive margin, high-dimensional fixed effects, incidental parameter bias correction, trade policy
    JEL: C13 C23 C55 F14 F15
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:pot:cepadp:36&r=
  13. By: Andrew Chesher (Institute for Fiscal Studies and University College London); Adam Rosen (Institute for Fiscal Studies and Duke University)
    Abstract: This paper demonstrates the use of bounds analysis for empirical models of market structure that allow for multiple equilibria. From an econometric standpoint, these models feature systems of equalities and inequalities for the determination of multiple endogenous interdependent discrete choice variables. These models may be incomplete, delivering multiple values of outcomes at certain values of the latent variables and covariates, and incoherent, delivering no values. Alternative approaches to accommodating incompleteness and incoherence are considered in a unifying framework afforded by the Generalized Instrumental Variable models introduced in Chesher and Rosen (2017). Sharp identification regions for parameters of interest defined by systems of conditional moment equalities and inequalities are provided. Almost all empirical analysis of interdependent discrete choice uses models that include parametric specifications of the distribution of unobserved variables. The paper provides characterizations of identified sets and outer regions for structural functions and parameters allowing for any distribution of unobservables independent of exogenous variables. The methods are applied to the models and data of Mazzeo (2002) and Kline and Tamer (2016) in order to study the sensitivity of empirical results to restrictions on equilibrium selection and the distribution of unobservable payoff shifters, respectively. Confidence intervals for individual parameter components are provided using a recently developed inference approach from Belloni, Bugni, and Chernozhukov (2018). The relaxation of equilibrium selection and distributional restrictions in these applications is found to greatly increase the width of resulting confidence intervals, but nonetheless the models continue to sign strategic interaction parameters.
    Date: 2020–06–01
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:25/20&r=
  14. By: Bo E. Honoré (Institute for Fiscal Studies and Princeton); Áureo de Paula (Institute for Fiscal Studies and University College London)
    Abstract: This paper first reviews some of the approaches that have been taken to estimate the common parameters of binary outcome models with fixed effects. We limit attention to situations in which the researcher has access to a data set with a large number of units (individuals or firms, for example) observed over a few time periods. We then apply some of the existing approaches to study fixed effects panel data versions of entry games, like the ones studied in Bresnahan and Reiss (1991) and Tamer (2003).
    Date: 2021–03–19
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:14/21&r=
  15. By: Matteo Barigozzi; Giuseppe Cavaliere; Lorenzo Trapani
    Abstract: We study the issue of determining the rank of cointegration, R, in an N-variate time series yt, allowing for the possible presence of heavy tails. Our methodology does not require any estimation of nuisance parameters such as the tail index, and indeed even knowledge as to whether certain moments (such as the variance) exist or not is not required. Our estimator of the rank is based on a sequence of tests on the eigenvalues of the sample second moment matrix of yt. We derive the rates of such eigenvalues, showing that these do depend on the tail index, but also that there exists a gap in rates between the first N - R and the remaining eigenvalues. The former ones, in particular, diverge at a rate which is faster than the latter ones by a factor T (where T denotes the sample size), irrespective of the tail index. We thus exploit this eigen-gap by constructing, for each eigenvalue, a test statistic which diverges to positive infinity or drifts to zero according as the relevant eigenvalue belongs in the set of first N - R eigenvalues or not. We then construct a randomised statistic based on this, using it as part of a sequential testing procedure. The resulting estimator of R is consistent, in that it picks the true value R with probability 1 as the sample size passes to infinity.
    Keywords: cointegration, heavy tails, randomized tests.
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:not:notgts:20/01&r=
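    The eigen-gap that drives the procedure is easy to see numerically: with N series sharing N - R common stochastic trends, the first N - R eigenvalues of the sample second-moment matrix grow like T while the remaining R stay bounded. The Python sketch below uses a crude explicit threshold for illustration; the paper instead runs a sequence of randomised tests, which avoids choosing such a threshold.

        import numpy as np

        rng = np.random.default_rng(4)

        # N = 3 series driven by N - R = 2 common random walks => cointegration rank R = 1.
        T, N, R = 2000, 3, 1
        walks = rng.normal(size=(T, N - R)).cumsum(axis=0)
        load = rng.normal(size=(N - R, N))
        y = walks @ load + rng.normal(size=(T, N))     # plus stationary noise

        # Eigenvalues of the sample second-moment matrix, sorted descending:
        # the first N - R are O(T), the remaining R are O(1).
        S = (y.T @ y) / T
        lam = np.sort(np.linalg.eigvalsh(S))[::-1]
        print("eigenvalues:", lam)

        # Crude eigen-gap rank estimate: sqrt(T) sits between the two groups.
        rank_hat = N - int((lam > np.sqrt(T)).sum())
        print("estimated cointegration rank:", rank_hat)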
  16. By: Budhi Surya
    Abstract: This paper develops a new class of conditional Markov jump processes with regime switching and path dependence. The key novel feature of the developed process lies in its ability to switch the transition rate as it moves from one state to another, with switching probability depending on the current state and time of the process as well as on its past trajectories. As such, the transition from the current state to another depends on the holding time of the process in the state. Distributional properties of the process are given explicitly in terms of the speed regimes represented by a finite number of different transition matrices, the probabilities of selecting regime membership within each state, and the past realization of the process. In particular, it has a distributionally equivalent stochastic representation to the general mixture of Markov jump processes introduced in Frydman and Surya (2020). Maximum likelihood estimates (MLE) of the distribution parameters of the process are derived in closed form. The estimation is done iteratively using the EM algorithm. The Akaike information criterion is used to assess the goodness-of-fit of the selected model. An explicit observed Fisher information matrix of the MLE is derived for the calculation of standard errors of the MLE. The information matrix takes on a simplified form of the general matrix formula of Louis (1982). Large sample properties of the MLE are presented. In particular, the covariance matrix for the MLE of transition rates is equal to the Cramér-Rao lower bound, and is smaller for the MLE of regime membership. The simulation study confirms these findings and shows that the parameter estimates are accurate, consistent, and asymptotically normal as the sample size increases.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.07026&r=
  17. By: Richard Davis; Serena Ng
    Abstract: The paper provides three results for SVARs under the assumption that the primitive shocks are mutually independent. First, a framework is proposed to study the dynamic effects of disaster-type shocks with infinite variance. We show that the least squares estimates of the VAR are consistent but have non-standard properties. Second, it is shown that the restrictions imposed on a SVAR can be validated by testing independence of the identified shocks. The test can be applied whether the data have fat or thin tails, and to over-identified as well as exactly identified models. Third, the disaster shock is identified as the component with the largest kurtosis, where the mutually independent components are estimated using an estimator that is valid even in the presence of an infinite variance shock. Two applications are considered. In the first, the independence test is used to shed light on the conflicting evidence regarding the role of uncertainty in economic fluctuations. In the second, disaster shocks are shown to have a short-term economic impact arising mostly from feedback dynamics.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.06663&r=
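    The third result, labelling the maximum-kurtosis independent component as the disaster shock, can be mimicked in a few lines of Python. As a stand-in for the heavy-tail-robust estimator used in the paper, this sketch unmixes least-squares VAR residuals with FastICA; the bivariate VAR, the t(3) shock, and all numbers are illustrative.

        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(5)

        # Bivariate VAR(1) with independent primitive shocks, one heavy-tailed.
        T = 2000
        A = np.array([[0.5, 0.1], [0.0, 0.4]])
        B0 = np.array([[1.0, 0.3], [0.2, 1.0]])        # impact matrix
        shocks = np.column_stack([rng.standard_t(df=3, size=T),   # "disaster-like"
                                  rng.normal(size=T)])
        y = np.zeros((T, 2))
        for t in range(1, T):
            y[t] = A @ y[t - 1] + B0 @ shocks[t]

        # Step 1: least-squares VAR(1) residuals.
        X, Y = y[:-1], y[1:]
        A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
        u = Y - X @ A_hat.T

        # Step 2: unmix the residuals into independent components.
        s_hat = FastICA(n_components=2, random_state=0).fit_transform(u)

        # Step 3: the component with the largest kurtosis is the disaster shock.
        k = kurtosis(s_hat, axis=0)
        print("component kurtosis:", k, "-> disaster shock index:", int(k.argmax()))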
  18. By: Dong-Young Lim
    Abstract: This paper proposes a flexible and analytically tractable class of frequency-severity models based on neural networks to parsimoniously capture important empirical observations. In the proposed two-part model, mean functions of frequency and severity distributions are characterized by neural networks to incorporate the non-linearity of input variables. Furthermore, it is assumed that the mean function of the severity distribution is an affine function of the frequency variable to account for a potential linkage between frequency and severity. We provide explicit closed-form formulas for the mean and variance of the aggregate loss within our modelling framework. Components of the proposed model including parameters of neural networks and distribution parameters can be estimated by minimizing the associated negative log-likelihood functionals with neural network architectures. Furthermore, we leverage the Shapley value and recent developments in machine learning to interpret the outputs of the model. Applications to a synthetic dataset and insurance claims data illustrate that our method outperforms the existing methods in terms of interpretability and predictive accuracy.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.10770&r=
  19. By: Hafner, Christian (Université catholique de Louvain, LIDAM/ISBA, Belgium)
    Abstract: Teaching undergraduate statistics has long been dominated by a parallel approach, where large and small sample inference is taught side-by-side. Necessarily, the set of assumptions for the two cases is different, with normality and homoskedasticity being the main ingredients for the popular t- and F-tests in small samples. This paper advocates a change of paradigm and the exclusive presentation of large sample inference in introductory classes, for three reasons: First, it weakens and simplifies the assumptions. Second, it reduces the number of sampling distributions and therefore simplifies the presentation. Third, it may give better performance in many small sample situations where assumptions such as normality are violated. The detection of these violations in small samples is inherently difficult due to the low power of normality or homoskedasticity tests. Many numerical examples are given. In the era of big data, it is anachronistic to deal with small sample inference in introductory statistics classes, and this paper makes the case for a change.
    Keywords: sampling distributions ; simplification ; big data
    Date: 2021–01–01
    URL: http://d.repec.org/n?u=RePEc:aiz:louvad:2021027&r=
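    The paper's proposal amounts to teaching intervals and tests like the following, which rest only on independent sampling and a finite variance rather than on normality or homoskedasticity. A minimal Python sketch with clearly non-normal data:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(6)

        # Large-sample (CLT-based) 95% confidence interval for a mean.
        x = rng.exponential(scale=2.0, size=200)       # clearly non-normal sample
        xbar = x.mean()
        se = x.std(ddof=1) / np.sqrt(len(x))
        z = norm.ppf(0.975)
        print(f"95% CI for the mean: [{xbar - z * se:.3f}, {xbar + z * se:.3f}]")

    The same z-based recipe covers proportions, regression coefficients, and differences in means, which is the reduction in the number of sampling distributions the abstract refers to.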
  20. By: Toru Kitagawa (Institute for Fiscal Studies and cemmap and University College London)
    Abstract: This paper examines the identifying power of instrument exogeneity in the treatment effect model. We derive the identification region of the potential outcome distributions, which is the collection of distributions that are compatible with the data and with the restrictions of the model. We consider identification when (i) the instrument is independent of each of the potential outcomes (marginal independence), (ii) the instrument is independent of the potential outcomes jointly (joint independence), and (iii) the instrument is independent of the potential outcomes jointly and is monotonic (the LATE restriction). By comparing the size of the identification region under each restriction, we show that joint independence provides more identifying information for the potential outcome distributions than marginal independence, but that the LATE restriction provides no identification gain beyond joint independence. Under each restriction, we also derive sharp bounds for the Average Treatment Effect and a sharp testable implication to falsify the restriction. Our analysis covers discrete and continuous outcome cases, and extends the Average Treatment Effect bounds of Balke and Pearl (1997), developed for the dichotomous outcome case, to a more general setting.
    Date: 2020–05–21
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:23/20&r=
  21. By: Hao, Shiming
    Abstract: This paper develops a new flexible approach to disentangle treatment effects from structure changes. It is shown that ignoring prior structure changes or endogenous regime switches in causal inference leads to false positive or false negative estimates of treatment effects. A difference-in-difference-in-differences strategy and a novel approach based on Automatically Auxiliary Regressions (AARs) are designed to separately identify and estimate treatment effects, structure change effects, and endogenous regime switch effects. The new approach has several desirable features. First, it does not need instrumental variables to handle endogeneity and is easy to implement, with hardly any technical barriers for empirical researchers; second, it can be extended to isolate one treatment from other treatments when the outcome reflects the workings of a series of treatments; third, it outperforms other popular competitors in small sample simulations, and the biases caused by endogeneity vanish with the sample size. The new method is then illustrated in a comparative study supporting the direct destruction theory for the impacts of the Hanshin-Awaji earthquake and the Schumpeterian creative destruction theory for the impacts of the Wenchuan earthquake.
    Keywords: structure changes; treatment effects; latent variable; endogeneity; regime switch model; social interactions
    JEL: C10 C13 C22
    Date: 2021–07–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:108679&r=
  22. By: Brantly Callaway; Andrew Goodman-Bacon; Pedro H. C. Sant'Anna
    Abstract: This paper analyzes difference-in-differences setups with a continuous treatment. We show that parameters of the treatment-effect-on-the-treated type can be identified under a generalized parallel trends assumption similar to the one in the binary treatment setup. However, interpreting differences in these parameters across different values of the treatment can be particularly challenging due to treatment effect heterogeneity. We discuss alternative, typically stronger, assumptions that alleviate these challenges. We also provide a variety of treatment effect decomposition results, highlighting that parameters associated with popular two-way fixed-effect specifications can be hard to interpret, even when there are only two time periods. We introduce alternative estimation strategies that do not suffer from these drawbacks. Our results also cover cases where (i) there is no available untreated comparison group and (ii) there are multiple periods and variation in treatment timing, both of which are common in empirical work.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.02637&r=
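    In the two-period case, the ATT-type parameter at dose d compares the outcome change of dose-d units with the change of untreated units, under the generalized parallel trends assumption mentioned in the abstract. A Python simulation sketch (binned doses, illustrative numbers, no claim to reproduce the paper's estimators):

        import numpy as np

        rng = np.random.default_rng(7)

        # Two periods; a continuous dose is administered in period 2.
        n = 5000
        dose = np.where(rng.random(n) < 0.5, 0.0, rng.uniform(0.1, 1.0, size=n))
        unit_fe = rng.normal(size=n)
        y1 = unit_fe + rng.normal(size=n)
        y2 = unit_fe + 0.5 + 2.0 * dose + rng.normal(size=n)   # true ATT(d) = 2d
        dy = y2 - y1

        # ATT(d): mean change at dose d minus mean change of the untreated group.
        base = dy[dose == 0].mean()
        for lo, hi in [(0.1, 0.4), (0.4, 0.7), (0.7, 1.0)]:
            sel = (dose >= lo) & (dose < hi)
            print(f"ATT(d in [{lo},{hi})): {dy[sel].mean() - base:.2f}"
                  f"  (truth ~ {2.0 * dose[sel].mean():.2f})")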
  23. By: Yannick Hoga; Timo Dimitriadis
    Abstract: Loss functions are widely used to compare several competing forecasts. However, forecast comparisons are often based on mismeasured proxy variables for the true target. We introduce the concept of exact robustness to measurement error for loss functions and fully characterize this class of loss functions as the Bregman class. For such exactly robust loss functions, forecast loss differences are on average unaffected by the use of proxy variables and, thus, inference on conditional predictive ability can be carried out as usual. Moreover, we show that more precise proxies give predictive ability tests higher power in discriminating between competing forecasts. Simulations illustrate the different behavior of exactly robust and non-robust loss functions. An empirical application to US GDP growth rates demonstrates that it is easier to discriminate between forecasts issued at different horizons if a better proxy for GDP growth is used.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.11104&r=
  24. By: Christoph Breunig; Patrick Burauel
    Abstract: This paper shows that testability of reverse causality is possible even in the absence of exogenous variation, such as in the form of instrumental variables. Instead of relying on exogenous variation, we achieve testability by imposing relatively weak model restrictions. Our main assumption is that the true functional relationship is nonlinear and the error terms are additively separable. In contrast to the existing literature, we allow the error to be heteroskedastic, which is the case in most economic applications. Our procedure builds on reproducing kernel Hilbert space (RKHS) embeddings of probability distributions to test conditional independence. We show that the procedure provides a powerful tool to detect the causal direction in both Monte Carlo simulations and an application to German survey data. We can infer the causal direction between income and work experience (proxied by age) without relying on exogenous variation.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.05936&r=
  25. By: Ágoston Reguly
    Abstract: The paper proposes a supervised machine learning algorithm to uncover treatment effect heterogeneity in classical regression discontinuity (RD) designs. Extending Athey and Imbens (2016), I develop a criterion for building an honest "regression discontinuity tree", where each leaf of the tree contains the RD estimate of a treatment (assigned by a common cutoff rule) conditional on the values of some pre-treatment covariates. It is a priori unknown which covariates are relevant for capturing treatment effect heterogeneity, and it is the task of the algorithm to discover them, without invalidating inference. I study the performance of the method through Monte Carlo simulations and apply it to the data set compiled by Pop-Eleches and Urquiola (2013) to uncover various sources of heterogeneity in the impact of attending a better secondary school in Romania.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.11640&r=
  26. By: Bodha Hannadige, Sium; Gao, Jiti; Silvapulle, Mervyn; Silvapulle, Param
    Abstract: We develop a method for constructing prediction intervals for a nonstationary variable, such as GDP. The method uses a factor augmented regression [FAR] model. The predictors in the model include a small number of factors generated to extract most of the information in a panel of a large number of macroeconomic variables considered to be potential predictors. The novelty of this paper is that it provides a method and justification for using a mixture of stationary and nonstationary factors as predictors in the FAR model; we refer to this as the mixture-FAR method. This method is important because such large panel data sets, for example FRED-MD, are likely to contain a mixture of stationary and nonstationary variables. In our simulation study, the proposed mixture-FAR method performed better than its competitor that requires all the predictors to be nonstationary; the mean squared error of prediction was at least 33% lower for mixture-FAR. Using the FRED-QD data for the US, we evaluated these methods for forecasting the nonstationary variables GDP and Industrial Production, and again the mixture-FAR method performed better than its competitors.
    Keywords: Gross domestic product; high dimensional data; industrial production; macroeconomic forecasting; panel data
    JEL: C13 C3 C32 C33
    Date: 2021–01–30
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:108669&r=
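    A schematic factor-augmented regression with a mixture of stationary and nonstationary factors, in the spirit of the abstract: factors are extracted from a simulated panel by principal components and used as lagged predictors in Python. How to justify principal components when some factors are nonstationary is exactly the paper's contribution; the sketch simply assumes it works and uses illustrative numbers.

        import numpy as np

        rng = np.random.default_rng(8)

        # Panel generated by one nonstationary and one stationary factor.
        T, Np = 300, 50
        f_ns = rng.normal(size=T).cumsum()             # random-walk factor
        f_st = rng.normal(size=T)                      # stationary factor
        F = np.column_stack([f_ns, f_st])
        X = F @ rng.normal(size=(2, Np)) + rng.normal(size=(T, Np))
        y = 0.8 * f_ns + 1.5 * f_st + rng.normal(size=T)   # target series

        # Extract two factors by principal components on the standardized panel.
        Xs = (X - X.mean(0)) / X.std(0)
        U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
        F_hat = U[:, :2] * s[:2]

        # Factor-augmented regression for a one-step-ahead prediction.
        Z = np.column_stack([np.ones(T - 1), F_hat[:-1]])
        b = np.linalg.lstsq(Z, y[1:], rcond=None)[0]
        y_next = np.concatenate([[1.0], F_hat[-1]]) @ b
        print("one-step-ahead prediction:", y_next)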
  27. By: Clémençon, Stéphan (Télécom Paris, France); Jalalzai, Hamid (Télécom Paris, France); Sabourin, Anne (Télécom Paris, France); Segers, Johan (Université catholique de Louvain, LIDAM/ISBA, Belgium)
    Abstract: The angular measure on the unit sphere characterizes the first-order dependence structure of the components of a random vector in extreme regions and is defined in terms of standardized margins. Its statistical recovery is an important step in learning problems involving observations far away from the center. In the common situation when the components of the vector have different distributions, the rank transformation offers a convenient and robust way of standardizing data in order to build an empirical version of the angular measure based on the most extreme observations. However, the study of the sampling distribution of the resulting empirical angular measure is challenging. It is the purpose of the paper to establish finite-sample bounds for the maximal deviations between the empirical and true angular measures, uniformly over classes of Borel sets of controlled combinatorial complexity. The bounds are valid with high probability and scale essentially as the square root of the effective sample size, up to a logarithmic factor. Discarding the most extreme observations yields a truncated version of the empirical angular measure for which the logarithmic factor in the concentration bound is replaced by a factor depending on the truncation level. The bounds are applied to provide performance guarantees for two statistical learning procedures tailored to extreme regions of the input space and built upon the empirical angular measure: binary classification in extreme regions through empirical risk minimization and unsupervised anomaly detection through minimum-volume sets of the sphere.
    Keywords: angular measure ; classification ; concentration inequality ; extreme value statistics ; minimum-volume sets ; ranks
    Date: 2021–01–01
    URL: http://d.repec.org/n?u=RePEc:aiz:louvad:2021023&r=
  28. By: Marzio Di Vece; Diego Garlaschelli; Tiziano Squartini
    Abstract: The World Trade Web (WTW) is the network of international trade relationships among world countries. Characterizing both the local link weights (observed trade volumes) and the global network structure (large-scale topology) of the WTW via a single model is still an open issue. While the traditional Gravity Model (GM) successfully replicates the observed trade volumes by employing macroeconomic properties such as GDP and geographic distance, it, unfortunately, predicts a fully connected network, thus returning a completely unrealistic topology of the WTW. To overcome this problem, two different classes of models have been introduced in econometrics and statistical physics. Econometric approaches interpret the traditional GM as the expected value of a probability distribution that can be chosen arbitrarily and tested against alternative distributions. Statistical physics approaches construct maximum-entropy probability distributions of (weighted) graphs from a chosen set of measurable structural constraints and test distributions resulting from different constraints. Here we compare and integrate the two approaches by considering a class of maximum-entropy models that can incorporate macroeconomic properties used in standard econometric models. We find that the integrated approach achieves a better performance than the purely econometric one. These results suggest that the maximum-entropy construction can serve as a viable econometric framework wherein extensive and intensive margins can be separately controlled for, by combining topological constraints and dyadic macroeconomic variables.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.02650&r=
  29. By: Denuit, Michel (Université catholique de Louvain, LIDAM/ISBA, Belgium); Trufin, Julien (ULB); Verdebout, Thomas (ULB)
    Abstract: Modern data science tools are effective at producing predictions that strongly correlate with responses. Model comparison can therefore be based on the strength of dependence between responses and their predictions. Positive expectation dependence turns out to be attractive in that respect. The present paper proposes an effective testing procedure for this dependence concept and applies it to model selection. A simulation study is performed to evaluate the performance of the proposed testing procedure. Empirical illustrations using insurance loss data demonstrate the relevance of the approach for model selection in supervised learning. The most positively expectation dependent predictor can then be autocalibrated to obtain its balance-corrected version, which appears to be optimal with respect to Bregman, or forecast, dominance.
    Keywords: Expectation dependence ; concentration curve ; Lorenz curve ; autocalibration ; convex order ; balance correction
    Date: 2021–01–01
    URL: http://d.repec.org/n?u=RePEc:aiz:louvad:2021021&r=
  30. By: Casoli, Chiara; Lucchetti, Riccardo (Jack)
    Abstract: In this article, we propose a cointegration-based Permanent-Transitory decomposition for non-stationary Dynamic Factor Models. Our methodology exploits the cointegration relations among the observable variables and assumes they are driven by a common and an idiosyncratic component. The common component is further split into a long-term non-stationary part and a short-term stationary one. A Monte Carlo experiment shows that taking the cointegration structure into account in the DFM leads to a much better reconstruction of the space spanned by the factors, compared to the more standard technique of applying a factor model to differenced data. Finally, an application of our procedure to a set of different commodity prices allows us to analyse the comovement among different markets. We find that commodity prices move together due to long-term common forces, and that the trend for most primary good prices is declining, whereas metal and energy prices exhibit an upward, or at least stable, pattern since the 2000s.
    Keywords: Demand and Price Analysis
    Date: 2021–07–13
    URL: http://d.repec.org/n?u=RePEc:ags:feemwp:312367&r=
  31. By: Fabrizio Cipollini; Giampiero M. Gallo
    Abstract: Several observable phenomena represent market activity: volumes, numbers of trades, durations between trades or quotes, and volatility (however measured) all share the feature of being positive-valued time series. When these series are modeled, the persistence in their behavior and their reaction to new information suggest adopting an autoregressive-type framework. The Multiplicative Error Model (MEM) was born as an extension of the popular GARCH approach for modeling and forecasting the conditional volatility of asset returns: it multiplicatively combines the conditional expectation of a process (deterministically dependent on the information set of the previous period) with a random disturbance representing unpredictable news. MEMs have proved to achieve their task of producing good-performing forecasts parsimoniously. In this paper we discuss various aspects of model specification and inference, both for the univariate and the multivariate case. The applications are illustrative examples of how the presence of a slow-moving low-frequency component can improve the properties of the estimated models.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.05923&r=
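    The basic univariate MEM described above multiplies a GARCH-like conditional mean recursion with a mean-one positive error, and it can be estimated by exponential quasi-maximum likelihood regardless of the true error distribution. A minimal Python sketch with illustrative parameter values (not taken from the paper):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(9)

        # Simulate a MEM: x_t = mu_t * eta_t with mu_t = omega + alpha*x_{t-1} + beta*mu_{t-1}.
        T, omega0, alpha0, beta0 = 2000, 0.1, 0.2, 0.7
        x, mu = np.zeros(T), np.zeros(T)
        mu[0] = omega0 / (1 - alpha0 - beta0)
        x[0] = mu[0]
        for t in range(1, T):
            mu[t] = omega0 + alpha0 * x[t - 1] + beta0 * mu[t - 1]
            x[t] = mu[t] * rng.gamma(shape=4.0, scale=0.25)   # mean-one positive error

        def neg_exp_qml(theta, x):
            """Exponential quasi-likelihood, consistent for any mean-one error."""
            omega, alpha, beta = theta
            m = np.empty_like(x)
            m[0] = x.mean()
            for t in range(1, len(x)):
                m[t] = omega + alpha * x[t - 1] + beta * m[t - 1]
            return np.sum(np.log(m) + x / m)

        res = minimize(neg_exp_qml, x0=[0.05, 0.1, 0.8], args=(x,),
                       bounds=[(1e-6, None), (0.0, 1.0), (0.0, 1.0)], method="L-BFGS-B")
        print("omega, alpha, beta:", res.x)             # roughly (0.1, 0.2, 0.7)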
  32. By: Levon Barseghyan (Institute for Fiscal Studies); Francesca Molinari (Institute for Fiscal Studies and Cornell University); Matthew Thirkettle (Institute for Fiscal Studies)
    Abstract: This paper is concerned with learning decision makers' preferences using data on observed choices from a finite set of risky alternatives. We propose a discrete choice model with unobserved heterogeneity in consideration sets and in standard risk aversion. We obtain sufficient conditions for the model's semi-nonparametric point identification, including in cases where consideration depends on preferences and on some of the exogenous variables. Our method yields an estimator that is easy to compute and is applicable in markets with large choice sets. We illustrate its properties using a dataset on property insurance purchases.
    Date: 2020–06–24
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:28/20&r=
  33. By: Rubin, Mark (The University of Newcastle, Australia)
    Abstract: Scientists often adjust their significance threshold (alpha level) during null hypothesis significance testing in order to take into account multiple testing and multiple comparisons. This alpha adjustment has become particularly relevant in the context of the replication crisis in science. The present article considers the conditions in which this alpha adjustment is appropriate and the conditions in which it is inappropriate. A distinction is drawn between three types of multiple testing: disjunction testing, conjunction testing, and individual testing. It is argued that alpha adjustment is only appropriate in the case of disjunction testing, in which at least one test result must be significant in order to reject the associated joint null hypothesis. Alpha adjustment is inappropriate in the case of conjunction testing, in which all relevant results must be significant in order to reject the joint null hypothesis. Alpha adjustment is also inappropriate in the case of individual testing, in which each individual result must be significant in order to reject each associated individual null hypothesis. The conditions under which each of these three types of multiple testing is warranted are examined. It is concluded that researchers should not automatically (mindlessly) assume that alpha adjustment is necessary during multiple testing. Illustrations are provided in relation to joint studywise hypotheses and joint multiway ANOVAwise hypotheses.
    Date: 2021–07–06
    URL: http://d.repec.org/n?u=RePEc:osf:metaar:tj6pm&r=
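    The three testing situations distinguished in the abstract differ only in the decision rule applied to the same p-values, which makes the argument easy to state in code (Bonferroni is used as the example adjustment; the numbers are made up):

        # Disjunction vs. conjunction vs. individual testing with m = 3 p-values.
        p = [0.030, 0.041, 0.012]
        alpha, m = 0.05, len(p)

        # Disjunction testing: the joint null is rejected if AT LEAST ONE test is
        # significant, so alpha is adjusted (here Bonferroni) to control the
        # familywise error rate.
        reject_disjunction = any(pi <= alpha / m for pi in p)

        # Conjunction testing: ALL tests must be significant, so no adjustment.
        reject_conjunction = all(pi <= alpha for pi in p)

        # Individual testing: each result answers its own hypothesis, no adjustment.
        reject_individual = [pi <= alpha for pi in p]

        print(reject_disjunction, reject_conjunction, reject_individual)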
  34. By: Mark Kamstra (Schulich School of Business, York University); Ruoyao Shi (Department of Economics, University of California Riverside)
    Abstract: We clear up an ambiguity in Gibbons, Ross and Shanken (1989, GRS hereafter) by providing the correct formula of the GRS test statistic and proving its exact F distribution in the general multiple portfolio case. We generalize the Sharpe ratio based interpretation of the GRS test to the multiple portfolio case, which we argue paradoxically makes experts in asset pricing studies more susceptible to an incorrect formula. We theoretically and empirically illustrate the consequences of using the incorrect formula -- over-rejecting and mis-ranking asset pricing models.
    Keywords: GRS test, asset pricing, CAPM, multivariate test, portfolio efficiency, Sharpe ratio, over-rejection, model ranking
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:202111&r=
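    For reference, one common statement of the exact GRS F test, coded in Python directly from the usual textbook formula (this is the standard expression, not necessarily the corrected multiple-portfolio formula the paper derives; the inputs below are made-up numbers):

        import numpy as np

        def grs_stat(alpha_hat, Sigma_hat, mu_f, Omega_hat, T, N, K):
            """W = (T-N-K)/N * a' S^{-1} a / (1 + m' O^{-1} m) ~ F(N, T-N-K) under H0,
            with Sigma_hat, Omega_hat the divide-by-T residual and factor covariances."""
            quad_a = alpha_hat @ np.linalg.solve(Sigma_hat, alpha_hat)
            quad_f = mu_f @ np.linalg.solve(Omega_hat, mu_f)
            return (T - N - K) / N * quad_a / (1 + quad_f)

        # Toy usage: N = 2 test assets, K = 1 factor, T = 120 months.
        W = grs_stat(alpha_hat=np.array([0.10, -0.05]),
                     Sigma_hat=np.array([[1.0, 0.2], [0.2, 1.5]]),
                     mu_f=np.array([0.5]), Omega_hat=np.array([[2.0]]),
                     T=120, N=2, K=1)
        print("GRS statistic:", W)   # compare with the F(N, T-N-K) critical value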
  35. By: Benoit Chèze (IFPEN - IFP Energies nouvelles - IFPEN - IFP Energies nouvelles, EconomiX-CNRS, University of Paris); Charles Collet (CIRED-CNRS); Anthony Paris (EconomiX-CNRS, University of Paris, LEO - Laboratoire d'Économie d'Orleans - UO - Université d'Orléans - Université de Tours - CNRS - Centre National de la Recherche Scientifique)
    Abstract: This working paper overviews the theoretical foundations of, and the estimators derived from, econometric models used to analyze stated choices in Discrete Choice Experiment (DCE) surveys. Discrete Choice Modelling is adapted to the case where the variable to be explained is a qualitative variable whose categories cannot be ranked relative to one another. Such situations occur frequently in everyday life, since people often have to choose a single alternative from a proposed set of options (early in the morning, just think about how you pick your clothes, for instance). DCE is a Stated Preference method in which preferences are elicited through repeated fictional choices proposed to a sample of respondents. Compared to Revealed Preference methods, DCEs allow for an ex ante evaluation of public policies that do not yet exist.
    Keywords: Revealed preference theory, Stated Preference / Stated Choice methods, Discrete Choice Modelling, Discrete Choice Experiment
    Date: 2021–04–12
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03262187&r=

This nep-ecm issue is ©2021 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.