nep-ecm New Economics Papers
on Econometrics
Issue of 2021‒01‒25
twenty-six papers chosen by
Sune Karlsson
Örebro universitet

  1. Robust Estimation of Probit Models with Endogeneity By Andrea A. Naghi; Máté Váradi; Mikhail Zhelonkin
  2. Fast and accurate variational inference for large Bayesian VARs with stochastic volatility By Joshua C.C. Chan; Xuewen Yu
  3. The Causal Learning of Retail Delinquency By Yiyan Huang; Cheuk Hang Leung; Xing Yan; Qi Wu; Nanbo Peng; Dongdong Wang; Zhixiang Huang
  4. Time-Varying Mixture Copula Models with Copula Selection By Bingduo Yang; Zongwu Cai; Christian M. Hafner; Guannan Liu
  5. Full-Information Estimation of Heterogeneous Agent Models Using Macro and Micro Data By Laura Liu; Mikkel Plagborg-Møller
  6. Polynomial chaos expansion: Efficient evaluation and estimation of computational models By Daniel Fehrle; Christopher Heiberger; Johannes Huber
  7. Empirical Decomposition of the IV-OLS Gap with Heterogeneous and Nonlinear Effects By Shoya Ishimaru
  8. The Variational Method of Moments By Andrew Bennett; Nathan Kallus
  9. Solving the Price Puzzle Via A Functional Coefficient Factor-Augmented VAR Model By Zongwu Cai; Xiyuan Liu
  10. A Multivariate GARCH-Jump Mixture Model By Li, Chenxing; Maheu, John M
  11. Partial Identification in Nonseparable Binary Response Models with Endogenous Regressors By Jiaying Gu; Thomas M. Russell
  12. Identifying the Latent Space Geometry of Network Models through Analysis of Curvature By Shane Lubold; Arun G. Chandrasekhar; Tyler H. McCormick
  13. Simple and Credible Value-Added Estimation Using Centralized School Assignment By Joshua Angrist; Peter Hull; Parag A. Pathak; Christopher R. Walters
  14. A Two-step System for Hierarchical Bayesian Dynamic Panel Data to deal with Endogeneity Issues, Structural Model Uncertainty, and Causal Relationship By Pacifico, Antonio
  15. Regression Discontinuity Design with Many Thresholds By Marinho Bertanha
  16. Spatial and Spatio-temporal Error Correction, Networks and Common Correlated Effects By Arnab Bhattacharjee; Jan Ditzen; Sean Holly
  17. Structural Panel Bayesian VAR with Multivariate Time-varying Volatility to jointly deal with Structural Changes, Policy Regime Shifts, and Endogeneity Issues By Pacifico, Antonio
  18. Now- and Backcasting Initial Claims with High-Dimensional Daily Internet Search-Volume Data By Daniel Borup; David E. Rapach; Erik Christian Montes Schütte
  19. Weak versus strong dominance of shrinkage estimators By Giuseppe De Luca; Jan R. Magnus
  20. Forecasting in a changing world: from the great recession to the COVID-19 pandemic By Mariia Artemova; Francisco Blasques; Siem Jan Koopman; Zhaokun Zhang
  21. Dynamic Ordering Learning in Multivariate Forecasting By Bruno P. C. Levy; Hedibert F. Lopes
  22. Deep Portfolio Optimization via Distributional Prediction of Residual Factors By Kentaro Imajo; Kentaro Minami; Katsuya Ito; Kei Nakagawa
  23. Estimation of Tempered Stable Lévy Models of Infinite Variation By José E. Figueroa-López; Ruoting Gong; Yuchen Han
  24. Estimation of threshold distributions for market participation By Mattia Guerini; Patrick Musso; Lionel Nesta
  25. Oil and Fiscal Policy Regimes By Hilde Christiane Bjørnland; Roberto Casarin; Marco Lorusso; Francesco Ravazzolo
  26. Better Bunching, Nicer Notching By Marinho Bertanha; Andrew H. McCallum; Nathan Seegert

  1. By: Andrea A. Naghi (Erasmus University Rotterdam); Máté Váradi (Erasmus University Rotterdam); Mikhail Zhelonkin (Erasmus University Rotterdam)
    Abstract: Probit models with endogenous regressors are commonly used in economics and other social sciences. Yet, the robustness properties of parametric estimators in these models have not been formally studied. In this paper, we derive the influence functions of the endogenous probit model’s classical estimators (the maximum likelihood and the two-step estimator) and prove their non-robustness to small but harmful deviations from distributional assumptions. We propose a procedure to obtain a robust alternative estimator, prove its asymptotic normality and provide its asymptotic variance. A simple robust test for endogeneity is also constructed. We compare the performance of the robust and classical estimators in Monte Carlo simulations with different types of contamination scenarios. The use of our estimator is illustrated in several empirical applications.
    Keywords: Binary outcomes, Probit model, Endogenous variable, Instrumental variable, Robust Estimation
    JEL: C26 C13 C18
    Date: 2021–01–14
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20210004&r=all
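The fragility that motivates the paper is easy to demonstrate: a handful of contaminated observations can shift the classical probit MLE noticeably. A minimal sketch of that non-robustness (illustrative data and contamination scheme, not the authors' robust estimator):
```python
# Toy illustration of probit MLE sensitivity to contamination.
# This is NOT the paper's robust estimator; it only shows the fragility
# of the classical MLE that motivates it.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
beta0, beta1 = 0.5, 1.0
y = (beta0 + beta1 * x + rng.normal(size=n) > 0).astype(int)

X = sm.add_constant(x)
clean_fit = sm.Probit(y, X).fit(disp=0)

# Contaminate 1% of the sample: flip outcomes at extreme covariate values.
y_cont = y.copy()
idx = np.argsort(np.abs(x))[-n // 100:]
y_cont[idx] = 1 - y_cont[idx]
cont_fit = sm.Probit(y_cont, X).fit(disp=0)

print("clean MLE:       ", clean_fit.params)
print("contaminated MLE:", cont_fit.params)  # slope is pulled toward zero
```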
  2. By: Joshua C.C. Chan; Xuewen Yu
    Abstract: We propose a new variational approximation of the joint posterior distribution of the log-volatility in the context of large Bayesian VARs. In contrast to existing approaches that are based on local approximations, the new proposal provides a global approximation that takes into account the entire support of the joint distribution. In a Monte Carlo study we show that the new global approximation is over an order of magnitude more accurate than existing alternatives. We illustrate the proposed methodology with an application of a 96-variable VAR with stochastic volatility to measure global bank network connectedness. Our measure is able to detect the drastic increase in global bank network connectedness much earlier than rolling-window estimates from a homoscedastic VAR.
    Keywords: large vector autoregression, stochastic volatility, Variational Bayes, volatility network, connectedness
    JEL: C11 C32 C55 G21
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2020-108&r=all
  3. By: Yiyan Huang; Cheuk Hang Leung; Xing Yan; Qi Wu; Nanbo Peng; Dongdong Wang; Zhixiang Huang
    Abstract: This paper focuses on the expected difference in borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects, so the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare how accurately the classical and the proposed estimators recover the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction of estimation error is strikingly substantial when the causal effects are accounted for correctly.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.09448&r=all
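The abstract does not spell out the estimators, but the confounding problem it describes, and one standard propensity-based correction, can be sketched with simulated data (the IPW estimator below is a generic stand-in, not the paper's proposed approach):
```python
# Confounded treatment effect: naive comparison vs. inverse-probability
# weighting (IPW). A generic illustration of the confounding problem the
# abstract describes, not the paper's proposed estimators.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
c = rng.normal(size=n)                      # confounder (e.g., credit quality)
p = 1 / (1 + np.exp(-2 * c))                # lenders favor better risks
t = rng.binomial(1, p)                      # credit decision
tau = 1.0                                   # true causal effect
y = tau * t + 2 * c + rng.normal(size=n)    # repayment outcome

naive = y[t == 1].mean() - y[t == 0].mean() # badly biased upward

ps = LogisticRegression().fit(c.reshape(-1, 1), t).predict_proba(c.reshape(-1, 1))[:, 1]
ipw = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))

print(f"true effect {tau:.2f}  naive {naive:.2f}  IPW {ipw:.2f}")
```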
  4. By: Bingduo Yang (Lingnan (University) College, Sun Yat-Sen University, Guangzhou, Guangdong 510275, China); Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Christian M. Hafner (Department of Economics, Tulane University, New Orleans, LA 70118, USA); Guannan Liu (School of Economics and WISE, Xiamen University, Xiamen, Fujian 361005, China)
    Abstract: Modeling the joint tails of multiple financial time series has many important implications for risk management. Classical models for dependence often encounter a lack of fit in the joint tails, calling for additional flexibility. This paper introduces a new semiparametric time-varying mixture copula model, in which both weights and dependence parameters are deterministic and unspecified functions of time. We propose penalized time-varying mixture copula models with group smoothly clipped absolute deviation penalty functions to do the estimation and copula selection simultaneously. Monte Carlo simulation results suggest that the shrinkage estimation procedure performs well in selecting and estimating both constant and time-varying mixture copula models. Using the proposed model and method, we analyze the evolution of the dependence among four international stock markets, and find substantial changes in the levels and patterns of the dependence, in particular around crisis periods.
    Keywords: Copula Selection; EM Algorithm; Mixture Copula; SCAD; Time-Varying Distribution.
    JEL: C14 C22
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202105&r=all
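The core selection mechanism, a SCAD penalty on mixture weights that sets negligible components exactly to zero, can be sketched for a constant-weight two-copula mixture (copula densities and tuning constants are standard textbook forms; the paper's group-penalized, time-varying estimator is richer):
```python
# SCAD-penalized weight selection in a two-component copula mixture.
# Data come from a Gaussian copula; the penalty should drive the weight on
# the superfluous Clayton component to zero. A toy, constant-weight version
# of the selection idea with dependence parameters held fixed.
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(2)
n, rho, theta = 2000, 0.5, 2.0

# Sample (u, v) from a Gaussian copula with correlation rho.
z = multivariate_normal(cov=[[1, rho], [rho, 1]]).rvs(n, random_state=2)
u, v = norm.cdf(z[:, 0]), norm.cdf(z[:, 1])

def gauss_copula_pdf(u, v, rho):
    x, y = norm.ppf(u), norm.ppf(v)
    return np.exp((2 * rho * x * y - rho**2 * (x**2 + y**2))
                  / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)

def clayton_pdf(u, v, th):
    return (1 + th) * (u * v) ** (-th - 1) * \
           (u ** -th + v ** -th - 1) ** (-1 / th - 2)

def scad(w, lam, a=3.7):  # SCAD penalty (Fan and Li, 2001)
    w = abs(w)
    return np.where(w <= lam, lam * w,
           np.where(w <= a * lam,
                    (2 * a * lam * w - w**2 - lam**2) / (2 * (a - 1)),
                    lam**2 * (a + 1) / 2))

cg, cc = gauss_copula_pdf(u, v, rho), clayton_pdf(u, v, theta)
lam = 0.1
grid = np.linspace(0.001, 0.999, 999)
pen_ll = [np.log(w * cg + (1 - w) * cc).mean() - scad(w, lam) - scad(1 - w, lam)
          for w in grid]
print("selected Gaussian weight:", grid[int(np.argmax(pen_ll))])  # ~1: Clayton dropped
```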
  5. By: Laura Liu; Mikkel Plagborg-Møller
    Abstract: We develop a generally applicable full-information inference method for heterogeneous agent models, combining aggregate time series data and repeated cross sections of micro data. To handle unobserved aggregate state variables that affect cross-sectional distributions, we compute a numerically unbiased estimate of the model-implied likelihood function. Employing the likelihood estimate in a Markov Chain Monte Carlo algorithm, we obtain fully efficient and valid Bayesian inference. Evaluation of the micro part of the likelihood lends itself naturally to parallel computing. Numerical illustrations in models with heterogeneous households or firms demonstrate that the proposed full-information method substantially sharpens inference relative to using only macro data, and for some parameters micro data is essential for identification.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.04771&r=all
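The numerically unbiased likelihood estimate embedded in MCMC is in the spirit of particle MCMC; its building block is a particle filter whose likelihood estimate is unbiased in levels. A toy sketch on a linear-Gaussian state-space model (the paper's heterogeneous agent models are far richer):
```python
# Bootstrap particle filter: returns an estimate of the likelihood that is
# unbiased in levels (not in logs), the key ingredient for embedding an
# intractable model in a Metropolis-Hastings sampler.
import numpy as np

def pf_loglik(y, phi, sig_x, sig_y, n_part=500, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0, sig_x / np.sqrt(1 - phi**2), n_part)   # stationary init
    ll = 0.0
    for yt in y:
        x = phi * x + sig_x * rng.normal(size=n_part)        # propagate
        logw = -0.5 * ((yt - x) / sig_y) ** 2 - np.log(sig_y)  # measurement density (up to const)
        ll += np.log(np.mean(np.exp(logw)))                  # incremental likelihood
        w = np.exp(logw - logw.max())
        x = rng.choice(x, size=n_part, p=w / w.sum())        # multinomial resampling
    return ll - 0.5 * len(y) * np.log(2 * np.pi)             # restore dropped constant

# Simulate data from x_t = 0.9 x_{t-1} + eta_t, y_t = x_t + eps_t.
rng = np.random.default_rng(1)
T, phi, sx, sy = 200, 0.9, 0.5, 1.0
x, ys = 0.0, []
for _ in range(T):
    x = phi * x + sx * rng.normal()
    ys.append(x + sy * rng.normal())
print("PF log-likelihood estimate:", pf_loglik(np.array(ys), phi, sx, sy))
```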
  6. By: Daniel Fehrle (University of Augsburg, Department of Economics); Christopher Heiberger (University of Augsburg, Department of Economics); Johannes Huber (University of Augsburg, Department of Economics)
    Abstract: Polynomial chaos expansion (PCE) provides a method that enables the user to represent a quantity of interest (QoI) of a model's solution as a series expansion of uncertain model inputs, usually its parameters. Among the QoIs are the policy function, the second moments of observables, and the posterior kernel. Hence, PCE sidesteps the repeated and time-consuming evaluations of the model's outcomes. The paper discusses the suitability of PCE for computational economics. We therefore introduce the theory behind PCE, analyze its convergence behavior for different elements of the solution of the standard real business cycle model as an illustrative example, and assess its accuracy when standard empirical methods are applied. The results are promising, both in terms of accuracy and efficiency.
    Keywords: Polynomial Chaos Expansion, parameter inference, parameter uncertainty, solution methods
    JEL: C11 C13 C32 C63
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:aug:augsbe:0341&r=all
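The mechanics of PCE can be seen on a scalar toy problem: expand a quantity of interest in probabilists' Hermite polynomials of a Gaussian input, fit the coefficients by least squares, and read moments directly off the coefficients. A sketch under illustrative choices (the paper treats full business cycle model solutions):
```python
# Polynomial chaos expansion (PCE) of a scalar quantity of interest (QoI)
# g(theta) with theta = mu + sigma*xi, xi ~ N(0,1). The HermiteE basis is
# orthogonal under the standard normal, so moments of the QoI follow from
# the coefficients: E[g] = c_0, Var[g] = sum_{n>=1} c_n^2 * n!.
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

mu, sigma, deg = 1.0, 0.2, 8
g = lambda theta: np.exp(theta)       # stand-in for an expensive model output

rng = np.random.default_rng(0)
xi = rng.normal(size=2000)            # training nodes
Phi = hermevander(xi, deg)            # He_0 .. He_deg evaluated at xi
c, *_ = np.linalg.lstsq(Phi, g(mu + sigma * xi), rcond=None)

# Cheap surrogate evaluation vs. the true model:
xtest = rng.normal(size=5)
print("surrogate:", hermevander(xtest, deg) @ c)
print("direct   :", g(mu + sigma * xtest))

mean_pce = c[0]
var_pce = sum(c[n] ** 2 * factorial(n) for n in range(1, deg + 1))
print("PCE mean/var:", mean_pce, var_pce)
print("exact lognormal mean:", np.exp(mu + sigma**2 / 2))
```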
  7. By: Shoya Ishimaru
    Abstract: This study proposes an econometric framework to interpret and empirically decompose the difference between IV and OLS estimates given by the linear regression equation when the true causal effects of the treatment are nonlinear in treatment levels and heterogeneous across covariates. I show that the IV-OLS coefficient gap consists of three estimable components: the difference in weights on the covariates, the difference in weights on the treatment levels, and the difference in identified marginal effects associated with endogeneity bias. Applications of this framework to return-to-schooling estimates demonstrate the empirical relevance of this distinction in properly interpreting the IV-OLS coefficient gap.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.04346&r=all
  8. By: Andrew Bennett; Nathan Kallus
    Abstract: The conditional moment problem is a powerful formulation for describing structural causal parameters in terms of observables, a prominent example being instrumental variable regression. A standard approach is to reduce the problem to a finite set of marginal moment conditions and apply the optimally weighted generalized method of moments (OWGMM), but this requires that we know a finite set of identifying moments, can still be inefficient even when identification holds, or can be unwieldy and impractical if we use a growing sieve of moments. Motivated by a variational minimax reformulation of OWGMM, we define a very general class of estimators for the conditional moment problem, which we term the variational method of moments (VMM) and which naturally enables controlling infinitely many moments. We provide a detailed theoretical analysis of multiple VMM estimators, including ones based on kernel methods and neural networks, and provide appropriate conditions under which these estimators are consistent, asymptotically normal, and semiparametrically efficient in the full conditional moment model. This is in contrast to other recently proposed methods for solving conditional moment problems based on adversarial machine learning, which do not incorporate optimal weighting, do not establish asymptotic normality, and are not semiparametrically efficient.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.09422&r=all
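For orientation, the OWGMM baseline that VMM generalizes reduces, in a linear IV model with a fixed instrument set, to familiar two-step efficient GMM. A compact sketch with simulated data (VMM itself replaces the finite moment list with a conditional-moment minimax problem and is not reproduced here):
```python
# Two-step optimally weighted GMM for a linear IV model
# y = x*beta + u with E[z u] = 0 and heteroskedastic errors.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 5000, 1.0
z = rng.normal(size=(n, 3))                       # three instruments
v = rng.normal(size=n)
x = z @ np.array([1.0, 0.5, 0.25]) + v
u = 0.8 * v + rng.normal(size=n) * (1 + 0.5 * np.abs(z[:, 0]))  # endogenous, heteroskedastic
y = x * beta + u

X = x.reshape(-1, 1)
def gmm_beta(W):
    A = X.T @ z @ W @ z.T @ X
    b = X.T @ z @ W @ z.T @ y
    return np.linalg.solve(A, b)[0]

b1 = gmm_beta(np.eye(3))                          # first step: identity weight
g = z * (y - x * b1)[:, None]                     # moment contributions
S = g.T @ g / n                                   # optimal weight is S^{-1}
b2 = gmm_beta(np.linalg.inv(S))
print(f"first step {b1:.4f}, efficient second step {b2:.4f} (truth {beta})")
```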
  9. By: Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Xiyuan Liu (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA)
    Abstract: Effects of monetary policy shocks on a large set of macroeconomic variables are identified by a new class of functional-coefficient factor-augmented vector autoregressive (FAVAR) models, which allows the coefficients of classical FAVAR models to vary with some observed variable. In the empirical study, we analyze the impulse response functions estimated by the newly proposed model and compare our results with those from classical FAVAR models. Our empirical finding is that the new model is able to eliminate the well-known price puzzle without adding new variables to the dataset.
    Keywords: Factor-augmented vector autoregressive; Functional coefficient models; Impulse response functions; Nonparametric estimation; Price puzzle
    JEL: C14 C32 E30 E31
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202106&r=all
  10. By: Li, Chenxing; Maheu, John M
    Abstract: This paper proposes a new parsimonious multivariate GARCH-jump (MGARCH-jump) mixture model with multivariate jumps that allows both jump sizes and jump arrivals to be correlated across assets. Dependent jumps affect the conditional moments of returns as well as the beta dynamics of a stock. Applied to daily stock returns, the model identifies co-jumps well and shows that both jump arrivals and jump sizes are highly correlated. The jump model yields better predictions than a benchmark multivariate GARCH model.
    Keywords: Multivariate GARCH; Jumps; Multinomial; Co-jump; beta dynamics; Value-at-Risk
    JEL: C32 C53 C58 G1 G10
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:104770&r=all
  11. By: Jiaying Gu; Thomas M. Russell
    Abstract: This paper considers (partial) identification of a variety of parameters, including counterfactual choice probabilities, in a general class of binary response models with possibly endogenous regressors. Importantly, our framework allows for nonseparable index functions with multi-dimensional latent variables, and does not require parametric distributional assumptions. We demonstrate how various functional form, independence, and monotonicity assumptions can be imposed as constraints in our optimization procedure to tighten the identified set, and we show how these assumptions have meaningful interpretations in terms of restrictions on latent types. In the special case when the index function is linear in the latent variables, we leverage results in computational geometry to provide a tractable means of constructing the sharp set of constraints for our optimization problems. Finally, we apply our method to study the effects of health insurance on the decision to seek medical treatment.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.01254&r=all
  12. By: Shane Lubold; Arun G. Chandrasekhar; Tyler H. McCormick
    Abstract: Statistically modeling networks, across numerous disciplines and contexts, is fundamentally challenging because of (often high-order) dependence between connections. A common approach assigns each person in the graph to a position on a low-dimensional manifold. Distance between individuals in this (latent) space is inversely proportional to the likelihood of forming a connection. The choice of the latent geometry (the manifold class, dimension, and curvature) has consequential impacts on the substantive conclusions of the model. More positive curvature in the manifold, for example, encourages more and tighter communities; negative curvature induces repulsion among nodes. Currently, however, the choice of the latent geometry is an a priori modeling assumption and there is limited guidance about how to make these choices in a data-driven way. In this work, we present a method to consistently estimate the manifold type, dimension, and curvature from an empirically relevant class of latent spaces: simply connected, complete Riemannian manifolds of constant curvature. Our core insight comes by representing the graph as a noisy distance matrix based on the ties between cliques. Leveraging results from statistical geometry, we develop hypothesis tests to determine whether the observed distances could plausibly be embedded isometrically in each of the candidate geometries. We explore the accuracy of our approach with simulations and then apply our approach to data-sets from economics and sociology as well as neuroscience.
    JEL: C01 C12 C4 C52 C6 D85 L14
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:28273&r=all
  13. By: Joshua Angrist; Peter Hull; Parag A. Pathak; Christopher R. Walters
    Abstract: Many large urban school districts match students to schools using algorithms that incorporate an element of random assignment. We introduce two simple empirical strategies to harness this randomization for value-added models (VAMs) measuring the causal effects of individual schools. The first estimator controls for the probability of being offered admission to different schools, treating the take-up decision as independent of potential outcomes. Randomness in school assignments is used to test this key conditional independence assumption. The second estimator uses randomness in offers to generate instrumental variables (IVs) for school enrollment. This procedure uses a low-dimensional model of school quality mediators to solve the under-identification challenge arising from the fact that some schools are under-subscribed. Both approaches relax the assumptions of conventional value-added models while obviating the need for elaborate nonlinear estimators. In applications to data from Denver and New York City, we find that models controlling for both assignment risk and lagged achievement yield highly reliable VAM estimates. Estimates from models with fewer controls and older lagged score controls are improved markedly by IV.
    JEL: C11 C21 C26 C52 I21 I28 J24
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:28241&r=all
  14. By: Pacifico, Antonio
    Abstract: The paper develops a computational method implementing a standard Dynamic Panel Data model with Generalized Method of Moments (GMM) estimators to deal with endogeneity issues, structural model uncertainty, and causal relationships in large and long panel databases. The methodology, termed Two-step System Dynamic Panel Data, combines a first-step Bayesian procedure, which selects only the potential predictors in a static linear regression model, with a frequentist second-step procedure that estimates the parameters of a dynamic linear panel data model. An empirical application to the effects of obesity, socioeconomic variables, and individual-specific factors on labour market outcomes among Italian regions is presented. Potential prevention policies and strategies to address key behavioural and disease risk factors affecting labour market outcomes and the social environment are also discussed.
    Keywords: Bayesian Model Averaging; Dynamic Panel Data; Granger Causality; Labour Market Outcomes; Obesity.
    JEL: C1 D6 I1
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:104291&r=all
  15. By: Marinho Bertanha
    Abstract: Numerous empirical studies employ regression discontinuity designs with multiple cutoffs and heterogeneous treatments. A common practice is to normalize all the cutoffs to zero and estimate one effect. This procedure identifies the average treatment effect (ATE) on the observed distribution of individuals local to existing cutoffs. However, researchers often want to make inferences on more meaningful ATEs, computed over general counterfactual distributions of individuals, rather than simply the observed distribution of individuals local to existing cutoffs. This paper proposes a consistent and asymptotically normal estimator for such ATEs when heterogeneity follows a non-parametric function of cutoff characteristics in the sharp case. The proposed estimator converges at the minimax optimal rate of root-n for a specific choice of tuning parameters. Identification in the fuzzy case, with multiple cutoffs, is impossible unless heterogeneity follows a finite-dimensional function of cutoff characteristics. Under parametric heterogeneity, this paper proposes an ATE estimator for the fuzzy case that optimally combines observations to maximize its precision.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.01245&r=all
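The normalize-and-pool practice the paper takes as its starting point is simple to state in code: recenter each unit's running variable at its own cutoff and run one local linear regression. A sketch with simulated data and hypothetical cutoffs (the paper's estimators for counterfactual ATEs go well beyond this):
```python
# Normalize-and-pool estimator in a sharp RDD with two cutoffs. It recovers
# a weighted average of the cutoff-specific effects, i.e. the "ATE local to
# existing cutoffs" the abstract contrasts with more general counterfactual ATEs.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
cutoff = rng.choice([40.0, 60.0], size=n)        # each unit faces one threshold
r = rng.uniform(0, 100, size=n)                  # running variable
d = (r >= cutoff).astype(float)                  # sharp treatment
tau = np.where(cutoff == 40.0, 2.0, 4.0)         # heterogeneous effects
y = 0.05 * r + tau * d + rng.normal(size=n)

rt = r - cutoff                                  # normalize cutoffs to zero
h = 5.0                                          # bandwidth (ad hoc here)
m = np.abs(rt) < h
Z = np.column_stack([np.ones(m.sum()), rt[m], d[m], d[m] * rt[m]])
coef, *_ = np.linalg.lstsq(Z, y[m], rcond=None)
print("pooled local-linear RD estimate:", coef[2])   # between 2 and 4
```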
  16. By: Arnab Bhattacharjee (Heriot-Watt University and National Institute of Economic & Social Research, UK); Jan Ditzen (Free University of Bozen-Bolzano, Italy, and Center for Energy Economics Research and Policy (CEERP), Heriot-Watt University, Edinburgh, UK); Sean Holly (Faculty of Economics, University of Cambridge, UK)
    Abstract: We provide a way to represent spatial and temporal equilibria in terms of error correction models in a panel setting. This requires potentially two different processes for spatial or network dynamics, both of which can be expressed in terms of spatial weights matrices. The first captures strong cross-sectional dependence, so that a spatial difference, suitably defined, is weakly cross-section dependent (granular) but can be nonstationary. The second is a conventional weights matrix that captures short-run spatio-temporal dynamics as stationary and granular processes. In large samples, cross-section averages serve the first purpose and we propose the mean group, common correlated effects estimator together with multiple testing of cross-correlations to provide the short-run spatial weights. We apply this model to the 324 local authorities of England, and show that our approach is useful for modelling weak and strong cross-section dependence, together with partial adjustments to two long-run equilibrium relationships and short-run spatio-temporal dynamics, and provides exciting new insights.
    Keywords: Spatio-temporal dynamics; Error Correction Models; Weak and strong cross sectional dependence
    JEL: C21 C22 C23 R3
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:bzn:wpaper:bemps76&r=all
  17. By: Pacifico, Antonio
    Abstract: This paper improves a standard Structural Panel Bayesian Vector Autoregression model in order to jointly deal with issues of endogeneity, arising from omitted factors and unobserved heterogeneity, and volatility, arising from policy regime shifts and structural changes. Bayesian methods are used to select the best model solution for examining whether international spillovers come from multivariate volatility, time variation, or contemporaneous relationships. An empirical application to Central-Eastern and Western European economies is conducted to describe the performance of the methodology, with particular emphasis on the Great Recession and post-crisis periods. Findings from evidence-based forecasting are also presented to evaluate the impact of the ongoing pandemic crisis on the global economy.
    Keywords: Structural Panel VAR; Bayesian Methods; Multivariate Volatility; Policy Regime Shifts; Endogeneity Issues; Central-Eastern and Western Europe.
    JEL: C1 C5 E6
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:104292&r=all
  18. By: Daniel Borup (Aarhus University, CREATES and the Danish Finance Institute (DFI)); David E. Rapach (Washington University in St. Louis and Saint Louis University); Erik Christian Montes Schütte (Aarhus University, CREATES and the Danish Finance Institute (DFI))
    Abstract: We generate a sequence of now- and backcasts of weekly unemployment insurance initial claims (UI) based on a rich trove of daily Google Trends (GT) search-volume data for terms related to unemployment. To harness the information in a high-dimensional set of daily GT terms, we estimate predictive models using machine-learning techniques in a mixed-frequency framework. In a simulated out-of-sample exercise, now- and backcasts of weekly UI that incorporate the information in the daily GT terms substantially outperform models that ignore the information. The relevance of GT terms for predicting UI is strongly linked to the COVID-19 crisis.
    Keywords: Unemployment insurance, Internet search, Mixed-frequency data, Penalized regression, Neural network, Variable importance
    JEL: C45 C53 C55 E24 E27 J65
    Date: 2021–01–11
    URL: http://d.repec.org/n?u=RePEc:aah:create:2021-02&r=all
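The mixed-frequency setup can be emulated by stacking each week's seven daily observations per search term as separate regressors and fitting a penalized regression. A hedged sketch on synthetic data (the paper's term list, estimators, and backcasting timing differ):
```python
# Nowcasting a weekly series from daily predictors: stack the 7 daily values
# of each search term within the week as separate columns, then fit a Lasso.
# Synthetic data stand in for the daily Google Trends terms.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
weeks, n_terms = 300, 50
daily = rng.normal(size=(weeks, 7, n_terms))      # 7 days x terms per week
X = daily.reshape(weeks, 7 * n_terms)             # mixed-frequency stacking
beta = np.zeros(7 * n_terms)
beta[:5] = [0.8, 0.6, 0.4, 0.2, 0.1]              # few relevant day-term pairs
y = X @ beta + rng.normal(size=weeks)             # weekly target (toy)

train = slice(0, 250)
model = LassoCV(cv=5).fit(X[train], y[train])
pred = model.predict(X[250:])
print("out-of-sample RMSE:", np.sqrt(np.mean((pred - y[250:]) ** 2)))
print("nonzero coefficients:", np.sum(model.coef_ != 0))
```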
  19. By: Giuseppe De Luca (University of Palermo); Jan R. Magnus (Vrije Universiteit Amsterdam)
    Abstract: We consider the estimation of the mean of a multivariate normal distribution with known variance. Most studies consider the risk of competing estimators, that is, the trace of the mean squared error matrix. In contrast, we consider the whole mean squared error matrix, in particular its eigenvalues. We prove that there are only two distinct eigenvalues and apply our findings to the James–Stein and the Thompson classes of estimators. It turns out that the famous Stein paradox is no longer a paradox when we consider the whole mean squared error matrix rather than only its trace.
    Keywords: Shrinkage, Dominance, James-Stein
    JEL: C13 C51
    Date: 2021–01–14
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20210007&r=all
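The weak-versus-strong distinction is easy to explore numerically: estimate the full MSE matrix of the James–Stein estimator by Monte Carlo and inspect its eigenvalues against the MLE's identity MSE matrix. A small sketch (one illustrative mean vector; the paper's results are analytical):
```python
# Monte Carlo MSE matrix of the James-Stein estimator for X ~ N(theta, I_k).
# The MLE (X itself) has MSE matrix I_k. The paper shows the JS MSE matrix
# has exactly two distinct eigenvalues; comparing them to 1 separates weak
# (trace) dominance from strong (matrix) dominance.
import numpy as np

rng = np.random.default_rng(0)
k, reps = 10, 200_000
theta = np.zeros(k); theta[0] = 2.0             # one illustrative mean vector

X = theta + rng.normal(size=(reps, k))
shrink = 1 - (k - 2) / np.sum(X**2, axis=1)     # James-Stein factor
err = shrink[:, None] * X - theta
M = err.T @ err / reps                          # estimated MSE matrix

eig = np.sort(np.linalg.eigvalsh(M))
print("trace (JS risk):", np.trace(M), "vs MLE risk:", k)  # JS wins on trace
print("smallest/largest eigenvalues:", eig[0], eig[-1])    # largest can exceed 1
```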
  20. By: Mariia Artemova (Vrije Universiteit Amsterdam); Francisco Blasques (Vrije Universiteit Amsterdam); Siem Jan Koopman (Vrije Universiteit Amsterdam); Zhaokun Zhang (Shanghai University)
    Abstract: We develop a new targeted maximum likelihood estimation method that provides improved forecasting for misspecified linear autoregressive models. The method weighs data points in the observed sample and is useful in the presence of data generating processes featuring structural breaks, complex nonlinearities, or other time-varying properties which cannot be easily captured by model design. Additionally, the method reduces to classical maximum likelihood when the model is well specified, in which case all weights are uniformly equal to one. We show how the optimal weights can be set by means of a cross-validation procedure. In a set of Monte Carlo experiments we reveal that the estimation method can significantly improve the forecasting accuracy of autoregressive models. In an empirical study concerned with forecasting U.S. Industrial Production, we show that forecast accuracy during the Great Recession can be significantly improved by giving greater weight to observations associated with past recessions. We further establish the same empirical finding for the 2008-2009 global financial crisis, for different macroeconomic time series, and for the COVID-19 recession in 2020.
    Keywords: Autoregressive Models, Cross-Validation, Kullback-Leibler Divergence, Stationarity and Ergodicity, Macroeconomic Time Series
    JEL: C10 C22 C32 C51
    Date: 2021–01–14
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20210006&r=all
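The core device, maximum likelihood with observation weights that are uniform under correct specification but tilted toward informative episodes otherwise, is a small modification of a standard AR likelihood. A toy sketch (weights are hard-coded here; the paper selects them by cross-validation):
```python
# Weighted ("targeted") maximum likelihood for an AR(1): each observation's
# log-likelihood contribution is multiplied by a weight w_t; uniform weights
# recover classical MLE. Upweighting a crisis-like episode tilts the fit.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, brk = 400, 300
y = np.zeros(T)
for t in range(1, T):                            # AR coefficient breaks at t = brk
    y[t] = (0.9 if t < brk else 0.3) * y[t - 1] + rng.normal()

def neg_wll(params, w):
    phi, log_s = params
    e = y[1:] - phi * y[:-1]
    return np.sum(w * (0.5 * (e / np.exp(log_s)) ** 2 + log_s))

uniform = np.ones(T - 1)
targeted = uniform.copy()
targeted[brk - 1:] = 20.0                        # emphasize post-break data (ad hoc)

for w, label in [(uniform, "uniform"), (targeted, "targeted")]:
    phi_hat = minimize(neg_wll, x0=[0.5, 0.0], args=(w,)).x[0]
    print(f"{label} weights: phi-hat = {phi_hat:.3f}")  # targeted fit tracks the new regime
```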
  21. By: Bruno P. C. Levy; Hedibert F. Lopes
    Abstract: In many fields where the main goal is to produce sequential forecasts for decision-making problems, a good understanding of the contemporaneous relations among different series is crucial for estimating the covariance matrix. In recent years, the modified Cholesky decomposition has become a popular approach to covariance matrix estimation. However, its main drawback lies in the imposition of the series ordering structure. In this work, we propose a highly flexible and fast method to deal with the problem of ordering uncertainty in a dynamic fashion, using Dynamic Order Probabilities. We apply the proposed method to a dynamic portfolio allocation problem in which the investor is able to learn the contemporaneous relations among different currencies. We show that our approach generates not only significant statistical improvements but also large economic gains for a mean-variance investor, relative to the random walk benchmark and to fixed orderings over time.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.04164&r=all
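The ordering problem the authors attack is visible in the modified Cholesky decomposition itself: the sequential regressions, and hence the elements of the decomposition, change with the order of the series. A sketch of the decomposition under two orderings (the paper's Dynamic Order Probabilities, which average over orderings, are not reproduced):
```python
# Modified Cholesky decomposition of a covariance matrix: T S T' = D with
# T unit lower triangular. Row j of T holds the negated coefficients from
# regressing series j on series 1..j-1, so every element depends on the
# chosen ordering, the uncertainty the paper models dynamically.
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(500, 3)) @ np.array([[1, .8, .3], [0, 1, .5], [0, 0, 1.]])
S = np.cov(R, rowvar=False)

def mod_cholesky(S):
    k = S.shape[0]
    T, D = np.eye(k), np.zeros((k, k))
    D[0, 0] = S[0, 0]
    for j in range(1, k):
        phi = np.linalg.solve(S[:j, :j], S[:j, j])   # regression coefficients
        T[j, :j] = -phi
        D[j, j] = S[j, j] - S[:j, j] @ phi           # innovation variance
    return T, D

for order in ([0, 1, 2], [2, 1, 0]):
    T, D = mod_cholesky(S[np.ix_(order, order)])
    print("order", order, "-> innovation variances:", np.diag(D).round(3))
```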
  22. By: Kentaro Imajo; Kentaro Minami; Katsuya Ito; Kei Nakagawa
    Abstract: Recent developments in deep learning techniques have motivated intensive research in machine learning-aided stock trading strategies. However, since the financial market has a highly non-stationary nature hindering the application of typical data-hungry machine learning methods, leveraging financial inductive biases is important to ensure better sample efficiency and robustness. In this study, we propose a novel method of constructing a portfolio based on predicting the distribution of a financial quantity called residual factors, which is known to be generally useful for hedging the risk exposure to common market factors. The key technical ingredients are twofold. First, we introduce a computationally efficient extraction method for the residual information, which can be easily combined with various prediction algorithms. Second, we propose a novel neural network architecture that allows us to incorporate widely acknowledged financial inductive biases such as amplitude invariance and time-scale invariance. We demonstrate the efficacy of our method on U.S. and Japanese stock market data. Through ablation experiments, we also verify that each individual technique contributes to improving the performance of trading strategies. We anticipate our techniques may have wide applications in various financial problems.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.07245&r=all
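One simple stand-in for residual factors is whatever remains of returns after stripping out the top principal components, which removes exposure to common market factors. A hedged baseline sketch (the paper's extraction method and neural architecture are not reproduced):
```python
# Residual information via PCA: remove the top principal components
# (proxies for common market factors) from a panel of returns. A generic
# baseline for the idea, not the paper's exact extraction method.
import numpy as np

rng = np.random.default_rng(0)
T, N, k = 500, 30, 3
F = rng.normal(size=(T, k))                   # latent common factors
B = rng.normal(size=(k, N))
R = F @ B + 0.5 * rng.normal(size=(T, N))     # returns = common + idiosyncratic

Rc = R - R.mean(axis=0)
U, s, Vt = np.linalg.svd(Rc, full_matrices=False)
common = U[:, :k] * s[:k] @ Vt[:k]            # top-k component reconstruction
residual = Rc - common                        # "residual factor" input

# Residuals are orthogonal to the extracted market directions by construction:
print("max |factor . residual|:", np.abs(U[:, :k].T @ residual).max())
```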
  23. By: José E. Figueroa-López; Ruoting Gong; Yuchen Han
    Abstract: In this paper we propose a new method for the estimation of a semiparametric tempered stable Lévy model. The estimation procedure iteratively combines an approximate semiparametric method of moments estimator, the Truncated Realized Quadratic Variation (TRQV), and a newly found small-time high-order approximation for the optimal threshold of the TRQV of tempered stable processes. The method is tested via simulations to estimate the volatility and the Blumenthal-Getoor index of the generalized CGMY model, as well as the integrated volatility of a Heston-type model with CGMY jumps. The method outperforms other efficient alternatives proposed in the literature.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.00565&r=all
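The TRQV building block is simple: sum squared increments, discarding those larger than a threshold that shrinks with the sampling interval. A toy sketch on a jump diffusion (threshold constants are ad hoc; the paper derives the optimal threshold for tempered stable models):
```python
# Truncated realized quadratic variation (TRQV): estimate integrated
# volatility by summing squared increments below a vanishing threshold,
# which filters out jumps. Threshold constants here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 10_000
dt = T / n
sigma = 0.3

dW = sigma * np.sqrt(dt) * rng.normal(size=n)   # Brownian increments
jumps = rng.binomial(1, 5 * dt, size=n) * rng.normal(0, 0.5, size=n)  # rare jumps
dX = dW + jumps

eps = 4 * sigma * dt ** 0.49                    # threshold ~ C * dt^w with w < 1/2
trv = np.sum(dX**2 * (np.abs(dX) <= eps))       # jump-robust
rv = np.sum(dX**2)                              # jump-contaminated
print(f"true IV {sigma**2 * T:.4f}  TRQV {trv:.4f}  plain RV {rv:.4f}")
```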
  24. By: Mattia Guerini (GREDEG - Groupe de Recherche en Droit, Economie et Gestion, Université Côte d'Azur, CNRS); Patrick Musso (GREDEG, Université Côte d'Azur, CNRS); Lionel Nesta (OFCE - Observatoire français des conjonctures économiques, Sciences Po)
    Abstract: We develop a new method to estimate the parameters of threshold distributions for market participation based upon an agent-specific attribute and its decision outcome. This method requires few behavioral assumptions, is not data demanding, and can adapt to various parametric distributions. Monte Carlo simulations show that the algorithm successfully recovers three different parametric distributions and is resilient to assumption violations. An application to export decisions by French firms shows that threshold distributions are generally right-skewed. We then reveal the asymmetric effects of past policies over different quantiles of the threshold distributions.
    Keywords: Parametric Distributions of Thresholds, Maximum Likelihood Estimation, Fixed Costs, Export Decision
    Date: 2020–12–04
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-03040260&r=all
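The estimation idea maps to a binary likelihood: a firm participates when its attribute exceeds a latent threshold, so a parametric threshold distribution implies a participation probability that can be maximized directly. A minimal sketch with a log-normal threshold (distribution choice and variable names are illustrative):
```python
# MLE of a threshold distribution for market participation: firm i with
# attribute a_i participates iff a_i >= tau_i, with tau_i ~ LogNormal(mu, s),
# so P(participate | a_i) = Phi((log a_i - mu) / s), a Bernoulli likelihood.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, mu, s = 5000, 1.0, 0.7
a = np.exp(rng.normal(1.2, 1.0, size=n))        # productivity-like attribute
tau = np.exp(rng.normal(mu, s, size=n))         # latent entry thresholds
d = (a >= tau).astype(float)                    # observed participation

def neg_ll(p):
    prob = norm.cdf((np.log(a) - p[0]) / np.exp(p[1]))
    prob = np.clip(prob, 1e-12, 1 - 1e-12)
    return -np.sum(d * np.log(prob) + (1 - d) * np.log(1 - prob))

res = minimize(neg_ll, x0=[0.0, 0.0])
print("mu-hat, s-hat:", res.x[0], np.exp(res.x[1]), "(truth:", mu, s, ")")
```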
  25. By: Hilde Christiane Bjørnland; Roberto Casarin; Marco Lorusso; Francesco Ravazzolo
    Abstract: We analyse fiscal policy responses in oil-rich countries by developing a Bayesian regime-switching panel country analysis. We use parameter restrictions to identify procyclical and countercyclical fiscal policy regimes over the sample in 23 OECD and non-OECD oil-producing countries. We find that fiscal policy switches between pro- and countercyclical regimes multiple times. Furthermore, for all countries, fiscal policy is more volatile in the countercyclical regime than in the procyclical regime. In the procyclical regime, however, fiscal policy is systematically more volatile and excessive in the non-OECD (including OPEC) countries than in the OECD countries. This suggests that OECD countries are able to smooth spending and save more than the non-OECD countries. Our results emphasize that it is both possible and important to separate a procyclical regime from a countercyclical regime when analysing fiscal policy. In doing so, we uncover new facts about fiscal policy in oil-rich countries.
    Keywords: Dynamic Panel Model, Mixed-Frequency, Markov Switching, Bayesian Inference, Fiscal Policy, Resource Rich Countries, Oil Prices
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:bny:wpaper:0094&r=all
  26. By: Marinho Bertanha; Andrew H. McCallum; Nathan Seegert
    Abstract: We study the bunching identification strategy for an elasticity parameter that summarizes agents' response to changes in slope (kink) or intercept (notch) of a schedule of incentives. A notch identifies the elasticity but a kink does not, when the distribution of agents is fully flexible. We propose new non-parametric and semi-parametric identification assumptions on the distribution of agents that are weaker than assumptions currently made in the literature. We revisit the original empirical application of the bunching estimator and find that our weaker identification assumptions result in meaningfully different estimates. We provide the Stata package "bunching" to implement our procedures.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.01170&r=all

This nep-ecm issue is ©2021 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.