nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒11‒26
twenty-one papers chosen by
Sune Karlsson
Örebro universitet

  1. Bootstrap Inference on the Boundary of the Parameter Space with Application to Conditional Volatility Models By Giuseppe Cavaliere; Heino Bohn Nielsen; Rasmus Søndergaard Pedersen; Anders Rahbek
  2. Quasi-Maximum Likelihood and the Kernel Block Bootstrap for Nonlinear Dynamic Models By Paulo M.D.C. Parente; Richard J. Smith
  3. Causal Inference from Strip-Plot Designs: Methodology and Applications in a Potential Outcomes Framework By Rahul Mukherjee
  4. Testing Identifying Assumptions in Fuzzy Regression Discontinuity Designs By Yoichi Arai; Yu-Chin Hsu; Toru Kitagawa; Ismael Mourifie; Yuanyuan Wan
  5. The Grid Bootstrap for Continuous Time Models By Lui, Yiu Lim; Xiao, Weilin; Yu, Jun
  6. Mean Group Estimation in Presence of Weakly Cross-Correlated Estimators By Chudik, Alexander; Pesaran, M. Hashem
  7. The Ensemble Method For Censored Demand Prediction By Evgeniy M. Ozhegov; Daria Teterina
  8. Sharp Bounds and Testability of a Roy Model of STEM Major Choices By Ismael Mourifie; Marc Henry; Romuald Meango
  9. Adaptive estimation using records data under asymmetric loss, with applications By Saibal Chattopadhyay
  10. Forecasting using mixed-frequency VARs with time-varying parameters By Markus Heinrich; Magnus Reif
  11. Beating the Simple Average: Egalitarian LASSO for Combining Economic Forecasts By Francis X. Diebold; Minchul Shin
  12. Sharp bounds on the MTE with sample selection By Possebom, Vitor
  13. Asymptotically unbiased inference for a panel VAR model with p lags By Juan Sebastian Cubillos-Rocha; Luis Fernando Melo-Velandia
  14. Inference under a new exponential-exponential loss capturing specified penalties for over- and under-estimation By Uttam Kumar Sarkar
  15. Inference for the neighborhood inequality index By Francesco Andreoli
  16. Machine Learning Estimation of Heterogeneous Causal Effects: Empirical Monte Carlo Evidence By Michael C. Knaus; Michael Lechner; Anthony Strittmatter
  17. Dealing with corner solutions in multi-crop micro-econometric models: an endogenous regime approach with regime fixed costs By Koutchad, P.; Carpentier, A.; Femenia, F.
  18. Affine Jump-Diffusions: Stochastic Stability and Limit Theorems By Xiaowei Zhang; Peter W. Glynn
  19. An empirical study of the behaviour of the sample kurtosis in samples from symmetric stable distributions By J. Martin van Zyl
  20. Modeling of Economic and Financial Conditions for Nowcasting and Forecasting Recessions: A Unified Approach By Altug, Sumru G.; Cakmakli, Cem; Demircan, Hamza
  21. The Bias of Realized Volatility By Becker, Janis; Leschinski, Christian

  1. By: Giuseppe Cavaliere (Department of Economics, University of Bologna, Italy); Heino Bohn Nielsen (Department of Economics, University of Copenhagen, Denmark); Rasmus Søndergaard Pedersen (Department of Economics, University of Copenhagen, Denmark); Anders Rahbek (Department of Economics, University of Copenhagen, Denmark)
    Abstract: It is a well-established fact that testing a null hypothesis on the boundary of the parameter space, with an unknown number of nuisance parameters at the boundary, is infeasible in practice in the sense that limiting distributions of standard test statistics are non-pivotal. In particular, likelihood ratio statistics have limiting distributions which can be characterized in terms of quadratic forms minimized over cones, where the shape of the cones depends on the unknown location of the (possibly multiple) model parameters not restricted by the null hypothesis. We propose to solve this inference problem by a novel bootstrap, which we show to be valid under general conditions, irrespective of the presence of (unknown) nuisance parameters on the boundary. That is, the new bootstrap replicates the unknown limiting distribution of the likelihood ratio statistic under the null hypothesis and is bounded (in probability) under the alternative. The new bootstrap approach, which is very simple to implement, is based on shrinkage of the parameter estimates used to generate the bootstrap sample toward the boundary of the parameter space at an appropriate rate. As an application of our general theory, we treat the problem of inference in finite-order ARCH models with coefficients subject to inequality constraints. Extensive Monte Carlo simulations illustrate that the proposed bootstrap has attractive finite sample properties both under the null and under the alternative hypothesis. [A stylized code sketch of the shrinkage idea appears after this entry.]
    Keywords: Inference on the boundary, Nuisance parameters on the boundary, ARCH models, Bootstrap
    JEL: C12 C22
    Date: 2018–11–12
    URL: http://d.repec.org/n?u=RePEc:kud:kuiedp:1810&r=ecm
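    Sketch: The boundary-shrinkage idea can be illustrated with a toy ARCH(1) example in Python. This is a loose illustration only, not the paper's algorithm: the bootstrap DGP's ARCH coefficient is set to the boundary value (zero) whenever the estimate falls within a threshold c_T of the boundary, and the rate c_T = T^(-1/4), the QML fit, and all tuning choices are assumptions made for illustration.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_arch1(omega, alpha, T, rng):
    # x_t = sigma_t * z_t with sigma_t^2 = omega + alpha * x_{t-1}^2
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = np.sqrt(omega + alpha * x[t - 1] ** 2) * rng.standard_normal()
    return x

def neg_quasi_loglik(params, x):
    # Gaussian quasi-log-likelihood of an ARCH(1) model (constants dropped)
    omega, alpha = params
    sigma2 = omega + alpha * np.r_[x[0] ** 2, x[:-1] ** 2]
    return 0.5 * np.sum(np.log(sigma2) + x ** 2 / sigma2)

def qmle(x):
    res = minimize(neg_quasi_loglik, x0=[np.var(x), 0.1], args=(x,),
                   bounds=[(1e-6, None), (0.0, None)], method="L-BFGS-B")
    return res.x

T = 500
x = simulate_arch1(omega=1.0, alpha=0.0, T=T, rng=rng)  # true alpha on the boundary
omega_hat, alpha_hat = qmle(x)

# Shrink toward the boundary: if alpha_hat is within c_T of zero, the bootstrap
# DGP uses alpha = 0. The rate c_T = T**(-1/4) is purely illustrative.
c_T = T ** (-0.25)
alpha_dgp = alpha_hat if alpha_hat > c_T else 0.0

# Bootstrap distribution of the ARCH coefficient generated from the shrunken DGP.
boot = [qmle(simulate_arch1(omega_hat, alpha_dgp, T, rng))[1] for _ in range(199)]
print(f"alpha_hat = {alpha_hat:.3f}, bootstrap 95% quantile = {np.quantile(boot, 0.95):.3f}")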
  2. By: Paulo M.D.C. Parente; Richard J. Smith
    Abstract: This paper applies a novel bootstrap method, the kernel block bootstrap (KBB), to quasi-maximum likelihood estimation of dynamic models with stationary strong mixing data. The method first kernel weights the components comprising the quasi-log likelihood function in an appropriate way and then samples the resultant transformed components using the standard "m out of n" bootstrap. We investigate the first order asymptotic properties of the KBB method for quasi-maximum likelihood, demonstrating, in particular, its consistency and the first-order asymptotic validity of the bootstrap approximation to the distribution of the quasi-maximum likelihood estimator. A set of simulation experiments for the mean regression model illustrates the efficacy of the kernel block bootstrap for quasi-maximum likelihood estimation. [A rough code sketch of the two-step procedure appears after this entry.]
    Keywords: Bootstrap; heteroskedastic and autocorrelation consistent inference; quasi-maximum likelihood estimation.
    JEL: C14 C15 C22
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:ise:remwps:wp0592018&r=ecm
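    Sketch: As a rough reading of the two-step procedure described above (and only that; the paper's exact kernel weighting, centring, and scaling constants are omitted), the Python sketch below kernel-smooths the per-observation quasi-log-likelihood contributions of a simple mean model with Bartlett weights and then resamples the transformed contributions "m out of n". The AR(1) data, bandwidth, and m are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Dependent data: an AR(1). Under a Gaussian quasi-likelihood, the QMLE of the
# mean is the sample mean, and its bootstrap analogue below is the mean of the
# resampled kernel-smoothed observations.
n = 400
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
theta_hat = y.mean()

def bartlett_smooth(x, bandwidth):
    # Bartlett-weighted moving average: the kernel transformation of the contributions
    out = np.empty(len(x))
    for t in range(len(x)):
        s = np.arange(max(0, t - bandwidth), min(len(x), t + bandwidth + 1))
        w = 1.0 - np.abs(s - t) / (bandwidth + 1)
        out[t] = np.sum(w * x[s]) / np.sum(w)
    return out

bandwidth, m, B = 8, n // 2, 999
y_smooth = bartlett_smooth(y, bandwidth)      # transformed "contributions"

boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=m)          # standard "m out of n" resampling
    boot[b] = y_smooth[idx].mean()            # bootstrap QMLE in this toy model

# NOTE: a valid KBB variance requires kernel-dependent normalization constants
# and an m/n rescaling, both omitted here; this sketch only shows the mechanics.
print(f"theta_hat = {theta_hat:.3f}, bootstrap mean = {boot.mean():.3f}")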
  3. By: Rahul Mukherjee (Indian Institute of Management Calcutta)
    Abstract: Randomization based causal inference in a potential outcomes framework has been of significant interest in recent years. The principal advantage of such inference is that it does not involve rigid model assumptions. We develop the methodology of causal inference from strip-plot designs, which are very useful when the responses are influenced by treatments having a factorial structure and the factor levels are hard to change, so that they have to be applied to larger clusters of experimental units. Our results have applications in diverse fields such as sociology, agriculture, and urban management, to name only a few. For example, in an agricultural field experiment with two factors, irrigation and harvesting, both requiring larger plots, the experimental units can be laid out in several blocks, each block being a rectangular array of rows and columns. One can then employ a strip-plot design that randomizes the methods of irrigation among the rows and the methods of harvesting among the columns, in each block. Similarly, a strip-plot design is a natural choice in urban traffic management where each block is a rectangular grid of streets, and within any such grid, signaling conditions are randomized among the north-south streets while traffic rules are randomized among the east-west streets. With a strip-plot design under a potential outcomes framework, we propose an unbiased estimator for any treatment contrast and work out an expression for its sampling variance. We next obtain a conservative estimator of the sampling variance. This conservative estimator has a nonnegative bias, and becomes unbiased under a condition that is much milder than the age-old Neymanian strict additivity. A minimaxity property of this variance estimator is also established. Simulation results on the coverage of the resulting confidence intervals lend support to the theoretical considerations.
    Keywords: Between-block additivity; conservative variance estimator; minimaxity; treatment contrast; unbiased estimator.
    JEL: C10 C14
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:8109733&r=ecm
  4. By: Yoichi Arai; Yu-Chin Hsu; Toru Kitagawa; Ismael Mourifie; Yuanyuan Wan
    Abstract: We propose a new specification test for assessing the validity of fuzzy regression discontinuity designs (FRD-validity). We derive a new set of testable implications, characterized by a set of inequality restrictions on the joint distribution of observed outcomes and treatment status at the cut-off. We show that this new characterization exploits all the information in the data useful for detecting violations of FRD-validity. Our approach differs from, and complements, existing approaches that test continuity of the distributions of running variables and baseline covariates at the cut-off, since ours focuses on the distribution of the observed outcome and treatment status. We show that the proposed test has appealing statistical properties. It controls size in large samples, uniformly over a large class of distributions, is consistent against all fixed alternatives, and has non-trivial power against some local alternatives. We apply our test to evaluate the validity of two FRD designs. The test does not reject FRD-validity in the class-size design studied by Angrist and Lavy (1999), and it rejects for some outcome variables in the insurance subsidy design for poor households in Colombia studied by Miller, Pinto, and Vera-Hernández (2013), while existing density tests suggest the opposite in each case.
    Keywords: Fuzzy regression discontinuity design, nonparametric test, inequality restriction, multiplier bootstrap.
    JEL: C12 C15 C21 C24
    Date: 2018–11–12
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-623&r=ecm
  5. By: Lui, Yiu Lim (School of Economics, Singapore Management University); Xiao, Weilin (School of Management, Zhejiang University); Yu, Jun (School of Economics, Singapore Management University)
    Abstract: This paper considers the grid bootstrap for constructing confidence intervals for the persistence parameter in a class of continuous time models driven by a Lévy process. Its asymptotic validity is established by assuming the sampling interval (h) shrinks to zero. Its improvement over the in-fill asymptotic theory is achieved by expanding the coefficient-based statistic around its in-fill asymptotic distribution, which is non-pivotal and depends on the initial condition. Monte Carlo studies show that the grid bootstrap method performs better than the in-fill asymptotic theory and much better than the long-span theory. Empirical applications to U.S. interest rate data highlight differences between the bootstrap confidence intervals and the confidence intervals obtained from the in-fill and long-span asymptotic distributions.
    Keywords: Grid bootstrap; In-fill asymptotics; Continuous time models; Long-span asymptotics.
    JEL: C11 C12
    Date: 2018–11–09
    URL: http://d.repec.org/n?u=RePEc:ris:smuesw:2018_020&r=ecm
  6. By: Chudik, Alexander (Federal Reserve Bank of Dallas); Pesaran, M. Hashem (University of Southern California)
    Abstract: This paper extends the mean group (MG) estimator for random coefficient panel data models by allowing the underlying individual estimators to be weakly cross-correlated. Weak cross-sectional dependence of the individual estimators can arise, for example, in panels with spatially correlated errors. We establish that the MG estimator is asymptotically correctly centered, and its asymptotic covariance matrix can be consistently estimated. The random coefficient specification allows for correct inference even when nothing is known about the weak cross-sectional dependence of the errors. This is in contrast to the well-known homogeneous case, where cross-sectional dependence of errors results in incorrect inference unless the nature of the cross-sectional error dependence is known and can be taken into account. Evidence on the small sample performance of the MG estimators is provided using Monte Carlo experiments with both strictly and weakly exogenous regressors and cross-sectionally correlated innovations. [A minimal code sketch of the MG point and variance estimators appears after this entry.]
    Keywords: Mean Group Estimator; Cross-Sectional Dependence; Spatial Models; Panel Data
    JEL: C12 C13 C23
    Date: 2018–11–14
    URL: http://d.repec.org/n?u=RePEc:fip:feddgw:349&r=ecm
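    Sketch: For concreteness, a minimal Python sketch of the standard mean group point estimate and its usual nonparametric variance estimator. The data generating process below, with a simple spatial moving-average error structure to induce weak cross-sectional dependence, is an assumption made for illustration; the paper's contribution is the asymptotic justification of exactly this kind of inference in that setting.

import numpy as np

rng = np.random.default_rng(2)
N, T = 100, 50
beta_i = 1.0 + 0.3 * rng.standard_normal(N)       # random slopes with mean 1

x = rng.standard_normal((N, T))
u = rng.standard_normal((N, T))
e = u + 0.5 * np.roll(u, 1, axis=0)               # weakly cross-correlated errors (illustrative)
y = beta_i[:, None] * x + e

# Unit-by-unit OLS slopes, then the mean group estimator and its usual
# nonparametric variance estimator: sum_i (b_i - b_MG)^2 / (N (N - 1)).
b_i = np.sum(x * y, axis=1) / np.sum(x * x, axis=1)
b_mg = b_i.mean()
se_mg = np.sqrt(np.sum((b_i - b_mg) ** 2) / (N * (N - 1)))
print(f"MG estimate = {b_mg:.3f} (true mean slope 1.0), s.e. = {se_mg:.3f}")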
  7. By: Evgeniy M. Ozhegov (National Research University Higher School of Economics); Daria Teterina (National Research University Higher School of Economics)
    Abstract: Many economic applications, including optimal pricing and inventory management, require predictions of demand based on sales data and the estimation of the reaction of sales to price changes. There is a wide range of econometric approaches used to correct biases in the estimates of demand parameters on censored sales data. These approaches can also be applied to various classes of machine learning (ML) models to reduce the prediction error of sales volumes. In this study we construct two ensemble models for demand prediction, with and without accounting for demand censorship. Accounting for sales censorship is based on a censored quantile regression in which the model estimation is split into two separate parts: a) a prediction of zero sales by a classification model; and b) a prediction of non-zero sales by a regression model. Both models aggregate the predictions of least squares, Ridge and Lasso regressions and a random forest. Comparing the predictive performance of the two models, we find that the model accounting for the censored nature of demand predicts best. We also show that ML with censorship provides bias-corrected estimates of demand sensitivity to price changes similar to those of econometric models. [A minimal two-part code sketch appears after this entry.]
    Keywords: demand, censorship, machine learning, prediction.
    JEL: D12 C24 C53
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:hig:wpaper:200/ec/2018&r=ecm
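    Sketch: The two-part structure described above (a classifier for zero sales, a regression ensemble for positive sales, predictions combined) can be sketched in Python as follows. The simulated data, feature set, choice of classifier, and simple averaging of the regression models are illustrative assumptions, not the authors' exact specification.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5000
price = rng.uniform(1, 10, n)
promo = rng.integers(0, 2, n)
latent = 20 - 2.0 * price + 3.0 * promo + rng.normal(0, 4, n)
sales = np.maximum(latent, 0)                          # censored demand
X = np.column_stack([price, promo])

X_tr, X_te, y_tr, y_te = train_test_split(X, sales, random_state=0)

# Part (a): probability of a zero-sales observation.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, (y_tr == 0).astype(int))
p_zero = clf.predict_proba(X_te)[:, 1]

# Part (b): regression ensemble fitted on positive sales only.
pos = y_tr > 0
models = [LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=0.1),
          RandomForestRegressor(n_estimators=200, random_state=0)]
preds = []
for m in models:
    m.fit(X_tr[pos], y_tr[pos])
    preds.append(m.predict(X_te))
y_pos = np.mean(preds, axis=0)                         # simple ensemble average

# Combine: expected sales = P(sales > 0) * E[sales | sales > 0].
y_hat = (1 - p_zero) * y_pos
rmse = np.sqrt(np.mean((y_hat - y_te) ** 2))
print(f"Censorship-aware ensemble RMSE: {rmse:.2f}")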
  8. By: Ismael Mourifie; Marc Henry; Romuald Meango
    Abstract: We analyze the empirical content of the Roy model, stripped down to its essential features, namely sector specific unobserved heterogeneity and self-selection on the basis of potential outcomes. We characterize sharp bounds on the joint distribution of potential outcomes and testable implications of the Roy self-selection model under an instrumental constraint on the joint distribution of potential outcomes we call stochastically monotone instrumental variable (SMIV). We show that testing the Roy model selection is equivalent to testing stochastic monotonicity of observed outcomes relative to the instrument. We apply our sharp bounds to the derivation of a measure of departure from Roy self-selection to identify values of observable characteristics that induce the most costly misallocation of talent and sector and are therefore prime targets for intervention. Special emphasis is put on the case of binary outcomes, which has received little attention in the literature to date. For richer sets of outcomes, we emphasize the distinction between point-wise sharp bounds and functional sharp bounds, and its importance, when constructing sharp bounds on functional features, such as inequality measures. We analyze a Roy model of college major choice in Canada and Germany within this framework, and we take a new look at the under-representation of women in STEM.
    Keywords: Roy model, sectorial choice, partial identification, stochastic monotonicity, intersection bounds, functional sharp bounds, inequality, optimal transport, returns to education, college major, gender profiling, STEM, SMIV.
    JEL: C31 C34 C35 I21 J24
    Date: 2018–11–12
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-624&r=ecm
  9. By: Saibal Chattopadhyay (Indian Institute of Management Calcutta)
    Abstract: We consider a scenario where data are accessible in terms of record values, as can happen in a wide range of practical situations. Examples include the hottest day ever, the lowest stock market figure, auction prices of an item in bidding, etc. Such data can be analyzed as record values from a sequence of observations, an upper or lower record value being one that is larger or smaller, respectively, than all previous observations. The literature on the classical theory of records and its several variants is quite rich. A significant literature also exists in reliability theory and associated areas. Not much work has, however, been done so far using records data when over- and under-estimation of the parameter of interest attract unequal penalties, even though there is a compelling need for considering such an asymmetric loss function whenever the consequences of over- and under-estimation are not identical. This can happen in such diverse fields of application as real estate management, accounting, reliability analysis, and so on. From the above perspective, we consider the estimation problem based on records data for the scale parameter of an exponential family of distributions under an asymmetric linear-exponential loss function. With a view to controlling the associated risk, we also aim at ensuring a pre-assigned upper bound on it. In the absence of a known fixed-sample-size solution to this problem, we consider an adaptive sampling methodology, for example a one-at-a-time purely sequential sampling rule. We suggest various estimators of the scale parameter and compare their performances to address admissibility and other related issues. Monte Carlo simulations lend strong support to our theory and methodology.
    Keywords: Bounded risk; exponential family; LINEX loss function; purely sequential
    JEL: C18 C13 C00
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:8109809&r=ecm
  10. By: Markus Heinrich; Magnus Reif
    Abstract: We extend the literature on economic forecasting by constructing a mixed-frequency time-varying parameter vector autoregression with stochastic volatility (MF-TVP-SV-VAR). The latter is able to cope with structural changes and can handle indicators sampled at different frequencies. We conduct a real-time forecast exercise to predict key US macroeconomic variables and compare the predictions of the MF-TVP-SV-VAR with several linear, nonlinear, mixed-frequency, and quarterly-frequency VARs. Our key finding is that the MF-TVP-SV-VAR delivers very accurate forecasts and, on average, outperforms its competitors. In particular, inflation forecasts benefit from this new forecasting approach. Finally, we assess the models’ performance during the Great Recession and find that the combination of stochastic volatility, time-varying parameters, and mixed frequencies generates very precise inflation forecasts.
    Keywords: Time-varying parameters, forecasting, mixed-frequency models, Bayesian methods
    JEL: C11 C53 E32
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:ces:ifowps:_273&r=ecm
  11. By: Francis X. Diebold (Department of Economics, University of Pennsylvania); Minchul Shin (Department of Economics, University of Illinois)
    Abstract: Despite the clear success of forecast combination in many economic environments, several important issues remain incompletely resolved. The issues relate to the selection of the set of forecasts to combine, and whether some form of additional regularization (e.g., shrinkage) is desirable. Against this background, and also considering the frequently found superiority of simple-average combinations, we propose LASSO-based procedures that select and shrink toward equal combining weights. We then provide an empirical assessment of the performance of our "egalitarian LASSO" procedures. The results indicate that simple averages are highly competitive, and that although out-of-sample RMSE improvements on simple averages are possible in principle using our methods, they are hard to achieve in real time, due to the intrinsic difficulty of small-sample real-time cross-validation of the LASSO tuning parameter. We therefore propose alternative direct combination procedures, most notably "best average" combination, motivated by the structure of egalitarian LASSO and the lessons learned, which do not require choice of a tuning parameter yet outperform simple averages. [A minimal code sketch of LASSO shrinkage toward equal weights appears after this entry.]
    Keywords: Forecast combination, forecast surveys, shrinkage, model selection, LASSO, regularization
    JEL: C53
    Date: 2017–08–20
    URL: http://d.repec.org/n?u=RePEc:pen:papers:17-017&r=ecm
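    Sketch: One common way to implement shrinkage toward equal combining weights, sketched below in Python, is to write the weights as w_i = 1/N + delta_i and run a LASSO (without intercept) of the simple-average forecast error on the individual forecasts, penalizing the deviations delta_i; a large penalty then recovers the simple average exactly. This reparameterization is offered as being in the spirit of the egalitarian LASSO, not as the authors' exact procedure, and the simulated data and fixed penalty value are illustrative.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
T, N = 120, 8
target = rng.normal(0, 1, T).cumsum()                 # series being forecast
# N forecasters: the target plus idiosyncratic noise of varying quality
F = target[:, None] + rng.normal(0, 1, (T, N)) * rng.uniform(0.5, 2.0, N)

simple_avg = F.mean(axis=1)
resid = target - simple_avg                            # simple-average forecast error

# LASSO on the deviations delta = w - 1/N; alpha is a fixed illustrative penalty,
# which would in practice be tuned (e.g., by cross-validation).
lasso = Lasso(alpha=0.1, fit_intercept=False)
lasso.fit(F, resid)
weights = 1.0 / N + lasso.coef_

combo = F @ weights
# In-sample comparison only; it says nothing about the real-time, out-of-sample
# difficulties that the abstract emphasizes.
print("weights:", np.round(weights, 3))
print(f"in-sample RMSE, simple average  : {np.sqrt(np.mean(resid ** 2)):.3f}")
print(f"in-sample RMSE, shrunken weights: {np.sqrt(np.mean((target - combo) ** 2)):.3f}")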
  12. By: Possebom, Vitor
    Abstract: I propose a Generalized Roy Model with sample selection that can be used to analyze treatment effects in a variety of empirical problems. First, I decompose, under a monotonicity assumption on the sample selection indicator, the MTR function for the observed outcome when treated as a weighted average of (i) the MTR on the outcome of interest for the always-observed sub-population and (ii) the MTE on the observed outcome for the observed-only-when-treated sub-population, and show that this decomposition can provide point-wise sharp bounds on the MTE of interest. I then show how to point-identify these bounds when the support of the propensity score is continuous. After that, I show how to (partially) identify the MTE of interest when the support of the propensity score is discrete.
    Keywords: Marginal Treatment Effect; Sample Selection; Selection into Treatment; Partial Identification
    JEL: C31 C35 C36
    Date: 2018–10–30
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:89785&r=ecm
  13. By: Juan Sebastian Cubillos-Rocha (Banco de la República de Colombia); Luis Fernando Melo-Velandia (Banco de la República de Colombia)
    Abstract: Panel dynamic estimators with fixed effects are biased due to the incidental parameters problem. In this regard, Hahn and Kuersteiner (2002) proposed an estimator to correct this issue. However, they only consider a panel VAR (PVAR) model with one lag. In this paper we extend this bias correction, together with its asymptotic and small sample properties, to a more general case: a PVAR model with p lags. The simulation results indicate that the bias-corrected estimator outperforms the OLS panel VAR estimator when the sample size in the time dimension is small and when the persistence of the model is low. In these cases, the proposed estimator improves significantly in terms of both bias reduction and mean squared error.
    Keywords: Panel VAR models; bias correction; restricted OLS
    JEL: C33 C51 C13
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:bdr:borrec:1059&r=ecm
  14. By: Uttam Kumar Sarkar (Indian Institute of Management Calcutta)
    Abstract: Asymmetric loss functions have gained enormous importance over the years, with particular relevance to situations where over- and under-estimation of the parameter of interest are considered not of equal consequence. In particular, the linear-exponential (LINEX) loss has been studied and used quite extensively in classical and Bayesian inference. While LINEX loss nicely captures whether over- or under-estimation has a more serious impact, it falls short of incorporating any prior knowledge about the relative penalty for over- vis-à-vis that for under-estimation. Thus, if such prior knowledge is available, as happens in many practical situations, notably in finance, medicine and reliability theory, among others, then there is a pressing need for devising a loss function that accounts for this information and hence is more realistic than the LINEX loss. More specifically, suppose the ground realities in a given situation demand that over-estimation needs to be penalized k times the penalty of under-estimation, where k is known. Clearly, over-estimation gets more penalized than under-estimation if k > 1 and it is the other way round if k < 1.
    Keywords: Estimation, Squared error loss, Asymmetric loss
    JEL: C18 C13 C00
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:8109788&r=ecm
  15. By: Francesco Andreoli
    Abstract: The neighborhood inequality (NI) index measures aspects of spatial inequality in the distribution of incomes within the city. The NI index is defined as a population average of the normalized income gap between each individual's income (observed at a given location in the city) and the incomes of the neighbors living within a certain distance range from that individual. This paper provides minimum bounds for the standard error of the NI index and shows that unbiased estimators can be identified under fairly common hypotheses in spatial statistics. These estimators are shown to depend exclusively on the variogram, a measure of spatial dependence in the data. Rich income data are then used to draw inferences about trends in neighborhood inequality in Chicago, IL over the last 35 years. Results from a Monte Carlo study support the relevance of the standard error approximations. [A minimal code sketch of an NI-style index appears after this entry.]
    Keywords: income inequality; individual neighborhood; geostatistics; variogram; census; ACS; ratio measures; variance approximation; Chicago; Monte Carlo
    JEL: C12 C46 D63 R23
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:irs:cepswp:2018-19&r=ecm
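    Sketch: A minimal Python sketch of an NI-style index: for each individual, take the gap between own income and the mean income of neighbors living within distance d, normalize, and average over individuals. Normalizing the absolute gap by the citywide mean income is an assumption made here for illustration; the paper's exact normalization may differ, and the simulated city is made up.

import numpy as np

rng = np.random.default_rng(5)
n = 2000
coords = rng.uniform(0, 10, size=(n, 2))                  # locations in a 10x10 city
income = rng.lognormal(mean=10, sigma=0.6, size=n)

def ni_index(coords, income, d):
    # Average normalized gap between own income and neighbors' mean income within distance d
    gaps = []
    for i in range(len(income)):
        dist = np.linalg.norm(coords - coords[i], axis=1)
        neigh = (dist <= d) & (dist > 0)
        if neigh.any():
            gaps.append(abs(income[i] - income[neigh].mean()))
    return np.mean(gaps) / income.mean()

for d in (0.5, 1.0, 2.0):
    print(f"d = {d}: NI-style index ≈ {ni_index(coords, income, d):.3f}")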
  16. By: Michael C. Knaus; Michael Lechner; Anthony Strittmatter
    Abstract: We investigate the finite sample performance of causal machine learning estimators for heterogeneous causal effects at different aggregation levels. We employ an Empirical Monte Carlo Study that relies on arguably realistic data generation processes (DGPs) based on actual data. We consider 24 different DGPs, eleven different causal machine learning estimators, and three aggregation levels of the estimated effects. In the main DGPs, we allow for selection into treatment based on a rich set of observable covariates. We provide evidence that the estimators can be categorized into three groups. The first group performs consistently well across all DGPs and aggregation levels. These estimators have multiple steps to account for the selection into the treatment and the outcome process. The second group shows competitive performance only for particular DGPs. The third group is clearly outperformed by the other estimators.
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.13237&r=ecm
  17. By: Koutchad, P.; Carpentier, A.; Femenia, F.
    Abstract: Corner solution problems are pervasive in micro-econometric acreage choice models because the farmers in a given sample rarely all produce the same set of crops. These crop choice problems raise significant modelling issues. The main aim of this paper is to propose an endogenous regime switching model specifically designed for empirically modelling acreage choices with corner solutions. Contrary to models based on censored regression systems that have been used until now, this model is fully coherent from a micro-economic point of view. It also includes regime fixed costs accounting for unobserved costs that depend only on the set of crops grown simultaneously. We illustrate the empirical tractability of this model by estimating an endogenous regime switching multi-crop model for a panel dataset of French farmers. The estimated model comprises yield supply, input demand and acreage share equations for 7 crops and distinguishes 8 production regimes. It also accounts for unobserved heterogeneity in farmers' behaviour through the specification of random parameters. We assume that most model parameters are farmer-specific and estimate their distribution across the population of farmers represented by our sample. Our results show that farmers' production and acreage choice mechanisms strongly depend on the production regime in which these choices take place.
    Keywords: Crop Production/Industries
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:ags:iaae18:277530&r=ecm
  18. By: Xiaowei Zhang; Peter W. Glynn
    Abstract: Affine jump-diffusions constitute a large class of continuous-time stochastic models that are particularly popular in finance and economics due to their analytical tractability. Methods for parameter estimation for such processes require ergodicity in order to establish consistency and asymptotic normality of the associated estimators. In this paper, we develop stochastic stability conditions for affine jump-diffusions, thereby providing the needed large-sample theoretical support for estimating such processes. We establish ergodicity for such models by imposing a "strong mean reversion" condition and a mild condition on the distribution of the jumps, i.e., the finiteness of a logarithmic moment. Exponential ergodicity holds if the jumps have a finite moment of a positive order. In addition, we prove strong laws of large numbers and functional central limit theorems for additive functionals of this class of models.
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1811.00122&r=ecm
  19. By: J. Martin van Zyl
    Abstract: Kurtosis is seen as a measure of the discrepancy between the observed data and a Gaussian distribution. In this work an empirical study is conducted to investigate the behaviour of the sample estimate of kurtosis with respect to sample size and the tail index. The study focuses on samples from symmetric stable distributions. It was found that the expected value of excess kurtosis divided by the sample size is finite for any value of the tail index, and that the sample estimate of kurtosis increases as a linear function of sample size and tail index. [A small simulation sketch appears after this entry.]
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1811.00476&r=ecm
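    Sketch: A small Python simulation in the spirit of the study: the sample excess kurtosis of symmetric stable samples is computed across sample sizes and tail indices. The grids and the number of replications are illustrative choices only, not the paper's design.

import numpy as np
from scipy.stats import levy_stable, kurtosis

rng = np.random.default_rng(6)
for alpha in (1.2, 1.5, 1.8):          # tail index of the symmetric stable law
    for n in (200, 400, 800):          # sample size
        reps = [kurtosis(levy_stable.rvs(alpha, 0.0, size=n, random_state=rng))
                for _ in range(200)]
        print(f"alpha = {alpha}, n = {n:4d}: mean sample excess kurtosis = {np.mean(reps):10.1f}")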
  20. By: Altug, Sumru G.; Cakmakli, Cem; Demircan, Hamza
    Abstract: This paper puts forward a unified framework for the joint estimation of indexes that can broadly capture economic and financial conditions together with their cyclical regimes of recession and expansion. We do this by utilizing a dynamic factor model together with Markov regime switching dynamics of the model parameters that specifically exploit the temporal link between the cyclical behavior of economic and financial factors. This is achieved by constructing the cycle in the financial factor from the cycle in the economic factor together with phase shifts. The resulting framework allows the financial cycle to potentially lead or lag the business cycle in a systematic manner and exploits the information in economic and financial variables for efficient estimation of both economic and financial conditions as well as their cyclical behavior. We examine the potential of the model using a mixed-frequency, mixed-time-span ragged-edge dataset for Turkey. Comparison of our framework with more conventional polar cases, which impose either a single common cyclical dynamic or independent cyclical dynamics for economic and financial conditions, reveals that the proposed specification provides precise estimates of economic and financial conditions and delivers quite accurate recession probabilities that match stylized facts. We further conduct a recursive real-time exercise of nowcasting and forecasting business cycle turning points. The results show convincing evidence of the superior predictive power of our specification, which signals oncoming recessions (expansions) as early as 3.5 (3.4) months ahead of the actual realization.
    Keywords: Bayesian inference; Business cycle; Coincident economic index; Dynamic factor model; Financial conditions index; Markov switching
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:13171&r=ecm
  21. By: Becker, Janis; Leschinski, Christian
    Abstract: Realized volatility underestimates the variance of daily stock index returns by an average of 14 percent. This is documented for a wide range of international stock indices, using the fact that the average of realized volatility and that of squared returns should be the same over longer time horizons. It is shown that the magnitude of this bias cannot be explained by market microstructure noise. Instead, it can be attributed to correlation between the continuous components of intraday returns and correlation between jumps and previous/subsequent continuous price movements. [A minimal code sketch of this comparison appears after this entry.]
    Keywords: Return Volatility; Realized Volatility; Squared Returns
    JEL: G11 G12 G17
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-642&r=ecm
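    Sketch: The mechanics of the comparison can be illustrated with simulated intraday returns in Python: the average realized variance (sum of squared intraday returns) is compared with the average squared daily return over the same days. The price process below, with mildly autocorrelated intraday returns, is a made-up example showing only the direction of the effect, not the paper's empirical estimate.

import numpy as np

rng = np.random.default_rng(7)
days, m = 2000, 78                        # 78 five-minute returns per trading day
phi = 0.1                                 # mild intraday autocorrelation (assumed)

rv, sq = np.empty(days), np.empty(days)
for d in range(days):
    eps = rng.normal(0, 0.001, m)
    r = np.empty(m)
    r[0] = eps[0]
    for t in range(1, m):                 # AR(1) intraday returns
        r[t] = phi * r[t - 1] + eps[t]
    rv[d] = np.sum(r ** 2)                # realized variance for day d
    sq[d] = np.sum(r) ** 2                # squared daily (open-to-close) return

print(f"mean realized variance : {rv.mean():.3e}")
print(f"mean squared daily ret.: {sq.mean():.3e}")
print(f"RV understates the daily return variance by {100 * (1 - rv.mean() / sq.mean()):.1f}%")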

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.