nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒10‒04
25 papers chosen by
Sune Karlsson
Örebro universitet

  1. A Fixed-bandwidth View of the Pre-asymptotic Inference for Kernel Smoothing with Time Series Data By Min Seong Kim; Yixiao Sun; Jingjing Yang
  2. Cross-sectional Dependence in Idiosyncratic Volatility By Ilze KALNINA; Kokouvi TEWOU
  3. Efficient Computation of the Quasi Likelihood function for Discretely Observed Diffusion Processes By Lars Josef Höök; Erik Lindström
  4. Inference from high-frequency data: A subsampling approach By Kim Christensen; Mark Podolskij; Nopporn Thamrongrat; Bezirgen Veliyev
  5. Data-Dependent Methods for the Lag Length Selection in Unit Root Tests with Structural Change By Ricardo Quineche; Gabriel Rodríguez
  6. Inference and testing on the boundary in extended constant conditional correlation GARCH models By Rasmus Søndergaard Pedersen
  7. Point and Density Forecasts Using an Unrestricted Mixed-Frequency VAR Model By Fady Barsoum
  8. Hausman tests for the error distribution in conditionally heteroskedastic models By Zhu, Ke
  9. Discovering common trends in a large set of disaggregates: statistical procedures and their properties By Guillermo Carlomagno; Antoni Espasa
  10. On the Identification of Multivariate Correlated Unobserved Components Models By Trenkler, Carsten; Weber, Enzo
  11. Causal Influence for Ex-post Evaluation of Transport Interventions By Daniel J. Graham
  12. Optimal trading strategies - a time series approach By Peter A. Bebbington; Reimer Kuehn
  13. Changes in the Factor Structure of the U.S. Economy: Permanent Breaks or Business Cycle Regimes? By Luke Hartigan
  14. Estimation of Multivariate Probit Models via Bivariate Probit By John Mullahy
  15. Maximum likelihood estimators for a jump-type Heston model By Matyas Barczy; Mohamed Ben Alaya; Ahmed Kebaier; Gyula Pap
  16. Principal Component Analysis of High Frequency Data By Yacine Aït-Sahalia; Dacheng Xiu
  17. Dynamic models for monetary transmission By Paolo Giudici; Laura Parisi
  18. Forecasting a large set of disaggregates with common trends and outliers By Guillermo Carlomagno; Antoni Espasa
  19. Grid and shake - Spatial aggregation and robustness of regionally estimated elasticities By Gabor Bekes; Peter Harasztosi
  20. Clinical trial design enabling epsilon-optimal treatment rules By Charles F. Manski; Aleksey Tetenov
  21. Rough electricity: a new fractal multi-factor model of electricity spot prices By Mikkel Bennedsen
  22. Multivariate dynamic intensity peaks-over-threshold models By Hautsch, Nikolaus; Herrera, Rodrigo
  23. Semiparametric Instrumental Variable Estimation in an Endogenous Treatment Model By Roger Klein; Chan Shen
  24. Disentangling irregular cycles in economic time series By Schober, Dominik; Woll, Oliver
  25. Structural Changes in Inflation Dynamics: A Bayesian Analysis Allowing for Multiple Breaks at Different Dates for Different Parameters By Eo, Yunjong

  1. By: Min Seong Kim (Department of Economics, Ryerson University, Toronto, Canada); Yixiao Sun (Department of Economics, UC San Diego); Jingjing Yang (University of Nevada, Reno, NV)
    Abstract: This paper develops robust testing procedures for nonparametric kernel methods in the presence of temporal dependence of unknown form. Based on the fixed-bandwidth asymptotic variance and the pre-asymptotic variance, we propose a heteroskedasticity and autocorrelation robust (HAR) variance estimator that achieves double robustness: it is asymptotically valid regardless of whether the temporal dependence is present or not, and regardless of whether the kernel smoothing bandwidth is held constant or allowed to decay with the sample size. Using the HAR variance estimator, we construct the studentized test statistic and examine its asymptotic properties under both the fixed-smoothing and increasing-smoothing asymptotics. The fixed-smoothing approximation and the associated convenient t-approximation achieve extra robustness: they are asymptotically valid regardless of whether the truncation lag parameter governing the covariance weighting grows at the same rate as, or more slowly than, the sample size. Finally, we suggest a simulation-based calibration approach to choose smoothing parameters that optimize testing-oriented criteria. Simulations show that the proposed procedures work very well in finite samples.
    Keywords: heteroskedasticity and autocorrelation robust variance, calibration, fixed-smoothing asymptotics, fixed-bandwidth asymptotics, kernel density estimator, local polynomial estimator, t-approximation, testing-optimal smoothing-parameter choice, temporal dependence
    JEL: C12 C14 C22
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:rye:wpaper:wp049&r=all
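
At the core of such HAR inference is a kernel-weighted long-run variance estimate. As a point of reference only, here is a minimal sketch of the classic Bartlett-kernel (Newey-West) long-run variance estimator; the paper's fixed-bandwidth, kernel-smoothing version is more elaborate, and the function name and bandwidth choice below are illustrative assumptions.

```python
import numpy as np

def lrv_bartlett(u, bandwidth):
    """Newey-West long-run variance of a scalar series with Bartlett weights."""
    n = len(u)
    u = u - u.mean()
    def gamma(j):                                   # j-th sample autocovariance
        return np.dot(u[j:], u[:n - j]) / n
    lrv = gamma(0)
    for j in range(1, bandwidth + 1):
        lrv += 2.0 * (1.0 - j / (bandwidth + 1)) * gamma(j)
    return lrv

# Usage: an MA(1) with theta = 0.5 has true long-run variance (1 + 0.5)**2 = 2.25.
rng = np.random.default_rng(0)
e = rng.standard_normal(100000)
u = np.convolve(e, [1.0, 0.5], mode="valid")
print(lrv_bartlett(u, bandwidth=20))
```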
  2. By: Ilze KALNINA; Kokouvi TEWOU
    Abstract: This paper introduces a framework for analysis of cross-sectional dependence in the idiosyncratic volatilities of assets using high frequency data. We first consider the estimation of standard measures of dependence in the idiosyncratic volatilities such as covariances and correlations. Next, we study an idiosyncratic volatility factor model, in which we decompose the co-movements in idiosyncratic volatilities into two parts: those related to factors such as the market volatility, and the residual co-movements. When using high frequency data, naive estimators of all of the above measures are biased due to the estimation errors in idiosyncratic volatility. We provide bias-corrected estimators and establish their asymptotic properties. We apply our estimators to high-frequency data on 27 individual stocks from nine different sectors, and document strong cross-sectional dependence in their idiosyncratic volatilities. We also find that on average 74% of this dependence can be explained by the market volatility.
    Keywords: high frequency data, idiosyncratic volatility, factor structure, cross-sectional returns
    JEL: C22 C14
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:mtl:montec:08-2015&r=all
  3. By: Lars Josef Höök; Erik Lindström
    Abstract: We introduce a simple method for nearly simultaneous computation of all moments needed for quasi maximum likelihood estimation of parameters in discretely observed stochastic differential equations commonly seen in finance. The method proposed in this paper is not restricted to any particular dynamics of the differential equation and is virtually insensitive to the sampling interval. The key contribution of the paper is that the computational complexity is sublinear in the number of observations, as we compute all moments through a single operation; furthermore, that operation can be done offline. Simulations show that the method is unbiased for all practical purposes for any sampling design, including random sampling, and that its computational cost is comparable to (and actually lower for moderate and large data sets than) that of the simple, often severely biased, Euler-Maruyama approximation.
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1509.07751&r=all
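
For contrast, a minimal sketch of the Euler-Maruyama quasi-likelihood that the abstract uses as its (often biased) benchmark, written here for a CIR-type diffusion dX_t = kappa(theta - X_t)dt + sigma*sqrt(X_t)dW_t; the model choice, parameter names, and optimizer are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy.optimize import minimize

def euler_quasi_loglik(params, x, dt):
    """Gaussian quasi-log-likelihood built from one-step Euler moments."""
    kappa, theta, sigma = params
    if kappa <= 0 or theta <= 0 or sigma <= 0:
        return -np.inf
    mean = x[:-1] + kappa * (theta - x[:-1]) * dt   # Euler conditional mean
    var = sigma ** 2 * x[:-1] * dt                  # Euler conditional variance
    resid = x[1:] - mean
    return -0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

def fit(x, dt, start=(1.0, 1.0, 0.5)):
    res = minimize(lambda p: -euler_quasi_loglik(p, x, dt), start,
                   method="Nelder-Mead")
    return res.x

# Usage: simulate a CIR path (kappa=2, theta=1, sigma=0.3) and re-estimate.
rng = np.random.default_rng(0)
dt, n = 1 / 252, 5000
x = np.empty(n); x[0] = 1.0
for t in range(n - 1):
    x[t + 1] = abs(x[t] + 2.0 * (1.0 - x[t]) * dt
                   + 0.3 * np.sqrt(x[t] * dt) * rng.standard_normal())
print(fit(x, dt))
```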
  4. By: Kim Christensen (Aarhus University and CREATES); Mark Podolskij (Aarhus University and CREATES); Nopporn Thamrongrat (Heidelberg University); Bezirgen Veliyev (Aarhus University and CREATES)
    Abstract: In this paper, we show how to estimate the asymptotic (conditional) covariance matrix that appears in many central limit theorems in high-frequency estimation of asset return volatility. We provide a recipe for the estimation of this matrix by subsampling, an approach that computes rescaled copies of the original statistic based on local stretches of high-frequency data and then studies the sampling variation of these copies. We show that our estimator is consistent both in frictionless markets and in models with additive microstructure noise. We derive a rate of convergence for it and are also able to determine an optimal rate for its tuning parameters (e.g., the number of subsamples). Subsampling does not require an extra set of estimators to do inference, which renders it trivial to implement. As a variance-covariance matrix estimator, it has the attractive feature that it is positive semi-definite by construction. Moreover, the subsampler is to some extent automatic, as it does not exploit explicit knowledge about the structure of the asymptotic covariance. It therefore tends to adapt to the problem at hand and to be robust against misspecification of the noise process. As such, this paper facilitates assessment of the sampling errors inherent in high-frequency estimation of volatility. We highlight the finite-sample properties of the subsampler in a Monte Carlo study, while some initial empirical work demonstrates its use for drawing feasible inference about volatility in financial markets.
    Keywords: bipower variation, high-frequency data, microstructure noise, positive semi-definite estimation, pre-averaging, stochastic volatility, subsampling.
    JEL: C10 C80
    Date: 2015–08–30
    URL: http://d.repec.org/n?u=RePEc:aah:create:2015-45&r=all
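
A toy version of the subsampling recipe described above: compute the statistic (here, plain realized variance in a noise-free setting) on local blocks, rescale each block copy to full-sample units, and read its sampling variation off the copies. The block length and rescaling below follow the generic logic only; the paper derives consistency, rates, and the optimal tuning.

```python
import numpy as np

def realized_variance(r):
    """Sum of squared intraday returns."""
    return np.sum(r ** 2)

def subsample_variance(r, block):
    """Estimate Var(RV) from rescaled block copies of the statistic."""
    n = len(r)
    k = n // block
    rv_blocks = np.array([realized_variance(r[i * block:(i + 1) * block])
                          for i in range(k)])
    scaled = rv_blocks * (n / block)      # rescale each block RV to daily units
    # The block statistic uses 'block' observations and the full statistic n,
    # so shrink the dispersion of the copies by block/n.
    return np.var(scaled, ddof=1) * block / n

rng = np.random.default_rng(1)
r = 0.01 * rng.standard_normal(23400) / np.sqrt(23400)   # noise-free returns
rv = realized_variance(r)
se = np.sqrt(subsample_variance(r, block=390))
print(f"RV = {rv:.3e}, subsampled s.e. = {se:.3e}")
```

For iid-style returns the implied standard error can be checked against the known asymptotic variance of realized variance, which makes this a convenient sanity test of the rescaling.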
  5. By: Ricardo Quineche (Banco Central de la Reserva del Perú); Gabriel Rodríguez (Departamento de Economía de la PUC del Perú)
    Abstract: We analyze the choice of the truncation lag for unit root tests such as the ADF(GLS) and M(GLS) tests proposed by Elliott et al. (1996) and Ng and Perron (2001) and extended to the context of structural change by Perron and Rodríguez (2003). We consider models that allow for a change in slope, and for a change in both intercept and slope, at an unknown break date. Using Monte Carlo experiments, we analyze the truncation lag selected according to several methods such as the AIC, BIC, M(AIC) and M(BIC). We also include and analyze the performance of the hybrid version suggested by Perron and Qu (2007), which uses OLS- instead of GLS-detrended data when constructing the information criteria. All these methods are compared to the sequential t-sig method based on testing for the significance of coefficients on additional lags in the ADF autoregression. Results show that the M(GLS) tests present explosive values associated with large selected lags, which happens more often when AIC, AIC(OLS) and t-sig are used to select the lag length. The values are so negative that they imply over-rejection of the null hypothesis of a unit root. On the opposite side, lag lengths selected using the M(AIC), M(AICOLS), M(BIC) and M(BICOLS) methods lead to very small values of the M-tests, implying very conservative results, that is, no rejection of the null hypothesis. These opposite power problems are not observed for the ADF(GLS) test, which is therefore highly recommended.
    Keywords: Unit Root Tests, Structural Change, Truncation Lag, GLS Detrending, Information Criteria, Sequential General-to-Specific t-sig Method.
    JEL: C22 C52
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00404&r=all
  6. By: Rasmus Søndergaard Pedersen (Department of Economics, University of Copenhagen)
    Abstract: We consider inference and testing in extended constant conditional correlation GARCH models in the case where the true parameter vector is a boundary point of the parameter space. This is of particular importance when testing for volatility spillovers in the model. The large-sample properties of the QMLE are derived together with the limiting distributions of the related LR, Wald, and LM statistics. Due to the boundary problem, these large-sample properties become nonstandard. The size and power properties of the tests are investigated in a simulation study. As an empirical illustration we test for (no) volatility spillovers between foreign exchange rates.
    Keywords: ECCC-GARCH, QML, boundary, spillovers
    JEL: C32 C51 C58
    Date: 2015–09–04
    URL: http://d.repec.org/n?u=RePEc:kud:kuiedp:1510&r=all
  7. By: Fady Barsoum (Department of Economics, University of Konstanz, Germany)
    Abstract: This paper compares the forecasting performance of the unrestricted mixed-frequency VAR (MF-VAR) model to the more commonly used VAR (LF-VAR) model sampled at a common low frequency. The literature so far has successfully documented the forecast gains that can be obtained from using high-frequency variables to forecast a lower-frequency variable in a univariate mixed-frequency setting. These forecast gains are usually attributed to the ability of the mixed-frequency models to nowcast. More recently, Ghysels (2014) provides an approach that allows the use of mixed-frequency variables in a VAR framework. In this paper we assess the forecasting and nowcasting performance of the MF-VAR of Ghysels (2014); however, we do not impose any restrictions on the parameters of the models. Although the unrestricted version is more flexible, it suffers from parameter proliferation and is therefore only suitable when the difference between the low- and high-frequency variables is small (e.g., quarterly and monthly frequencies). Unlike previous work, our interest is not limited to evaluating the out-of-sample performance in terms of point forecasts but extends to density forecasts. Thus, we suggest a parametric bootstrap approach as well as a Bayesian approach to compute density forecasts. Moreover, we show how the nowcasts can be obtained using both direct and iterative forecasting methods. We use both Monte Carlo simulation experiments and an empirical study for the US to compare the forecasting performance of the MF-VAR and LF-VAR models. The results highlight the point and density forecast gains that can be achieved by the MF-VAR model.
    Keywords: Mixed-frequency, Bayesian estimation, Bootstrapping, Density forecasts, Nowcasting
    JEL: C32 C53 E37
    Date: 2015–09–25
    URL: http://d.repec.org/n?u=RePEc:knz:dpteco:1519&r=all
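
The data arrangement behind such a mixed-frequency VAR can be illustrated in a few lines: in the stacked approach of Ghysels (2014), each quarter contributes the three monthly observations of the high-frequency series plus the quarterly observation to one low-frequency vector. The sketch below assumes a clean 3:1 frequency ratio and aligned samples, and only shows the stacking; estimation on the stacked matrix proceeds as for any VAR.

```python
import numpy as np

def stack_mixed_frequency(monthly, quarterly):
    """monthly: length-3T array; quarterly: length-T array -> (T, 4) matrix."""
    T = len(quarterly)
    m = monthly[: 3 * T].reshape(T, 3)    # columns: months 1, 2, 3 of the quarter
    return np.column_stack([m, quarterly])

# Usage: a stacked matrix ready for a standard (here unrestricted) VAR estimator.
monthly = np.arange(12, dtype=float)      # 4 quarters of a monthly series
quarterly = np.array([10.0, 20.0, 30.0, 40.0])
print(stack_mixed_frequency(monthly, quarterly))
```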
  8. By: Zhu, Ke
    Abstract: This paper proposes some novel Hausman tests to examine the error distribution in conditionally heteroskedastic models. Unlike the existing tests, all the Hausman tests are easy to implement with a limiting null distribution of $\chi^{2}$; moreover, they are consistent and able to detect local alternatives of order $n^{-1/2}$. The scope of the Hausman tests covers all generalized error distributions and Student's t distributions. The performance of each Hausman test is assessed on simulated and real data sets.
    Keywords: Conditionally heteroskedastic model; Consistent test; GARCH model; Goodness-of-fit test; Hausman test; Nonlinear time series.
    JEL: C1 C12
    Date: 2015–09–30
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:66991&r=all
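
The generic contrast behind any Hausman test: compare an estimator that is efficient under the null with one that is merely consistent, and refer the quadratic form in their difference to a chi-squared limit. The sketch below is the textbook statistic with made-up inputs; the paper's specific estimator pairs for conditionally heteroskedastic models are not reproduced.

```python
import numpy as np
from scipy.stats import chi2

def hausman(b_robust, v_robust, b_efficient, v_efficient):
    """Hausman statistic and its asymptotic chi-squared p-value."""
    diff = b_robust - b_efficient
    dv = v_robust - v_efficient          # variance of the contrast under H0
    stat = float(diff @ np.linalg.solve(dv, diff))
    return stat, chi2.sf(stat, df=len(diff))

# Toy usage with made-up estimates and covariance matrices.
b_r, b_e = np.array([1.02, -0.48]), np.array([1.00, -0.50])
v_r = np.array([[0.020, 0.001], [0.001, 0.015]])
v_e = np.array([[0.010, 0.000], [0.000, 0.008]])
print(hausman(b_r, v_r, b_e, v_e))
```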
  9. By: Guillermo Carlomagno; Antoni Espasa
    Abstract: The objective of this paper is to model all the N components of a macro or business variable. Our contribution concerns cases with a large number (hundreds) of components, for which multivariate approaches are not feasible. We extend in several directions the pairwise approach originally proposed by Espasa and Mayo-Burgos (2013) and study its statistical properties. The pairwise approach consists of performing common-features tests on the N(N-1)/2 pairs of series in the aggregate. Once this is done, groups of series that share common features can be formed. Next, all the components are forecast using single-equation models that include the restrictions derived from the common features. In this paper we focus on discovering groups of components that share single common trends. We study analytically the asymptotic properties of the procedure. We also carry out a comparison with a dynamic factor model (DFM) alternative; results indicate that the pairwise approach dominates in many empirically relevant situations. A clear advantage of the pairwise approach is that it does not need common features to be pervasive.
    Keywords: Cointegration, Factor Models, Disaggregation, Pairwise tests
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws1519&r=all
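
A minimal sketch of the pairwise step described above: run a cointegration test on every one of the N(N-1)/2 pairs and group series that appear to share a common trend. The grouping rule used here (transitive closure of rejections at a fixed 5% level, via an Engle-Granger test) is an illustrative simplification, not the authors' exact procedure.

```python
import numpy as np
from itertools import combinations
from statsmodels.tsa.stattools import coint

def pairwise_groups(series, alpha=0.05):
    """series: dict name -> 1-D array. Returns the grouped common-trend sets."""
    names = list(series)
    linked = {n: {n} for n in names}
    for a, b in combinations(names, 2):
        _, pval, _ = coint(series[a], series[b])   # Engle-Granger test
        if pval < alpha:                           # reject 'no cointegration'
            merged = linked[a] | linked[b]
            for n in merged:
                linked[n] = merged
    return [set(g) for g in {frozenset(g) for g in linked.values()}]

# Usage: three series share one stochastic trend; a fourth has its own.
rng = np.random.default_rng(2)
trend = np.cumsum(rng.standard_normal(500))
data = {f"y{i}": trend + rng.standard_normal(500) for i in range(3)}
data["z"] = np.cumsum(rng.standard_normal(500))
print(pairwise_groups(data))
```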
  10. By: Trenkler, Carsten; Weber, Enzo
    Abstract: This paper analyses identification for multivariate unobserved components models in which the innovations to trend and cycle are correlated. We address order and rank criteria as well as potential non-uniqueness of the reduced-form VARMA model. Identification is shown for lag lengths larger than one in the case of a diagonal vector autoregressive cycle. We also discuss UC models with common features and with cycles that allow for dynamic spillovers.
    Keywords: Unobserved components models, Identification, VARMA
    JEL: C32 E32
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:mnh:wpaper:39656&r=all
  11. By: Daniel J. Graham
    Abstract: This paper reviews methods that seek to draw causal inference from non-experimental data and shows how they can be applied to undertake ex-post evaluation of transport interventions. In particular, the paper discusses the underlying principles of techniques for treatment effect estimation with non-randomly assigned treatments. The aim of these techniques is to quantify changes that have occurred due to explicit intervention (or ‘treatment’). The paper argues that transport interventions are typically characterized by non-random assignment and that the key issues for successful ex-post evaluation involve identifying and adjusting for confounding factors. In contrast to conventional approaches for ex-ante appraisal, a major advantage of the statistical causal methods is that they can be applied without making strong a priori theoretical assumptions. The paper provides empirical examples of the use of causal techniques to evaluate road network capacity expansions in US cities and High Speed Rail investments in Spain.
    Date: 2014–10
    URL: http://d.repec.org/n?u=RePEc:oec:itfaab:2014/13-en&r=all
  12. By: Peter A. Bebbington; Reimer Kuehn
    Abstract: Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz's mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows one to find an optimal trading strategy which, for a given return, is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second-order stationary, or to exhibit second-order stationary increments. Attention is paid to the consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate these effects are investigated. Finally we apply our framework to real-world data.
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1509.07953&r=all
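
A stripped-down version of the time-domain mean-variance idea: treat the positions held at each of T time steps as the portfolio weights, use the auto-covariance matrix of price increments as the risk matrix, and solve the usual Markowitz program. All inputs below are synthetic, and the closed form ignores the small-sample estimation and cleaning issues that are the paper's focus.

```python
import numpy as np

def optimal_positions(autocov, drift, target_return):
    """Minimize w' C w subject to w' drift = target_return (no other constraints)."""
    inv_c_mu = np.linalg.solve(autocov, drift)
    scale = target_return / (drift @ inv_c_mu)
    return scale * inv_c_mu

# Usage: AR(1)-style auto-covariance of increments and a constant drift.
T = 50
lags = 0.9 ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
autocov = 0.04 * lags
drift = np.full(T, 0.001)
w = optimal_positions(autocov, drift, target_return=0.01)
print("strategy variance:", w @ autocov @ w)
```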
  13. By: Luke Hartigan (School of Economics, UNSW Business School, UNSW)
    Abstract: The factor structure of the U.S. economy appears to change over time. Unlike previous studies which suggest this is due to permanent structural breaks in factor loadings, I argue instead that the volatility and persistence of factor processes undergo recurring changes related to the business cycle. To capture this, I develop a two-step Markov-switching static factor estimation procedure and apply it to a well-studied U.S. macroeconomic data set. I find strong support for Markov-switching in the factor processes, with switching variances being most dominant. Conditional on Markov-switching factor processes, tests for regime-dependent factor loadings show only moderate evidence of change. Overall, the results support regime-dependent factor processes as the main explanation for the diverging number of estimated factors in empirical applications and challenge the global linearity assumption implicit in large dimensional factor models of the U.S. economy.
    Keywords: Approximate Factor Model, Large Data Sets, Markov Switching Model, Structural Breaks
    JEL: C32 C38 C51 E32
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:swe:wpaper:2015-17&r=all
  14. By: John Mullahy
    Abstract: Models having multivariate probit and related structures arise often in applied health economics. When the outcome dimensions of such models are large, however, estimation can be challenging owing to numerical computation constraints and/or speed. This paper suggests the utility of estimating multivariate probit (MVP) models using a chain of bivariate probit estimators. The proposed approach offers two potential advantages over standard multivariate probit estimation procedures: significant reductions in computation time; and essentially unlimited dimensionality of the outcome set. The time savings arise because the proposed approach does not rely on simulation methods; the dimension advantage arises because only pairs of outcomes are considered at each estimation stage. Importantly, the proposed approach provides a consistent estimator of all the MVP model's parameters under the same assumptions required for consistent estimation based on standard methods, and simulation exercises suggest no loss of estimator precision.
    JEL: C3 I1
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:21593&r=all
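
One stage of the proposed chain is an ordinary bivariate probit. Below is a minimal sketch of its log-likelihood, evaluated with the bivariate normal CDF; variable names are illustrative, and the looped evaluation is for clarity rather than speed. Maximizing this objective pair by pair is what sidesteps the high-dimensional normal integrals that standard MVP estimation must simulate.

```python
import numpy as np
from scipy.stats import multivariate_normal

def bivariate_probit_loglik(params, y1, y2, X):
    """Log-likelihood of a bivariate probit; rho mapped through tanh to (-1, 1)."""
    k = X.shape[1]
    b1, b2 = params[:k], params[k:2 * k]
    rho = np.tanh(params[-1])
    xb1, xb2 = X @ b1, X @ b2
    s1, s2 = 2 * y1 - 1, 2 * y2 - 1          # sign flips encode the four cells
    ll = 0.0
    for i in range(len(y1)):
        r = s1[i] * s2[i] * rho
        p = multivariate_normal.cdf([s1[i] * xb1[i], s2[i] * xb2[i]],
                                    mean=[0.0, 0.0],
                                    cov=[[1.0, r], [r, 1.0]])
        ll += np.log(max(p, 1e-300))
    return ll

# Usage on simulated data (true b1 = b2 = [0, 1], rho = 0.5).
rng = np.random.default_rng(6)
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
L = np.linalg.cholesky(np.array([[1.0, 0.5], [0.5, 1.0]]))
eps = rng.standard_normal((n, 2)) @ L.T
y1 = (X @ np.array([0.0, 1.0]) + eps[:, 0] > 0).astype(int)
y2 = (X @ np.array([0.0, 1.0]) + eps[:, 1] > 0).astype(int)
print(bivariate_probit_loglik(np.array([0.0, 1.0, 0.0, 1.0, np.arctanh(0.5)]),
                              y1, y2, X))
```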
  15. By: Matyas Barczy; Mohamed Ben Alaya; Ahmed Kebaier; Gyula Pap
    Abstract: We study asymptotic properties of maximum likelihood estimators of drift parameters for a jump-type Heston model based on continuous time observations of the price process together with its jump part. We prove strong consistency and asymptotic normality for all admissible parameter values except one, where we show only weak consistency and non-normal asymptotic behavior. We also present some simulations to illustrate our results.
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1509.08869&r=all
  16. By: Yacine Aït-Sahalia; Dacheng Xiu
    Abstract: We develop the necessary methodology to conduct principal component analysis at high frequency. We construct estimators of realized eigenvalues, eigenvectors, and principal components and provide the asymptotic distribution of these estimators. Empirically, we study the high frequency covariance structure of the constituents of the S&P 100 Index using as little as one week of high frequency data at a time. The explanatory power of the high frequency principal components varies over time. During the recent financial crisis, the first principal component becomes increasingly dominant, explaining up to 60% of the variation on its own, while the second principal component drives the common variation of financial sector stocks.
    JEL: C22 C58 G01
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:21584&r=all
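
A back-of-the-envelope version of the basic object studied above: build the realized covariance matrix from intraday returns and eigen-decompose it. The paper's estimators of realized eigenvalues and principal components, and their asymptotics, are considerably more involved; this sketch only illustrates the computation and its factor-recovery behavior on synthetic data.

```python
import numpy as np

def realized_pca(returns):
    """returns: (n_obs, n_assets) intraday returns -> eigenvalues, eigenvectors."""
    rcov = returns.T @ returns                 # realized covariance matrix
    eigval, eigvec = np.linalg.eigh(rcov)
    order = np.argsort(eigval)[::-1]           # sort by explained variation
    return eigval[order], eigvec[:, order]

# Usage: a one-factor structure should concentrate in the first component.
rng = np.random.default_rng(3)
common = rng.standard_normal((390, 1))
r = common @ rng.uniform(0.5, 1.5, (1, 10)) + 0.5 * rng.standard_normal((390, 10))
lam, _ = realized_pca(r)
print("share explained by first component:", lam[0] / lam.sum())
```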
  17. By: Paolo Giudici (Department of Economics and Management, University of Pavia); Laura Parisi (Department of Economics and Management, University of Pavia)
    Abstract: Monetary policies, either actual or perceived, cause changes in monetary interest rates. These changes impact the economy through financial institutions, which react to changes in the monetary rates with changes in their administered rates, on both deposits and loans. The dynamics of administered bank interest rates in response to changes in money market rates is essential for examining the impact of monetary policies on the economy. Chong et al. (2006) proposed an error correction model to study this impact, using data from before the recent financial crisis. Parisi et al. (2015) analyzed the Chong error correction model, extended it, proposed an alternative one-equation model that is simpler to interpret, and applied it to the recent period, characterized by close-to-zero monetary rates. In this paper we extend the previous models in a dynamic sense, modelling monetary transmission effects by means of stochastic processes. The main contribution of this work consists of novel parsimonious models that are endogenously determined and generalizable. Secondly, the paper introduces a predictive performance assessment methodology, which allows all the proposed models to be compared on a fair basis. From an applied viewpoint, the paper applies the proposed models to different interest rates on loans, showing how monetary policy differentially impacts different types of lending.
    Keywords: Error Correction, Forecasting Bank Rates, Monte Carlo predictions, Stochastic Processes.
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0106&r=all
  18. By: Guillermo Carlomagno; Antoni Espasa
    Abstract: This paper deals with macro variables that have a large number of components, and our aim is to model and forecast all of them. We adopt a basic statistical procedure for discovering common trends among a large set of series and propose some extensions to take into account data irregularities and small-sample issues. The forecasting strategy consists of estimating single-equation models for all the components, including the restrictions derived from the existence of common trends. An application to the disaggregated US CPI shows the usefulness of the procedure in real data problems.
    Keywords: Cointegration, Pairwise testing, Disaggregation, Forecast model selection, Outliers treatment, Inflation
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws1518&r=all
  19. By: Gabor Bekes (Institute of Economics, Centre for Economic and Regional Studies, Hungarian Academy of Sciences and CEPR); Peter Harasztosi (National Bank of Hungary)
    Abstract: This paper proposes a simple method for measuring the spatial robustness of estimated coefficients and considers the role of the size of administrative districts and regions. The procedure, dubbed "Grid and Shake", offers a solution to a practical empirical issue that arises when one compares a variable of interest across spatially aggregated units, such as regions. It may, for instance, be applied to investigate competition, agglomeration, or spillover effects. The method makes it possible to (i) carry out estimations at various levels of aggregation and compare the evidence, (ii) treat the uneven and non-random distribution of administrative unit sizes, (iii) compare results on administrative and artificial units, and (iv) gauge the statistical significance of differences. To illustrate the method, we use Hungarian data and compare estimates of agglomeration externalities at various levels of aggregation. We find that the differences among elasticities estimated at various levels of aggregation are broadly in the same range as those found in the literature employing various estimation methods. Hence, the choice of spatial aggregation appears to be of equal importance to the modeling and econometric specification of the estimation.
    Keywords: economic geography, firm productivity, agglomeration premium, spatial grid randomization
    JEL: R12 R30 C15
    Date: 2015–06
    URL: http://d.repec.org/n?u=RePEc:has:discpr:1526&r=all
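
An illustrative reading of the "Grid and Shake" idea: aggregate point data to artificial square grids of several sizes, randomly shift ("shake") the grid, re-estimate the elasticity on each aggregation, and compare the spread of estimates across shakes and cell sizes. The grid sizes, the log-log regression, and the shifting scheme below are all illustrative assumptions.

```python
import numpy as np

def estimate_on_grid(x, y, coords, cell, shift=(0.0, 0.0)):
    """Aggregate to a square grid and regress log mean y on log mean x."""
    cells = np.floor((coords + shift) / cell).astype(int)
    keys = {tuple(c) for c in cells}
    xm, ym = [], []
    for k in keys:
        mask = (cells == k).all(axis=1)
        xm.append(x[mask].mean()); ym.append(y[mask].mean())
    return np.polyfit(np.log(xm), np.log(ym), 1)[0]   # slope = elasticity

# Usage: true elasticity is 0.3; vary cell size and shake the grid.
rng = np.random.default_rng(5)
coords = rng.uniform(0, 100, (2000, 2))
x = rng.lognormal(0, 0.5, 2000)
y = x ** 0.3 * rng.lognormal(0, 0.2, 2000)
for cell in (5, 10, 25):
    shakes = [estimate_on_grid(x, y, coords, cell,
                               shift=rng.uniform(0, cell, 2)) for _ in range(20)]
    print(f"cell {cell}: mean {np.mean(shakes):.3f}, sd {np.std(shakes):.3f}")
```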
  20. By: Charles F. Manski; Aleksey Tetenov
    Abstract: Medical research has evolved conventions for choosing sample size in randomized clinical trials that rest on the theory of hypothesis testing. Bayesians have argued that trials should be designed to maximize subjective expected utility in settings of clinical interest. This perspective is compelling given a credible prior distribution on treatment response, but Bayesians have struggled to provide guidance on specification of priors. We use the frequentist statistical decision theory of Wald (1950) to study design of trials under ambiguity. We show that epsilon-optimal rules exist when trials have large enough sample size. An epsilon-optimal rule has expected welfare within epsilon of the welfare of the best treatment in every state of nature. Equivalently, it has maximum regret no larger than epsilon. We consider trials that draw predetermined numbers of subjects at random within groups stratified by covariates and treatments. The principal analytical findings are simple sufficient conditions on sample sizes that ensure existence of epsilon-optimal treatment rules when outcomes are bounded. These conditions are obtained by application of Hoeffding (1963) large deviations inequalities to evaluate the performance of empirical success rules.
    JEL: C90
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:cca:wpaper:430&r=all
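
One concrete instance of the paper's logic, under simplifying assumptions: two treatments, outcomes bounded in [0, 1], n subjects per arm, and the empirical success rule (choose the arm with the higher sample mean). Hoeffding's inequality bounds regret at effect size Delta by Delta * exp(-n * Delta^2 / 2); this is maximized at Delta = 1/sqrt(n), so maximum regret is at most 1/sqrt(e*n), and n >= 1/(e * eps^2) per arm suffices for an eps-optimal rule. This is a sketch of the style of bound, not the paper's exact sufficient condition.

```python
import math

def sufficient_n(eps):
    """Smallest per-arm n making the Hoeffding regret bound <= eps."""
    return math.ceil(1.0 / (math.e * eps ** 2))

def regret_bound(n):
    """Worst-case regret of the empirical success rule at per-arm size n."""
    return 1.0 / math.sqrt(math.e * n)

for eps in (0.05, 0.01):
    n = sufficient_n(eps)
    print(f"eps = {eps}: n >= {n} per arm (bound = {regret_bound(n):.4f})")
```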
  21. By: Mikkel Bennedsen (Aarhus University and CREATES)
    Abstract: We introduce a new mathematical model of electricity spot prices which accounts for the most important stylized facts of these time series: seasonality, spikes, stochastic volatility and mean reversion. Empirical studies have found a possible fifth stylized fact, fractality, and our approach explicitly incorporates this into the model of the prices. Our setup generalizes the popular Ornstein-Uhlenbeck-based multi-factor framework of Benth et al. (2007) and allows us to perform statistical tests to distinguish between an Ornstein-Uhlenbeck-based model and a fractal model. Further, through the multi-factor approach we account for seasonality and spikes before estimating, and making inference on, the degree of fractality. This is novel in the literature, and we present simulation evidence showing that these precautions are crucial for accurate estimation. Lastly, we estimate our model on recent data from six European energy exchanges and find statistical evidence of fractality in five of the six markets. As an application of our model, we show how, in these five markets, a fractal component improves short-term forecasting of the prices.
    Keywords: Energy markets, electricity prices, roughness, fractals, mean reversion, multi-factor modelling, forecasting.
    JEL: C22 C51 C52 C53 Q41
    Date: 2015–09–18
    URL: http://d.repec.org/n?u=RePEc:aah:create:2015-42&r=all
  22. By: Hautsch, Nikolaus; Herrera, Rodrigo
    Abstract: We propose a multivariate dynamic intensity peaks-over-threshold model to capture extreme events in a multivariate time series of returns. The random occurrence of extreme events exceeding a threshold is modeled by means of a multivariate dynamic intensity model allowing for feedback effects between the individual processes. We propose alternative specifications of the multivariate intensity process using autoregressive conditional intensity and Hawkes-type specifications. Likewise, temporal clustering of the size of exceedances is captured by an autoregressive multiplicative error model based on a generalized Pareto distribution. We allow for spillovers between both the intensity processes and the process of marks. The model is applied to jointly model extreme returns in the daily returns of three major stock indexes. We find strong empirical support for temporal clustering of both the occurrence of extremes and the size of exceedances. Moreover, significant feedback effects between both types of processes are observed. Backtesting Value-at-Risk (VaR) and Expected Shortfall (ES) forecasts shows that the proposed model produces not only a good in-sample fit but also reliable out-of-sample predictions. We show that including temporal clustering of the size of exceedances, and its feedback with the intensity, results in better forecasts of VaR and ES.
    Keywords: Extreme value theory, Value-at-Risk, Expected shortfall, Self-exciting point process, Conditional intensity
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:zbw:cfswop:516&r=all
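
A minimal sketch of the self-exciting building block behind one marginal of such a model: an exponential-kernel Hawkes intensity, simulated by Ogata's thinning algorithm. Parameter values are illustrative, and the multivariate spillover structure and generalized Pareto marks of the paper are not reproduced.

```python
import numpy as np

def hawkes_intensity(t, event_times, mu=0.1, alpha=0.5, beta=1.0):
    """Exponential-kernel Hawkes intensity at time t given past event times."""
    past = event_times[event_times < t]
    return mu + alpha * np.sum(np.exp(-beta * (t - past)))

def simulate_hawkes(T, mu=0.1, alpha=0.5, beta=1.0, seed=4):
    """Ogata's thinning: between events the intensity decays, so the current
    intensity plus one jump bounds it until the next candidate point."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        lam_bar = hawkes_intensity(t, np.array(events), mu, alpha, beta) + alpha
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        if rng.uniform() < hawkes_intensity(t, np.array(events), mu, alpha, beta) / lam_bar:
            events.append(t)
    return np.array(events)

# Usage: clustered exceedance times; a squared CV of gaps above 1 signals clustering.
ev = simulate_hawkes(1000.0)
gaps = np.diff(ev)
print(f"{len(ev)} exceedances; clustering ratio vs. Poisson: "
      f"{np.var(gaps) / np.mean(gaps) ** 2:.2f}")
```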
  23. By: Roger Klein (Rutgers University); Chan Shen (UT MD Anderson)
    Abstract: We propose instrumental variable (IV) estimators for quantile marginal effects and the parameters upon which they depend in a semiparametric outcome model with endogenous discrete treatment variables. We prove identification, consistency, and asymptotic normality of the estimators. We also show that they are efficient under correct model specification. Further, we show that they are robust to misspecification of the treatment model in that consistency and asymptotic normality continue to hold in this case. In the Monte Carlo study, the estimators perform well over diverse designs covering both correct and incorrect treatment model specifications.
    Keywords: semiparametric, IV, marginal effects, efficiency, robustness
    JEL: C14 C16
    Date: 2015–09–17
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201511&r=all
  24. By: Schober, Dominik; Woll, Oliver
    Abstract: Cycles play an important role when analyzing market phenomena. In many markets, both overlaying cycles (weekly, seasonal or business cycles) and time-varying cycles (e.g. asymmetric lengths of peak and off-peak periods, or variation in business cycle length) exist simultaneously. Identification of these market cycles is crucial, yet no standard detection procedure exists to disentangle them. We introduce and investigate an adaptation of an endogenous structural break test for jointly detecting simultaneously overlaying and time-varying cycles. This is useful for growth or business cycle analysis as well as for the analysis of complex strategic behavior and short-term dynamics.
    Keywords: structural breaks, cluster analysis, filter, rolling regression, change points, model selection, cycles, economic dynamics
    JEL: C22 C24 C29 O47 L50
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:zbw:zewdip:15067&r=all
  25. By: Eo, Yunjong
    Abstract: I make inferences about complicated patterns of structural breaks in inflation dynamics. I extend Chib's (1998) approach by allowing multiple parameters, such as the unconditional mean, a group of persistence parameters, and/or the residual variance, to undergo mutually independent structural breaks at different dates and with different numbers of breaks. Structural breaks are modeled as abrupt changes to identify potential regime shifts in economic structure such as the long-run inflation target, monetary policy, and price-setting behavior. I consider postwar quarterly U.S. inflation rates based on the CPI and the GDP deflator over the period from 1953:Q1 to 2013:Q4. Using Bayesian model selection procedures, I find that the two inflation measures experienced distinct structural changes, in different parameters as well as at different dates. CPI inflation experienced a dramatic drop in persistence around the early 1980s, but GDP deflator inflation is still persistent. In addition, the residual variance for both inflation measures switched from a low-volatility regime to a high-volatility regime in the early 1970s, but it returned to another low-volatility regime at different dates: the early 1980s for GDP deflator inflation and the early 1990s for CPI inflation. The residual variance for CPI inflation has increased again since the early 2000s, while GDP deflator inflation has remained less volatile. These volatility shifts are confirmed by the empirical results based on the unobserved components model with stochastic volatility. However, I do not find evidence of a structural shift in the unconditional mean. The recent literature shows considerable controversy over the structural break in inflation persistence around the early 1980s, and this appears to depend on the measure of inflation, as highlighted by the empirical findings in this paper.
    Keywords: Bayesian Analysis; Structural Breaks; Multiple-Group Changepoint; Inflation Dynamics; Persistence; UC-SV Model
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:syd:wpaper:2015-18&r=all

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.