New Economics Papers on Econometrics
By: | Jönsson, Kristian (Department of Economics, Lund University) |
Abstract: | In this paper, we study the size distortions of the KPSS test for stationarity when serial correlation is present and samples are small and medium-sized. It is argued that two distinct sources of the size distortions can be identified. The first source is the finite-sample distribution of the long-run variance estimator used in the KPSS test, while the second is the serial correlation not captured by the long-run variance estimator because of too narrow a choice of the truncation lag parameter. When the relative importance of the two sources is studied, it is found that the size of the KPSS test can be reasonably well controlled if the finite-sample distribution of the KPSS test statistic, conditional on the time-series dimension and the truncation lag parameter, is known. Hence, finite-sample critical values that can be applied to reduce the size distortions of the KPSS test are supplied. When the power of the test is studied, it is found that the price paid for the increased size control is lower power against a non-stationary alternative hypothesis. (An illustrative code sketch follows this entry.) |
Keywords: | Stationarity Testing; Unit Root; Finite-Sample Inference; Long-Run Variance; Monte Carlo Simulation; Permanent Income Hypothesis; Private Consumption |
JEL: | C12 C14 C15 C22 E21 |
Date: | 2006–10–30 |
URL: | http://d.repec.org/n?u=RePEc:hhs:lunewp:2006_020&r=ecm |
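Both ingredients of the test discussed above, the partial-sum statistic and the Bartlett-kernel long-run variance with a truncation lag, are standard, so the statistic and the kind of finite-sample critical values the paper supplies can be sketched. The function and simulation design below are illustrative, not the paper's code:

```python
import numpy as np

def kpss_stat(y, lags):
    """KPSS level-stationarity statistic with a Bartlett-kernel
    (Newey-West) long-run variance estimator and truncation lag `lags`."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    e = y - y.mean()          # residuals from a regression on a constant
    S = np.cumsum(e)          # partial-sum process
    s2 = e @ e / T            # long-run variance, lag-0 term
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1.0)            # Bartlett weight
        s2 += 2.0 * w * (e[j:] @ e[:-j]) / T
    return (S @ S) / (T**2 * s2)

# Finite-sample critical values conditional on (T, lags), in the spirit of
# the paper, can be tabulated by simulating under i.i.d. normal data:
rng = np.random.default_rng(0)
sims = [kpss_stat(rng.standard_normal(100), lags=4) for _ in range(5000)]
print(np.quantile(sims, 0.95))   # simulated 95% critical value, T=100, l=4
```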
By: | Oleg Glouchakov (Department of Economics, York University) |
Abstract: | This paper describes how to estimate the alternative model that admits a one-time break in the coefficients of a linear regression function and in the variances of the errors. Bai and Perron (1998) introduced an estimation and testing procedure for multiple breaks in regression coefficients. We limit the number of breaks to one but extend the estimation to the alternative model that allows the variances of the errors to break too. The method is based on the application of specific objective functions in conjunction with tests of structural change. In particular, the sup-Wald test of Andrews (1993) can be used to detect structural breaks. Andrews and Ploberger (1994) introduce optimal tests of constancy of model parameters when the change point is unknown. However, these tests lose their power optimality properties when the break happens in both the mean and the variance of a process. For such an alternative we introduce a statistic with a modified measure of the distance between model parameters before and after the break. In a Monte Carlo experiment we show that the power of the corresponding sup-test dominates that of the sup-Wald test. If the change point is known, then the test based on this statistic is uniformly more powerful than the Wald test. (An illustrative code sketch follows this entry.) |
Keywords: | Change point estimation, asymptotic distribution, Wald statistic, parameter instability, structural change |
JEL: | C0 C1 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:yca:wpaper:2006_3&r=ecm |
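As a benchmark for the modified statistic described above, the classical homoskedastic sup-Wald test for a single coefficient break (Andrews 1993) is easy to sketch; the authors' modified distance measure, which also targets a variance break, is not reproduced here, and all names are illustrative:

```python
import numpy as np

def sup_wald(y, X, trim=0.15):
    """Sup-Wald test for a single break in the coefficients of y = X @ b + e:
    compute a Wald statistic at every candidate break date in the trimmed
    interior of the sample and return the supremum."""
    T, k = X.shape
    stats = []
    for tb in range(int(trim * T), int((1 - trim) * T)):
        X1, y1, X2, y2 = X[:tb], y[:tb], X[tb:], y[tb:]
        b1, *_ = np.linalg.lstsq(X1, y1, rcond=None)
        b2, *_ = np.linalg.lstsq(X2, y2, rcond=None)
        e1, e2 = y1 - X1 @ b1, y2 - X2 @ b2
        s2 = (e1 @ e1 + e2 @ e2) / (T - 2 * k)   # pooled error variance
        V = s2 * (np.linalg.inv(X1.T @ X1) + np.linalg.inv(X2.T @ X2))
        d = b1 - b2
        stats.append(d @ np.linalg.solve(V, d))
    return max(stats)
```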
By: | Frank Gerhard (Barclays Capital, London); Nikolaus Hautsch (Department of Economics, University of Copenhagen) |
Abstract: | This paper proposes a dynamic proportional hazard (PH) model with non-specified baseline hazard for the modelling of autoregressive duration processes. A categorization of the durations allows us to reformulate the PH model as an ordered response model based on extreme value distributed errors. In order to capture persistent serial dependence in the duration process, we extend the model with observation-driven ARMA dynamics based on generalized errors. We illustrate the maximum likelihood estimation of both the model parameters and discrete points of the underlying unspecified baseline survivor function. The dynamic properties of the model, as well as the quality of the estimation, are investigated in a Monte Carlo study. It is illustrated that the model is a useful approach for estimating conditional failure probabilities based on (persistently) serially dependent duration data which might be subject to censoring structures. In an empirical study based on financial transaction data we present an application of the model to estimate conditional asset price change probabilities. Evaluating the forecasting properties of the model, it is shown that the proposed approach is a promising competitor to well-established ACD type models. |
Keywords: | autoregressive duration models; dynamic ordered response models; generalized residuals; censoring |
JEL: | C22 C25 C41 G14 |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:kud:kuiefr:200605&r=ecm |
By: | Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, Syracuse NY 13244-1020); Lorenzo Trapani (Cass Business School and Bergamo University); Giovanni Urga (Cass Business School) |
Abstract: | This paper develops a novel asymptotic theory for panel models with common shocks. We assume that contemporaneous correlation can be generated both by the presence of common regressors among units and by weak spatial dependence among the error terms. Several characteristics of the panel are considered: the cross-sectional and time series dimensions can be either fixed or large; factors can be either observable or unobservable; the factor model can describe either a cointegrating relationship or a spurious regression, and we also consider the stationary case. We derive the rates of convergence and the limiting distributions of the ordinary least squares (OLS) estimates of the model parameters in all the aforementioned cases. |
Keywords: | cross-sectional dependence; common shocks; nonstationary panel |
JEL: | C13 C23 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:max:cprwps:77&r=ecm |
By: | Bunzel, Helle; Iglesias, Emma M |
Abstract: | This paper proposes several new tests for structural change in the multivariate linear regression model. The current state of the art is sup-Wald type tests along the lines of Bai, Lumsdaine and Stock (1998), which Bernard, Idoudi, Khalaf and Yélou (2006) show to have very large size distortions, especially for high dimensional systems. They propose the use of Monte Carlo type tests to control for size in finite samples. In this paper we propose several procedures that strike a balance between the two previous approaches. We first estimate the break point using alternating observations, and then use the estimated break point to create a test statistic either with the whole sample or with the observations not used for the break point estimation. For the latter approach, it is then possible to use Monte Carlo methods to control size. In contrast to the sup-Wald type tests, which have non-standard asymptotic distributions, we show that our tests are asymptotically chi-square distributed, using methods similar to those in Andrews (2004). Additionally, our tests remain asymptotically valid even when the distributional assumption made for the Monte Carlo adjustments is incorrect. We illustrate the new test statistics in the univariate context of discount rates and changes in the interest rates, and also in the multivariate setting of the Capital Asset Pricing Model. |
Keywords: | structural stability; structural change; multivariate linear regression model; breaks; Monte Carlo test; CAPM; discount rate |
Date: | 2006–11–09 |
URL: | http://d.repec.org/n?u=RePEc:isu:genres:12694&r=ecm |
By: | Eiji Kurozumi; Yoichi Arai |
Abstract: | This paper considers a single equation cointegrating model and proposes the locally best invariant and unbiased (LBIU) test for the null hypothesis of cointegration. We derive the asymptotic local power functions and compare them with those of the standard residual-based test, and we show that the LBIU test is more powerful over a wide range of local alternatives. We then conduct a Monte Carlo simulation to investigate the finite-sample properties of the tests and show that the LBIU test outperforms the residual-based test in terms of both size and power. The advantage of the LBIU test is particularly evident when the error is highly autocorrelated. Further, we point out that the finite-sample performance of existing tests is largely affected by the initial value condition, while our tests are immune to it. We propose a simple transformation of the data that resolves the problem in the existing tests. |
Keywords: | Cointegration, locally best test, point optimal test |
JEL: | C12 C22 |
Date: | 2006–11 |
URL: | http://d.repec.org/n?u=RePEc:hst:hstdps:d06-190&r=ecm |
By: | Todd E. Clark; Michael W. McCracken |
Abstract: | A body of recent work suggests that commonly used VAR models of output, inflation, and interest rates may be prone to instabilities. In the face of such instabilities, a variety of estimation or forecasting methods might be used to improve the accuracy of forecasts from a VAR. These methods include using different approaches to lag selection, different observation windows for estimation, (over-) differencing, intercept correction, stochastically time-varying parameters, break dating, discounted least squares, Bayesian shrinkage, and detrending of inflation and interest rates. Although each individual method could be useful, the uncertainty inherent in any single representation of instability could mean that combining forecasts from the entire range of VAR estimates will further improve forecast accuracy. Focusing on models of U.S. output, prices, and interest rates, this paper examines the effectiveness of combination in improving VAR forecasts made with real-time data. The combinations include simple averages, medians, trimmed means, and a number of weighted combinations based on: Bates-Granger regressions, factor model estimates, regressions involving just forecast quartiles, Bayesian model averaging, and predictive least squares based weighting. Our goal is to identify those approaches that, in real time, yield the most accurate forecasts of these variables. We use forecasts from simple univariate time series models and the Survey of Professional Forecasters as benchmarks. (An illustrative code sketch follows this entry.) |
Keywords: | Economic forecasting ; Vector autoregression |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp06-12&r=ecm |
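The simplest combination schemes listed above are mechanical enough to state directly. The sketch below covers the simple average, median, trimmed mean, and an inverse-MSE reading of Bates-Granger weighting; the regression-based and Bayesian weighting schemes evaluated in the paper are not reproduced, and all names are ours:

```python
import numpy as np

def combine(forecasts, how="mean", trim=0.2):
    """Combine a vector of competing point forecasts for one target.
    'trimmed' drops trim/2 of the forecasts from each tail, then averages."""
    f = np.sort(np.asarray(forecasts, dtype=float))
    if how == "mean":
        return f.mean()
    if how == "median":
        return np.median(f)
    if how == "trimmed":
        k = int(trim * len(f) / 2)
        return f[k:len(f) - k].mean()
    raise ValueError(how)

def inverse_mse_weights(past_errors):
    """Bates-Granger style weights, inversely proportional to each model's
    historical mean squared forecast error (rows: models, columns: periods)."""
    w = 1.0 / (np.asarray(past_errors, dtype=float) ** 2).mean(axis=1)
    return w / w.sum()
```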
By: | Roger Klein (Rutgers University); Francis Vella (Georgetown University and IZA Bonn) |
Abstract: | This paper formulates a likelihood-based estimator for a double index, semiparametric binary response equation. A novel feature of this estimator is that it is based on density estimation under local smoothing. While the proofs differ from those based on alternative density estimators, the finite sample performance of the estimator is significantly improved. As binary responses often appear as endogenous regressors in continuous outcome equations, we also develop an optimal instrumental variables estimator in this context. For this purpose, we specialize the double index model for binary response to one with heteroscedasticity that depends on an index different from that underlying the “mean-response”. We show that such (multiplicative) heteroscedasticity, whose form is not parametrically specified, effectively induces exclusion restrictions on the outcome equation. The estimator developed below exploits such identifying information. We provide simulation evidence on the favorable performance of the estimators and illustrate their use through an empirical application on the determinants, and effect, of attendance at a government financed school. |
Keywords: | binary response, semiparametric, heteroscedasticity, endogenous treatment |
JEL: | C35 C14 |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp2383&r=ecm |
By: | Matteo Pelagatti |
Abstract: | Duration dependent Markov-switching VAR (DDMS-VAR) models are time series models whose data generating process is a mixture of two VAR processes. The switching between the two VAR processes is governed by a two-state Markov chain with transition probabilities that depend on how long the chain has been in a state. In the present paper we analyze the second order properties of such models and propose a Markov chain Monte Carlo algorithm to carry out Bayesian inference on the model’s unknowns. Furthermore, freeware written by the author for the analysis of time series by means of DDMS-VAR models is illustrated. The methodology and the software are applied to the analysis of the U.S. business cycle. (An illustrative code sketch follows this entry.) |
Keywords: | Markov-switching, business cycle, Gibbs sampler, duration dependence, vector autoregression |
JEL: | C11 C15 C32 C41 E32 |
Date: | 2003–08 |
URL: | http://d.repec.org/n?u=RePEc:mis:wpaper:20051101&r=ecm |
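The defining feature above, transition probabilities that depend on how long the chain has stayed in its current regime, is easy to illustrate by simulation. The sketch below uses a logistic link for the switching probability; the paper's exact parameterization and its Gibbs-sampler estimation are not reproduced, and all names are illustrative:

```python
import numpy as np

def simulate_dd_chain(T, theta, rng):
    """Simulate a two-state Markov chain with duration-dependent switching:
    P(switch out of state s after d periods) = sigmoid(a_s + b_s * d),
    with theta = {0: (a0, b0), 1: (a1, b1)}. In a DDMS-VAR, each regime
    would then index its own VAR for the observed series."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    s, d, path = 0, 1, []
    for _ in range(T):
        path.append(s)
        a, b = theta[s]
        if rng.random() < sigmoid(a + b * d):
            s, d = 1 - s, 1      # switch regime and reset the duration clock
        else:
            d += 1               # stay put; duration in the regime grows
    return np.array(path)

# e.g. recessions (state 1) that become likelier to end as they age:
path = simulate_dd_chain(600, {0: (-4.0, 0.05), 1: (-3.0, 0.30)},
                         np.random.default_rng(0))
```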
By: | Luc Bauwens; Jeroen V.K. Rombouts (IEA, HEC Montréal) |
Abstract: | We estimate by Bayesian inference the mixed conditional heteroskedasticity model of Haas, Mittnik, and Paolella (2004a). We construct a Gibbs sampler algorithm to compute posterior and predictive densities. The number of mixture components is selected by the marginal likelihood criterion. We apply the model to S&P 500 daily returns. |
Keywords: | Finite mixture, ML estimation, Bayesian inference, value at risk. |
JEL: | C11 C15 C32 |
Date: | 2006–06 |
URL: | http://d.repec.org/n?u=RePEc:iea:carech:0607&r=ecm |
By: | Martina Nardon (Department of Applied Mathematics, University of Venice); Paolo Pianca (Department of Applied Mathematics, University of Venice) |
Abstract: | This contribution deals with the Monte Carlo simulation of generalized Gaussian random variables. Such a parametric family of distributions has been proposed in many applications in science and engineering to describe physical phenomena, and it also seems useful in modeling economic and financial data. For values of the shape parameter a within a certain range, the distribution presents heavy tails. In particular, the cases a=1/3 and a=1/2 are considered. For such values of the shape parameter, different simulation methods are assessed. (An illustrative code sketch follows this entry.) |
Keywords: | Generalized Gaussian density, heavy tails, transformations of random variables, Monte Carlo simulation, Lambert W function |
JEL: | C15 C16 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:vnm:wpaper:145&r=ecm |
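One standard route to simulating from the generalized Gaussian density f(x) ∝ exp(-|x|^a) is the gamma transformation sketched below, checked here against scipy's built-in gennorm distribution; the Lambert-W-based methods the paper assesses for a = 1/2 are not reproduced:

```python
import numpy as np
from scipy import stats

def gg_sample(a, size, rng):
    """Draw from f(x) proportional to exp(-|x|**a) using the fact that
    |X|**a ~ Gamma(1/a, 1), with an independent random sign."""
    g = rng.gamma(shape=1.0 / a, scale=1.0, size=size)
    sign = rng.choice([-1.0, 1.0], size=size)
    return sign * g ** (1.0 / a)

rng = np.random.default_rng(42)
for a in (1 / 3, 1 / 2):          # the heavy-tailed cases studied in the paper
    x = gg_sample(a, 100_000, rng)
    ks = stats.kstest(x, stats.gennorm(a).cdf).statistic
    print(f"a = {a:.3f}: KS distance vs. scipy gennorm = {ks:.4f}")
```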
By: | Joan Jasiak (Department of Economics, York University); C. Gourieroux (CREST, CEPREMAP, University of Toronto) |
Abstract: | This paper introduces new dynamic quantile models called the Dynamic Additive Quantile (DAQ) model and the Quantile Factor Model (QFM) for univariate time series and panel data, respectively. The Dynamic Additive Quantile (DAQ) model is suitable for applications to financial data such as univariate returns, and can be used for computation and updating of the Value-at-Risk. The Quantile Factor Model (QFM) is a multivariate model that can represent the dynamics of cross-sectional distributions of returns, individual incomes, and corporate ratings. The estimation method proposed in the paper relies on an optimization criterion based on the inverse KLIC measure. Goodness-of-fit tests and diagnostic tools for fit assessment are also provided. For illustration, the models are estimated on stock return data from the Toronto Stock Exchange (TSX). |
Keywords: | Value-at-Risk, Factor Model, Information Criterion, Income Inequality, Panel Data, Loss-Given-Default |
JEL: | C10 |
Date: | 2006–09 |
URL: | http://d.repec.org/n?u=RePEc:yca:wpaper:2006_4&r=ecm |
By: | Fabrizio Cipollini; Robert F. Engle; Giampiero M. Gallo |
Abstract: | The Multiplicative Error Model introduced by Engle (2002) for positive valued processes is specified as the product of a (conditionally autoregressive) scale factor and an innovation process with positive support. In this paper we propose a multivariate extension of such a model, taking into consideration the possibility that the vector innovation process is contemporaneously correlated. The estimation procedure is hindered by the lack of probability density functions for multivariate positive valued random variables. We suggest the use of copula functions and of estimating equations to jointly estimate the parameters of the scale factors and of the correlations of the innovation processes. Empirical applications on volatility indicators are used to illustrate the gains over the equation-by-equation procedure. |
Date: | 2006–11 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberte:0331&r=ecm |
By: | David E. A. Giles (Department of Economics, University of Victoria) |
Abstract: | We show that the full asymptotic distribution for Watson’s statistic, modified for discrete data, can be computed by standard methods. Previous approximate percentiles for the uniform multinomial case are found to be accurate. More extensive percentiles are presented for this distribution, and for the distribution associated with “Benford’s Law”. |
Keywords: | Distributions on the Circle, Goodness-of-fit, Watson's U^2_N statistic, Discrete data, Benford's Law |
JEL: | C12 C16 |
Date: | 2006–11–17 |
URL: | http://d.repec.org/n?u=RePEc:vic:vicewp:0607&r=ecm |
By: | Enzo Giacomini; Wolfgang Härdle; Ekaterina Ignatieva; Vladimir Spokoiny |
Abstract: | Measuring dependence in a multivariate time series is tantamount to modelling its dynamic structure in space and time. In the context of a multivariate normally distributed time series, the evolution of the covariance (or correlation) matrix over time describes this dynamic. A wide variety of applications, though, requires a modelling framework different from the multivariate normal. In risk management the non-normal behaviour of most financial time series calls for nonlinear (i.e. non-Gaussian) dependency. The correct modelling of non-Gaussian dependencies is therefore a key issue in the analysis of multivariate time series. In this paper we use copula functions with adaptively estimated time-varying parameters for modelling the distribution of returns, free from the usual normality assumptions. Further, we apply copulae to the estimation of the Value-at-Risk (VaR) of a portfolio and show that they outperform the RiskMetrics approach, a widely used methodology for VaR estimation. (An illustrative code sketch follows this entry.) |
Keywords: | Value-at-Risk, time varying copula, adaptive estimation, nonparametric estimation. |
JEL: | C14 |
Date: | 2006–11 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2006-075&r=ecm |
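As a point of reference for the copula-based VaR estimation described above, the static Gaussian-copula case with empirical marginals can be sketched compactly; the adaptive, time-varying copula estimation that is the paper's contribution is exactly what this sketch omits, and all names are ours:

```python
import numpy as np
from scipy import stats

def gaussian_copula_var(returns, weights, alpha=0.01, n_sim=100_000, seed=0):
    """One-period portfolio VaR from a static Gaussian copula with
    empirical marginals. returns: (T, d) array of asset returns."""
    rng = np.random.default_rng(seed)
    T, d = returns.shape
    u = stats.rankdata(returns, axis=0) / (T + 1)     # empirical PIT per margin
    R = np.corrcoef(stats.norm.ppf(u), rowvar=False)  # copula correlation
    z = rng.multivariate_normal(np.zeros(d), R, size=n_sim)
    u_sim = stats.norm.cdf(z)
    # map simulated uniforms back through the empirical marginal quantiles
    sim = np.column_stack([np.quantile(returns[:, j], u_sim[:, j])
                           for j in range(d)])
    loss = -(sim @ weights)
    return np.quantile(loss, 1 - alpha)   # VaR at confidence level 1 - alpha
```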
By: | Oya Celasun; Joong Shik Kang |
Abstract: | This paper evaluates the bias of the least-squares-with-dummy-variables (LSDV) method in fiscal reaction function estimations. A growing number of studies estimate fiscal policy reaction functions, that is, relationships between the primary fiscal balance and its determinants, including public debt and the output gap. A previously unexplored methodological issue in these estimations is that lagged debt is not a strictly exogenous variable, which biases the LSDV estimator in short panels. We derive the bias analytically to understand its determinants and run Monte Carlo simulations to assess its likely size in empirical work. We find the bias to be smaller than the bias of the LSDV estimator in a comparable autoregressive dynamic panel model, and we show the LSDV method to outperform a number of alternatives in estimating fiscal reaction functions. (An illustrative code sketch follows this entry.) |
Keywords: | Fiscal reaction functions, panel data, dynamic models |
Date: | 2006–08–07 |
URL: | http://d.repec.org/n?u=RePEc:imf:imfwpa:06/182&r=ecm |
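The bias at issue is the classic short-panel problem: once fixed effects are removed by the within transformation, a lagged regressor is correlated with the transformed error. A minimal Monte Carlo for the comparable autoregressive panel mentioned above is sketched below; the design is illustrative, and the paper's fiscal reaction function adds debt and output-gap regressors:

```python
import numpy as np

def lsdv_bias(N=50, T=10, rho=0.8, reps=1000, seed=0):
    """Average bias of the LSDV (within) estimator of rho in the panel
    y_it = rho * y_i,t-1 + alpha_i + eps_it with fixed effects alpha_i."""
    rng = np.random.default_rng(seed)
    est = []
    for _ in range(reps):
        alpha = rng.standard_normal(N)
        y = np.zeros((N, T + 1))
        for t in range(1, T + 1):
            y[:, t] = rho * y[:, t - 1] + alpha + rng.standard_normal(N)
        ylag, ycur = y[:, :-1], y[:, 1:]
        yl = ylag - ylag.mean(axis=1, keepdims=True)   # within transformation
        yc = ycur - ycur.mean(axis=1, keepdims=True)
        est.append((yl * yc).sum() / (yl * yl).sum())
    return np.mean(est) - rho

print(lsdv_bias())   # clearly negative for small T (Nickell-type bias)
```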
By: | Vanessa Berenguer-Rico; Josep Lluis Carrion-i-Silvestre (Universitat de Barcelona) |
Abstract: | The paper addresses the concept of multicointegration in a panel data framework. The proposal builds upon the panel data cointegration procedures developed in Pedroni (2004), for which we compute the moments of the parametric statistics. When individuals are either cross-section independent, or cross-section dependence can be removed by cross-section demeaning, our approach can be applied to the wider framework of mixed I(2) and I(1) stochastic processes. |
Keywords: | common factors, cross-section dependence, cross-multicointegration, I(2) processes, multicointegration, panel data |
JEL: | C12 C22 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:bar:bedcje:2006160&r=ecm |
By: | Ying Chen; Wolfgang Härdle; Vladimir Spokoiny |
Abstract: | Over recent years, the study of risk management has been prompted by the Basel Committee's requirements for regular banking supervision. There are, however, limitations of some widely used risk management methods that either calculate risk measures under the Gaussian distributional assumption or involve numerical difficulty. The primary aim of this paper is to present a realistic and fast method, GHICA, which overcomes these limitations in multivariate risk analysis. The idea is to first retrieve independent components (ICs) out of the observed high-dimensional time series and then individually and adaptively fit the resulting ICs in the generalized hyperbolic (GH) distributional framework. For the volatility estimation of each IC, the local exponential smoothing technique is used to achieve the best possible accuracy of estimation. Finally, the fast Fourier transformation technique is used to approximate the density of the portfolio returns. The proposed GHICA method is applicable to covariance estimation as well. It is compared with the dynamic conditional correlation (DCC) method based on simulated data with d = 50 GH distributed components. We further implement the GHICA method to calculate risk measures given 20-dimensional German DAX portfolios and a dynamic exchange rate portfolio. Several alternative methods are considered as well to compare the accuracy of calculation with that of GHICA. |
Keywords: | Multivariate Risk Management, Independent Component Analysis, Generalized Hyperbolic Distribution, Local Exponential Estimation, Value at Risk, Expected Shortfall. |
JEL: | C14 C16 C32 C61 G20 |
Date: | 2006–11 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2006-078&r=ecm |
By: | Joan Jasiak (Department of Economics, York University); R. Sufana (University of Toronto); C. Gourieroux (CREST, CEPREMAP, University of Toronto) |
Abstract: | The Wishart Autoregressive (WAR) process is a multivariate process of stochastic positive definite matrices. The WAR is proposed in this paper as a dynamic model for stochastic volatility matrices. It yields simple nonlinear forecasts at any horizon and has a factor representation, which separates white noise directions from those that contain all information about the past. For illustration, the WAR is applied to a sequence of intraday realized volatility and covolatility matrices. |
Keywords: | Stochastic Volatility, CAR Process, Factor Analysis, Reduced Rank, Realized Volatility |
JEL: | G13 C51 |
Date: | 2005–09 |
URL: | http://d.repec.org/n?u=RePEc:yca:wpaper:2005_2&r=ecm |
By: | Albert E. DePrince |
Abstract: | This paper provides a direct test for forecast bias using the Theil equation. In this test the constant term is simply the difference between the mean of the forecast and the mean of the actual data. A simple data transformation leads to this specification of the constant term. The approach is extended to a regression with additional independent variables, where the constant term is the difference between the means of the dependent variable and any one of the independent variables. (An illustrative code sketch follows this entry.) |
Keywords: | forecast bias, test for bias |
JEL: | C2 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:mts:wpaper:200602&r=ecm |
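The abstract does not spell out the transformation, but one regression with the stated property, an intercept equal to the difference of the means, is obtained by regressing the forecast error on the demeaned forecast. The sketch below is an assumed reading, not necessarily the paper's exact specification:

```python
import numpy as np
import statsmodels.api as sm

def bias_test(actual, forecast):
    """Regress actual - forecast on the demeaned forecast. By construction
    the OLS intercept equals mean(actual) - mean(forecast), so its t-test
    is a direct test of forecast bias."""
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    X = sm.add_constant(f - f.mean())
    res = sm.OLS(a - f, X).fit()
    return res.params[0], res.pvalues[0]   # bias estimate and its p-value
```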
By: | Sullivan, Paul |
Abstract: | Structural discrete choice dynamic programming models have been shown to be a valuable tool for analyzing a wide range of economic behavior. A major limitation on the complexity and applicability of these models is the computational burden associated with computing the high dimensional integrals that typically characterize an agent's decision rules. This paper develops a regression-based approach to interpolating value functions during the solution of dynamic programming models that alleviates this burden. This approach is suitable for use in models that incorporate unobserved state variables that are serially correlated across time and correlated across choices within a time period. The key assumption is that one unobserved state variable, or error term, in the model is distributed extreme value. Additional error terms that allow for correlation between unobservables across time or across choices within a given time period may be freely incorporated in the model. Value functions are simulated at a fraction of the state space and interpolated at the remaining points using a new regression function based on the extreme value closed form solution for the expected maxima of the value function. This regression function is well suited for use in models with large choice sets and complicated error structures. The performance of the interpolation method appears to be excellent, and it greatly reduces the computational burden of estimating the parameters of a dynamic programming model. (An illustrative code sketch follows this entry.) |
Keywords: | dynamic programming models; interpolation; simulation methods; estimation of dynamic programming models |
JEL: | C50 C15 |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:864&r=ecm |
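The closed form that the interpolating regression builds on is the standard extreme-value "logsum": when each choice-specific value receives an additive i.i.d. type-I extreme value shock, the expected maximum over the choice set is available analytically. A numerically stable sketch (ours, not the paper's code):

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329   # Euler-Mascheroni constant

def emax_ev1(v):
    """E[max_j (v_j + eps_j)] = gamma + log(sum_j exp(v_j)) when the eps_j
    are i.i.d. type-I extreme value with unit scale. Simulated Emax values
    at a subset of state points can then be regressed on this logsum
    (plus other state variables) to interpolate the rest of the state space."""
    v = np.asarray(v, dtype=float)
    m = v.max()                        # subtract the max for stability
    return EULER_GAMMA + m + np.log(np.exp(v - m).sum())
```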
By: | Marco Del Negro; Frank Schorfheide |
Abstract: | In Bayesian analysis of dynamic stochastic general equilibrium (DSGE) models, prior distributions for some of the taste-and-technology parameters can be obtained from microeconometric or presample evidence, but it is difficult to elicit priors for the parameters that govern the law of motion of unobservable exogenous processes. Moreover, since it is challenging to formulate beliefs about the correlation of parameters, most researchers assume that all model parameters are independent of each other. We provide a simple method of constructing prior distributions for a subset of DSGE model parameters from beliefs about the moments of the endogenous variables. We use our approach to investigate the importance of nominal rigidities and show how the specification of prior distributions affects our assessment of the relative importance of different frictions. |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedawp:2006-16&r=ecm |
By: | Juarez, Miguel A.; Steel, Mark F. J. |
Abstract: | In this paper we propose a model-based method to cluster units within a panel. The underlying model is autoregressive and non-Gaussian, allowing for both skewness and fat tails, and the units are clustered according to their dynamic behaviour and equilibrium level. Inference is addressed from a Bayesian perspective and model comparison is conducted using the formal tool of Bayes factors. Particular attention is paid to prior elicitation and posterior propriety. We suggest priors that require little subjective input from the user and possess hierarchical structures that enhance the robustness of the inference. Two examples illustrate the methodology: one analyses the economic growth of OECD countries and the second investigates the employment growth of Spanish manufacturing firms. |
Keywords: | autoregressive modelling; employment growth; GDP growth convergence; hierarchical prior; model comparison; posterior propriety; skewness |
JEL: | C23 C11 |
Date: | 2006–11–20 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:880&r=ecm |
By: | Francis Y. Kumah |
Abstract: | Adequate modeling of the seasonal structure of consumer prices is essential for inflation forecasting. This paper suggests a new econometric approach for jointly determining inflation forecasts and monetary policy stances, particularly where seasonal fluctuations of economic activity and prices are pronounced. In an application of the framework, the paper characterizes and investigates the stability of the seasonal pattern of consumer prices in the Kyrgyz Republic and estimates optimal money growth and implied exchange rate paths along with a jointly determined inflation forecast. The approach uses two broad specifications of an augmented error-correction model, with and without seasonal components. Findings from the paper confirm the empirical superiority (in terms of information content and contributions to policymaking) of augmented error-correction models of inflation over single-equation, Box-Jenkins-type general autoregressive seasonal models. Simulations of the estimated error-correction models yield optimal monetary policy paths for achieving inflation targets and demonstrate the empirical significance of seasonality and monetary policy in inflation forecasting. |
Keywords: | Inflation forecasting, seasonal unit roots, monetary policy stance, error-correction models and VAR, Monetary policy, Inflation, Forecasting models |
Date: | 2006–07–28 |
URL: | http://d.repec.org/n?u=RePEc:imf:imfwpa:06/175&r=ecm |
By: | Refik Soyer (The George Washington University School of Business); Thomas A. Mazzuchi (George Washington University School of Engineering and Applied Science); Ehsan S. Soofi (University of Wisconsin-Milwaukee) |
Date: | 2006–06 |
URL: | http://d.repec.org/n?u=RePEc:gwu:wpaper:0012&r=ecm |
By: | Mishra, SK |
Abstract: | Logarithmic spirals are abundantly observed in nature. Gastropods (such as nautilus, cowrie, grove snail, thatcher, etc.) in the mollusca phylum have spiral shells, mostly exhibiting logarithmic spirals vividly. Spider webs show a similar pattern. The low-pressure area over Iceland and the Whirlpool Galaxy resemble logarithmic spirals. Many materials develop spiral cracks, either due to imposed torsion (twist), as in the spiral fracture of the tibia, or due to geometric constraints, as in the fracture of pipes. Spiral cracks may, however, arise in situations where no obvious twisting is applied; the symmetry is broken spontaneously. It has been found that the rank-size pattern of the cities of the USA approximately follows a logarithmic spiral. The usual procedure of curve fitting fails miserably in fitting a spiral to empirical data. The difficulties in fitting a spiral to data become much more intense when the observed points z = (x, y) are not measured from their origin (0, 0), but are shifted away from the origin by (cx, cy). We intend in this paper to devise a method to fit a logarithmic spiral to empirical data measured with a displaced origin. The optimization has been done by the Differential Evolution method of global optimization. The method is also tested on numerical data. It appears that our method is successful in estimating the parameters of a logarithmic spiral. However, the estimated values of the parameters of a logarithmic spiral (a and b in r = a*exp(b*(theta + 2*pi*k))) are highly sensitive to the precision with which the shift parameters (cx and cy) are estimated. The method is also very sensitive to errors of measurement in the (x, y) data, and it falters when errors of large magnitude contaminate (x, y). A computer program (Fortran) is appended. (An illustrative code sketch follows this entry.) |
Keywords: | Logarithmic Spiral; Growth Spiral; Bernoulli Spiral; Equiangular Spiral; Cartesian Spiral; Empirical data; Shift in origin; change of origin; displaced pole; polar displacement; displaced origin; Curve Fitting; Spiral fitting; Box Algorithm; Differential Evolution method; Global optimization; Non-linear Programming; multi-modality; Rank size rule |
JEL: | C61 C63 C2 |
Date: | 2006–11–22 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:881&r=ecm |
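The paper appends a Fortran program; purely to illustrate the same idea, the sketch below fits r = a*exp(b*(theta + 2*pi*k)) with a displaced pole (cx, cy) using scipy's Differential Evolution, matching each data point to its best winding number k (all names, bounds, and the synthetic data are ours):

```python
import numpy as np
from scipy.optimize import differential_evolution

def spiral_loss(params, x, y, kmax=5):
    """Sum of squared radial errors around r = a*exp(b*(theta + 2*pi*k))
    with the pole displaced to (cx, cy); theta is only known modulo 2*pi,
    so each point is matched to its best winding number k."""
    a, b, cx, cy = params
    dx, dy = x - cx, y - cy
    r, th = np.hypot(dx, dy), np.arctan2(dy, dx)
    ks = np.arange(-kmax, kmax + 1)
    fitted = a * np.exp(b * (th[:, None] + 2 * np.pi * ks))
    return np.min((r[:, None] - fitted) ** 2, axis=1).sum()

# synthetic spiral with a displaced pole, then parameter recovery
th = np.linspace(0.0, 6 * np.pi, 200)
r = 0.5 * np.exp(0.15 * th)
x, y = 2.0 + r * np.cos(th), -1.0 + r * np.sin(th)
res = differential_evolution(spiral_loss, args=(x, y), seed=1,
                             bounds=[(0.01, 5), (0.01, 1), (-5, 5), (-5, 5)])
print(res.x)   # should be close to (a, b, cx, cy) = (0.5, 0.15, 2.0, -1.0)
```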
By: | Refik Soyer (The George Washington University School of Business); Melinda Hock (Naval Research Laboratory) |
Date: | 2006–11 |
URL: | http://d.repec.org/n?u=RePEc:gwu:wpaper:0013&r=ecm |
By: | Shiyi Chen; Wolfgang Härdle; Rouslan Moro |
Abstract: | Predicting default probabilities is important for firms and banks to operate successfully and to estimate their specific risks. There are many reasons to use nonlinear techniques for predicting bankruptcy from financial ratios. Here we propose the so-called Support Vector Machine (SVM) to estimate default probabilities of German firms. Our analysis is based on the Creditreform database. The results reveal that the eight most important predictors of bankruptcy for these German firms belong to the ratios of activity, profitability, liquidity and leverage, plus the percentage of incremental inventories. Based on the performance measures, the SVM tool can predict a firm's default risk and identify the insolvent firm more accurately than the benchmark logit model. The sensitivity investigation and a corresponding visualization tool reveal that the classifying ability of the SVM appears to be superior over a wide range of the SVM parameters. Based on the nonparametric Nadaraya-Watson estimator, the expected returns predicted by the SVM for regression have a significant positive linear relationship with the risk scores obtained for classification. This evidence is stronger than empirical results for the CAPM based on a linear regression, and confirms that higher risks need to be compensated by higher potential returns. (An illustrative code sketch follows this entry.) |
Keywords: | Support Vector Machine, Bankruptcy, Default Probabilities Prediction, Expected Profitability, CAPM. |
JEL: | C14 G33 C45 G32 |
Date: | 2006–11 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2006-077&r=ecm |
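As a sketch of the classification step (the Creditreform data are proprietary, so the arrays and hyperparameters here are placeholders), an RBF-kernel SVM with Platt-scaled default probabilities can be set up in scikit-learn:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fit_default_svm(X, y):
    """X: (n_firms, n_ratios) financial ratios (activity, profitability,
    liquidity, leverage, ...); y: 1 = insolvent, 0 = solvent.
    probability=True adds Platt scaling, yielding default probabilities;
    C and gamma are the parameters whose range a sensitivity analysis
    like the paper's would explore."""
    clf = make_pipeline(
        StandardScaler(),
        SVC(kernel="rbf", C=1.0, gamma="scale", probability=True),
    )
    return clf.fit(X, y)

# usage: pd_hat = fit_default_svm(X_train, y_train).predict_proba(X_test)[:, 1]
```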