nep-fmk New Economics Papers
on Financial Markets
Issue of 2009–03–22
eight papers chosen by
Kwang Soo Cheong
Johns Hopkins University

  1. Making sense of the subprime crisis By Kristopher S. Gerardi; Andreas Lehnert; Shane M. Sherlund; Paul S. Willen
  2. AIG and the Fed: Prologue to future financial regulation? By Tatom, John
  3. The Credit Ratings Game By Patrick Bolton; Xavier Freixas; Joel Shapiro
  4. What Segments Equity Markets? By Geert Bekaert; Campbell R. Harvey; Christian Lundblad; Stephan Siegel
  5. Robust Equilibrium Yield Curves By Isaac Kleshchelski; Nicolas Vincent
  6. A Decision Rule to Minimize Daily Capital Charges in Forecasting Value-at-Risk By Juan Angel Jiménez Martín; Michael McAleer; Teodosio Pérez-Amaral
  7. The Ten Commandments for Optimizing Value-at-Risk and Daily Capital Charges By Michael McAleer
  8. The Time-Series Properties on Housing Prices: A Case Study of the Southern California Market By Rangan Gupta; Stephen M. Miller

  1. By: Kristopher S. Gerardi; Andreas Lehnert; Shane M. Sherlund; Paul S. Willen
    Abstract: This paper explores the question of whether market participants could have or should have anticipated the large increase in foreclosures that occurred in 2007 and 2008. Most of these foreclosures stem from loans originated in 2005 and 2006, leading many to suspect that lenders originated a large volume of extremely risky loans during this period. However, the authors show that while loans originated in this period did carry extra risk factors, particularly increased leverage, underwriting standards alone cannot explain the dramatic rise in foreclosures. Focusing on the role of house prices, the authors ask whether market participants underestimated the likelihood of a fall in house prices or the sensitivity of foreclosures to house prices. The authors show that, given available data, market participants should have been able to understand that a significant fall in prices would cause a large increase in foreclosures, although loan-level (as opposed to ownership-level) models would have predicted a smaller rise than actually occurred. Examining analyst reports and other contemporary discussions of the mortgage market to see what market participants thought would happen, the authors find that analysts, on the whole, understood that a fall in prices would have disastrous consequences for the market but assigned a low probability to such an outcome.
    Keywords: Subprime mortgage ; Financial crises ; Foreclosure
    Date: 2009
  2. By: Tatom, John
    Abstract: Financial sector regulatory reform has been a leading national issue since the U.S. Treasury issued its Blueprint for reform in spring 2008. The mortgage foreclosure and financial crises reinforced popular interest in whether the U.S. regulatory framework was deficient and how to fix it. Meanwhile, some key decisions in the United States, particularly concerning the failure and bailout of AIG and some investment banks in fall 2008, have established precedents for a new regulatory framework and policies. Where policymakers go from here is not certain, but the ideas on the table and the direction of policy suggest that the role of the Federal Reserve (Fed) in financial regulation will become central, at least in critical periods. This paper reviews the calls for a new “financial stability” regulator and the potential role of the Fed as such a regulator. It argues that the takeover of AIG provides a useful example and precursor of the Fed’s suitability in that role. Section I explains the Fed’s role as regulator and the relationship of the Fed’s lender-of-last-resort function to systemic risk. It also addresses recent changes in the notion of systemic risk and systemically significant firms, concluding that there remains a case for a new regulator of such risk. Section II reviews the financial problems of AIG and the changing intervention of the Fed and the U.S. Treasury in AIG. The last section takes up related issues: the role of a central bank versus a Financial Stability Authority in regulating banks or systemic risk, the potential role of the Fed or another federal regulator in regulating the insurance industry, and the risk to Fed independence from extending its regulatory role to cover systemic risk.
The Fed’s actions with regard to AIG provide strong evidence that broadening the too-big-to-fail policy, or broadening the Fed’s lender-of-last-resort policy to include non-bank firms, poses sharp conflicts with the objectives of monetary policy and of financial stability. Moreover, the loss experience of AIG illustrates, for purposes of regulatory reform, the problems of fragmented or absent federal regulation of insurers.
    Keywords: Financial Regulation; Central Banking; Systemic Risk;
    JEL: E58 G28
    Date: 2009–02–28
  3. By: Patrick Bolton; Xavier Freixas; Joel Shapiro
    Abstract: The spectacular failure of top-rated structured finance products has brought renewed attention to the conflicts of interest of Credit Rating Agencies (CRAs). We model both the CRA conflict of understating credit risk to attract more business, and the issuer conflict of purchasing only the most favorable ratings (issuer shopping), and examine the effectiveness of a number of proposed regulatory solutions of CRAs. We find that CRAs are more prone to inflate ratings when there is a larger fraction of naive investors in the market who take ratings at face value, or when CRA expected reputation costs are lower. To the extent that in booms the fraction of naive investors is higher, and the reputation risk for CRAs of getting caught understating credit risk is lower, our model predicts that CRAs are more likely to understate credit risk in booms than in recessions. We also show that, due to issuer shopping, competition among CRAs in a duopoly is less efficient (conditional on the same equilibrium CRA rating policy) than having a monopoly CRA, in terms of both total ex-ante surplus and investor surplus. Allowing tranching decreases total surplus further. We argue that regulatory intervention requiring upfront payments for rating services (before CRAs propose a rating to the issuer) combined with mandatory disclosure of any rating produced by CRAs can substantially mitigate the conflicts of interest of both CRAs and issuers.
    Keywords: Credit rating agencies, conflicts of interest, ratings shopping
    JEL: D43 D82 G24 L15
    Date: 2009–01
  4. By: Geert Bekaert; Campbell R. Harvey; Christian Lundblad; Stephan Siegel
    Abstract: We propose a new, valuation-based measure of world equity market segmentation. While we observe decreased levels of segmentation in many developing countries, the level of segmentation is still significant. In contrast to previous research, we characterize the factors that account for variation in market segmentation both through time and across countries. While a country's regulation with respect to foreign capital flows is important in determining its level of segmentation, we find that non-regulatory factors are also related to the cross-sectional and time-series variation in the level of segmentation. We identify a country's political risk profile and its stock market development as two additional local segmentation factors, as well as the U.S. corporate credit spread as a global segmentation factor.
    JEL: F00 F15 F21 F3 F43 G1 G15 P45 P48
    Date: 2009–03
  5. By: Isaac Kleshchelski; Nicolas Vincent
    Abstract: This paper studies the quantitative implications of the interaction between robust control and stochastic volatility for key asset pricing phenomena. We present an equilibrium term structure model in which output growth is conditionally heteroskedastic. The agent does not know the true model of the economy and chooses optimal policies that are robust to model misspecification. The choice of robust policies greatly amplifies the effect of conditional heteroskedasticity in consumption growth, improving the model's ability to explain asset prices. In a robust control framework, stochastic volatility in consumption growth generates both a state-dependent market price of model uncertainty and a stochastic market price of risk. We estimate the model using data from the bond and equity market, as well as consumption data. We show that the model is consistent with key empirical regularities that characterize the bond and equity markets. We also characterize empirically the set of models the robust representative agent entertains, and show that this set is "small". In other words, it is statistically difficult to distinguish between models in this set.
    Keywords: Yield curve, market price of uncertainty, robust control
    JEL: D81 E43 G11 G12
    Date: 2009
  6. By: Juan Angel Jiménez Martín (Dpto. de Fundamentos de Análisis Económico II, Facultad de CC. Económicas y Empresariales, Universidad Complutense de Madrid); Michael McAleer (Department of Quantitative Economics, Complutense University of Madrid, and Econometric Institute, Erasmus University Rotterdam); Teodosio Pérez-Amaral (Department of Quantitative Economics, Complutense University of Madrid)
    Abstract: Under the Basel II Accord, banks and other Authorized Deposit-taking Institutions (ADIs) have to communicate their daily risk estimates to the monetary authorities at the beginning of the trading day, using a variety of Value-at-Risk (VaR) models to measure risk. Sometimes the risk estimates communicated using these models are too high, thereby leading to large capital requirements and high capital costs. At other times, the risk estimates are too low, leading to excessive violations, so that realised losses are above the estimated risk. In this paper we propose a learning strategy that complements existing methods for calculating VaR and lowers daily capital requirements, while restricting the number of endogenous violations within the Basel II Accord penalty limits. We suggest a decision rule that responds to violations in a discrete and instantaneous manner, while adapting more slowly in periods of no violations. We apply the proposed strategy to Standard & Poor’s 500 Index and show there can be substantial savings in daily capital charges, while restricting the number of violations to within the Basel II penalty limits.
    Date: 2009
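For context, the Basel II mechanics that such a decision rule trades off can be sketched roughly as follows (a simplified textbook illustration with made-up VaR figures, not the authors' actual rule): the daily capital charge is the larger of the latest VaR and a multiplier (3 + k) times the 60-day average VaR, where the penalty k grows with the number of violations recorded in the previous 250 trading days.

```python
import numpy as np

def penalty_k(violations: int) -> float:
    """Basel II 'traffic light' plus factor k, from violations in the last 250 days."""
    yellow = {5: 0.40, 6: 0.50, 7: 0.65, 8: 0.75, 9: 0.85}
    if violations <= 4:        # green zone: no penalty
        return 0.0
    if violations >= 10:       # red zone: maximum penalty
        return 1.0
    return yellow[violations]  # yellow zone

def daily_capital_charge(var_history, violations):
    """Charge_t = max(latest VaR, (3 + k) * mean of the last 60 daily VaRs).

    var_history: positive VaR numbers, most recent last."""
    var_history = np.asarray(var_history, dtype=float)
    k = penalty_k(violations)
    return max(var_history[-1], (3.0 + k) * var_history[-60:].mean())

# Illustration: with a flat VaR of 1.0, zero violations give a multiplier of 3,
# while six violations raise it to 3.5 -- the cost the decision rule manages.
charge_green = daily_capital_charge(np.ones(60), violations=0)
charge_yellow = daily_capital_charge(np.ones(60), violations=6)
```

The point of the sketch is the asymmetry the abstract describes: reporting a higher VaR raises the charge directly, while reporting a lower VaR risks violations that raise the multiplier instead.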
  7. By: Michael McAleer (Department of Quantitative Economics, Complutense University of Madrid, and Econometric Institute, Erasmus University Rotterdam)
    Abstract: Credit risk is the most important type of risk in terms of monetary value. Another key risk measure is market risk, which is concerned with stocks and bonds, and related financial derivatives, as well as exchange rates and interest rates. This paper is concerned with market risk management and monitoring under the Basel II Accord, and presents Ten Commandments for optimizing Value-at-Risk (VaR) and daily capital charges, based on choosing wisely from: (1) conditional, stochastic and realized volatility; (2) symmetry, asymmetry and leverage; (3) dynamic correlations and dynamic covariances; (4) single index and portfolio models; (5) parametric, semiparametric and nonparametric models; (6) estimation, simulation and calibration of parameters; (7) assumptions, regularity conditions and statistical properties; (8) accuracy in calculating moments and forecasts; (9) optimizing threshold violations and economic benefits; and (10) optimizing private and public benefits of risk management. For practical purposes, it is found that the Basel II Accord would seem to encourage excessive risk taking at the expense of providing accurate measures and forecasts of risk and VaR.
    Date: 2009
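Two of the simplest choices on the menu in point (5) — a parametric (normal) VaR estimator and a nonparametric historical-simulation estimator — can be sketched as follows. This is a generic illustration on synthetic returns, not code from the paper.

```python
import numpy as np

Z_99 = 2.3263478740408408  # standard-normal 99th percentile, Phi^{-1}(0.99)

def parametric_var(returns):
    """1-day 99% VaR assuming i.i.d. normal returns: -(mu - z * sigma)."""
    mu, sigma = np.mean(returns), np.std(returns, ddof=1)
    return -(mu - Z_99 * sigma)

def historical_var(returns, alpha=0.01):
    """Nonparametric 1-day VaR: negated empirical alpha-quantile of returns."""
    return -np.quantile(returns, alpha)

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=2500)  # ~10 years of daily returns, sigma = 1%
v_param = parametric_var(returns)           # both should be near 0.0233 here,
v_hist = historical_var(returns)            # since the data really are normal
```

On genuinely normal data the two agree; they diverge under the fat tails and asymmetry that points (2) and (5) are about, which is why the choice of estimator feeds directly into the capital charge.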
  8. By: Rangan Gupta (University of Pretoria); Stephen M. Miller (University of Connecticut and University of Nevada, Las Vegas)
    Abstract: We examine the time-series relationship between housing prices in eight Southern California metropolitan statistical areas (MSAs). First, we perform cointegration tests of the housing price indexes for the MSAs, finding seven cointegrating vectors. Thus, the evidence suggests that one common trend links the housing prices in these eight MSAs, a purchasing power parity finding for the housing prices in Southern California. Second, we perform temporal Granger causality tests revealing intertwined temporal relationships. The Santa Ana MSA leads the pack in temporally causing housing prices in six of the other seven MSAs, excluding only the San Luis Obispo MSA. The Oxnard MSA experienced the largest number of temporal effects from other MSAs, six of the seven, excluding only Los Angeles. The Santa Barbara MSA proved the most isolated in that it temporally caused housing prices in only two other MSAs (Los Angeles and Oxnard), and housing prices in the Santa Ana MSA temporally caused prices in Santa Barbara. Third, we calculate out-of-sample forecasts in each MSA, using various vector autoregressive (VAR) and vector error-correction (VEC) models, as well as Bayesian, spatial, and causality versions of these models with various priors. Different specifications provide superior forecasts in the different MSAs. Finally, we consider the ability of these time-series models to provide accurate out-of-sample predictions of turning points in housing prices that occurred in 2006:Q4. Recursive forecasts, where the sample is updated each quarter, provide reasonably good forecasts of turning points.
    Keywords: Housing prices, Forecasting
    JEL: C32 R31
    Date: 2009–03
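The pairwise temporal-causality machinery behind rankings like these can be illustrated with a bare-bones Granger F-test on synthetic data (a minimal sketch, not the authors' VAR/VEC setup): x Granger-causes y if lagged values of x significantly reduce the residual sum of squares of an autoregression of y.

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F-statistic for H0: lags of x add nothing to an AR(lags) model of y."""
    n = len(y)
    Y = y[lags:]
    # Column i holds the (i+1)-th lag of the series, aligned with Y.
    own = np.column_stack([y[lags - i - 1 : n - i - 1] for i in range(lags)])
    cross = np.column_stack([x[lags - i - 1 : n - i - 1] for i in range(lags)])
    ones = np.ones((n - lags, 1))
    Xr = np.hstack([ones, own])          # restricted: constant + own lags
    Xu = np.hstack([ones, own, cross])   # unrestricted: + lags of x
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df_num, df_den = lags, n - lags - Xu.shape[1]
    return ((rss_r - rss_u) / df_num) / (rss_u / df_den)

# Synthetic example: x drives y with a one-period lag, but not vice versa.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()
f_xy = granger_f(y, x)  # large: lags of x help predict y
f_yx = granger_f(x, y)  # small: lags of y do not help predict x
```

In the paper's setting, y and x would be two MSA price indexes, and (with cointegrated series) the regressions would also carry an error-correction term; the F-test logic is the same.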

This nep-fmk issue is ©2009 by Kwang Soo Cheong. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.