nep-for New Economics Papers
on Forecasting
Issue of 2012‒05‒02
twenty papers chosen by
Rob J Hyndman
Monash University

  1. Forecasting by factors, by variables, or both? By Jennifer L. Castle; David F. Hendry; Michael P. Clements
  2. Point and interval forecasts of age-specific fertility rates: a comparison of functional principal component methods By Han Lin Shang
  3. Evaluating the forecast quality of GDP components: An application to G7 By Paulo Júlio; Pedro M. Esperança
  4. Robust Ranking of Multivariate GARCH Models by Problem Dimension By Michael McAleer; Massimiliano Caporin
  5. Forecast bias in two dimensions By Dean Croushore
  6. Forecasting Korean inflation By In Choi; Seong Jin Hwang
  7. Model Selection in Kernel Ridge Regression By Peter Exterkate
  8. Demand and supply of cereals in India: 2010-2025 By Ganesh-Kumar, A.; Mehta, Rajesh; Pullabhotla, Hemant; Prasad, Sanjay K.; Ganguly, Kavery; Gulati, Ashok
  9. Risk spillovers in international equity portfolios By Matteo Bonato; Massimiliano Caporin; Angelo Ranaldo
  10. How forward looking are central banks? Some evidence from their forecasts By Michał Brzoza-Brzezina; Jacek Kotłowski; Agata Miśkowiec
  11. Alternative Modeling for Long Term Risk. By Dominique Guegan; Xin Zhao
  12. On the empirical evidence of asymmetry effects in the interest rate pass-through in Poland By Anna Sznajderska
  13. Communication of uncertainty in weather forecasts By Marimo, Pricilla; Kaplan, Todd R; Mylne, Ken; Sharpe, Martin
  14. Forecasting the Brazilian Real and the Mexican Peso: Asymmetric Loss, Forecast Rationality, and Forecaster Herding By Ingrid Groessl; Nadine Levratto
  15. Nonparametric prediction of stock returns guided by prior knowledge By Michael Scholz; Jens Perch Nielsen; Stefan Sperlich
  16. Estimating a Semiparametric Asymmetric Stochastic Volatility Model with a Dirichlet Process Mixture By Mark J Jensen; John M Maheu
  17. Using Merton model: an empirical assessment of alternatives By Zvika Afik; Ohad Arad; Koresh Galil
  18. Support Vector Machines with Evolutionary Feature Selection for Default Prediction By Wolfgang Karl Härdle; Dedy Dwi Prastyo; Christian Hafner
  19. Projecting China's Current Account Surplus By William R. Cline
  20. The Prospects of the Baby Boomers: Methodological Challenges in Projecting the Lives of an Aging Cohort By Christian Westermeier; Anika Rasner; Markus M. Grabka

  1. By: Jennifer L. Castle; David F. Hendry; Michael P. Clements
    Abstract: We consider forecasting with factors, variables and both, modeling in-sample using Autometrics so all principal components and variables can be included jointly, while tackling multiple breaks by impulse-indicator saturation. A forecast-error taxonomy for factor models highlights the impacts of location shifts on forecast-error biases. Forecasting US GDP over 1-, 4- and 8-step horizons using the dataset from Stock and Watson (2009) updated to 2011:2 shows factor models are more useful for nowcasting or short-term forecasting, but their relative performance declines as the forecast horizon increases. Forecasts for GDP levels highlight the need for robust strategies such as intercept corrections or differencing when location shifts occur, as in the recent financial crisis.
    Keywords: Model selection, Factor models, Forecasting, Impulse-indicator saturation, Autometrics
    JEL: C51 C22
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:600&r=for
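    A rough sketch of the factors-versus-variables comparison described above: principal components are extracted from a synthetic panel and direct h-step OLS forecasts are produced from factors only, variables only, and both. All data, names and settings are placeholders; the sketch does not implement Autometrics or impulse-indicator saturation.
```python
# Illustrative sketch only: direct h-step forecasts from principal-component
# factors, observed variables, or both.  Data and dimensions are synthetic;
# Autometrics and impulse-indicator saturation are not implemented here.
import numpy as np

rng = np.random.default_rng(0)
T, N, k, h = 200, 40, 3, 4                 # sample size, panel width, factors, horizon

X = rng.standard_normal((T, N))            # stand-in for a large macro panel
y = X[:, :2] @ np.array([0.5, -0.3]) + rng.standard_normal(T)   # stand-in target

# Principal components of the standardized panel
Z = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
F = Z @ Vt[:k].T                           # first k estimated factors

def direct_forecast(predictors, target, horizon):
    """OLS 'direct' forecast: regress y_{t+h} on predictors dated t."""
    W = np.column_stack([np.ones(len(predictors)), predictors])
    beta, *_ = np.linalg.lstsq(W[:-horizon], target[horizon:], rcond=None)
    return W[-1] @ beta                    # forecast of y_{T+h}

print("factors only       :", direct_forecast(F, y, h))
print("variables only     :", direct_forecast(X[:, :5], y, h))
print("factors + variables:", direct_forecast(np.column_stack([F, X[:, :5]]), y, h))
```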
  2. By: Han Lin Shang
    Abstract: Accurate forecasts of age-specific fertility rates are critical for government policy, planning and decision making. Using data from the Human Fertility Database (2011), we compare the empirical accuracy of point and interval forecasts obtained from the approach of Hyndman and Ullah (2007) and its variants for forecasting age-specific fertility rates. The analyses are carried out using the age-specific fertility data of 15 mostly developed countries. Based on one-step-ahead to 20-step-ahead forecast error measures, the weighted Hyndman-Ullah method provides the most accurate point and interval forecasts of age-specific fertility rates among all the methods we investigated.
    Keywords: Functional data analysis, functional principal component analysis, forecast accuracy comparison, random walk with drift, random walk, ARIMA model
    JEL: J11 J13 C14
    Date: 2012–04–16
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2012-10&r=for
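    A minimal sketch of the general Hyndman-Ullah idea referred to above: age-specific rates are decomposed into a mean curve plus principal components, and the component scores are forecast with a random walk with drift (one of the univariate methods listed in the keywords). Data and dimensions are synthetic placeholders, not the Human Fertility Database.
```python
# Minimal sketch of the Hyndman-Ullah idea: decompose age-specific rates into
# a mean curve plus principal components and forecast the component scores
# with a random walk with drift.  All data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
years, ages, h = 40, 35, 20                     # annual curves, ages 15-49, horizon
age = np.arange(15, 15 + ages)
base = np.exp(-0.5 * ((age - 28) / 5.0) ** 2) * 0.12
rates = (base * np.exp(rng.normal(-0.01, 0.02, size=(years, 1)).cumsum(0))
         + rng.normal(0, 0.002, size=(years, ages)))

log_rates = np.log(np.clip(rates, 1e-6, None))
mu = log_rates.mean(0)                          # mean log-rate curve
U, s, Vt = np.linalg.svd(log_rates - mu, full_matrices=False)

K = 2                                           # number of principal components kept
scores = U[:, :K] * s[:K]                       # time series of PC scores
drift = np.diff(scores, axis=0).mean(0)         # random walk with drift per score
future_scores = scores[-1] + np.outer(np.arange(1, h + 1), drift)

forecast = np.exp(mu + future_scores @ Vt[:K])  # h years of age-specific rate curves
print(forecast.shape)                           # (20, 35)
```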
  3. By: Paulo Júlio (Office for Strategy and Studies, Portuguese Ministry of Economy and Employment); Pedro M. Esperança
    Abstract: We evaluate the quality of OECD's and IMF's forecasts for real GDP growth and for GDP expenditure components. We use a scaled statistic to compare the prediction models' performance across GDP components with different volatilities and decompose the GDP forecast error into the corresponding component contributions. Moreover, we use two recently proposed statistics - Mean of Total Weighted Absolute Error and Mean of Total Weighted Squared Error - to evaluate the overall accuracy of component predictions. We conclude that overpredictions in investment and net exports explain GDP overpredictions at 1-year horizons. Accurate GDP forecasts for same-year predictions are mostly explained by canceling out effects in component prediction errors - mainly in exports and imports - rather than by accurate component predictions. We also show that forecasts are in general inefficient for both GDP and its components and that the 2008 crisis had a large negative effect on the quality of forecasts being issued, but not on the predictive quality of forecast models.
    Keywords: Forecast evaluation, GDP expenditure components, Mean of total weighted absolute error, Mean of total weighted squared error, G7, 2008 crisis
    JEL: C52 C53 E37
    Date: 2012–04
    URL: http://d.repec.org/n?u=RePEc:mde:wpaper:0047&r=for
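    The exact MTWAE and MTWSE statistics are defined in the paper; the sketch below only illustrates the simpler idea of decomposing a GDP growth forecast error into expenditure-component contributions using approximate contribution-to-growth weights, with made-up numbers.
```python
# Illustration only: decomposing a GDP growth forecast error into expenditure
# component contributions via (approximate) contribution-to-growth weights.
# Numbers are made up; the paper's MTWAE/MTWSE statistics are defined there.
components = {            # GDP share, forecast growth (%), realized growth (%)
    "consumption": (0.60,  1.8,  1.5),
    "investment":  (0.20,  3.0,  0.5),
    "government":  (0.20,  1.0,  1.2),
    "exports":     (0.30,  4.0,  2.0),
    "imports":     (-0.30, 3.5,  1.0),   # enters GDP with a negative weight
}

contributions = {name: share * (fcst - actual)
                 for name, (share, fcst, actual) in components.items()}

gdp_error = sum(contributions.values())
print("GDP growth forecast error (approx.): %.2f pp" % gdp_error)
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print("  %-12s %+.2f pp" % (name, c))
```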
  4. By: Michael McAleer (Erasmus University Rotterdam, Tinbergen Institute, Kyoto University, Complutense University of Madrid); Massimiliano Caporin (Department of Economics and Management “Marco Fanno”, University of Padova, Italy)
    Abstract: During the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. Recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. We provide an empirical comparison of alternative MGARCH models, namely BEKK, DCC, Corrected DCC (cDCC), CCC, OGARCH, Exponentially Weighted Moving Average, and covariance shrinking, using historical data for 89 US equities. We contribute to the literature in several directions. First, we consider a wide range of models, including the recent cDCC and covariance shrinking models. Second, we use a range of tests and approaches for direct and indirect model comparison, including the Model Confidence Set. Third, we examine how the robust model rankings are influenced by the cross-sectional dimension of the problem.
    Keywords: Covariance forecasting, model confidence set, robust model ranking, MGARCH, robust model comparison.
    Date: 2012–04
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:815&r=for
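    Of the specifications compared above, the Exponentially Weighted Moving Average is the simplest to state; the sketch below shows a RiskMetrics-style EWMA covariance forecast on synthetic returns, with the usual (assumed) daily decay of 0.94.
```python
# Sketch of the simplest covariance forecaster in the comparison above:
# a RiskMetrics-style EWMA, H_t = (1 - lam) * r_{t-1} r_{t-1}' + lam * H_{t-1}.
# Returns are synthetic and lam = 0.94 is the usual daily decay assumption.
import numpy as np

rng = np.random.default_rng(2)
T, N, lam = 1000, 5, 0.94
true_cov = 0.0001 * (np.full((N, N), 0.3) + 0.7 * np.eye(N))
returns = rng.multivariate_normal(np.zeros(N), true_cov, size=T)

H = np.cov(returns[:50].T)                     # initialize on a presample
for r in returns[50:]:
    H = (1 - lam) * np.outer(r, r) + lam * H   # one-step-ahead covariance forecast

print("forecast variances:", np.round(np.diag(H), 6))
print("true variances    :", np.round(np.diag(true_cov), 6))
```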
  5. By: Dean Croushore
    Abstract: Economists have tried to uncover stylized facts about people’s expectations, testing whether such expectations are rational. Tests in the early 1980s suggested that expectations were biased, and some economists took irrational expectations as a stylized fact. But, over time, the results of tests that led to such a conclusion were reversed. In this paper, we examine how tests for bias in expectations, measured using the Survey of Professional Forecasters, have changed over time. In addition, key macroeconomic variables that are the subject of forecasts are revised over time, causing problems in determining how to measure the accuracy of forecasts. The results of bias tests are found to depend on the subsample in question, as well as what concept is used to measure the actual value of a macroeconomic variable. Thus, our analysis takes place in two dimensions: across subsamples and with alternative measures of realized values of variables.
    Keywords: Forecasting; Rational expectations (Economic theory)
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:fip:fedpwp:12-9&r=for
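    A standard unbiasedness check of the kind discussed above is a t-test that the mean forecast error is zero (or, in the same spirit, a Mincer-Zarnowitz regression). The sketch below uses synthetic data to show how the verdict can flip depending on whether the "actual" is a first release or a later vintage; it is not the paper's exact test battery.
```python
# Sketch of a standard unbiasedness check: a t-test that the mean forecast
# error is zero, computed against two different measures of the "actual"
# (e.g. first release vs. latest vintage).  All data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 120
truth = rng.normal(2.5, 1.0, n)                 # underlying outcome
forecast = truth + rng.normal(0.0, 0.8, n)      # unbiased w.r.t. the truth
first_release = truth + rng.normal(0.0, 0.5, n) # noisy early estimate
latest_vintage = truth + 0.3                    # revisions shift the level up

def bias_t_stat(actual, fcst):
    e = actual - fcst
    return e.mean() / (e.std(ddof=1) / np.sqrt(len(e)))

print("t-stat vs first release :", round(bias_t_stat(first_release, forecast), 2))
print("t-stat vs latest vintage:", round(bias_t_stat(latest_vintage, forecast), 2))
# The same forecasts look unbiased against one actual and biased against the other.
```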
  6. By: In Choi (Department of Economics, Sogang University, Seoul); Seong Jin Hwang
    Abstract: This paper studies the performance of various forecasting models for Korean inflation rates. The models studied in this paper are the AR(p) model, the dynamic predictive regression model with such exogenous variables as the unemployment rate and the term spread, the inflation target model, the random-walk model, and the dynamic predictive regression model using estimated factors along with the unemployment rate and the term spread. The sampling period studied in this paper is 2000M11-2011M06. Among the studied models, the dynamic predictive regression model using estimated factors along with the unemployment rate and the term spread tends to perform best at the 6-month horizon when the factors are extracted from I(0) series and the variables for the factor extraction are selected by the criterion of the correlation of each variable with the inflation rate. The dynamic predictive regression models with the unemployment rate and the term spread also work well at shorter horizons.
    Keywords: inflation forecasting, Phillips curve, term spread, factor model, principal-component estimation, generalized principal-component estimation
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:sgo:wpaper:1202&r=for
  7. By: Peter Exterkate (Department of Economics and CREATES, Aarhus University)
    Abstract: Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated with all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely applicable, and we recommend their use instead of the popular polynomial kernels in general settings in which no information on the data-generating process is available.
    Keywords: Nonlinear forecasting, shrinkage estimation, kernel methods, high dimensionality
    JEL: C51 C53 C63
    Date: 2012–02–28
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-10&r=for
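    A minimal sketch in the spirit of the guidelines above: kernel ridge regression with a Gaussian (RBF) kernel, with the ridge penalty and bandwidth chosen from a small grid by cross-validation. It uses scikit-learn and synthetic data, which is an assumption; the paper's own implementation and grids may differ.
```python
# Minimal sketch of kernel ridge regression with a Gaussian (RBF) kernel and
# tuning parameters chosen from a small grid by cross-validation.
# Uses scikit-learn (an assumption) and synthetic data.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)

grid = {"alpha": [0.01, 0.1, 1.0, 10.0],        # ridge penalty
        "gamma": [0.01, 0.05, 0.1, 0.5]}        # RBF bandwidth parameter
search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X[:250], y[:250])

pred = search.predict(X[250:])
print("chosen parameters :", search.best_params_)
print("out-of-sample MSE :", round(float(np.mean((pred - y[250:]) ** 2)), 4))
```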
  8. By: Ganesh-Kumar, A.; Mehta, Rajesh; Pullabhotla, Hemant; Prasad, Sanjay K.; Ganguly, Kavery; Gulati, Ashok
    Abstract: This paper attempts to project the future supply and demand up to the year 2025 for rice and wheat, the two main cereals cultivated and consumed in India. A review of studies that forecast the supply and demand of Indian agricultural commodities revealed three important limitations of such studies: (a) the forecasts are generally overestimated (in the ex post situation); (b) the methodology is not clearly outlined; and (c) ex-ante validation of the forecasts has not been carried out. This study presents forecasts based on models that are validated so that forecast performance can be assessed. In this study, a quadratic almost ideal demand system (QUAIDS), which allows expenditure shares to rise or fall with rising incomes, has been used to model household demand. The model has been estimated with data on consumption of 11 major agricultural commodities from the 61st Round of the National Sample Survey (NSS) for the year 2004–05 (NSSO 2006). Our estimates suggest that the demand elasticity with respect to total food expenditure is negative for rice, wheat, and pulses, which is plausible given the observed fall in the per capita consumption of these commodities over a fairly long period of time even as income levels rose in the country. Validation of this model against actual values for 2007–08 and 1993–94 from the NSS shows that the forecast errors are less than 3 percent for the two cereals, lending confidence to the model's forecasting ability for future years. Under different scenarios for income growth, monthly per capita consumption of rice and wheat in 2025 is forecast to decline to 5.5 and 4.1 kg, respectively, from their 2004–05 levels of 6.1 and 4.4 kg. Scaling up these projections with the official population forecasts of the Government of India and adding indirect demand to direct demand gives the forecasts of total demand. Total demand for rice and wheat in 2025 is forecast to be in the range of 104.7–108.6 and 91.4–101.7 million tons, respectively. The supply of rice and wheat was modeled through two approaches. First, Cobb-Douglas production functions relating crop output to the price of the crop relative to the prices of competing crops and of fertilizer, the total area and proportion of irrigated area under the crop, total fertilizer consumption, and annual average rainfall were estimated using national-level data for 1981–82 through 2007–08. In the second approach, crop output was determined as the product of the total acreage under the crop and its yield, which were modeled separately. Crop acreage was modeled as a function of the relative price of the crop, rainfall and the irrigated crop area, with irrigation modeled separately as a function of investment. Crop yields were modeled as a function of the price of the crop relative to its competing crops and to fertilizer, the proportion of irrigated area under the crop, total fertilizer consumption, and annual average rainfall. The acreage and yield functions were estimated with data for 1981–82 to 2007–08. Based on these alternative models, the supply of rice and wheat in 2025 is forecast to be in the range of 135.5–165.6 and 93.6–114.6 million tons, respectively, under different scenarios for investment and fertilizer growth. These forecasts suggest that, under reasonable demand and supply scenarios, the country is likely to face a growing surplus in rice, from 15.5–30.8 million tons in 2015 to 26.9–60.9 million tons in 2025. There will also likely be a surplus of wheat, though some deficit in 2025 cannot be ruled out: the wheat surplus ranges from 5.0–20.4 million tons in 2015, while in 2025 it ranges from a deficit of 8.1 million tons to a surplus of 28.3 million tons. These trends suggest that managing surpluses rather than deficits is likely to be the bigger policy challenge for India in the future, especially in the case of rice.
    Keywords: cereal demand, cereal supply
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:fpr:ifprid:1158&r=for
  9. By: Matteo Bonato; Massimiliano Caporin; Angelo Ranaldo
    Abstract: We define risk spillover as the dependence of a given asset variance on the past covariances and variances of other assets. Building on this idea, we propose the use of a highly flexible and tractable model to forecast the volatility of an international equity portfolio. According to the risk management strategy proposed, portfolio risk is seen as a specific combination of daily realized variances and covariances extracted from a high-frequency dataset, which includes equities and currencies. In this framework, we focus on the risk spillovers across equities within the same sector (sector spillover), and from currencies to international equities (currency spillover). We compare these specific risk spillovers to a more general framework (full spillover) whereby we allow for lagged dependence across all variances and covariances. The forecasting analysis shows that considering only sector- and currency-risk spillovers, rather than full spillovers, improves performance, both in economic and statistical terms.
    Keywords: Risk spillover, portfolio risk, currency risk, variance forecasting, international portfolio, Wishart distribution
    JEL: C13 C16 C22 C51 C53 G17
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:snb:snbwpa:2012-03&r=for
  10. By: Michał Brzoza-Brzezina (National Bank of Poland, Warsaw School of Economics); Jacek Kotłowski (National Bank of Poland, Warsaw School of Economics); Agata Miśkowiec (Warsaw School of Economics)
    Abstract: We estimate forward-looking Taylor rules on data from macroeconomic forecasts of three central banks (Bank of England, National Bank of Poland and Swiss National Bank) in order to determine the extent to which these banks are forward looking in their monetary policy decisions. We find that all three banks are forward-looking to some extent, though to varying degrees. With respect to inflation, the NBP and the SNB look far into the future, while the BoE seems to concentrate on current inflation. As to output, the BoE and the SNB take into account its future or current value, while for the NBP this variable is insignificant. We also find that central banks prefer to concentrate on one particular horizon rather than take into account the whole forecast.
    Keywords: Taylor rule, forward-looking monetary policy, feedback horizon
    JEL: C25 E52 E58
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:nbp:nbpmis:112&r=for
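    A sketch of the kind of forward-looking Taylor rule with interest-rate smoothing that such studies estimate, fitted by OLS to simulated placeholder data; the paper estimates rules bank by bank on actual central-bank forecasts at various horizons.
```python
# Sketch of a forward-looking Taylor rule with interest-rate smoothing,
# estimated by OLS on (simulated) central-bank inflation forecasts:
#   i_t = rho*i_{t-1} + (1-rho)*(alpha + beta*E_t[pi_{t+h}] + gamma*x_t) + e_t.
import numpy as np

rng = np.random.default_rng(5)
T, rho, alpha, beta, gamma = 200, 0.8, 1.0, 1.5, 0.5
pi_fcst = 2.0 + rng.normal(0, 0.5, T)           # forecast of inflation h periods ahead
gap = rng.normal(0, 1.0, T)                     # output gap (or its forecast)

i = np.zeros(T)
for t in range(1, T):
    i[t] = (rho * i[t - 1]
            + (1 - rho) * (alpha + beta * pi_fcst[t] + gamma * gap[t])
            + rng.normal(0, 0.1))

X = np.column_stack([np.ones(T - 1), i[:-1], pi_fcst[1:], gap[1:]])
b, *_ = np.linalg.lstsq(X, i[1:], rcond=None)

rho_hat = b[1]
print("rho  :", round(rho_hat, 2))
print("beta :", round(b[2] / (1 - rho_hat), 2))   # long-run inflation response
print("gamma:", round(b[3] / (1 - rho_hat), 2))   # long-run output response
```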
  11. By: Dominique Guegan (Centre d'Economie de la Sorbonne - Paris School of Economics); Xin Zhao (Centre d'Economie de la Sorbonne)
    Abstract: In this paper, we propose an alternative approach to estimating long-term risk. Instead of using the static square root method, we use a dynamic approach based on volatility forecasting by non-linear models. We explore the possibility of improving the estimates with different models and distributions. By comparing the estimates of two risk measures, value at risk and expected shortfall, under different models and innovations at short-, medium- and long-term horizons, we find that the best model varies with the forecasting horizon and that the generalized Pareto distribution gives the most conservative estimates across all models and horizons. The empirical results show that the square root method underestimates risk at long horizons and that our approach is more competitive for long-term risk estimation.
    Keywords: Long memory, Value at Risk, expected shortfall, extreme value distribution.
    JEL: G32 G17 C58
    Date: 2012–03
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:12025&r=for
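    The sketch below contrasts the static square-root-of-time rule criticized above with a long-horizon VaR built by simulating a volatility model forward; the model here is a plain Gaussian GARCH(1,1) with assumed parameters, not the paper's non-linear and long-memory specifications.
```python
# Sketch contrasting the static square-root-of-time VaR with a VaR obtained by
# simulating a volatility model forward -- here a plain Gaussian GARCH(1,1)
# with assumed daily parameters (returns in percent), not the paper's models.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
omega, a, b = 0.02, 0.09, 0.90          # assumed GARCH(1,1) parameters
h, sims, level = 250, 20000, 0.01       # 1-year horizon, 1% VaR

sigma1 = np.sqrt(omega / (1 - a - b))   # unconditional daily volatility
var_sqrt = -norm.ppf(level) * sigma1 * np.sqrt(h)   # square-root-of-time rule

# Simulate h-day cumulative returns from the GARCH recursion
cum = np.zeros(sims)
sig2 = np.full(sims, sigma1 ** 2)
for _ in range(h):
    r = np.sqrt(sig2) * rng.standard_normal(sims)
    cum += r
    sig2 = omega + a * r ** 2 + b * sig2
var_sim = -np.quantile(cum, level)

print("square-root-of-time 1%% VaR over %d days: %.1f%%" % (h, var_sqrt))
print("simulation-based    1%% VaR over %d days: %.1f%%" % (h, var_sim))
```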
  12. By: Anna Sznajderska (National Bank of Poland)
    Abstract: This paper empirically examines potential asymmetries in the interest rate pass-through in Poland. We investigate selected retail interest rates of commercial banks on deposits and loans denominated in the Polish currency, and consider whether their adjustment to changes in interbank rates is asymmetric in the long term as well as in the short term. We test for asymmetric cointegration using threshold autoregressive models and momentum-threshold autoregressive models. Next, where threshold error correction models can be applied, we search for asymmetries associated with the direction of change in the money market rate, the level of economic activity, the level of liquidity in the banking sector, the central bank’s credibility and economic agents’ expectations. Finally, we test whether using the asymmetric models improves the quality of forecasts of retail bank interest rates.
    Keywords: interest rate pass-through, asymmetries, threshold models, forecasting
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:nbp:nbpmis:114&r=for
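    A minimal sketch of the threshold (TAR) and momentum-threshold (M-TAR) adjustment regressions behind the asymmetric-cointegration tests mentioned above, on simulated data with the threshold fixed at zero and the long-run relation assumed known, both simplifications relative to the paper.
```python
# Sketch of the TAR / momentum-TAR idea used to test for asymmetric
# adjustment: regress the change in the pass-through residual on its lag,
# letting the adjustment speed differ above and below a threshold (set to
# zero here for simplicity).  Data are simulated.
import numpy as np

rng = np.random.default_rng(7)
T = 500
interbank = np.cumsum(rng.normal(0, 0.1, T))            # money-market rate
u = np.zeros(T)                                          # pass-through residual
for t in range(1, T):                                    # faster adjustment from below
    speed = 0.4 if u[t - 1] < 0 else 0.1
    u[t] = (1 - speed) * u[t - 1] + rng.normal(0, 0.05)
retail = 1.0 + interbank + u                             # retail rate

resid = retail - (1.0 + interbank)                       # long-run relation assumed known
du, lag = np.diff(resid), resid[:-1]

def asym_rho(indicator):
    X = np.column_stack([indicator * lag, (1 - indicator) * lag])
    rho, *_ = np.linalg.lstsq(X, du, rcond=None)
    return rho

print("TAR   (rho_above, rho_below):", np.round(asym_rho((lag >= 0).astype(float)), 2))
m_ind = np.r_[1.0, (np.diff(lag) >= 0).astype(float)]    # momentum-TAR indicator
print("M-TAR (rho_up, rho_down)    :", np.round(asym_rho(m_ind), 2))
```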
  13. By: Marimo, Pricilla; Kaplan, Todd R; Mylne, Ken; Sharpe, Martin
    Abstract: Experimental economics methods were used to assess public understanding of the information in weather forecasts and to test whether participants were able to make better decisions using probabilistic information presented in table or bar-graph format than when presented with a deterministic forecast. We asked undergraduate students from the University of Exeter to choose the most probable temperature outcome from a set of “lotteries” based on the temperature up to five days ahead. Participants who chose a true statement received a cash reward. Results indicate that, on average, participants provided with uncertainty information make better decisions than those without. Statistical analysis indicates a possible learning effect as the experiment progressed. Furthermore, participants who were shown the graph with uncertainty information took, on average, less response time than those who were shown a table with uncertainty information.
    Keywords: experimental economics; uncertainty; decision making; bar graph; table
    JEL: D81 D83 C91
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:38287&r=for
  14. By: Ingrid Groessl (Universitaet Hamburg, School of Business, Economics and Social Sciences, Department of Socioeconomics); Nadine Levratto (Université de Paris Ouest Nanterre)
    Abstract: Economic theory conjectures complementarities between the ranking of creditors in formal insolvency proceedings and the use of collateral in bank loan contracts, as well as the existence of relational as opposed to arm’s-length lending. In this paper we seek evidence for these hypotheses, taking France and Germany as examples that differ significantly concerning, in particular, the ranking of secured creditors. On closer scrutiny of empirical studies as well as statistical information, we can neither confirm that a high priority for secured lenders explains an excessive use of collateral in bank loans, nor that a priority for inside collateral promotes relational lending. Regarding relational lending, we point to variables lying outside insolvency law, such as culture and history.
    Keywords: Insolvency; France; Germany; bank-borrower-relationships; collateral; variety-of-capital-approach; law and finance
    JEL: K12 K22 G21 G33
    Date: 2012–04
    URL: http://d.repec.org/n?u=RePEc:hep:macppr:201203&r=for
  15. By: Michael Scholz (Karl-Franzens University Graz); Jens Perch Nielsen (Cass Business School); Stefan Sperlich (Université de Genève)
    Abstract: One of the most studied questions in economics and finance is whether equity returns or premiums can be predicted by empirical models. While many authors favor the historical mean or other simple parametric methods, this article focuses on nonlinear relationships. A straightforward bootstrap test confirms that non- and semiparametric techniques help to obtain better forecasts. It is demonstrated how economic theory can directly guide a model in an innovative way. For American data, the inclusion of prior knowledge enables a further notable improvement of 35% in the prediction of excess stock returns compared to the fully nonparametric model, as measured both by the more complex validated R2 and by classical out-of-sample validation. Statistically, a bias and dimension reduction method is proposed to introduce more structure into the estimation process as an adequate way to circumvent the curse of dimensionality.
    Keywords: Prediction of Stock Returns, Cross-Validation, Prior Knowledge, Bias Reduction, Dimension Reduction
    JEL: C14 C53 G17
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:grz:wpaper:2012-02&r=for
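    A sketch of a validated R-squared of the kind referred to above: leave-one-out predictions from a simple Nadaraya-Watson smoother (standing in for the paper's semiparametric estimator) are compared with leave-one-out predictions from the historical mean, on simulated data.
```python
# Sketch of a validated R^2: leave-one-out predictions from a Nadaraya-Watson
# kernel smoother (a stand-in for the paper's estimator) are benchmarked
# against leave-one-out predictions from the historical mean.  Simulated data.
import numpy as np

rng = np.random.default_rng(8)
n = 300
x = rng.normal(0, 1, n)                            # e.g. a lagged predictor
y = 0.3 * np.tanh(2 * x) + rng.normal(0, 0.3, n)   # stand-in for excess returns

def loo_nw(x, y, bandwidth):
    """Leave-one-out Nadaraya-Watson predictions with a Gaussian kernel."""
    pred = np.empty(len(y))
    for i in range(len(y)):
        w = np.exp(-0.5 * ((x[i] - np.delete(x, i)) / bandwidth) ** 2)
        pred[i] = np.sum(w * np.delete(y, i)) / np.sum(w)
    return pred

def loo_mean(y):
    """Leave-one-out historical-mean predictions."""
    return (y.sum() - y) / (len(y) - 1)

ss_model = np.sum((y - loo_nw(x, y, bandwidth=0.4)) ** 2)
ss_bench = np.sum((y - loo_mean(y)) ** 2)
print("validated R^2 vs historical mean:", round(1 - ss_model / ss_bench, 3))
```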
  16. By: Mark J Jensen; John M Maheu
    Abstract: In this paper we extend the parametric asymmetric stochastic volatility (ASV) model, in which returns are correlated with volatility, by flexibly modeling the bivariate distribution of the return and volatility innovations nonparametrically. Its novelty lies in modeling the joint conditional return-volatility distribution with an infinite mixture of bivariate Normal distributions with zero mean vectors but unknown mixture weights and covariance matrices. This semiparametric ASV model nests stochastic volatility models whose innovations are distributed as either Normal or Student-t, and the response of volatility to unexpected return shocks is more general than the fixed asymmetric response of the parametric ASV model. The unknown mixture parameters are modeled with a Dirichlet process prior. This prior ensures a parsimonious, finite posterior mixture that best represents the distribution of the innovations, and it yields a straightforward sampler of the conditional posteriors. We develop a Bayesian Markov chain Monte Carlo sampler to fully characterize the parametric and distributional uncertainty. Nested model comparisons and out-of-sample predictions using cumulative marginal likelihoods and one-day-ahead predictive log-Bayes factors between the semiparametric and parametric versions of the ASV model show that the semiparametric model forecasts empirical market returns more accurately. A major reason is how volatility responds to an unexpected market movement. When the market is tranquil, expected volatility reacts to a negative (positive) price shock by rising (initially declining, but then rising when the positive shock is large). However, when the market is volatile, the degree of asymmetry and the size of the response in expected volatility are muted. In other words, when times are good, no news is good news, but when times are bad, neither good nor bad news matters with regard to volatility.
    Keywords: Bayesian nonparametrics, cumulative Bayes factor, Dirichlet process mixture, infinite mixture model, leverage effect, marginal likelihood, MCMC, non-normal, stochastic volatility, volatility-return relationship
    JEL: C11 C14 C53 C58
    Date: 2012–04–20
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-453&r=for
  17. By: Zvika Afik (Guilford Glazer Faculty of Business and Management, Ben-Gurion University of the Negev, Israel); Ohad Arad (Ben-Gurion University of the Negev, Beer-Sheva, Israel); Koresh Galil (Ben-Gurion University of the Negev, Beer-Sheva, Israel)
    Abstract: Merton (1974) suggested a structural model for default prediction that allows the use of timely information from the equity market. The literature describes several specifications for applying the model, including methods presumably used by practitioners. However, recent studies demonstrate that these methods result in inferior estimates compared to simpler substitutes. We empirically examine various specification alternatives and find that prediction goodness is only slightly sensitive to different choices of default barrier, whereas the choice of asset expected return and asset volatility is significant. Historical equity return and historical volatility produce downward-biased estimates of asset expected return and asset volatility, especially for defaulting firms. Acknowledging these characteristics, we suggest specifications that improve the model's accuracy.
    Keywords: Credit risk; Default prediction; Merton model; Bankruptcy prediction; Default barrier; Asset volatility
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:bgu:wpaper:1202&r=for
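    A sketch of the textbook two-equation Merton calibration that the specification alternatives above build on: asset value and asset volatility are solved from equity value and equity volatility, and a distance to default follows. Inputs are made-up numbers, and the drift choice is flagged as an assumption, since the paper shows it matters.
```python
# Sketch of the textbook two-equation Merton calibration: solve for the
# unobserved asset value and asset volatility from equity value and equity
# volatility, then compute a distance to default.  Inputs are made-up numbers.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

E, sigma_E = 3.0, 0.60       # equity value (bn) and equity volatility
F, r, T = 7.0, 0.02, 1.0     # default barrier (debt face value), risk-free rate, horizon

def merton_equations(params):
    V, sigma_V = params
    d1 = (np.log(V / F) + (r + 0.5 * sigma_V ** 2) * T) / (sigma_V * np.sqrt(T))
    d2 = d1 - sigma_V * np.sqrt(T)
    eq1 = V * norm.cdf(d1) - F * np.exp(-r * T) * norm.cdf(d2) - E   # equity = call on assets
    eq2 = (V / E) * norm.cdf(d1) * sigma_V - sigma_E                 # volatility relation
    return [eq1, eq2]

V, sigma_V = fsolve(merton_equations, x0=[E + F, sigma_E * E / (E + F)])
mu = r                        # assumed asset drift; the paper shows this choice matters
dd = (np.log(V / F) + (mu - 0.5 * sigma_V ** 2) * T) / (sigma_V * np.sqrt(T))
print("asset value %.2f, asset vol %.3f, distance to default %.2f, PD %.3f"
      % (V, sigma_V, dd, norm.cdf(-dd)))
```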
  18. By: Wolfgang Karl Härdle; Dedy Dwi Prastyo; Christian Hafner
    Abstract: Predicting default probabilities is at the core of credit risk management and is becoming more and more important for banks, in order to measure their clients' degree of risk, and for firms, in order to operate successfully. The SVM with evolutionary feature selection is applied to the CreditReform database. We use classical methods such as discriminant analysis (DA) and logit and probit models as benchmarks. Overall, the GA-SVM outperforms the benchmark models in both the training and the testing datasets.
    Keywords: SVM, Genetic algorithm, global optimum, default prediction
    JEL: C14 C45 C61 C63 G33
    Date: 2012–04
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012-030&r=for
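    A toy sketch of evolutionary feature selection for an SVM default classifier: a small genetic algorithm searches over binary feature masks, scoring each mask by cross-validated accuracy of an RBF-kernel SVM. It uses scikit-learn and synthetic data; the paper's algorithm, tuning and the CreditReform data differ.
```python
# Toy sketch of evolutionary feature selection for an SVM classifier: a small
# genetic algorithm over binary feature masks, each scored by cross-validated
# accuracy of an RBF-kernel SVM.  scikit-learn and synthetic data are used;
# the paper's exact algorithm, tuning and CreditReform data differ.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(9)
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           n_redundant=5, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask == 1], y, cv=5).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))          # initial population of masks
for generation in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)][-10:]               # keep the fittest half
    # single-point crossover between random pairs of parents, then mutation
    pairs = rng.integers(0, 10, size=(10, 2))
    cuts = rng.integers(1, X.shape[1], size=10)
    children = np.array([np.r_[parents[i][:c], parents[j][c:]]
                         for (i, j), c in zip(pairs, cuts)])
    flips = rng.random(children.shape) < 0.05
    children = np.where(flips, 1 - children, children)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
print("CV accuracy      :", round(fitness(best), 3))
```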
  19. By: William R. Cline (Peterson Institute for International Economics)
    Abstract: For several years China has run current account surpluses that have been widely seen as the most serious source of global imbalances on the surplus side. Its exchange rate intervention limited appreciation of the currency and led to a buildup of external reserves to more than $3 trillion. Nonetheless, the surplus has fallen from 10 percent of GDP in 2007 to 2.8 percent in 2011, even though in September the International Monetary Fund projected the 2011 surplus at 5.2 percent of GDP and forecast a rebound to 7.2 percent of GDP by 2016. This policy brief examines whether the moderate 2011 surplus was a transitory aberration or a sign of a new trend. A statistical model explains the bulk of the reduction in the surplus as the consequence of the real exchange rate appreciation of about 20 percent that occurred from 2005–06 to 2009–10. Slow global growth, a rising oil deficit, and erosion in the capital income balance were additional causes. Projections based on this model and another used by the author indicate that if the exchange rate remains unchanged, the surplus is likely to be in a range of 2–4 percent of GDP in 2012–14 but rebound to 4 to 5 percent of GDP by 2017. If instead the government continues real appreciation at the 3 percent annual rate pursued since June 2010, by 2017 the current account would be approximately in balance.
    Date: 2012–04
    URL: http://d.repec.org/n?u=RePEc:iie:pbrief:pb12-7&r=for
  20. By: Christian Westermeier; Anika Rasner; Markus M. Grabka
    Abstract: In most industrialized countries, the work and family patterns of the baby boomers, characterized by more heterogeneous working careers and less stable family lives, set them apart from preceding cohorts. Thus, it is of crucial importance to understand how these different work and family lives are linked to the boomers’ prospective material well-being as they retire. This paper presents a new and unique matching-based approach for the projection of the life courses of German baby boomers, called the LAW-Life Projection Model. The basis for the projection is data from 27 waves of the German Socio-Economic Panel linked with administrative pension records from the German Statutory Pension Insurance that cover lifecycle pension-relevant earnings. Unlike model-based micro simulations that age the data year by year, our matching-based projection uses sequences from older birth cohorts to complete the life courses of statistically similar baby boomers through to retirement. An advantage of this approach is that it coherently projects work-life and family trajectories as well as lifecycle earnings. The authors present a benchmark analysis to assess the validity and accuracy of the projection. For this purpose, they cut a significant portion of already-lived lives and test different combinations of matching algorithms and donor pool specifications to identify the combination that produces the best fit between the previously cut but observed information and the projected life-course information. Exploiting the advantages of the projected data, the authors compare the returns to education, measured in terms of pension entitlements, across cohorts. The results indicate that, within cohorts, differences between individuals with low and high educational attainment increase over time for men and women in East and West Germany. East German boomer women with low educational attainment face the most substantial losses in pension entitlements, putting them at high risk of being poor as they retire.
    Keywords: Forecasting Models, simulation methods, SOEP, baby boomers, education, public pensions
    JEL: C53 H55 I24
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:diw:diwsop:diw_sp440&r=for

This nep-for issue is ©2012 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.