nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒09‒05
sixteen papers chosen by
Sune Karlsson
Orebro University

  1. Bayesian estimation of bandwidths for a nonparametric regression model with a flexible error density By Xibin Zhang; Maxwell L. King; Han Lin Shang
  2. Simulated Maximum Likelihood Estimation for Latent Diffusion Models By Tore Selland Kleppe; Jun Yu; Hans J. Skaug
  3. Bayesian Hypothesis Testing in Latent Variable Models By Yong Li; Jun Yu
  4. TVICA - Time Varying Independent Component Analysis and Its Application to Financial Data By Ray-Bing Chen; Ying Chen; Wolfgang Härdle
  5. Specification Sensitivities in Right-Tailed Unit Root Testing for Financial Bubbles By Shu-Ping Shi; Peter C.B. Phillips; Jun Yu
  6. Bounding a linear causal effect using relative correlation restrictions By Brian Krauth
  7. Testing for Multiple Bubbles By Peter C.B. Phillips; Shu-Ping Shi; Jun Yu
  8. Combination Schemes for Turning Point Predictions By Monica Billio; Roberto Casarin; Francesco Ravazzolo; Herman K. van Dijk
  9. Conclusive Evidence on the Benefits of Temporal Disaggregation to Improve the Precision of Time Series Model Forecasts By Ramirez, Octavio A.
  10. Size and power properties of structural break unit root tests By Paresh Kumar Narayan; Stephan Popp
  11. Policy-related small-area estimation By LONGFORD Nicholas Tibor
  12. Partial Identification of Heterogeneity in Preference Orderings Over Discrete Choices By Itai Sher; Jeremy T. Fox; Kyoo il Kim; Patrick Bajari
  13. Forecast Optimality Tests in the Presence of Instabilities By Barbara Rossi; Tatevik Sekhposyan
  14. Adaptive Minimax Estimation over Sparse ℓq-Hulls By Zhan Wang; Sandra Paterlini; Fuchang Gao; Yuhong Yang
  15. Operational-Risk Dependencies and the Determination of Risk Capital By Stefan Mittnik; Sandra Paterlini; Tina Yener
  16. Rational Inattention to Discrete Choices: A New Foundation for the Multinomial Logit Model By Filip Matejka; Alisdair McKay

  1. By: Xibin Zhang; Maxwell L. King; Han Lin Shang
    Abstract: We approximate the error density of a nonparametric regression model by a mixture of Gaussian densities whose means are the individual error realizations and whose common variance is a constant parameter. We investigate the construction of a likelihood and posterior for the bandwidth parameters under this Gaussian-component mixture density of errors in a nonparametric regression. A Markov chain Monte Carlo algorithm is presented to sample bandwidths for the kernel estimators of the regression function and error density. A simulation study shows that the proposed Gaussian-component mixture density of errors is clearly favored over misspecified error densities. We apply our sampling algorithm to a nonparametric regression of the All Ordinaries daily return on the overnight FTSE and S&P 500 returns, where the error density is approximated by the proposed mixture density. With the estimated bandwidths, we estimate the density of the one-step-ahead point forecast of the All Ordinaries return, from which a distribution-free value-at-risk is obtained. The proposed Gaussian-component mixture density of regression errors is also validated through the nonparametric regression involved in the state-price density estimation of Aït-Sahalia and Lo (1998).
    Keywords: Bayes factors, Gaussian-component mixture density, Markov chain Monte Carlo, state-price density, value-at-risk.
    JEL: C11 C14 C15 G15
    Date: 2011–08–22
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2011-10&r=ecm
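    A minimal sketch of the Gaussian-component mixture error density described above, together with a leave-one-out likelihood in the bandwidth; the function names are illustrative and the paper's actual likelihood, priors and MCMC sampler are richer than this:
```python
import numpy as np
from scipy.stats import norm

def mixture_error_density(e, residuals, h):
    """Mixture of Gaussians with means at the realized residuals and a common
    standard deviation h (the bandwidth acting as the variance parameter)."""
    comps = norm.pdf(np.subtract.outer(np.atleast_1d(e), residuals), scale=h)
    return comps.mean(axis=1)

def loo_log_likelihood(residuals, h):
    """Leave-one-out log-likelihood of the bandwidth h under the mixture
    density, the kind of quantity a sampler for h would evaluate repeatedly."""
    ll = 0.0
    for i in range(len(residuals)):
        others = np.delete(residuals, i)
        ll += np.log(norm.pdf(residuals[i] - others, scale=h).mean())
    return ll
```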
  2. By: Tore Selland Kleppe (Department of Mathematics, University of Bergen); Jun Yu (School of Economics, Singapore Management University); Hans J. Skaug (Department of Mathematics, University of Bergen)
    Abstract: In this paper a method is developed and implemented to provide simulated maximum likelihood estimation of latent diffusions based on discrete data. The method is applicable to diffusions that either have latent elements in the state vector or are observed only at discrete times with noise. Latent diffusions are very important in practical applications in financial economics. The proposed approach synthesizes the closed-form method of Aït-Sahalia (2008) and the efficient importance sampler of Richard and Zhang (2007). It does not require any infill observations to be introduced and hence is computationally tractable. A Monte Carlo study shows that the method works well in finite samples. The empirical applications illustrate the usefulness of the method and find no evidence of infinite variance in the importance sampler.
    Keywords: Closed-form approximation; Diffusion model; Efficient importance sampler
    JEL: C11 C15 G12
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:siu:wpaper:10-2011&r=ecm
  3. By: Yong Li (Business School, Sun Yat-Sen University); Jun Yu (School of Economics, Singapore Management University)
    Abstract: Hypothesis testing using Bayes factors (BFs) is known not to be well defined under improper priors. In the context of latent variable models, an additional problem with BFs is that they are difficult to compute. In this paper, a new Bayesian method, based on decision theory and the EM algorithm, is introduced to test a point hypothesis in latent variable models. The new statistic is a by-product of the Bayesian MCMC output and, hence, easy to compute. It is shown that the new statistic is easy to interpret and appropriately defined under improper priors because the method employs a continuous loss function. The method is illustrated using a one-factor asset pricing model and a stochastic volatility model with jumps.
    Keywords: Bayes factors, Kullback-Leibler divergence, Decision theory, EM Algorithm, Markov Chain Monte Carlo
    JEL: C11 C12 G12
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:siu:wpaper:11-2011&r=ecm
  4. By: Ray-Bing Chen; Ying Chen; Wolfgang Härdle
    Abstract: Source extraction and dimensionality reduction are important in analyzing high dimensional and complex financial time series that are neither Gaussian distributed nor stationary. The independent component analysis (ICA) method can be used to factorize the data into a linear combination of independent components, so that the high dimensional problem is converted to a set of univariate ones. However, conventional ICA methods implicitly assume stationarity or stochastic homogeneity of the analyzed time series, which leads to low estimation accuracy when the stochastic structure changes. A time varying ICA (TVICA) is proposed here. The key idea is to allow the ICA filter to change over time and to estimate it over so-called local homogeneous intervals. The question of how to identify these intervals is solved by the LCP (local change point) method. Compared to a static ICA, the dynamic TVICA provides good performance in both simulation and real data analysis. The data example is concerned with independent signal processing and deals with a portfolio of highly traded stocks.
    Keywords: Adaptive Sequential Testing, Independent Component Analysis, Local Homogeneity, Signal Processing, Realized Volatility.
    JEL: C14
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2011-054&r=ecm
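    A rough stand-in for the TVICA idea, assuming scikit-learn is available: refit an ICA unmixing filter on successive windows of a multivariate return series. The paper instead selects data-driven local homogeneous intervals with the LCP procedure; the fixed-window scheme here is only illustrative.
```python
import numpy as np
from sklearn.decomposition import FastICA

def rolling_ica(returns, window=250):
    """Re-estimate the ICA filter on consecutive windows of a (T, d) return
    array, mimicking a time-varying unmixing matrix."""
    returns = np.asarray(returns, dtype=float)
    filters = []
    for start in range(0, len(returns) - window + 1, window):
        ica = FastICA(n_components=returns.shape[1], random_state=0)
        ica.fit(returns[start:start + window])
        filters.append(ica.components_)   # estimated unmixing matrix per window
    return filters
```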
  5. By: Shu-Ping Shi (Research School of Economics, The Australian National University); Peter C.B. Phillips (Yale University, University of Auckland, University of Southampton & Singapore Management University); Jun Yu (School of Economics, Singapore Management University)
    Abstract: Right-tailed unit root tests have proved promising for detecting exuberance in economic and financial activities. Like left-tailed tests, the limit theory and test performance are sensitive to the null hypothesis and the model specification used in parameter estimation. This paper aims to provide some empirical guidelines for the practical implementation of right-tailed unit root tests, focussing on the sup ADF test of Phillips, Wu and Yu (2011), which implements a right-tailed ADF test repeatedly on a sequence of forward sample recursions. We analyze and compare the limit theory of the sup ADF test under different hypotheses and model specifications. The size and power properties of the test under various scenarios are examined in simulations and some recommendations for empirical practice are given. An empirical application to Nasdaq data reveals the practical importance of model specification on test outcomes.
    Keywords: Unit root test; Mildly explosive process; Recursive regression; Size and power
    JEL: C15 C22
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:siu:wpaper:08-2011&r=ecm
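    A minimal sketch of the sup ADF recursion referred to above, using statsmodels' ADF regression on forward-expanding samples; note that inference must use right-tailed critical values simulated under the null, not the left-tailed values reported by adfuller:
```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def sup_adf(y, r0=0.1, lags=0):
    """Sup of ADF statistics computed on forward-expanding samples y[:k],
    starting from a minimum window of roughly floor(r0 * n) observations."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    k0 = max(int(np.floor(r0 * n)), lags + 10)   # crude minimum window length
    stats = [adfuller(y[:k], maxlag=lags, regression="c", autolag=None)[0]
             for k in range(k0, n + 1)]
    return max(stats)
```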
  6. By: Brian Krauth (Simon Fraser University)
    Abstract: This paper describes and implements a simple approach to the most common problem in applied microeconometrics: estimating a linear causal effect when the explanatory variable of interest might be correlated with relevant unobserved variables. The main idea is to place restrictions on the correlation between the variable of interest and relevant unobserved variables relative to the correlation between the variable of interest and observed control variables. These relative correlation restrictions allow a researcher to construct informative bounds on parameter estimates, and to assess the sensitivity of conventional estimates to plausible deviations from the identifying assumptions. The estimation method and its properties are described, and two empirical applications are demonstrated.
    Keywords: sensitivity analysis, partial identification, endogeneity
    JEL: C1 C21
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:sfu:sfudps:dp11-02&r=ecm
  7. By: Peter C.B. Phillips (Yale University, University of Auckland, University of Southampton & Singapore Management University); Shu-Ping Shi (Research School of Economics, The Australian National University); Jun Yu (School of Economics, Singapore Management University)
    Abstract: Identifying explosive bubbles that are characterized by periodically collapsing behavior over time has been a major concern in the literature and is of great importance for practitioners. The complexity of the nonlinear structure in multiple bubble phenomena diminishes the discriminatory power of existing tests, as evidenced in early simulations conducted by Evans (1991). Multiple collapsing bubble episodes within the same sample period make bubble diagnosis particularly difficult and complicate attempts at econometric dating. The present paper systematically investigates these issues and develops new procedures for practical implementation and surveillance strategies by central banks. We show how the testing procedure and dating algorithm of Phillips, Wu and Yu (2011, PWY) is affected by multiple bubbles and may fail to be consistent. To assist performance in such contexts, the present paper proposes a generalized version of the sup ADF test of PWY that addresses the difficulty. The asymptotic distribution of the generalized test is provided and the test is shown to significantly improve discriminatory power in simulations. The paper advances a new date-stamping strategy for the origination and termination of multiple bubbles that is based on this generalized test and consistency of the date-stamping algorithm is established. The new strategy leads to distinct power gains over the date-stamping strategy of PWY when multiple bubbles occur. Empirical applications are conducted with both tests along with their respective date-stamping technology to S&P 500 stock market data from January 1871 to December 2010. The new approach identifies many key historical episodes of exuberance and collapse over this period, whereas the strategy of PWY locates only two such episodes in the same sample range.
    Keywords: Date-stamping strategy, Generalized sup ADF test, Multiple bubbles, Rational bubble, Periodically collapsing bubbles, Sup ADF test
    JEL: C15 C22
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:siu:wpaper:09-2011&r=ecm
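    The generalized sup ADF statistic extends the forward recursion of the previous entry by letting the start point of the subsample vary as well. A brute-force sketch (the date-stamping algorithm and simulated right-tailed critical values are not shown):
```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def gsadf(y, r0=0.1, lags=0):
    """Supremum of ADF statistics over all subsamples y[r1:r2] whose length is
    at least roughly floor(r0 * n); the sup runs over both end points."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    w0 = max(int(np.floor(r0 * n)), lags + 10)
    stat = -np.inf
    for r2 in range(w0, n + 1):
        for r1 in range(0, r2 - w0 + 1):
            adf = adfuller(y[r1:r2], maxlag=lags, regression="c",
                           autolag=None)[0]
            stat = max(stat, adf)
    return stat
```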
  8. By: Monica Billio (University of Venice, Gretta Assoc. and School for Advanced Studies In Venice); Roberto Casarin (University of Venice, Gretta Assoc. and School for Advanced Studies In Venice); Francesco Ravazzolo (Norges Bank); Herman K. van Dijk (Erasmus University Rotterdam, VU University Amsterdam)
    Abstract: We propose new forecast combination schemes for predicting turning points of business cycles. The combination schemes take into account the forecasting performance of a given set of models and can thereby provide better turning point predictions. We consider turning point predictions generated by autoregressive (AR) and Markov-switching AR models, which are commonly used for business cycle analysis. In order to account for parameter uncertainty we adopt a Bayesian approach to both estimation and prediction and compare, in terms of statistical accuracy, the individual models and the combined turning point predictions for the United States and Euro area business cycles.
    Keywords: Turning Points; Markov-switching; Forecast Combination; Bayesian Model Averaging
    JEL: C11 C15 C53 E37
    Date: 2011–08–22
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20110123&r=ecm
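    A toy combination scheme in the spirit of the abstract, assuming each model supplies turning-point probabilities and weights are formed from past predictive likelihood; the paper's Bayesian schemes, including their treatment of parameter uncertainty, are considerably richer:
```python
import numpy as np

def combination_weights(probs, outcomes):
    """probs: (T, M) turning-point probabilities from M models over a training
    sample; outcomes: (T,) realized 0/1 turning points. Returns weights that
    favour models with a higher log predictive score."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    eps = 1e-12
    score = (outcomes[:, None] * np.log(probs + eps)
             + (1 - outcomes[:, None]) * np.log(1 - probs + eps)).sum(axis=0)
    w = np.exp(score - score.max())
    return w / w.sum()

# Combined forecast for a new period: new_probs @ combination_weights(probs, outcomes)
```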
  9. By: Ramirez, Octavio A.
    Abstract: Simulation methods are used to measure the expected differentials between the Mean Square Errors of the forecasts from models based on temporally disaggregated versus aggregated data. This allows for novel comparisons, including long-order ARMA models such as those expected with weekly data, under realistic conditions where the parameter values have to be estimated. The ambiguity of past empirical evidence on the benefits of disaggregation is addressed by analyzing four different economic time series for which relatively large sample sizes are available. Because of this, a sufficient number of predictions can be considered to obtain conclusive results from out-of-sample forecasting contests. The validity of the conventional method for inferring the order of the aggregated models is re-examined.
    Keywords: Data Aggregation, Efficient Forecasting, Research Methods/Statistical Methods
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:ags:ugeofs:113520&r=ecm
  10. By: Paresh Kumar Narayan; Stephan Popp
    Abstract: In this paper, we compare the small sample size and power properties of the newly developed endogenous structural break unit root test of Narayan and Popp (NP, 2010) with existing two-break unit root tests, namely the Lumsdaine and Papell (LP, 1997) and the Lee and Strazicich (LS, 2003) tests. In contrast to the widely used LP and LS tests, the NP test chooses the break date by maximizing the significance of the break dummy coefficient. Using Monte Carlo simulations, we show that the NP test has better size and higher power, and identifies the structural breaks accurately. Power and size comparisons of the NP test with the LP and LS tests reveal that the NP test is significantly superior.
    Keywords: size, power, structural breaks, unit root
    Date: 2011–08–29
    URL: http://d.repec.org/n?u=RePEc:dkn:ecomet:fe_2011_07&r=ecm
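    The break-date selection idea mentioned above (choosing the date that maximizes the significance of the break dummy) can be illustrated with a deliberately simplified one-break, level-shift regression; the actual NP test works with an innovational-outlier specification allowing two breaks, which this sketch does not reproduce:
```python
import numpy as np
import statsmodels.api as sm

def most_significant_break(y, trim=0.15):
    """Grid-search candidate break dates and keep the one with the largest
    absolute t-statistic on a post-break level-shift dummy in a
    Dickey-Fuller-type regression (simplified illustration only)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    dy, ylag = np.diff(y), y[:-1]
    best_t, best_tb = -np.inf, None
    for tb in range(int(trim * n), int((1 - trim) * n)):
        dummy = (np.arange(1, n) >= tb).astype(float)
        X = sm.add_constant(np.column_stack([ylag, dummy]))
        t_dummy = abs(sm.OLS(dy, X).fit().tvalues[2])
        if t_dummy > best_t:
            best_t, best_tb = t_dummy, tb
    return best_tb, best_t
```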
  11. By: LONGFORD Nicholas Tibor
    Abstract: A method of small-area estimation with a utility function is developed. The utility characterises a policy planned to be implemented in each area, based on the area’s estimate of a key quantity. It is shown that the commonly applied empirical Bayes and composite estimators are inefficient for a wide range of utility functions. Adaptations for a limited budget to implement the policy are explored. An argument is presented for a closer integration of estimation and (regional) policy making.
    Keywords: Composition; empirical Bayes; expected loss; borrowing strength; exploiting similarity; small-area estimation; utility function
    JEL: C13 C14 C44
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:irs:cepswp:2011-44&r=ecm
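    For reference, the composite estimator discussed in the abstract is a shrinkage combination of each area's direct estimate and a synthetic (pooled) estimate; a minimal sketch with illustrative argument names:
```python
import numpy as np

def composite_estimates(direct, var_direct, synthetic, var_between):
    """Shrink noisy direct area estimates toward a synthetic estimate;
    areas with larger sampling variance receive more shrinkage."""
    direct = np.asarray(direct, dtype=float)
    var_direct = np.asarray(var_direct, dtype=float)
    w = var_between / (var_between + var_direct)
    return w * direct + (1.0 - w) * synthetic
```
    The paper's point is that such standard estimators can nevertheless be inefficient once the estimates feed a policy evaluated through a utility function.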
  12. By: Itai Sher; Jeremy T. Fox; Kyoo il Kim; Patrick Bajari
    Abstract: We study a variant of a random utility model that takes a probability distribution over preference relations as its primitive. We do not model products using a space of observed characteristics. The distribution of preferences is only partially identified using cross-sectional data on varying budget sets. Imposing monotonicity in product characteristics does not restore full identification. Using a linear programming approach to partial identification, we show how to obtain bounds on the probability of any ordering relation. We also constructively point identify the proportion of consumers who prefer one budget set over one or two others, a result useful for welfare analysis. Panel data and special regressors are two ways to gain full point identification.
    JEL: C25 L0
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:17346&r=ecm
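    A small worked example of the linear programming approach mentioned in the abstract, with hypothetical choice shares from two budget sets over three goods. The decision variables are the probabilities of the six strict orderings, and bounds on P(B preferred to C) are obtained by minimizing and maximizing the same linear objective subject to matching the observed shares:
```python
import numpy as np
from itertools import permutations
from scipy.optimize import linprog

goods = ["A", "B", "C"]
orders = list(permutations(goods))                 # 6 strict preference orderings

# Hypothetical observed choice shares for two budget sets.
budgets = {("A", "B"): {"A": 0.6, "B": 0.4},
           ("A", "B", "C"): {"A": 0.5, "B": 0.3, "C": 0.2}}

A_eq, b_eq = [], []
for budget, shares in budgets.items():
    for good, share in shares.items():
        # Each ordering chooses its highest-ranked good available in the budget.
        A_eq.append([1.0 if next(g for g in o if g in budget) == good else 0.0
                     for o in orders])
        b_eq.append(share)
A_eq.append([1.0] * len(orders))                   # ordering probabilities sum to 1
b_eq.append(1.0)

# Objective: probability that B is preferred to C.
c = np.array([1.0 if o.index("B") < o.index("C") else 0.0 for o in orders])
lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
upper = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
print(f"P(B preferred to C) lies in [{lower:.2f}, {upper:.2f}]")
```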
  13. By: Barbara Rossi; Tatevik Sekhposyan
    Abstract: This paper proposes forecast optimality tests that can be used in unstable environments. They include tests for forecast unbiasedness, efficiency, encompassing, serial uncorrelation, and, in general, regression-based tests of forecasting ability. The proposed tests are applied to evaluate the rationality of the Federal Reserve Greenbook forecasts as well as a variety of survey-based private forecasts. In addition, we consider whether Money Market Services forecasts are rational. Our robust tests suggest more empirical evidence against forecast rationality than previously found but confirm that the Federal Reserve has additional information about current and future states of the economy relative to market participants.
    Keywords: Forecasting, forecast optimality, regression-based tests of forecasting ability, Greenbook forecasts, survey forecasts, real-time data
    JEL: C22 C52 C53
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:duk:dukeec:11-18&r=ecm
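    As background, the regression-based tests referred to above build on Mincer-Zarnowitz-type regressions; a minimal full-sample version is sketched below. The paper's contribution is to make such tests robust to instabilities, e.g. by examining them over subsamples, which this sketch does not attempt:
```python
import numpy as np
import statsmodels.api as sm

def mincer_zarnowitz(actuals, forecasts):
    """Regress realizations on a constant and the forecast and test the joint
    hypothesis intercept = 0, slope = 1 (forecast unbiasedness)."""
    X = sm.add_constant(np.asarray(forecasts, dtype=float))
    res = sm.OLS(np.asarray(actuals, dtype=float), X).fit()
    # Wald test of (intercept, slope) = (0, 1)
    test = res.wald_test((np.eye(2), np.array([0.0, 1.0])), use_f=True)
    return res.params, test
```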
  14. By: Zhan Wang; Sandra Paterlini; Fuchang Gao; Yuhong Yang
    Abstract: Given a dictionary of Mn initial estimates of the unknown true regression function, we aim to construct linearly aggregated estimators that target the best performance among all the linear combinations under a sparse ℓq-norm (0 < q < 1) constraint on the linear coefficients. Besides identifying the optimal rates of aggregation for these ℓq-aggregation problems, our multi-directional (or adaptive) aggregation strategies by model mixing or model selection achieve the optimal rates simultaneously over the full range of 0 < q < 1 for general Mn and upper bound tn of the q-norm. Both random and fixed designs, with known or unknown error variance, are handled, and the ℓq-aggregations examined in this work cover major types of aggregation problems previously studied in the literature. Consequences on minimax-rate adaptive regression under ℓq-constrained coefficients (0 < q < 1) are also provided. Our results show that the minimax rate of ℓq-aggregation (0 < q < 1) is basically determined by an effective model size that depends on q, tn, Mn, and the sample size n in an easily interpretable way based on classical model selection theory that deals with a large number of models. In addition, in the fixed design case, the model selection approach is seen to yield optimal rate of convergence not only in expectation but also in probability. In contrast, the model mixing approach can have leading constant one in front of the target risk in the oracle inequality while not offering optimality in probability.
    Keywords: aggregation of estimates, ℓq-aggregation, sparse regression, model mixing, model selection
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:mod:recent:070&r=ecm
  15. By: Stefan Mittnik; Sandra Paterlini; Tina Yener
    Abstract: With the advent of Basel II, risk-capital provisions need to also account for operational risk. The specification of dependence structures and the assessment of their effects on aggregate risk capital are still open issues in modeling operational risk. In this paper, we investigate the potential consequences of adopting the restrictive Basel Loss Distribution Approach (LDA), as compared to strategies that take dependencies explicitly into account. Drawing on a real-world database, we fit alternative dependence structures, using parametric copulas and nonparametric tail-dependence coefficients, and discuss the implications for the estimation of aggregate risk capital. We find that risk-capital estimates may increase relative to those derived under the LDA when dependencies are accounted for explicitly. This phenomenon is not only due to the (fitted) characteristics of the data, but can also arise from the specific Monte Carlo setup in simulation-based risk-capital analysis.
    Keywords: Copula, Nonparametric Tail Dependence, Basel II, Loss Distribution Approach, Value-at-Risk, Subadditivity
    JEL: C14 C15 G10 G21
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:mod:recent:071&r=ecm
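    A minimal Monte Carlo sketch of the capital comparison discussed above, with two hypothetical loss distributions: adding up stand-alone value-at-risk figures (implicitly assuming perfect dependence) versus taking the value-at-risk of the aggregate loss under a Gaussian copula. The paper works with compound frequency-severity models fitted to real data, which this toy example does not attempt:
```python
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(0)
n, rho, q = 200_000, 0.5, 0.999            # hypothetical correlation, VaR level

cell1 = lognorm(s=1.0, scale=np.exp(10.0)) # hypothetical loss distributions
cell2 = lognorm(s=1.2, scale=np.exp(9.5))

# Sum of stand-alone capital charges (comonotonic benchmark).
capital_add_up = cell1.ppf(q) + cell2.ppf(q)

# Capital for the aggregate loss under a Gaussian copula with correlation rho.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
total = cell1.ppf(norm.cdf(z[:, 0])) + cell2.ppf(norm.cdf(z[:, 1]))
capital_copula = np.quantile(total, q)

print(capital_add_up, capital_copula)
```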
  16. By: Filip Matejka; Alisdair McKay
    Abstract: Often, individuals must choose among discrete alternatives with imperfect information about their values, such as selecting a job candidate, a vehicle or a university. Before choosing, they may have an opportunity to study the options, but doing so is costly. This costly information acquisition creates new choices such as the number of and types of questions to ask the job candidates. We model these situations using the tools of the rational inattention approach to information frictions (Sims, 2003). We find that the decision maker's optimal strategy results in choosing probabilistically exactly in line with the multinomial logit model. This provides a new interpretation for a workhorse model of discrete choice theory. We also study cases for which the multinomial logit is not applicable, in particular when two options are duplicates. In such cases, our model generates a generalization of the logit formula, which is free of the limitations of the standard logit.
    Keywords: rational inattention; discrete choice; logit model
    JEL: D81 D83 D01
    Date: 2011–06
    URL: http://d.repec.org/n?u=RePEc:cer:papers:wp442&r=ecm
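    The abstract's generalization of the logit formula can be written down compactly: choice probabilities take a logit form in which each option is weighted by its unconditional choice probability, and with equal weights the expression collapses to the standard multinomial logit. A sketch with illustrative names (in the paper the weights are themselves determined as part of the optimal strategy; here they are taken as given):
```python
import numpy as np

def ri_choice_probs(values, weights, info_cost):
    """Weighted-logit choice probabilities: weights play the role of
    unconditional choice probabilities and info_cost scales the payoffs."""
    v = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    num = w * np.exp(v / info_cost)
    return num / num.sum()

# Equal weights reproduce the standard multinomial logit.
print(ri_choice_probs([1.0, 2.0, 2.0], [1/3, 1/3, 1/3], info_cost=1.0))
```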

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.