nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒10‒25
thirty-one papers chosen by
Sune Karlsson
Orebro University

  1. A Consistent Nonparametric Bootstrap Test of Exogeneity By Jinhyun Lee
  2. A Lagrange multiplier-type test for idiosyncratic unit roots in the exact factor model under misspecification By Solberger M.; Zhou X.
  3. LM-type tests for idiosyncratic and common unit roots in the exact factor model with AR(1) dynamics By Zhou X.; Solberger M.
  4. Oracle Properties and Finite Sample Inference of the Adaptive Lasso for Time Series Regression Models By Audrino, Francesco; Camponovo, Lorenzo
  5. Assessing the number of components in a normal mixture: an alternative approach By Maciejowska, Katarzyna
  6. Maximum likelihood estimation for Heston models By Matyas Barczy; Gyula Pap
  7. Unbiased QML Estimation of Log-GARCH Models in the Presence of Zero Returns By Sucarrat, Genaro; Escribano, Alvaro
  8. Testing for Structural Breaks in the Presence of Data Perturbations: Impacts and Wavelet Based Improvements By Reese, Simon; Li, Yushu
  9. Linear regression for panel with unknown number of factors as interactive fixed effects By Hyungsik Roger Moon; Martin Weidner
  10. Likelihood-based panel cointegration test in the presence of a linear time trend and cross-sectional dependence By Antonia Arsova; Deniz Dilan Karaman Oersal
  11. Novel Panel Cointegration Tests Emending for Cross-Section Dependence with N Fixed By Hadri, Kaddour; Kurozumi, Eiji; Rao, Yao
  12. Robust block bootstrap panel predictability tests By Westerlund J.; Smeekes S.
  13. A bootstrap test for instrument validity in heterogeneous treatment effect models By Toru Kitagawa
  14. Generalizing smooth transition autoregressions By Emilio Zanetti Chini
  15. A note on the identification in two equations probit model with dummy endogenous regressor By Romuald Meango; Ismael Mourifie
  16. Testing for Equilibrium Multiplicity in Dynamic Markov Games By Otsu, Taisuke; Pesendorfer, Martin; Takahashi, Yuya
  17. Hypothesis Testing for Arbitrary Bounds By Jeffrey Penney
  18. On the identification of structural linear functionals By Juan Carlos Escanciano; Wei Li
  19. To Hold Out or Not to Hold Out By Frank Schorfheide; Kenneth I. Wolpin
  20. Estimating the effects of credit constraints on productivity of Peruvian agriculture By Woutersen, Tiemen; Khandker, Shahidur R.
  21. Practical use of sensitivity in econometrics with an illustration to forecast combinations By Magnus, Jan R; Vasnev, Andrey
  22. Practical considerations for optimal weights in density forecast combination By Pauwels, Laurent; Vasnev, Andrey
  23. "Optimal Ridge-type Estimators of Covariance Matrix in High Dimension" By Tatsuya Kubokawa; Muni S. Srivastava
  24. Measuring correlations between non-stationary series with DCCA coefficient By Ladislav Kristoufek
  25. Graphical network models for international financial flows By Paolo Giudici; Alessandro Spelta
  26. Two-Sample Nonparametric Estimation of Intergenerational Income Mobility By Liu, Di; Murtazashvili, Irina; Prokhorov, Artem
  27. Modeling the coupled return-spread high frequency dynamics of large tick assets By Gianbiagio Curato; Fabrizio Lillo
  28. Identifying Structural Models of Committee Decisions with Heterogeneous Tastes and Ideological Bias By Yonghong An; Xun Tang
  29. Generalized method of moments with latent variables By Ron Gallant; Raffaella Giacomini; Giuseppe Ragusa
  30. The Origin of Fat Tails By Martin Gremm
  31. How to Identify and Forecast Bull and Bear Markets? By Kole, H.J.W.G.; Dijk, D.J.C. van

  1. By: Jinhyun Lee (University of St Andrews)
    Abstract: This paper proposes a novel way of testing the exogeneity of an explanatory variable, without any parametric assumptions, in the presence of a "conditional" instrumental variable. A testable implication is derived: if an explanatory variable is endogenous, the conditional distribution of the outcome given that variable is not independent of its instrumental variable(s). We propose a consistent nonparametric bootstrap test to implement this implication. The test rejects the null hypothesis with probability one if the explanatory variable is endogenous, and it detects alternatives converging to the null at the rate n^{-1/2}. We show that the proposed bootstrap test is asymptotically justified in the sense that it produces asymptotically correct size under the null of exogeneity and has unit power asymptotically. Our nonparametric test can be applied to cases in which the outcome is generated by an additively non-separable structural relation or is discrete, settings that have not been studied in the literature.
    URL: http://d.repec.org/n?u=RePEc:san:wpecon:1316&r=ecm
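    Illustration (not from the paper): a rough Python sketch of the testable implication above. Under exogeneity the distribution of the outcome given the explanatory variable should not vary with the instrument; the coarse binning, KS-type statistic and within-bin permutation scheme below are simplifications chosen for brevity, not the authors' bootstrap procedure.

        import numpy as np

        def ks_within_bins(y, z, bins):
            # max Kolmogorov-Smirnov distance between Y|Z=0 and Y|Z=1 across X-bins
            stat = 0.0
            for b in np.unique(bins):
                m = bins == b
                y0, y1 = np.sort(y[m & (z == 0)]), np.sort(y[m & (z == 1)])
                grid = np.concatenate([y0, y1])
                F0 = np.searchsorted(y0, grid, side='right') / len(y0)
                F1 = np.searchsorted(y1, grid, side='right') / len(y1)
                stat = max(stat, np.abs(F0 - F1).max())
            return stat

        def permute_within(z, bins, rng):
            # permute the instrument within bins to mimic the null of exogeneity
            out = z.copy()
            for b in np.unique(bins):
                m = bins == b
                out[m] = rng.permutation(z[m])
            return out

        rng = np.random.default_rng(0)
        n = 1000
        z = rng.integers(0, 2, n)
        x = rng.normal(size=n)                   # exogenous in this toy DGP
        y = x + rng.normal(size=n)
        bins = np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))
        obs = ks_within_bins(y, z, bins)
        perm = [ks_within_bins(y, permute_within(z, bins, rng), bins)
                for _ in range(499)]
        print((1 + sum(p >= obs for p in perm)) / 500)   # large p under exogeneity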
  2. By: Solberger M.; Zhou X. (GSBE)
    Abstract: We consider an exact factor model and derive a Lagrange multiplier-type test for unit roots in the idiosyncratic components. The asymptotic distribution of the statistic is derived under the misspecification that the differenced factors are white noise. We prove that the asymptotic distribution is independent of the distribution of the factors, and that the factors may be integrated, cointegrated, or stationary. In a simulation study, size and power are compared with those of some popular second-generation panel unit root tests. The simulations suggest that our statistic is well-behaved in terms of size and that it is powerful and robust in comparison with existing tests.
    Keywords: Hypothesis Testing: General; Single Equation Models; Single Variables: Models with Panel Data; Longitudinal Data; Spatial Time Series;
    JEL: C12 C23
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:dgr:umagsb:2013058&r=ecm
  3. By: Zhou X.; Solberger M. (GSBE)
    Abstract: Recent developments within the panel unit-root literature have illustrated how the exact factor model serves as a parsimonious framework and allows for consistent maximum likelihood inference even when it is misspecified relative to the more general approximate factor model. In this paper we consider an exact factor model with AR(1) dynamics and propose LM-type tests for idiosyncratic and common unit roots. We derive the asymptotic distributions and carry out simulations to investigate the size and power of the tests in finite samples, and we compare their performance with some existing tests.
    Keywords: Hypothesis Testing: General; Single Equation Models; Single Variables: Models with Panel Data; Longitudinal Data; Spatial Time Series;
    JEL: C12 C23
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:dgr:umagsb:2013059&r=ecm
  4. By: Audrino, Francesco; Camponovo, Lorenzo
    Abstract: We derive new theoretical results on the properties of the adaptive least absolute shrinkage and selection operator (adaptive lasso) for time series regression models. In particular we investigate the question of how to conduct finite sample inference on the parameters given an adaptive lasso model for some fixed value of the shrinkage parameter. Central to this study is the test of the hypothesis that a given adaptive lasso parameter equals zero, which therefore tests for a false positive. To this end we construct a simple (conservative) testing procedure and show, theoretically and empirically through extensive Monte Carlo simulations, that the adaptive lasso combines efficient parameter estimation, variable selection, and valid finite sample inference in one step. Moreover, we analytically derive a bias correction factor that is able to significantly improve the empirical coverage of the test on the active variables. Finally, we apply the introduced testing procedure to investigate the relation between short-rate dynamics and the economy, thereby providing a statistical foundation (from a model choice perspective) to the classic Taylor rule monetary policy model.
    Keywords: Adaptive lasso; Time series; Oracle properties; Finite sample inference; Taylor rule monetary policy model.
    JEL: C12 C22 E43
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:usg:econwp:2013:27&r=ecm
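    Illustration (not from the paper): a minimal two-step adaptive lasso for an autoregression, assuming scikit-learn; the first-stage OLS fit supplies the penalty weights. The paper's finite-sample test and bias correction are not reproduced here.

        import numpy as np
        from sklearn.linear_model import Lasso, LinearRegression

        rng = np.random.default_rng(0)
        T, p = 500, 5
        y = np.zeros(T)
        for t in range(1, T):                    # AR(1) data; lags 2..5 irrelevant
            y[t] = 0.6 * y[t - 1] + rng.normal()
        X = np.column_stack([y[p - k - 1:T - k - 1] for k in range(p)])  # lags 1..p
        Y = y[p:]

        w = 1.0 / np.abs(LinearRegression().fit(X, Y).coef_)  # adaptive weights
        fit = Lasso(alpha=0.05).fit(X / w, Y)    # rescaled lasso = adaptive lasso
        print(np.round(fit.coef_ / w, 3))        # lag 1 kept, others shrunk to 0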
  5. By: Maciejowska, Katarzyna
    Abstract: In this article, a new approach to model specification is proposed. The method allows one to choose the correct order of a mixture model by testing whether a particular mixture component is significant. The hypotheses are set up in a new way in order to avoid the identification problems that are typical of mixture models. If some of the parameters are known, the distribution of the LR statistic is chi-squared, with degrees of freedom depending on the number of components and the number of parameters in each component. The advantage of the new approach is its simplicity and computational feasibility.
    Keywords: normal mixture models, likelihood ratio test, homogeneity test, hypotheses setting
    JEL: C10 C12
    Date: 2013–10–16
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:50303&r=ecm
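    Illustration (not from the paper): a likelihood-ratio comparison of mixture orders, assuming scikit-learn. Under the usual hypothesis setup the chi-squared limit fails at the boundary of the parameter space, which is exactly the identification problem the article's alternative hypothesis formulation works around; this sketch only computes the raw LR statistics.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 1, 300)])[:, None]
        # total log-likelihood for k = 1, 2, 3 fitted components
        ll = [GaussianMixture(k, n_init=5, random_state=0).fit(x).score(x) * len(x)
              for k in (1, 2, 3)]
        print(2 * (ll[1] - ll[0]), 2 * (ll[2] - ll[1]))  # big jump 1->2, small 2->3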
  6. By: Matyas Barczy; Gyula Pap
    Abstract: We study asymptotic properties of maximum likelihood estimators for Heston models based on continuous time observations. We distinguish three cases: subcritical (also called ergodic), critical and supercritical.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1310.4783&r=ecm
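    For reference, one standard parametrisation of the Heston model (the notation is assumed here, not taken from the paper):

        \begin{aligned}
        dS_t &= \mu S_t\,dt + \sqrt{V_t}\,S_t\,dW_t, \\
        dV_t &= \kappa(\theta - V_t)\,dt + \sigma\sqrt{V_t}\,dB_t.
        \end{aligned}

    The subcritical (ergodic), critical and supercritical cases then correspond, roughly, to a positive, zero and negative mean-reversion speed κ in the variance equation.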
  7. By: Sucarrat, Genaro; Escribano, Alvaro
    Abstract: A critique that has been directed towards the log-GARCH model is that its log-volatility specification does not exist in the presence of zero returns. A common "remedy" is to replace the zeros with a small (in absolute value) non-zero value. However, this renders Quasi Maximum Likelihood (QML) estimation asymptotically biased. Here, we propose a solution for the case where actual returns equal zero with probability zero, but zeros are nevertheless observed because of measurement error (due to missing values, discreteness approximation error, etc.). The solution treats zeros as missing values and handles them by combining QML estimation via the ARMA representation with the Expectation-Maximisation (EM) algorithm. Monte Carlo simulations confirm that the solution corrects the bias, and several empirical applications illustrate that the bias-correcting estimator can make a substantial difference.
    Keywords: ARCH, exponential GARCH, log-GARCH, ARMA, Expectation-Maximisation (EM)
    JEL: C22 C58
    Date: 2013–09–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:50699&r=ecm
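    Illustration (not from the paper): the key idea, treating zeros as missing in the ARMA representation of log y_t^2, can be mimicked with a Kalman-filter-based ARMA fit that tolerates NaNs (statsmodels). This is a simplification of the authors' combined QML/EM estimator.

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(0)
        r = rng.standard_t(df=8, size=2000) * 0.01    # toy returns
        r[rng.random(2000) < 0.02] = 0.0              # 2% zeros (e.g. missing quotes)

        x = np.where(r == 0.0, np.nan, np.log(r**2))  # log y_t^2, zeros -> missing
        res = SARIMAX(x, order=(1, 0, 1), trend='c').fit(disp=False)
        print(res.params)   # ARMA(1,1) parameters map back to log-GARCH(1,1)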
  8. By: Reese, Simon (Department of Economics, Lund University); Li, Yushu (Department of Business and Management Science, Norwegian School of Economics)
    Abstract: This paper investigates how classical measurement error and additive outliers influence tests for structural change based on F-statistics. We derive theoretically the impact of general additive disturbances in the regressors on the asymptotic distribution of these tests for structural change. The small-sample properties in the case of classical measurement error and additive outliers are investigated via Monte Carlo simulations, revealing that size is biased upwards and power is reduced. Two wavelet-based denoising methods are used to reduce these distortions, and we show that they can significantly improve the performance of structural break tests.
    Keywords: Structural breaks; measurement error; additive outliers; wavelet transform; empirical Bayes thresholding
    JEL: C11 C12 C15
    Date: 2013–10–11
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2013_036&r=ecm
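    Illustration (not from the paper): wavelet denoising before applying a break test, assuming the PyWavelets package. The paper uses empirical Bayes thresholding; the common universal threshold is substituted here for brevity.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        x = np.r_[np.zeros(256), 0.5 * np.ones(256)] + rng.normal(0, 1, 512)
        x[100] += 8.0                                    # an additive outlier

        coeffs = pywt.wavedec(x, 'db4', level=4)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale, finest level
        thr = sigma * np.sqrt(2 * np.log(len(x)))        # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft')
                                for c in coeffs[1:]]
        x_dn = pywt.waverec(coeffs, 'db4')   # denoised series for the F-type test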
  9. By: Hyungsik Roger Moon; Martin Weidner (Institute for Fiscal Studies and UCL)
    Abstract: In this paper we study the least squares (LS) estimator in a linear panel regression model with interactive fixed effects for asymptotics where both the number of time periods and the number of cross-sectional units go to infinity. Under appropriate assumptions we show that the limiting distribution of the LS estimator for the regression coefficients is independent of the number of interactive fixed effects used in the estimation, as long as this number does not fall below the true number of interactive fixed effects present in the data. The important practical implication of this result is that for inference on the regression coefficients one does not necessarily need to estimate the number of interactive effects consistently, but can rely on an upper bound of this number to calculate the LS estimator.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:49/13&r=ecm
  10. By: Antonia Arsova (Leuphana University Lueneburg, Germany); Deniz Dilan Karaman Oersal (Leuphana University Lueneburg, Germany)
    Abstract: This paper proposes a new likelihood-based panel cointegration rank test which extends the test of Oersal and Droge (2012) (henceforth the Panel SL test) to allow for cross-sectional dependence. The dependence is modelled by unobserved common factors which affect the variables in each cross-section through heterogeneous loadings. The common components are estimated following the panel analysis of nonstationarity in idiosyncratic and common components (PANIC) approach of Bai and Ng (2004), and the estimates are subtracted from the observations. The cointegrating rank of the defactored data is then tested by the Panel SL test. A Monte Carlo study demonstrates that the proposed testing procedure has reasonable size and power properties in finite samples.
    Keywords: panel cointegration rank test, cross-sectional dependence, common factors, likelihood ratio, time trend
    JEL: C12 C15 C33
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:lue:wpaper:280&r=ecm
  11. By: Hadri, Kaddour; Kurozumi, Eiji; Rao, Yao
    Abstract: In this paper, we propose new cointegration tests for single equations and panels. In both cases, the asymptotic distributions of the tests, which are derived with N fixed and T going to infinity, are shown to be standard normal. The effects of serial correlation and cross-sectional dependence are mopped out via long-run variances. An effective bias correction is derived and shown to work well in finite samples, particularly when N is smaller than T. Our panel tests are robust to possible cointegration across units.
    Keywords: cointegration, panel cointegration, cross-section dependence, bias correction, DOLS, FCLT
    JEL: C12 C15 C22 C23
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:hit:econdp:2013-12&r=ecm
  12. By: Westerlund J.; Smeekes S. (GSBE)
    Abstract: Most panel data studies of the predictability of returns presume that the cross-sectional units are independent, an assumption that is not realistic. As a response to this, the current paper develops block bootstrap-based panel predictability tests that are valid under very general conditions. Some of the allowable features include heterogeneous predictive slopes, persistent predictors, and complex error dynamics, including cross-unit endogeneity.
    Keywords: Statistical Simulation Methods: General; Single Equation Models; Single Variables: Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models; Single Equation Models; Single Variables: Models with Panel Data; Longitudinal Data; Spatial Time Series; Financial Crises; Asset Pricing; Trading volume; Bond Interest Rates;
    JEL: C15 C22 C23 G01 G12
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:dgr:umagsb:2013060&r=ecm
  13. By: Toru Kitagawa (Institute for Fiscal Studies and University College London)
    Abstract: This paper develops a specification test for the instrument validity conditions in the heterogeneous treatment effect model with a binary treatment and a discrete instrument. A necessary testable implication for the joint restriction of instrument exogeneity and instrument monotonicity is given by nonnegativity of the point-identifiable compliers' outcome densities. Our specification test infers this testable implication using a Kolmogorov-Smirnov type test statistic. We provide a bootstrap algorithm to implement the proposed test and show its asymptotic validity. The proposed test procedure applies to both discrete and continuous outcomes.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:53/13&r=ecm
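    Illustration (not from the paper): a crude check of the testable implication for binary D and Z. Instrument validity implies P(Y<=t, D=1 | Z=1) >= P(Y<=t, D=1 | Z=0) and P(Y<=t, D=0 | Z=0) >= P(Y<=t, D=0 | Z=1) for all t; the sketch evaluates violations only on half-lines and omits the paper's bootstrap critical values.

        import numpy as np

        def max_violation(y, d, z, grid):
            def sub_cdf(zval, dval):             # P(Y <= t, D = dval | Z = zval)
                m = z == zval
                return np.array([np.mean((y[m] <= t) & (d[m] == dval))
                                 for t in grid])
            v1 = sub_cdf(0, 1) - sub_cdf(1, 1)   # should be <= 0 under validity
            v0 = sub_cdf(1, 0) - sub_cdf(0, 0)   # should be <= 0 under validity
            return max(v1.max(), v0.max())       # clearly positive => violation

        rng = np.random.default_rng(0)
        n = 2000
        z = rng.integers(0, 2, n)
        d = (rng.random(n) < 0.3 + 0.4 * z).astype(int)   # monotone first stage
        y = d + rng.normal(size=n)
        print(max_violation(y, d, z, np.linspace(-3, 4, 50)))  # ~0: no violation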
  14. By: Emilio Zanetti Chini (University of Rome "Tor Vergata")
    Abstract: This paper introduces a variant of the smooth transition autoregression (STAR). The proposed model is able to parametrize the asymmetry in the tails of the transition equation by using a particular generalization of the logistic function. The null hypothesis of symmetric adjustment toward a new regime is tested by building two different LM-type tests. The first one maintains the original parametrization, while the second one is based on a third-order expanded auxiliary regression. Three diagnostic tests for no error autocorrelation, no additive asymmetry and parameter constancy are also discussed. The empirical size and power of the new symmetry tests, as well as of the diagnostic tests, are investigated by an extensive Monte Carlo experiment. An empirical application of the generalized STAR (GSTAR) model to four economic time series reveals that the asymmetry in the transition between two regimes is a feature to be considered for economic analysis.
    Date: 2013–10–15
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:294&r=ecm
  15. By: Romuald Meango; Ismael Mourifie
    Abstract: This paper deals with the question of whether exclusion restrictions on the exogenous regressors are necessary to identify two-equation probit models with an endogenous dummy regressor. Contradictory opinions have been expressed in the literature on the necessity of an exclusion restriction. Wilde (2000) argued that an exclusion restriction is not necessary and proposed a simple criterion for identification in this model. We contradict his result and show how the inherent incompleteness of the model leads to failure of (point) identification. We provide an exact identification proof when an exclusion restriction is available.
    Keywords: Probit model, Endogenous dummy regressor, Partial identification
    JEL: C35
    Date: 2013–10–14
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-503&r=ecm
  16. By: Otsu, Taisuke; Pesendorfer, Martin; Takahashi, Yuya
    Abstract: This paper proposes several statistical tests for finite state Markov games to examine the null hypothesis that the data are generated from a single equilibrium. We formulate tests of (i) the conditional choice probabilities, (ii) the steady-state distribution of states and (iii) the distribution of states conditional on an initial state. In a Monte Carlo study we find that the chi-squared test of the steady-state distribution performs well and has high power even with a small number of markets and time periods. We apply the chi-squared test to the empirical application of Ryan (2012) that analyzes the dynamics of the U.S. Portland Cement industry, and test whether his assumption of a single equilibrium is supported by the data.
    Keywords: Dynamic Markov Game; Multiplicity of Equilibria; Testing
    JEL: C12 C72 D44
    Date: 2013–04
    URL: http://d.repec.org/n?u=RePEc:trf:wpaper:423&r=ecm
  17. By: Jeffrey Penney (Queen's University)
    Abstract: I derive a rigorous method to help determine whether a true parameter takes a value between two arbitrarily chosen points for a given level of confidence via a multiple testing procedure which strongly controls the familywise error rate. For any test size, the distance between the upper and lower bounds can be made smaller than that created by a confidence interval. The procedure is more powerful than other multiple testing methods that test the same hypothesis. This test can be used to provide an affirmative answer about the existence of a negligible effect.
    Keywords: familywise error, multiple testing, null effect, partial identification, precise zero
    JEL: C12 C18
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1319&r=ecm
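    Illustration (not Penney's procedure): the flavour of the result can be seen in an intersection-union construction that rejects "the parameter lies outside [lo, hi]" only when two one-sided t-tests both reject; the paper's multiple-testing procedure is more powerful than this sketch.

        import numpy as np
        from scipy import stats

        def bounds_test(x, lo, hi, alpha=0.05):
            n, m, s = len(x), np.mean(x), np.std(x, ddof=1)
            t_lo = (m - lo) / (s / np.sqrt(n))   # H0: theta <= lo vs H1: theta > lo
            t_hi = (m - hi) / (s / np.sqrt(n))   # H0: theta >= hi vs H1: theta < hi
            p_lo = 1 - stats.t.cdf(t_lo, df=n - 1)
            p_hi = stats.t.cdf(t_hi, df=n - 1)
            return max(p_lo, p_hi) < alpha       # both one-sided tests must reject

        x = np.random.default_rng(0).normal(0.01, 1.0, 400)
        print(bounds_test(x, -0.2, 0.2))         # True: evidence the effect is
                                                 # inside the negligible band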
  18. By: Juan Carlos Escanciano; Wei Li
    Abstract: This paper asks which aspects of a structural Nonparametric Instrumental Variables Regression (NPIVR) can be identified well and which ones cannot. It contributes to answering this question by characterising the identified set of linear continuous functionals of the NPIVR under norm constraints. Each element of the identified set of NPIVR can be written as the sum of a common 'identifiable component' and an idiosyncratic 'unidentifiable component'. The identified set for any continuous linear functional is shown to be a closed interval, whose midpoint is the functional applied to the 'identifiable component'. The formula for the length of the identified set extends the popular omitted variables formula of classical linear regression. Some examples illustrate the wide applicability and utility of our identification result, including bounds and a new identification condition for point-evaluation functionals. The main ideas are illustrated with an empirical application of the effect of children on labour market outcomes.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:48/13&r=ecm
  19. By: Frank Schorfheide (Department of Economics, University of Pennsylvania); Kenneth I. Wolpin (Department of Economics, University of Pennsylvania)
    Abstract: A recent literature has developed that combines two prominent empirical approaches to ex ante policy evaluation: randomized controlled trials (RCT) and structural estimation. The RCT provides a “gold-standard” estimate of a particular treatment, but only of that treatment. Structural estimation provides the capability to extrapolate beyond the experimental treatment, but is based on untestable assumptions and is subject to structural data mining. Combining the approaches by holding out from the structural estimation exercise either the treatment or control sample allows for external validation of the underlying behavioral model. Although intuitively appealing, this holdout methodology is not well grounded. For instance, it is easy to show that it is suboptimal from a Bayesian perspective. Using a stylized representation of a randomized controlled trial, we provide a formal rationale for the use of a holdout sample in an environment in which data mining poses an impediment to the implementation of the ideal Bayesian analysis, together with a numerical illustration of the potential benefits of holdout samples.
    Keywords: Bayesian Analysis, Model Selection, Principal-Agent Models, Randomized Controlled Trials
    JEL: C11 C31 C52
    Date: 2013–10–14
    URL: http://d.repec.org/n?u=RePEc:pen:papers:13-059&r=ecm
  20. By: Woutersen, Tiemen; Khandker, Shahidur R.
    Abstract: This paper proposes an estimator for endogenous switching regression models with fixed effects. The estimator allows for endogenous selection and for conditional heteroscedasticity in the outcome equation. Applying the estimator to a dataset on productivity in agriculture substantially changes the conclusions relative to earlier analysis of the same dataset.
    Keywords: Economic Theory & Research, E-Business, Knowledge for Development, Econometrics, Labor Policies
    Date: 2013–10–01
    URL: http://d.repec.org/n?u=RePEc:wbk:wbrwps:6658&r=ecm
  21. By: Magnus, Jan R; Vasnev, Andrey
    Abstract: Sensitivity analysis is important for its own sake and also in combination with diagnostic testing. We consider the question of how to use sensitivity statistics in practice, in particular how to judge whether sensitivity is large or small. For this purpose we distinguish between absolute and relative sensitivity and highlight the context-dependent nature of any sensitivity analysis. Relative sensitivity is then applied in the context of forecast combination, and sensitivity-based weights are introduced. All concepts are illustrated through the European yield curve. In this context it is natural to look at sensitivity to autocorrelation and normality assumptions. Different forecasting models are combined with equal, fit-based and sensitivity-based weights, and compared with the multivariate and random walk benchmarks. We show that the fit-based weights and the sensitivity-based weights are complementary. For long-term maturities the sensitivity-based weights perform better than other weights.
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/8964&r=ecm
  22. By: Pauwels, Laurent; Vasnev, Andrey
    Abstract: The problem of finding appropriate weights to combine several density forecasts is an important issue currently debated in the forecast combination literature. Recently, Hall and Mitchell (IJF, 2007) proposed combining density forecasts with optimal weights obtained from solving an optimization problem. This paper studies the properties of this optimization problem when the number of forecasting periods is relatively small and finds that it often produces corner solutions by allocating all the weight to one density forecast only. This paper's practical recommendation is to have an additional training sample period for the optimal weights. While reserving a portion of the data for parameter estimation and making pseudo-out-of-sample forecasts are common practices in the empirical literature, employing a separate training sample for the optimal weights is novel, and it is suggested because it decreases the chances of corner solutions. Alternative log-score or quadratic-score weighting schemes do not have this training sample requirement.
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/8932&r=ecm
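    Illustration (not from the paper): Hall-Mitchell-type log-score weights computed on a short evaluation sample, showing the corner-solution tendency the paper documents.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        y = rng.normal(0.0, 1.0, 30)                     # short evaluation sample
        dens = np.column_stack([norm.pdf(y, 0.0, 1.0),   # forecast 1: correct
                                norm.pdf(y, 0.2, 1.2)])  # forecast 2: misspecified

        def neg_log_score(w):                            # w = weight on forecast 1
            return -np.mean(np.log(dens @ np.array([w[0], 1.0 - w[0]])))

        res = minimize(neg_log_score, x0=[0.5], bounds=[(0.0, 1.0)])
        print(res.x)    # with few periods the optimum often sits at a corner (0/1)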
  23. By: Tatsuya Kubokawa (Faculty of Economics, University of Tokyo); Muni S. Srivastava (Department of Statistics, University of Toronto)
    Abstract: The problem of estimating the covariance matrix of normal and non-normal distributions is addressed when both the sample size and the dimension of the covariance matrix tend to infinity. In this paper, we consider a class of ridge-type estimators that are linear combinations of the unbiased estimator and the identity matrix multiplied by a scalar statistic, and we derive the leading term of their risk functions relative to a quadratic loss function. Within this class, we obtain the optimal ridge-type estimator by minimizing the leading term in the risk approximation. It is interesting to note that the optimal weight is based on a statistic for testing sphericity of the covariance matrix.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2013cf906&r=ecm
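    Illustration: a closely related ridge-type estimator from the same class, Ledoit-Wolf shrinkage toward a scaled identity, assuming scikit-learn. It is not the paper's optimal estimator, whose weight is tied to a sphericity test statistic.

        import numpy as np
        from sklearn.covariance import LedoitWolf

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 200))    # n = 50 observations, p = 200 dimensions
        lw = LedoitWolf().fit(X)
        print(lw.shrinkage_)              # data-driven weight on the identity part
        S = lw.covariance_                # (1 - w) * sample cov + w * mu * I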
  24. By: Ladislav Kristoufek
    Abstract: In this short report, we investigate the ability of the DCCA coefficient to measure the correlation level between non-stationary series. Based on a wide Monte Carlo simulation study, we show that the DCCA coefficient can estimate the correlation coefficient accurately regardless of the strength of non-stationarity (measured by the fractional differencing parameter $d$). For comparison, we also report results for the standard Pearson correlation coefficient. The DCCA coefficient dominates the Pearson coefficient for non-stationary series.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1310.3984&r=ecm
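    Illustration (not the authors' code): a minimal DCCA coefficient, rho_DCCA(s) = F2_xy(s) / (F_x(s) F_y(s)), computed from detrended residuals in overlapping windows of length s.

        import numpy as np

        def dcca_coefficient(x, y, s):
            X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())  # profiles
            t = np.arange(s)
            f2x = f2y = f2xy = 0.0
            for i in range(len(X) - s + 1):      # overlapping windows of length s
                rx = X[i:i+s] - np.polyval(np.polyfit(t, X[i:i+s], 1), t)
                ry = Y[i:i+s] - np.polyval(np.polyfit(t, Y[i:i+s], 1), t)
                f2x += np.mean(rx**2); f2y += np.mean(ry**2); f2xy += np.mean(rx*ry)
            return f2xy / np.sqrt(f2x * f2y)

        rng = np.random.default_rng(0)
        e = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=5000)
        print(dcca_coefficient(e[:, 0], e[:, 1], s=40))  # close to 0.7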
  25. By: Paolo Giudici (Department of Economics and Management, University of Pavia); Alessandro Spelta (Department of Economics and Management, University of Pavia)
    Abstract: The late-2000s financial crisis has stressed the need to understand the world financial system as a network of countries, where cross-border financial linkages play a fundamental role in the spread of systemic risks. Financial network models, which take into account the complex interrelationships between countries, seem to be an appropriate tool in this context. In this paper we propose to enrich the topological perspective of network models with a more structured statistical framework, that of graphical Gaussian models, which can be employed to accurately estimate the adjacency matrix, the main input for the estimation of the interconnections between different countries. We consider different types of graphical models: besides classical ones, we introduce Bayesian graphical models, which can take model uncertainty into account, and dynamic Bayesian graphical models, which provide a convenient framework for modelling temporal cross-border data, decomposing the model into autoregressive and contemporaneous networks. The paper shows how the application of the proposed models to the Bank of International Settlements locational banking statistics allows the identification of four distinct groups of countries that can be considered central to systemic risk contagion.
    Keywords: Financial network models, Graphical models, Bayesian model selection
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0052&r=ecm
  26. By: Liu, Di; Murtazashvili, Irina; Prokhorov, Artem
    Abstract: We estimate intergenerational income mobility in the USA and Sweden. To measure the degree to which income status is transmitted from one generation to another we propose a nonparametric estimator, which is particularly relevant for cross-country comparisons. Our approach allows intergenerational mobility to vary across observable family characteristics. Furthermore, it fits situations when data on fathers and sons come from different samples. Finally, our estimator is consistent in the presence of measurement error in fathers' long-run economic status. We find that family background captured by fathers' education matters for intergenerational income persistence in the USA more than in Sweden, suggesting that the character of inequality in the two countries is rather different.
    Keywords: GMM estimation; intergenerational income mobility
    Date: 2013–08–07
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/9293&r=ecm
  27. By: Gianbiagio Curato; Fabrizio Lillo
    Abstract: Large tick assets, i.e. assets where one tick movement is a significant fraction of the price and the bid-ask spread is almost always equal to one tick, display dynamics in which price changes and spread are strongly coupled. We introduce a Markov-switching modeling approach for price changes, where the latent Markov process is the transition between spreads. We then use a finite Markov mixture of logit regressions to describe how the probability of a price change depends on past squared returns. The model can thus be seen as a Double Chain Markov Model. We show that the model describes the shape of the return distribution at different time aggregations, volatility clustering, and the anomalous decrease of the kurtosis of returns. We calibrate the model on Nasdaq stocks and show that it reproduces remarkably well the statistical properties of real data.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1310.4539&r=ecm
  28. By: Yonghong An (Department of Economics, University of Connecticut); Xun Tang (Department of Economics, University of Pennsylvania)
    Abstract: We study the nonparametric identification and estimation of a structural model for committee decisions. Members of a committee share a common information set, but differ in ideological bias while processing multiple information sources and in individual tastes while weighing multiple objectives. We consider two cases of the model, in which committee members do or do not have strategic incentives for making recommendations that conform with the committee decision. In both cases pure-strategy Bayesian Nash equilibria exist, and we show how to use variations in the common information set to recover the distribution of members' private types from individual recommendation patterns. Building on the identification result, we estimate a structural model of interest rate decisions by the Monetary Policy Committee (MPC) at the Bank of England. We find some evidence that recommendations from external committee members are less distorted by strategic incentives than those from internal members. There is also evidence that MPC members differ more in their tastes for multiple objectives than in ideological bias.
    Keywords: Committee decisions, nonparametric identification, MPC at the Bank of England
    JEL: C14 D71
    Date: 2013–10–07
    URL: http://d.repec.org/n?u=RePEc:pen:papers:13-058&r=ecm
  29. By: Ron Gallant (Institute for Fiscal Studies and Duke University); Raffaella Giacomini (Institute for Fiscal Studies and UCL); Giuseppe Ragusa
    Abstract: The contribution of generalized method of moments (Hansen and Singleton, 1982) was to allow frequentist inference regarding the parameters of a nonlinear structural model without having to solve the model, provided there were no latent variables. The contribution of this paper is the same with latent variables.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:50/13&r=ecm
  30. By: Martin Gremm
    Abstract: We propose a random walk model of asset returns where the parameters depend on market stress. Stress is measured by, e.g., the value of an implied volatility index. We show that model parameters including standard deviations and correlations can be estimated robustly and that all distributions are approximately normal. Fat tails in observed distributions occur because time series sample different stress levels and therefore different normal distributions. This provides a quantitative description of the observed distributions, including the fat tails. We discuss simple applications in risk management and portfolio construction.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1310.4538&r=ecm
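    Illustration (a toy version of the paper's mechanism): returns drawn from a normal distribution conditional on a stress level are fat-tailed unconditionally, because the pooled sample mixes stress regimes.

        import numpy as np
        from scipy.stats import kurtosis

        rng = np.random.default_rng(0)
        stress = rng.choice([0.01, 0.04], size=100_000, p=[0.8, 0.2])  # two regimes
        r = rng.normal(0.0, stress)    # conditionally normal given the stress level
        print(kurtosis(r))             # excess kurtosis > 0: unconditional fat tails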
  31. By: Kole, H.J.W.G.; Dijk, D.J.C. van
    Abstract: The state of the equity market, often referred to as a bull or a bear market, is of key importance for financial decisions and economic analyses. Its latent nature has led to several methods to identify past and current states of the market and forecast future states. These methods encompass semi-parametric rule-based methods and parametric regime-switching models. We compare these methods using new statistical and economic measures that take into account the latent nature of the market state. The statistical measure is based directly on the predictions, while the economic measure is based on the utility that results when a risk-averse agent uses the predictions in an investment decision. Our application of this framework to the S&P500 shows that rule-based methods are preferable for (in-sample) identification of the market state, but regime-switching models for (out-of-sample) forecasting. In-sample, only the direction of the market matters, but for forecasting both the means and the volatilities of returns are important. Both the statistical and the economic measures indicate that these differences are significant.
    Keywords: forecast evaluation; regime switching; stock market; economic comparison
    Date: 2013–10–14
    URL: http://d.repec.org/n?u=RePEc:dgr:eureri:1765041558&r=ecm

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.