nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒07‒09
25 papers chosen by
Sune Karlsson
Örebro universitet

  1. Semiparametrically Point-Optimal Hybrid Rank Tests for Unit Roots By Bo Zhou; Ramon van den Akker; Bas J. M. Werker
  2. Ill-posed Estimation in High-Dimensional Models with Instrumental Variables By Christoph Breunig; Enno Mammen; Anna Simoni
  3. Integrated Likelihood Based Inference for Nonlinear Panel Data Models with Unobserved Effects By Martin Schumann; Thomas A. Severini; Gautam Tripathi
  4. Factor models for portfolio selection in large dimensions: the good, the better and the ugly By Gianluca De Nard; Olivier Ledoit; Michael Wolf
  5. Model Selection in Time Series Analysis: Using Information Criteria as an Alternative to Hypothesis Testing By R. Scott Hacker; Abdulnasser Hatemi-J
  6. Testing DSGE Models by indirect inference: a survey of recent findings By Meenagh, David; Minford, Patrick; Wickens, Michael; Xu, Yongdeng
  7. Stochastic model specification in Markov switching vector error correction models By Florian Huber; Michael Pfarrhofer; Thomas O. Zörner
  8. Determining the dimension of factor structures in non-stationary large datasets By Matteo Barigozzi; Lorenzo Trapani
  9. Leave-out estimation of variance components By Patrick Kline; Raffaele Saggio; Mikkel Sølvsten
  10. Testing for the Conditional Geometric Mixture Distribution By JIN SEO CHO; JIN SEOK PARK; SANG WOO PARK
  11. An IV framework for combining sign and long-run parametric restrictions in SVARs By Lance A. Fisher; Hyeon-seung Huh
  12. Asymptotic Refinements of a Misspecification-Robust Bootstrap for Generalized Empirical Likelihood Estimators By Seojeong Lee
  13. On endogeneity and shape invariance in extended partially linear single index models By Jiti Gao; Namhyun Kim; Patrick W. Saart
  14. Estimation of the common component in Dynamic Factor Models By Peña Sánchez de Rivera, Daniel; Caro Navarro, Ángela
  15. Identification in Nonparametric Models for Dynamic Treatment Effects By Sukjin Han
  16. Combining sign and parametric restrictions in SVARs by Givens Rotations By Lance A. Fisher; Hyeon-seung Huh
  17. Estimation of a Scale-Free Network Formation Model By Anton Kolotilin; Valentyn Panchenko
  18. Testing Cointegrating Relationships Using Irregular and Non-Contemporaneous Series with an Application to Paleoclimate Data By J. Isaac Miller
  19. Bootstrapping Mean Squared Errors of Robust Small-Area Estimators: Application to the Method-of-Payments Data By Valéry D. Jiongo; Pierre Nguimkeu
  20. Bayesian epidemic models for spatially aggregated count data By Malesios, C; Demiris, N; Kalogeropoulos, K; Ntzoufras, I
  21. A Machine Learning Framework for Stock Selection By XingYu Fu; JinHong Du; YiFeng Guo; MingWen Liu; Tao Dong; XiuWen Duan
  22. Unbiased Estimation of Competitive Balance in Sports Leagues with Unbalanced Schedules By Young Hoon Lee; Yongdai Kim; Sara Kim
  23. Common Values, Unobserved Heterogeneity, and Endogenous Entry in U.S. Offshore Oil Lease Auctions By Giovanni Compiani; Philip A. Haile; Marcelo Sant'Anna
  24. Power-law cross-correlations: Issues, solutions and future challenges By Ladislav Kristoufek
  25. Sensitivity of Regular Estimators By Yaroslav Mukhin

  1. By: Bo Zhou; Ramon van den Akker; Bas J. M. Werker
    Abstract: We propose a new class of unit root tests that exploits invariance properties in the Locally Asymptotically Brownian Functional limit experiment associated with the unit root model. The invariance structures naturally suggest tests that are based on the ranks of the increments of the observations, their average, and an assumed reference density for the innovations. The tests are semiparametric in the sense that they are valid, i.e., have the correct (asymptotic) size, irrespective of the true innovation density. For a correctly specified reference density, our test is point-optimal and nearly efficient. For arbitrary reference densities, we establish a Chernoff-Savage type result, i.e., our test performs as well as commonly used tests under Gaussian innovations but has improved power under other, e.g., fat-tailed or skewed, innovation distributions. To avoid nonparametric estimation, we propose a simplified version of our test that exhibits the same asymptotic properties, except for the Chernoff-Savage result, which we are only able to demonstrate by means of simulations.
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1806.09304&r=ecm
  2. By: Christoph Breunig; Enno Mammen; Anna Simoni
    Abstract: This paper is concerned with inference about low-dimensional components of a high-dimensional parameter vector beta_0 which is identified through instrumental variables. We allow for eigenvalues of the expected outer product of included and excluded covariates, denoted by M, to shrink to zero as the sample size increases. We propose a novel estimator based on desparsification of an instrumental variable Lasso estimator, which is a regularized version of 2SLS with an additional correction term. This estimator converges to beta_0 at a rate depending on the mapping properties of M captured by a sparse link condition. Linear combinations of our estimator of beta_0 are shown to be asymptotically normally distributed. Based on consistent covariance estimation, our method allows for constructing confidence intervals and statistical tests for single or low-dimensional components of beta_0. In Monte-Carlo simulations we analyze the finite sample behavior of our estimator.
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1806.00666&r=ecm
  3. By: Martin Schumann (CREA, Université du Luxembourg); Thomas A. Severini (Northwestern University, Evanston, USA); Gautam Tripathi (CREA, Université du Luxembourg)
    Abstract: Panel data models with fixed effects are widely used by economists and other social scientists to capture the effects of unobserved individual heterogeneity. In this paper, we propose a new integrated likelihood based approach for estimating panel data models when the unobserved individual effects enter the model nonlinearly. Unlike existing integrated likelihoods in the literature, the one we propose is closer to a "genuine" likelihood. Although the statistical theory for the proposed estimator is developed in an asymptotic setting where the number of individuals and the number of time periods both approach infinity, results from a simulation study suggest that our methodology can work very well even in moderately sized panels of short duration in both static and dynamic models.
    Keywords: Fixed effects, Integrated likelihood, Nonlinear models, Panel data
    JEL: C23 C33
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:luc:wpaper:17-01&r=ecm
  4. By: Gianluca De Nard; Olivier Ledoit; Michael Wolf
    Abstract: This paper injects factor structure into the estimation of time-varying, large-dimensional covariance matrices of stock returns. Existing factor models struggle to model the covariance matrix of residuals in the presence of conditional heteroskedasticity in large universes. Conversely, rotation-equivariant estimators of large-dimensional time-varying covariance matrices forsake directional information embedded in market-wide risk factors. We introduce a new covariance matrix estimator that blends factor structure with conditional heteroskedasticity of residuals in large dimensions up to 1000 stocks. It displays superior all-around performance on historical data against a variety of state-of-the-art competitors, including static factor models, exogenous factor models, sparsity-based models, and structure-free dynamic models. This new estimator can be used to deliver more efficient portfolio selection and detection of anomalies in the cross-section of stock returns.
    Keywords: Dynamic conditional correlations, factor models, multivariate GARCH, Markowitz portfolio selection, nonlinear shrinkage
    JEL: C13 C58 G11
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:290&r=ecm
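    Illustration: a minimal Python sketch (not from the paper) of the general idea of injecting factor structure into covariance estimation, here a static single-factor estimator on simulated data; the authors' estimator additionally models conditional heteroskedasticity of residuals and applies nonlinear shrinkage.

      import numpy as np

      def single_factor_cov(returns, market):
          # returns: T x N matrix of stock returns; market: length-T factor.
          var_m = market.var(ddof=1)
          betas = np.array([np.cov(returns[:, i], market, ddof=1)[0, 1] / var_m
                            for i in range(returns.shape[1])])
          resid = returns - np.outer(market, betas)
          # Factor part plus a diagonal matrix of residual variances.
          return np.outer(betas, betas) * var_m + np.diag(resid.var(axis=0, ddof=1))

      rng = np.random.default_rng(0)
      mkt = rng.normal(size=500)
      rets = np.outer(mkt, rng.uniform(0.5, 1.5, size=50)) \
             + rng.normal(scale=0.5, size=(500, 50))
      sigma = single_factor_cov(rets, mkt)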
  5. By: R. Scott Hacker; Abdulnasser Hatemi-J
    Abstract: The issue of model selection in applied research is of vital importance. Since the true model in such research is not known, which model should be used from among various potential ones is an empirical question. Several competing models may exist. A typical approach to dealing with this is classic hypothesis testing using an arbitrarily chosen significance level, based on the underlying assumption that a true null hypothesis exists. In this paper we investigate how successful this approach is in determining the correct model for different data generating processes using time series data. An alternative approach based on more formal model selection techniques using an information criterion or cross-validation is suggested and evaluated in the time series environment via Monte Carlo experiments. This paper also explores the effectiveness of deciding what type of general relation exists between two variables (e.g. a relation in levels or a relation in first differences) using various strategies based on hypothesis testing and on information criteria, in the presence or absence of unit roots.
    Date: 2018–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1805.08991&r=ecm
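    Illustration: a minimal sketch (a hypothetical Python example, not the authors' Monte Carlo design) of selecting a lag order by minimizing an information criterion instead of running sequential hypothesis tests.

      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg

      rng = np.random.default_rng(1)
      y = np.zeros(300)
      for t in range(2, 300):                  # simulate a true AR(2) process
          y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

      # Fit AR(p) for several candidate orders and keep the lowest AIC.
      aics = {p: AutoReg(y, lags=p).fit().aic for p in range(1, 7)}
      print("AIC-selected lag order:", min(aics, key=aics.get))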
  6. By: Meenagh, David (Cardiff Business School); Minford, Patrick (Cardiff Business School); Wickens, Michael (Cardiff Business School); Xu, Yongdeng (Cardiff Business School)
    Abstract: We review recent findings in the application of Indirect Inference to DSGE models. We show that researchers should tailor the power of their test to the model under investigation in order to achieve a balance between high power and model tractability; this will involve choosing only a limited number of variables on whose behaviour they should focus. Recent work also reveals that it makes little difference which variables these are or how their behaviour is measured, whether via a VAR, IRFs or moments. We also review identification issues and ask whether alternative evaluation methods, such as forecasting or likelihood ratio tests, are potentially helpful.
    Keywords: Pseudo-true inference, DSGE models, Indirect Inference, Wald tests, Likelihood Ratio tests, robustness
    JEL: C12 C32 C52 E1
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2018/14&r=ecm
  7. By: Florian Huber; Michael Pfarrhofer; Thomas O. Zörner
    Abstract: This paper proposes a hierarchical modeling approach to perform stochastic model specification in Markov switching vector error correction models. We assume that a common distribution gives rise to the regime-specific regression coefficients. The mean as well as the variances of this distribution are treated as fully stochastic, and suitable shrinkage priors are used. These shrinkage priors make it possible to assess, in a flexible manner, which coefficients differ across regimes. In the case of similar coefficients, our model pushes the respective regions of the parameter space towards the common distribution. This allows for selecting a parsimonious model while still maintaining sufficient flexibility to control for sudden shifts in the parameters, if necessary. In the empirical application, we apply our modeling approach to Euro area data and assume that transition probabilities between expansion and recession regimes are driven by the cointegration errors. Our findings suggest that lagged cointegration errors have predictive power for regime shifts and that these movements between business cycle stages are mostly driven by differences in error variances.
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1807.00529&r=ecm
  8. By: Matteo Barigozzi; Lorenzo Trapani
    Abstract: We propose a procedure to determine the dimension of the common factor space in a large, possibly non-stationary, dataset. Our procedure is designed to determine whether there are (and how many) common factors (i) with linear trends, (ii) with stochastic trends, (iii) with no trends, i.e. stationary. Our analysis is based on the fact that the largest eigenvalues of a suitably scaled covariance matrix of the data (corresponding to the common factor part) diverge, as the dimension $N$ of the dataset diverges, whilst the others stay bounded. Therefore, we propose a class of randomised test statistics for the null that the $p$-th eigenvalue diverges, based directly on the estimated eigenvalue. The tests require only minimal assumptions on the data, and no restrictions on the relative rates of divergence of $N$ and $T$ are imposed. Monte Carlo evidence shows that our procedure has very good finite sample properties, clearly dominating competing approaches when no common factors are present. We illustrate our methodology through an application to US bond yields with different maturities observed over the last 30 years. A common linear trend and two common stochastic trends are found and identified as the classical level, slope and curvature factors.
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1806.03647&r=ecm
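    Illustration: a toy Python example (not the paper's randomised test) of the fact the procedure builds on: with r common factors, the r largest eigenvalues of the sample covariance matrix grow with the cross-sectional dimension N while the others stay bounded.

      import numpy as np

      rng = np.random.default_rng(2)
      T, N, r = 200, 100, 2
      factors = rng.normal(size=(T, r))
      loadings = rng.normal(size=(N, r))
      X = factors @ loadings.T + rng.normal(size=(T, N))  # factor-model data

      eigvals = np.linalg.eigvalsh(np.cov(X.T))[::-1]     # descending order
      print(eigvals[:5] / N)  # only the first r ratios stay away from zero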
  9. By: Patrick Kline; Raffaele Saggio; Mikkel Sølvsten
    Abstract: We propose a framework for unbiased estimation of quadratic forms in the parameters of linear models with many regressors and unrestricted heteroscedasticity. Applications include variance component estimation and tests of linear restrictions in hierarchical and panel models. We study the large sample properties of our estimator allowing the number of regressors to grow in proportion to the number of observations. Consistency is established in a variety of settings where jackknife bias corrections exhibit first-order biases. The estimator's limiting distribution can be represented by a linear combination of normal and non-central $\chi^2$ random variables. Consistent variance estimators are proposed along with a procedure for constructing uniformly valid confidence intervals. Applying a two-way fixed effects model of wage determination to Italian social security records, we find that ignoring heteroscedasticity substantially biases conclusions regarding the relative contribution of workers, firms, and worker-firm sorting to wage inequality. Monte Carlo exercises corroborate the accuracy of our asymptotic approximations, with clear evidence of non-normality emerging when worker mobility between groups of firms is limited.
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1806.01494&r=ecm
  10. By: JIN SEO CHO (Yonsei University); JIN SEOK PARK (Yonsei University); SANG WOO PARK (Yonsei University)
    Abstract: This study examines the mixture hypothesis of conditional geometric distributions using a likelihood ratio (LR) test statistic based on that used for unconditional geometric distributions. As such, we derive the null limit distribution of the LR test statistic and examine its power performance. In addition, we examine the interrelationship between the LR test statistics used to test the geometric and exponential mixture hypotheses. We also examine the performance of the LR test statistics under various conditions and confirm the main claims of the study using Monte Carlo simulations.
    Keywords: mixture of conditional geometric distributions, likelihood ratio test, unobserved heterogeneity, Gaussian stochastic process
    JEL: C12 C41 C80
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:yon:wpaper:2018rwp-123&r=ecm
  11. By: Lance A. Fisher (Macquarie University); Hyeon-seung Huh (Yonsei University)
    Abstract: This paper develops a method to impose a long-run restriction in an instrumental variables (IV) framework in a SVAR which comprises both I(1) and I(0) variables, when the shock associated with one of the I(0) variables is made transitory. This is the identification which is utilized in the small open economy SVAR that we take from the literature. The method is combined with a recently developed sign restrictions approach which can be applied in an IV setting. We then consider an alternate identification in this SVAR which makes the shocks associated with all of the I(0) variables transitory. In this case, we show that another method can be used to impose the long-run restrictions. The results from both methods are reported for the SVARs estimated with Canadian data.
    Keywords: sign restrictions, long-run parametric restrictions, IV estimation, algorithms, generated coefficients, small open economy, Canada
    JEL: C32 C36 C51 F41
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:yon:wpaper:2018rwp-124&r=ecm
  12. By: Seojeong Lee
    Abstract: I propose a nonparametric iid bootstrap procedure for the empirical likelihood, the exponential tilting, and the exponentially tilted empirical likelihood estimators that achieves asymptotic refinements for t tests and confidence intervals, and Wald tests and confidence regions based on such estimators. Furthermore, the proposed bootstrap is robust to model misspecification, i.e., it achieves asymptotic refinements regardless of whether the assumed moment condition model is correctly specified or not. This result is new, because asymptotic refinements of the bootstrap based on these estimators have not been established in the literature even under correct model specification. Monte Carlo experiments are conducted in a dynamic panel data setting to support the theoretical finding. As an application, bootstrap confidence intervals for the returns to schooling of Hellerstein and Imbens (1999) are calculated. The result suggests that the returns to schooling may be higher.
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1806.00953&r=ecm
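    Illustration: a generic nonparametric iid bootstrap-t confidence interval for a mean, shown only as a sketch of the resampling principle; the paper's procedure targets GEL-type estimators and adds a misspecification-robust correction that this toy example omits.

      import numpy as np

      rng = np.random.default_rng(3)
      x = rng.exponential(size=200)
      n, B = len(x), 999
      mean, se = x.mean(), x.std(ddof=1) / np.sqrt(n)

      t_stats = []
      for _ in range(B):
          xb = rng.choice(x, size=n, replace=True)   # iid resample
          t_stats.append((xb.mean() - mean) / (xb.std(ddof=1) / np.sqrt(n)))

      lo, hi = np.percentile(t_stats, [2.5, 97.5])
      print("bootstrap-t 95% CI:", (mean - hi * se, mean - lo * se))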
  13. By: Jiti Gao; Namhyun Kim; Patrick W. Saart
    Abstract: This paper elaborates the important, but so far unrecognized, usefulness of the extended generalized partially linear single-index (EGPLSI) model introduced by Xia et al. (1999) for modelling a flexible shape-invariant specification. More importantly, a control function approach is proposed to address potential endogeneity problems in the EGPLSI model in order to enhance its applicability to empirical studies. In the process, it is shown that the attractive asymptotic features of the single-index type of semiparametric model remain valid in our proposed estimation procedure given intrinsically generated covariates. Our newly developed method is then applied to address the endogeneity of expenditure in a semiparametric analysis of a system of empirical Engel curves using British data, highlighting the convenient applicability of our proposed method.
    Keywords: Extended generalized partially linear single-index, control function approach, endogeneity, semiparametric regression models.
    JEL: C14 C18 C51
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2018-8&r=ecm
  14. By: Peña Sánchez de Rivera, Daniel; Caro Navarro, Ángela
    Abstract: One of the most effective techniques for obtaining a low-dimensional representation of big datasets is the Dynamic Factor Model (DFM). We analyze the finite sample performance of the well-known principal component estimator of the common component under different scenarios. Simulation results show that for data samples with a large number of observations and a small time series dimension, the variance-covariance matrix specification with lags provides better estimates than the classic variance-covariance matrix. However, in high-dimensional data samples the classic variance-covariance matrix performs better regardless of the sample size. We then apply the principal component estimator to obtain estimates of the business cycles of the Euro Area and its member countries. This application, together with a cluster analysis, studies the phenomenon known as the Two-Speed Europe, with two groups of countries that are not geographically related.
    Keywords: Time series ; Factor Models ; Principal Components ; Canonical Correlations
    Date: 2018–06–15
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:27047&r=ecm
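    Illustration: a minimal Python sketch of the principal component estimator of the common component on simulated data; the paper's comparison of variance-covariance specifications (with and without lags) is omitted here.

      import numpy as np

      rng = np.random.default_rng(4)
      T, N, r = 150, 40, 1
      X = rng.normal(size=(T, r)) @ rng.normal(size=(N, r)).T \
          + rng.normal(size=(T, N))               # one-factor panel

      X = X - X.mean(axis=0)                      # center each series
      w, V = np.linalg.eigh(np.cov(X.T))          # classic covariance matrix
      loadings = V[:, -r:]                        # top-r eigenvectors
      common = X @ loadings @ loadings.T          # estimated common component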
  15. By: Sukjin Han
    Abstract: This paper develops a nonparametric model that represents how sequences of outcomes and treatment choices influence one another in a dynamic manner. In this setting, we are interested in identifying the average outcome for individuals in each period, had a particular treatment sequence been assigned. The identification of this quantity allows us to identify the average treatment effects (ATE's) and the ATE's on transitions, as well as the optimal treatment regimes, namely, the regimes that maximize the (weighted) sum of the average potential outcomes, possibly less the cost of the treatments. The main contribution of this paper is to relax the sequential randomization assumption widely used in the biostatistics literature by introducing a flexible choice-theoretic framework for a sequence of endogenous treatments. We show that the parameters of interest are identified under each period's two-way exclusion restriction, i.e., with instruments excluded from the outcome-determining process and other exogenous variables excluded from the treatment-selection process. We also consider partial identification in the case where the latter variables are not available. Lastly, we extend our results to a setting where treatments do not appear in every period.
    Date: 2018–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1805.09397&r=ecm
  16. By: Lance A. Fisher (Macquarie University); Hyeon-seung Huh (Yonsei University)
    Abstract: This paper develops a method for combining sign and parametric restrictions in SVARs by means of Givens matrices. The Givens matrix is used to rotate an initial set of orthogonal shocks in the SVAR. Parametric restrictions are imposed on the Givens matrix in a manner which utilises its properties. This gives rise to a system of equations which can be solved recursively for the 'angles' in the constituent Givens matrices to enforce the parametric restrictions. The method is applied to several identifications which involve a combination of sign restrictions, and long-run and/or contemporaneous restrictions, in Peersman's (2005) SVAR for the US economy. The method is compared to the recently developed method of Arias, Rubio-Ramirez and Waggoner (2018), which combines zero and sign restrictions.
    Keywords: structural vector autoregressions, sign and parametric restrictions, Givens rotations, QR decomposition
    JEL: C32 C51 E32
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:yon:wpaper:2018rwp-122&r=ecm
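    Illustration: a minimal sketch of a Givens rotation applied to an initial set of orthogonal shocks; the paper's recursive solution for the angles that enforce the parametric restrictions is not reproduced here.

      import numpy as np

      def givens(n, i, j, theta):
          # n x n identity with a rotation by theta in the (i, j)-plane.
          Q = np.eye(n)
          Q[i, i] = Q[j, j] = np.cos(theta)
          Q[i, j], Q[j, i] = -np.sin(theta), np.sin(theta)
          return Q

      cov = np.array([[1.0, 0.3, 0.1],
                      [0.3, 1.0, 0.2],
                      [0.1, 0.2, 1.0]])
      B0 = np.linalg.cholesky(cov)         # initial impact matrix
      B = B0 @ givens(3, 0, 1, np.pi / 6)
      # B is another valid impact matrix: B @ B.T still equals cov.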
  17. By: Anton Kolotilin (School of Economics, UNSW Business School); Valentyn Panchenko (School of Economics, UNSW Business School)
    Abstract: Growing evidence suggests that many social and economic networks are scale free in that their degree distribution has a power-law tail. A common explanation for this phenomenon is a random network formation process with preferential attachment. For a general version of such a process, we develop the pseudo maximum likelihood and generalized method of moments estimators. We prove consistency of these estimators by establishing the law of large numbers for growing networks. Simulations suggest that these estimators are asymptotically normally distributed and outperform the commonly used non-linear least squares and Hill (1975) estimators in finite samples. We apply our estimation methodology to a co-authorship network.
    Keywords: law of large numbers, consistency, degree distribution, scale-free network
    JEL: C15 C45 C51 D85
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:swe:wpaper:2018-10&r=ecm
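    Illustration: the Hill (1975) tail-index estimator that the paper uses as a benchmark, applied to a simulated Pareto sample standing in for network degrees (a hypothetical example, not the paper's PML or GMM estimators).

      import numpy as np

      def hill(sample, k):
          # Hill estimator of the tail index from the k largest observations.
          x = np.sort(sample)[-(k + 1):]          # k+1 upper order statistics
          return 1.0 / np.mean(np.log(x[1:] / x[0]))

      rng = np.random.default_rng(5)
      degrees = (1 - rng.uniform(size=10000)) ** (-1 / 2.5)  # tail index 2.5
      print("Hill estimate:", hill(degrees, k=500))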
  18. By: J. Isaac Miller (Department of Economics, University of Missouri, Columbia, Missouri, USA)
    Abstract: Time series that are observed neither regularly nor contemporaneously pose problems for most multivariate analyses. Common and intuitive solutions to these problems include linear and step interpolation or other types of imputation to a higher, regular frequency. However, interpolation is known to cause serious problems with the size and power of statistical tests. Due to the difficulty in measuring stochastically varying paleoclimate phenomena such as CO2 concentrations and surface temperatures, time series of such measurements are observed neither regularly nor contemporaneously. This paper presents large- and small-sample analyses of the size and power of cointegration tests of time series with these features and confirms the robustness of cointegration of these two series found in the extant literature. Step interpolation is preferred over linear interpolation.
    Keywords: cointegration, irregularly observed time series, non-contemporaneous time series, misaligned time series, paleoclimate data
    JEL: C12 C22 C32 Q54
    Date: 2018–06–29
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:1809&r=ecm
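    Illustration: the two imputation schemes the paper compares, applied to a simulated irregularly observed series with pandas (a sketch, not the paper's data or tests).

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(6)
      obs_times = np.sort(rng.choice(np.arange(100), size=30, replace=False))
      s = pd.Series(np.cumsum(rng.normal(size=30)), index=obs_times)

      grid = np.arange(100)                                 # regular grid
      linear = s.reindex(grid).interpolate(method="index")  # linear in time
      step = s.reindex(grid).ffill()                        # step interpolation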
  19. By: Valéry D. Jiongo; Pierre Nguimkeu
    Abstract: This paper proposes a new bootstrap procedure for mean squared errors of robust small-area estimators. We formally prove the asymptotic validity of the proposed bootstrap method and examine its finite sample performance through Monte Carlo simulations. The results show that our procedure performs well and outperforms existing ones. We also apply our procedure to the estimation of the total volume and value of cash, debit card and credit card transactions in Canada as well as in its provinces and subgroups of households. In particular, we find that there is a significant average annual decline rate of 3.1 percent in the volume of cash transactions, and that this decline is relatively higher among high-income households living in heavily populated provinces. Our bootstrap estimator also provides indicators of quality useful in selecting the best small-area predictors from among several alternatives in practice.
    Keywords: Econometric and statistical methods, Bank notes
    JEL: C13 C15 C83 E E41
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:18-28&r=ecm
  20. By: Malesios, C; Demiris, N; Kalogeropoulos, K; Ntzoufras, I
    Abstract: Epidemic data often possess certain characteristics, such as the presence of many zeros, the spatial nature of the disease spread mechanism, environmental noise, serial correlation and dependence on time-varying factors. This paper addresses these issues via suitable Bayesian modelling. In doing so, we utilise a general class of stochastic regression models appropriate for spatio-temporal count data with an excess number of zeros. The regression framework incorporates serial correlation and time-varying covariates through an Ornstein-Uhlenbeck process formulation. In addition, we explore the effect of different priors, including default options and variations of mixtures of g-priors. The effect of different distance kernels for the epidemic model component is investigated. We proceed by developing branching process-based methods for testing scenarios for disease control, thus linking traditional epidemiological models with stochastic epidemic processes, useful in policy-focused decision making. The approach is illustrated with an application to a sheep pox dataset from the Evros region, Greece.
    Keywords: Bayesian modelling; Bayesian variable selection; branching process; epidemic extinction; g-prior; spatial kernel; disease control
    JEL: C1
    Date: 2017–06–12
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:77939&r=ecm
  21. By: XingYu Fu; JinHong Du; YiFeng Guo; MingWen Liu; Tao Dong; XiuWen Duan
    Abstract: This paper demonstrates how to apply machine learning algorithms to distinguish good stocks from bad stocks. To this end, we construct 244 technical and fundamental features to characterize each stock, and label stocks according to their ranking with respect to the return-to-volatility ratio. Algorithms ranging from traditional statistical learning methods to recently popular deep learning methods, e.g. Logistic Regression (LR), Random Forest (RF), Deep Neural Network (DNN), and Stacking, are trained to solve the classification task. A Genetic Algorithm (GA) is also used to implement feature selection. The effectiveness of the stock selection strategy is validated in the Chinese stock market in both statistical and practical respects, showing that: 1) Stacking outperforms the other models, reaching an AUC score of 0.972; 2) the Genetic Algorithm picks a subset of 114 features and the prediction performance of all models remains almost unchanged after the selection procedure, which suggests that some features are indeed redundant; 3) LR and DNN are radical models, RF is a risk-neutral model, and Stacking lies somewhere between DNN and RF; 4) the portfolios constructed by our models outperform the market average in backtests.
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1806.01743&r=ecm
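    Illustration: a toy Python version of the classification setup (simulated data and a handful of features rather than the paper's 244): label stocks by a return-to-volatility-style ranking and compare logistic regression and random forest by AUC.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(7)
      X = rng.normal(size=(2000, 10))                 # stock features
      signal = X[:, :3].sum(axis=1) + rng.normal(size=2000)
      y = (signal > np.median(signal)).astype(int)    # top half = "good"

      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
      for model in (LogisticRegression(max_iter=1000),
                    RandomForestClassifier(random_state=0)):
          p = model.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
          print(type(model).__name__, "AUC:", round(roc_auc_score(yte, p), 3))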
  22. By: Young Hoon Lee (Department of Economics, Sogang University, Seoul); Yongdai Kim (Department of Statistics, Seoul National University); Sara Kim (Department of Statistics, Seoul National University)
    Abstract: Many empirical studies on competitive balance (CB) use the ratio of the actual standard deviation to the idealized standard deviation of win percentages (RSD). This paper suggests that empirical studies that use RSD to compare CB among different leagues are invalid, but that RSD may be used for time-series analysis on CB in a league if there are no changes in season length. When schedules are unbalanced and/or include interleague games, the final winning percentage is a biased estimator of the true win probability. This paper takes a mathematical statistical approach to derive an unbiased estimator of within-season CB that can be applied to not only balanced but also unbalanced schedules. Simulations and empirical applications are also presented.
    Keywords: Competitive Balance, Unbalanced Schedule, Unbiased Estimation
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:sgo:wpaper:1801&r=ecm
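    Illustration: the RSD measure whose use the paper critiques, in a few lines of Python: the ratio of the actual standard deviation of win percentages to the idealized value 0.5/sqrt(G) for a balanced league in which every team plays G games (the win percentages below are made up).

      import numpy as np

      def rsd(win_pcts, games_per_team):
          # Actual SD of win percentages over the idealized SD 0.5/sqrt(G).
          return np.std(win_pcts) / (0.5 / np.sqrt(games_per_team))

      win_pcts = np.array([0.65, 0.58, 0.55, 0.50, 0.47, 0.43, 0.42, 0.40])
      print("RSD:", round(rsd(win_pcts, games_per_team=162), 2))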
  23. By: Giovanni Compiani (University of California, Berkeley, Haas School of Business); Philip A. Haile (Cowles Foundation, Yale University); Marcelo Sant'Anna (FGV EPGE)
    Abstract: An oil lease auction is the classic example motivating a common values model. However, formal testing for common values has been hindered by unobserved auction-level heterogeneity, which is likely to affect both participation in an auction and bidders' willingness to pay. We develop and apply an empirical approach for first-price sealed bid auctions with affiliated values, unobserved heterogeneity, and endogenous bidder entry. The approach also accommodates spatial dependence and sample selection. Following Haile, Hong and Shum (2003), we specify a reduced form for bidder entry outcomes and rely on an instrument for entry. However, we relax their control function requirements and demonstrate that our specification is generated by a fully specified game motivated by our application. We show that important features of the model are nonparametrically identified and propose a semiparametric estimation approach designed to scale well to the moderate sample sizes typically encountered in practice. Our empirical results show that common values, affiliated private information, and unobserved heterogeneity - three distinct phenomena with different implications for policy and empirical work - are all present and important in U.S. offshore oil and gas lease auctions. We find that ignoring unobserved heterogeneity in the empirical model obscures the presence of common values. We also examine the interaction between affiliation, the winner's curse, and the number of bidders in determining the aggressiveness of bidding and seller revenue.
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2137&r=ecm
  24. By: Ladislav Kristoufek
    Abstract: Analysis of long-range dependence in financial time series was one of the initial steps of econophysics into the domain of mainstream finance and financial economics in the 1990s. Since then, many different financial series have been analyzed using methods standard outside of finance to deliver some important stylized facts about financial markets. In the late 2000s, these methods began to be generalized to bivariate settings so that the relationship between two series could be examined in more detail. It was then only a single step from bivariate long-range dependence to scale-specific correlations and regressions, as well as power-law coherency as a unique relationship between power-law correlated series. Such rapid development in the field has brought issues and challenges that need further discussion and attention. We briefly review the development and the historical steps from long-range dependence to bivariate generalizations and connected methods, focus on technical aspects, and discuss problematic parts and challenges for future directions in this specific subfield of econophysics.
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1806.01616&r=ecm
  25. By: Yaroslav Mukhin
    Abstract: This paper studies the local asymptotic relationship between two scalar estimates. We define the sensitivity of a target estimate to a control estimate to be the directional derivative of the target functional with respect to the gradient direction of the control functional. Sensitivity according to the information metric on the model manifold is the asymptotic covariance of regular efficient estimators. Sensitivity according to a general policy metric on the model manifold can be obtained from influence functions of regular efficient estimators. Policy sensitivity has a local counterfactual interpretation, where the ceteris paribus change to a counterfactual distribution is specified by the combination of a control parameter and a Riemannian metric on the model manifold.
    Date: 2018–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1805.08883&r=ecm

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.