nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒11‒11
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Model Specification Test with Unlabeled Data: Approach from Covariate Shift By Masahiro Kato; Hikaru Kawarazaki
  2. Comprehensive Testing of Linearity against the Smooth Transition Autoregressive Model By Dakyung Seong; Jin Seo Cho; Timo Teräsvirta
  3. The Fourier Transform Method for Volatility Functional Inference by Asynchronous Observations By Richard Y. Chen
  4. Modelling bid-ask spread conditional distributions using hierarchical correlation reconstruction By Jarosław Duda; Robert Syrek; Henryk Gurgul
  5. BLP Estimation Using Laplace Transformation and Overlapping Simulation Draws By Hong, Han; Li, Huiyu; Li, Jessie
  6. Demand Analysis with Many Prices By Victor Chernozhukov; Jerry A. Hausman; Whitney K. Newey
  7. Regularized Quantile Regression with Interactive Fixed Effects By Junlong Feng
  8. Nonparametric Quantile Regressions for Panel Data Models with Large T By Liang Chen
  9. Towards an Experimental Framework for Assessing Meta-Analysis Methods, with a Focus on Andrews-Kasy Estimators By Sanghyun Hong; W. Robert Reed
  10. Long monthly temperature series and the Vector Seasonal Shifting Mean and Covariance Autoregressive model By Changli He; Jian Kang; Timo Teräsvirta; Shuhua Zhang
  11. Binary Conditional Forecasts By McCracken, Michael W.; McGillicuddy, Joseph; Owyang, Michael T.
  12. Residual Augmented Fourier ADF Unit Root Test By Yilanci, Veli; Aydin, Mücahit; Aydin, Mehmet
  13. A two-dimensional propensity score matching method for longitudinal quasi-experimental studies: A focus on travel behavior and the built environment By Haotian Zhong; Wei Li; Marlon G. Boarnet
  14. Assessing International Commonality in Macroeconomic Uncertainty and Its Effects By Carriero, Andrea; Clark, Todd E.; Marcellino, Massimiliano
  15. The Role of Factor Strength and Pricing Errors for Estimation and Inference in Asset Pricing Models By M. Hashem Pesaran; Ron P. Smith

  1. By: Masahiro Kato; Hikaru Kawarazaki
    Abstract: We propose a novel framework for model specification testing in regression using unlabeled test data. Statistical inference is often conducted under the assumption that the model is correctly specified, yet this assumption is difficult to verify, and existing specification tests define a correctly specified regression model as one whose error term has zero conditional mean over the training data only. Extending this conventional definition, we define a correctly specified model as one whose error term has zero conditional mean under any distribution of the explanatory variables, a natural consequence of the orthogonality of the explanatory variables and the error term. A model that fails this condition may lack robustness to distribution shift. The proposed method enables us to reject a model that is misspecified under our definition and thereby to obtain a model that predicts labels for the unlabeled test data well without losing interpretability. In experiments, we show how the proposed method works on synthetic and real-world datasets.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.00688&r=all
  2. By: Dakyung Seong (University of California); Jin Seo Cho (Yonsei University); Timo Teräsvirta (Aarhus University and CREATES)
    Abstract: This paper examines the null limit distribution of the quasi-likelihood ratio (QLR) statistic that tests the linearity condition in the smooth transition autoregressive (STAR) model. We show explicitly that the QLR test statistic converges weakly to a functional of a Gaussian stochastic process under the null of linearity by resolving the twofold identification problem, that is, the fact that Davies’s (1977, 1987) identification problem arises in two different ways under the null. We illustrate our theory using the exponential STAR and logistic STAR models and conduct Monte Carlo simulations. Finally, we test for neglected nonlinearity in German money demand, growth rates of US unemployment, and German industrial production. These empirical examples also demonstrate that the QLR test statistic complements the Lagrange multiplier linearity test of Teräsvirta (1994).
    Keywords: QLR test statistic, STAR model, linearity test, Gaussian process, null limit distribution, nonstandard testing problem
    JEL: C12 C18 C46 C52
    Date: 2019–11–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2019-17&r=all
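    Sketch (illustrative, not taken from the paper): a standard logistic STAR parameterization, in our notation, that makes the twofold identification problem explicit:
      \[
        y_t = \phi' w_t + \theta' w_t \, G(\gamma, c; s_t) + \varepsilon_t,
        \qquad
        G(\gamma, c; s_t) = \bigl(1 + \exp\{-\gamma (s_t - c)\}\bigr)^{-1},
      \]
      where w_t = (1, y_{t-1}, ..., y_{t-p})' and s_t is the transition variable. Linearity holds either when \theta = 0, leaving (\gamma, c) unidentified, or when \gamma = 0, leaving (\theta, c) unidentified, which is the twofold Davies-type problem the abstract refers to.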
  3. By: Richard Y. Chen
    Abstract: We study volatility functional inference by Fourier transforms. This spectral framework is advantageous in that it harnesses the power of harmonic analysis to handle missing data and asynchronous observations without any artificial time alignment or data imputation. Under suitable conditions, this spectral approach is consistent, and we provide limit distributions using irregular and asynchronous observations. When observations are synchronous, the Fourier transform method for volatility functionals attains both the optimal convergence rate and the efficiency bound in the sense of Le Cam and Hájek. Another finding is that asynchronicity or missing data, as a form of noise, produces "interference" in the spectrum estimation and affects the convergence rate of volatility functional estimators. This new methodology extends previous applications of volatility functionals, including principal component analysis, the generalized method of moments, and continuous-time linear regression models, to high-frequency datasets in which asynchronicity is a prevailing feature.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.02205&r=all
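    Sketch (illustrative): a Malliavin-Mancino-type Fourier construction commonly associated with this literature; the paper's exact estimator and limit theory may differ. With log prices p^i observed, possibly asynchronously, on the interval [0, 2\pi], one computes
      \[
        c_k(dp^i) = \frac{1}{2\pi} \int_0^{2\pi} e^{-\mathrm{i}kt} \, dp^i_t,
        \qquad
        c_k(\Sigma^{ij}) = \lim_{N \to \infty} \frac{2\pi}{2N+1} \sum_{|s| \le N} c_s(dp^i)\, c_{k-s}(dp^j),
      \]
      so the Fourier coefficients of the (co)volatility are obtained by Bohr convolution of the return coefficients, with each asset sampled at its own observation times and no alignment or imputation.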
  4. By: Jarosław Duda; Robert Syrek; Henryk Gurgul
    Abstract: While we would like to predict exact values, the available incomplete information is rarely sufficient; usually it allows us to predict only conditional probability distributions. This article discusses the hierarchical correlation reconstruction (HCR) methodology for such prediction, using the example of bid-ask spreads, which are usually unavailable and are predicted here from more accessible data such as closing price, volume, high/low price, and returns. In the HCR methodology we first normalize the marginal distributions to nearly uniform, as in copula theory. We then model the (joint) densities as linear combinations of orthonormal polynomials, obtaining a decomposition into (mixed) moments. Each moment of the predicted variable is then modeled (separately) as a linear combination of mixed moments of the known variables using least squares linear regression, yielding an accurate description with interpretable coefficients that describe linear relations between moments. Combining the predicted moments gives the predicted density as a polynomial, from which we can, for example, calculate the expected value, but also the variance to evaluate the uncertainty of the prediction, or use the entire distribution for more accurate further calculations or for generating random values. Ten-fold cross-validation log-likelihood tests were performed for 22 DAX companies, leading to very accurate predictions, especially when individual models are used for each company, as large differences were found between their behaviors. An additional advantage of the discussed methodology is that it is computationally inexpensive: finding and evaluating a model with hundreds of parameters and thousands of data points takes about a second on a laptop.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.02361&r=all
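    Sketch (illustrative, in Python): a minimal HCR-style moment regression along the lines described above. The rank-based normalization, the choice of three orthonormal polynomial moments per variable, and all function names are our own simplifications, not the authors' code.
      import numpy as np

      def to_quantiles(x):
          # Rank-normalize a sample to (0, 1), as in the copula-style marginal normalization.
          ranks = np.argsort(np.argsort(x))
          return (ranks + 0.5) / len(x)

      def features(u):
          # First three orthonormal Legendre-type polynomials on [0, 1]; the constant f0 = 1 is handled separately.
          u = np.asarray(u, dtype=float)
          return np.column_stack([
              np.sqrt(3.0) * (2.0 * u - 1.0),
              np.sqrt(5.0) * (6.0 * u**2 - 6.0 * u + 1.0),
              np.sqrt(7.0) * (20.0 * u**3 - 30.0 * u**2 + 12.0 * u - 1.0),
          ])

      def fit_hcr(X_known, y_target):
          # Least-squares map from mixed moments of the known variables to moments of the target.
          U = np.column_stack([to_quantiles(X_known[:, j]) for j in range(X_known.shape[1])])
          v = to_quantiles(y_target)
          A = np.column_stack([np.ones(len(v))] + [features(U[:, j]) for j in range(U.shape[1])])
          B = features(v)                              # target moments to be predicted
          coef, *_ = np.linalg.lstsq(A, B, rcond=None)
          return coef

      def predict_density(coef, x_new_quantiles, grid=np.linspace(0.01, 0.99, 99)):
          # x_new_quantiles: quantile-normalized values of the known variables for one new observation.
          # Predicted conditional density of the normalized target: rho(u) = 1 + sum_j m_j f_j(u).
          a = np.concatenate([[1.0]] + [features(q)[0] for q in np.atleast_1d(x_new_quantiles)])
          moments = a @ coef                           # predicted moments of the target variable
          return grid, 1.0 + features(grid) @ moments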
  5. By: Hong, Han (Stanford University); Li, Huiyu (Federal Reserve Bank of San Francisco); Li, Jessie (University of California, Santa Cruz)
    Abstract: We derive the asymptotic distribution of the parameters of the Berry et al. (1995, BLP) model in a many-markets setting that takes into account simulation noise under the assumption of overlapping simulation draws. We show that, as long as the number of simulation draws R and the number of markets T approach infinity, our estimator is √m consistent and asymptotically normal, where m = min(R, T). We do not impose any relationship between the rates at which R and T go to infinity, thus allowing for the case of R
    JEL: C10 C11 C13 C15
    Date: 2019–09–04
    URL: http://d.repec.org/n?u=RePEc:fip:fedfwp:2019-24&r=all
  6. By: Victor Chernozhukov; Jerry A. Hausman; Whitney K. Newey
    Abstract: From its inception, demand estimation has faced the problem of "many prices." This paper provides estimators of average demand and associated bounds on exact consumer surplus when there are many prices in cross-section or panel data. For cross-section data we provide a debiased machine learner of consumer surplus bounds that allows for general heterogeneity and solves the "zeros problem" of demand. For panel data we provide bias-corrected, ridge-regularized estimators of average coefficients and consumer surplus bounds. In scanner data we find panel elasticities that are smaller than their cross-section counterparts, and that soda price increases are regressive.
    JEL: C13 C14 C21 C23 C55 D12
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:26424&r=all
  7. By: Junlong Feng
    Abstract: I consider nuclear norm penalized quantile regression for large $N$ and large $T$ panel data models with interactive fixed effects. The estimator solves a convex minimization problem and does not require pre-estimation of the fixed effects or of their number. Uniform rates are obtained for both the regression coefficients and the common component estimators; the rate for the latter is nearly optimal. To derive the rates, I also establish new uniform bounds related to random matrices of jump processes, which may be of independent interest. Finally, I conduct Monte Carlo simulations to illustrate the estimator's finite sample performance.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.00166&r=all
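    Sketch (illustrative): the type of convex program the abstract describes, with the interactive fixed effects collected in a low-rank N x T matrix L; our notation, and the paper's normalization may differ:
      \[
        (\hat\beta, \hat L) = \arg\min_{\beta, L} \; \frac{1}{NT} \sum_{i=1}^{N} \sum_{t=1}^{T}
        \rho_\tau\bigl(Y_{it} - X_{it}'\beta - L_{it}\bigr) + \lambda \, \|L\|_* ,
        \qquad
        \rho_\tau(u) = u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),
      \]
      where \|L\|_* is the nuclear norm; the penalty keeps L low rank without pre-estimating the number of factors.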
  8. By: Liang Chen
    Abstract: This paper considers panel data models in which the conditional quantiles of the dependent variables are additively separable as unknown functions of the regressors and the individual effects. We propose two estimators of the quantile partial effects while controlling for individual heterogeneity. The first estimator is based on local linear quantile regressions, and the second is based on local linear smoothed quantile regressions, both of which are easy to compute in practice. Within the large T framework, we provide sufficient conditions under which the two estimators are shown to be asymptotically normally distributed. In particular, for the first estimator, it is shown that $N
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.01824&r=all
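    Sketch (illustrative): the local linear quantile regression building block mentioned in the abstract, written for a scalar regressor and a generic evaluation point x_0; our notation, and presumably the paper's second estimator replaces the check function with a kernel-smoothed counterpart:
      \[
        (\hat a, \hat b) = \arg\min_{a, b} \; \sum_{t=1}^{T}
        \rho_\tau\bigl(y_{it} - a - b\,(x_{it} - x_0)\bigr) \, K\!\Bigl(\frac{x_{it} - x_0}{h}\Bigr),
      \]
      where \rho_\tau is the check function, K a kernel and h a bandwidth; \hat a estimates the conditional \tau-quantile at x_0 for unit i.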
  9. By: Sanghyun Hong; W. Robert Reed (University of Canterbury)
    Abstract: The study contributes towards the development of a systematic experimental framework for evaluating meta-analysis methods. Towards that goal, we reproduce the Monte Carlo experiments from three studies: Stanley & Doucouliagos (2017); Stanley, Doucouliagos, & Ioannidis (2017); and Alinaghi & Reed (2018) – S&D, SD&I, and A&R, respectively. We demonstrate that the relative performance of estimators depends on whether the researcher is concerned with unbiasedness, mean squared error (MSE), or coverage rates. We also show how estimator performance varies systematically with the number of estimates in the meta-analyst’s sample and the degree of effect heterogeneity as measured by I². This demonstrates the possibility that researchers can select a “best estimator” based on the observable characteristics of their meta-analysis samples. We further show that the design of simulation experiments makes a difference: Different simulation designs by S&D and SD&I applied to the same “types” of meta-analysis samples select different “best” estimators. Different designs in A&R also produce different results. This highlights the need to know more about which aspects of simulation designs are important for estimator performance. Finally, our results indicate that the recent Andrews & Kasy (2019) estimators perform well in a number of research environments, frequently outperforming the popular PET-PEESE and WAAP estimators, though more research is needed.
    Keywords: Meta-analysis, Estimator performance, Publication bias, Simulation design, WAAP, PET-PEESE, Andrews-Kasy, Monte Carlo, Experiments
    JEL: B41 C15 C18
    Date: 2019–10–01
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:19/13&r=all
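    Sketch (illustrative, in Python): two of the estimators compared in this study, PET-PEESE and WAAP, coded from common descriptions in the meta-analysis literature; the 1.96 and 2.8 cutoffs and other implementation details are our assumptions and may differ from the simulations reported here.
      import numpy as np

      def wls_intercept(effects, x, weights):
          # Weighted least squares of effects on (1, x); returns the intercept and its standard error.
          X = np.column_stack([np.ones_like(x), x])
          W = np.diag(weights)
          XtWX_inv = np.linalg.inv(X.T @ W @ X)
          beta = XtWX_inv @ X.T @ W @ effects
          resid = effects - X @ beta
          sigma2 = np.sum(weights * resid**2) / (len(effects) - 2)
          return beta[0], np.sqrt(sigma2 * XtWX_inv[0, 0])

      def pet_peese(effects, ses):
          # PET: meta-regress effects on standard errors (inverse-variance weights); if the
          # corrected effect is "detected", switch to PEESE, which regresses on the variances.
          w = 1.0 / ses**2
          b0_pet, se0_pet = wls_intercept(effects, ses, w)
          if abs(b0_pet / se0_pet) > 1.96:
              b0_peese, _ = wls_intercept(effects, ses**2, w)
              return b0_peese
          return b0_pet

      def waap(effects, ses):
          # WAAP: inverse-variance weighted average of the "adequately powered" estimates,
          # i.e. those whose standard error is below |WLS average| / 2.8.
          w = 1.0 / ses**2
          wls_all = np.sum(w * effects) / np.sum(w)
          powered = ses < abs(wls_all) / 2.8
          if not powered.any():
              return wls_all
          return np.sum(w[powered] * effects[powered]) / np.sum(w[powered])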
  10. By: Changli He (Tianjin University of Finance and Economics); Jian Kang (Tianjin University of Finance and Economics); Timo Teräsvirta (Aarhus University and CREATES); Shuhua Zhang (Tianjin University of Finance and Economics)
    Abstract: We consider a vector version of the Shifting Seasonal Mean Autoregressive model. The model is used to describe the dynamic behaviour of, and contemporaneous dependence between, a number of long monthly temperature series for 20 cities in Europe, extending from the second half of the 18th century until the mid-2010s. The results indicate strong warming in the winter months, February excluded, and cooling followed by warming during the summer months. Error variances are mostly constant over time, but for many series there is a systematic decrease in April between 1820 and 1850. Error correlations are considered by selecting two small sets of series and modelling the correlations within these sets. Some correlations do change over time, but a large majority remains constant. Not surprisingly, the correlations generally decrease with the distance between cities, but geography also plays a role.
    Keywords: Changing seasonality, nonlinear model, vector smooth transition, autoregression
    JEL: C32 C52 Q54
    Date: 2019–11–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2019-18&r=all
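    Sketch (illustrative): the kind of shifting seasonal mean the abstract refers to, written for a single series in our notation; the seasonal intercepts move smoothly in rescaled time via logistic transitions:
      \[
        y_t = \sum_{s=1}^{12} \delta_s(t/T) D_{st} + \sum_{j=1}^{p} \alpha_j y_{t-j} + \varepsilon_t,
        \qquad
        \delta_s(t/T) = \delta_{s0} + \sum_{k=1}^{q_s} \delta_{sk} \bigl(1 + \exp\{-\gamma_{sk}(t/T - c_{sk})\}\bigr)^{-1},
      \]
      where D_{st} are monthly dummies, so warming or cooling in month s shows up as nonzero \delta_{sk}; the vector version stacks the series and models the error covariance structure across cities.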
  11. By: McCracken, Michael W. (Federal Reserve Bank of St. Louis); McGillicuddy, Joseph (Federal Reserve Bank of St. Louis); Owyang, Michael T. (Federal Reserve Bank of St. Louis)
    Abstract: While conditional forecasting has become prevalent both in the academic literature and in practice (e.g., bank stress testing, scenario forecasting), its applications typically focus on continuous variables. In this paper, we merge elements from the literature on the construction and implementation of conditional forecasts with the literature on forecasting binary variables. We use the Qual-VAR [Dueker (2005)], whose joint VAR-probit structure allows us to form conditional forecasts of the latent variable, which can then be used to form probabilistic forecasts of the binary variable. We apply the model to forecasting recessions in real time and investigate the role of monetary and oil shocks in the likelihood of two U.S. recessions.
    Keywords: Qual-VAR; recession; monetary policy; oil shocks
    JEL: C22 C52 C53
    Date: 2019–10–01
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2019-029&r=all
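    Sketch (illustrative): the joint VAR-probit structure of the Qual-VAR in our notation: a continuous latent business-cycle index y*_t is stacked with the observed variables x_t in a VAR, and the binary recession indicator is its sign:
      \[
        \begin{pmatrix} x_t \\ y^*_t \end{pmatrix}
        = c + \sum_{j=1}^{p} \Phi_j \begin{pmatrix} x_{t-j} \\ y^*_{t-j} \end{pmatrix} + \varepsilon_t,
        \qquad
        z_t = \mathbf{1}\{y^*_t > 0\},
      \]
      so that conditioning on a path for x_t (e.g., a monetary or oil-price scenario) implies a predictive distribution for y*_t and hence a probabilistic forecast of the recession indicator z_t.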
  12. By: Yilanci, Veli; Aydin, Mücahit; Aydin, Mehmet
    Abstract: This paper proposes a residual-based unit root test in the presence of smooth structural changes approximated by a Fourier function. While the Fourier Augmented Dickey-Fuller test introduced by Enders and Lee (2012a) allows for smooth changes of unknown form, the Residual Augmented Least Squares procedure uses additional higher-moment information found in non-normal errors. The proposed test offers a simple way to accommodate an unknown number and form of structural breaks and has good size and power properties in the case of non-normal errors.
    Keywords: Non-normal errors, Fourier Function, Unit root.
    JEL: C22 F31
    Date: 2019–11–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:96797&r=all
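    Sketch (illustrative): the single-frequency Fourier ADF regression of Enders and Lee (2012a) that the proposed test builds on, in our notation:
      \[
        \Delta y_t = \rho \, y_{t-1} + c_1 + c_2 t
        + c_3 \sin\!\Bigl(\frac{2\pi k t}{T}\Bigr) + c_4 \cos\!\Bigl(\frac{2\pi k t}{T}\Bigr)
        + \sum_{j=1}^{p} \gamma_j \Delta y_{t-j} + e_t,
      \]
      with the unit root null H_0: \rho = 0; the trigonometric terms absorb smooth breaks of unknown number and form. The residual-augmented (RALS) step then adds regressors built from higher moments of the residuals, such as \hat e_t^2 - \hat m_2 and \hat e_t^3 - \hat m_3 - 3 \hat m_2 \hat e_t, to exploit non-normality; the exact augmentation used in the paper may differ.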
  13. By: Haotian Zhong; Wei Li; Marlon G. Boarnet
    Abstract: The lack of longitudinal studies of the relationship between the built environment and travel behavior has been widely discussed in the literature. This paper discusses how standard propensity score matching estimators can be extended to enable such studies by pairing observations across two dimensions: longitudinal and cross-sectional. The approach mimics randomized controlled trials (RCTs) by matching observations in both dimensions: synthetic control groups are found that are similar to the treatment group, and subjects are matched synthetically across before-treatment and after-treatment time periods. We call this two-dimensional propensity score matching (2DPSM). Monte Carlo evidence demonstrates the method's superior performance for estimating treatment effects. A near-term opportunity for such matching is identifying the impact of transportation infrastructure on travel behavior.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.00667&r=all
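    Sketch (illustrative, in Python): one plausible reading of matching in two dimensions, pairing treated and control units on a baseline propensity score and then comparing before/after changes within matched pairs. Variable names and the nearest-neighbor rule are our own simplification, not the authors' estimator.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def two_dimensional_matched_did(X_before, treated, y_before, y_after):
          # X_before: (n, k) baseline covariates; treated: (n,) 0/1 indicator;
          # y_before, y_after: (n,) outcomes before and after treatment.
          # Cross-sectional dimension: propensity scores from baseline covariates only.
          ps = LogisticRegression(max_iter=1000).fit(X_before, treated).predict_proba(X_before)[:, 1]
          treat_idx = np.flatnonzero(treated == 1)
          ctrl_idx = np.flatnonzero(treated == 0)
          # Longitudinal dimension: within-subject before/after changes.
          change = y_after - y_before
          effects = []
          for i in treat_idx:
              j = ctrl_idx[np.argmin(np.abs(ps[ctrl_idx] - ps[i]))]  # nearest-neighbor control
              effects.append(change[i] - change[j])                   # difference-in-differences
          return float(np.mean(effects))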
  14. By: Carriero, Andrea (Queen Mary, University of London); Clark, Todd E. (Federal Reserve Bank of Cleveland); Marcellino, Massimiliano (Bocconi University, IGIER, and CEPR)
    Abstract: This paper uses a large vector autoregression to measure international macroeconomic uncertainty and its effects on major economies. We provide evidence of significant commonality in macroeconomic volatility, with one common factor driving strong comovement across economies and variables. We measure uncertainty and its effects with a large model in which the error volatilities feature a factor structure containing time-varying global components and idiosyncratic components. Global uncertainty contemporaneously affects both the levels and volatilities of the included variables. Our new estimates of international macroeconomic uncertainty indicate that surprise increases in uncertainty reduce output and stock prices, adversely affect labor market conditions, and in some economies lead to an easing of monetary policy.
    Keywords: Uncertainty; Endogeneity; Identification; Stochastic Volatility; Bayesian Methods
    JEL: C11 C32 D81 E32
    Date: 2019–09–05
    URL: http://d.repec.org/n?u=RePEc:fip:fedcwq:180301&r=all
  15. By: M. Hashem Pesaran; Ron P. Smith
    Abstract: In this paper we are concerned with the role of factor strength and pricing errors in asset pricing models, and their implications for identification and estimation of risk premia. We establish an explicit relationship between the pricing errors and the presence of weak factors that are correlated with the stochastic discount factor. We introduce a measure of factor strength and distinguish between observed and unobserved factors. We show that unobserved factors matter for pricing if they are correlated with the discount factor, and relate the strength of the weak factors to the strength (pervasiveness) of non-zero pricing errors. We then show that, even when the factor loadings are known, the risk premium of a factor can be consistently estimated only if the factor is strong and the pricing errors are weak. Similar results hold when factor loadings are estimated, irrespective of whether individual returns or portfolio returns are used. We derive distributional results for two-pass estimators of risk premia, allowing for non-zero pricing errors, and show that, for inference on risk premia, the pricing errors must be sufficiently weak. We consider both the case where n (the number of securities) is large and T (the number of time periods) is short, and the case where both n and T are large. Large n is required for consistent estimation of risk premia, whereas the choice of short T is intended to reduce the possibility of time variation in the factor loadings. We provide monthly rolling estimates of the factor strengths for the three Fama-French factors over the period 1989-2018.
    Keywords: arbitrage pricing theory, APT, factor strength, identification of risk premia, two-pass regressions, Fama-French factors
    JEL: C38 G12
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_7919&r=all
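    Sketch (illustrative): the two-pass estimator and the factor-strength notion discussed above, in our notation. The first pass runs time-series regressions security by security, and the second pass regresses average returns on the estimated loadings:
      \[
        r_{it} = a_i + \beta_i' f_t + u_{it} \;\; (t = 1, \dots, T),
        \qquad
        \bar r_i = \lambda_0 + \hat\beta_i' \lambda + \eta_i \;\; (i = 1, \dots, n),
      \]
      with \lambda the vector of risk premia. A factor has strength \alpha if the number of securities with non-negligible loadings on it grows like n^\alpha, so \alpha = 1 corresponds to a strong (pervasive) factor and \alpha < 1 to a weak one; the paper ties consistent estimation of \lambda to the strength of the factor and to the analogous strength of the non-zero pricing errors.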

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.