on Econometrics |
By: | Kanchana Nadarajah; Gael M Martin; Donald S Poskitt |
Abstract: | We use the jackknife to bias correct the log-periodogram regression (LPR) estimator of the fractional parameter in a stationary fractionally integrated model. The weights for the jackknife estimator are chosen in such a way that bias reduction is achieved without the usual increase in asymptotic variance, with the estimator viewed as `optimal' in this sense. The theoretical results are valid under both the non-overlapping and moving-block sub-sampling schemes that can be used in the jackknife technique, and do not require the assumption of Gaussianity for the data generating process. A Monte Carlo study explores the finite sample performance of different versions of the optimal jackknife estimator under a variety of fractional data generating processes. The simulations reveal that when the weights are constructed using the true parameter values, a version of the optimal jackknife estimator almost always outperforms alternative bias-corrected estimators. A feasible version of the jackknife estimator, in which the weights are constructed using consistent estimators of the unknown parameters, whilst not dominant overall, is still the least biased estimator in some cases. |
Keywords: | Long memory, bias adjustment, cumulants, discrete Fourier transform, periodograms, log-periodogram regression |
JEL: | C18 C22 C52 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2019-7&r=all |
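To make the estimator in the abstract above concrete, the sketch below implements the standard log-periodogram (GPH) regression estimator of d and a generic equal-weight, non-overlapping sub-sample jackknife built from it. This is only a rough illustration under the assumption that the bias shrinks roughly in proportion to the inverse sample size: the paper's contribution is the derivation of optimal (non-equal) weights, which is not reproduced here, and the sub-sample bandwidth choice below is an arbitrary assumption.

```python
import numpy as np

def gph_estimate(x, m):
    """Log-periodogram (GPH) regression estimate of the fractional
    parameter d, using the first m Fourier frequencies."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)     # periodogram ordinates
    X = -np.log(4 * np.sin(lam / 2) ** 2)      # GPH regressor
    Xc = X - X.mean()
    return (Xc @ np.log(I)) / (Xc @ Xc)        # OLS slope = estimate of d

def jackknife_gph(x, m, blocks=2):
    """Equal-weight non-overlapping sub-sample jackknife: combines the
    full-sample estimate with the average of the sub-sample estimates so
    that a bias term proportional to 1/n cancels.  The paper instead
    derives optimal weights that also avoid inflating the variance."""
    x = np.asarray(x, dtype=float)
    d_full = gph_estimate(x, m)
    subs = np.array_split(np.arange(len(x)), blocks)
    # the sub-sample bandwidth below is an illustrative choice only
    d_sub = np.mean([gph_estimate(x[idx], max(4, m // blocks)) for idx in subs])
    return (blocks * d_full - d_sub) / (blocks - 1)
```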
By: | Yang, Bill Huajian |
Abstract: | Minimum cross-entropy estimation is an extension of maximum likelihood estimation for multinomial probabilities. Given a probability distribution $\{r_i\}_{i=1}^{k}$, we show in this paper that the monotonic estimates $\{p_i\}_{i=1}^{k}$ for the probability distribution by minimum cross-entropy are each given by the simple average of the given distribution values over some consecutive indexes. The results extend to monotonic estimation for multivariate outcomes by generalized cross-entropy. These estimates are the exact solution of the corresponding constrained optimization and coincide with the monotonic estimates by least squares. A non-parametric algorithm for the exact solution is proposed. The algorithm is compared to the “pool adjacent violators” algorithm for the least-squares isotonic regression problem. Applications to monotonic estimation of migration matrices and risk scales for multivariate outcomes are discussed. |
Keywords: | maximum likelihood, cross-entropy, least squares, isotonic regression, constrained optimization, multivariate risk scales |
JEL: | C13 C18 C4 C44 C5 C51 C52 C53 C54 C58 C61 C63 |
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:93400&r=all |
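The abstract above notes that the monotonic estimates reduce to simple averages over consecutive indexes and compares the proposed algorithm with the “pool adjacent violators” (PAV) algorithm in the least-squares case. As a point of reference, here is a minimal sketch of the classical PAV algorithm for isotonic least squares (not the paper's cross-entropy algorithm): violating neighbours are repeatedly pooled, so each fitted value ends up being the average of the data over a block of consecutive indexes.

```python
import numpy as np

def pava(y, w=None):
    """Pool Adjacent Violators Algorithm for isotonic (monotone
    non-decreasing) least-squares regression."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    means, weights, sizes = [], [], []       # one entry per pooled block
    for yi, wi in zip(y, w):
        means.append(yi); weights.append(wi); sizes.append(1)
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(means) > 1 and means[-2] > means[-1]:
            m2, w2, s2 = means.pop(), weights.pop(), sizes.pop()
            m1, w1, s1 = means.pop(), weights.pop(), sizes.pop()
            wt = w1 + w2
            means.append((w1 * m1 + w2 * m2) / wt)
            weights.append(wt); sizes.append(s1 + s2)
    # each fitted value is the block average of consecutive observations
    return np.repeat(means, sizes)

print(pava([0.10, 0.30, 0.20, 0.50, 0.40, 0.70]))
```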
By: | Raffaello Seri; Samuele Centorrino; Michele Bernasconi |
Abstract: | The goal of this paper is to provide some statistical tools for nonparametric estimation and inference in psychological and economic experiments. We consider a framework in which a quantity of interest depends on some primitives through an unknown function $f$. An estimator of this unknown function can be obtained from a controlled experiment in which $n$ subjects are gathered, and a vector of stimuli is administered to each subject who provides a set of $T$ responses. We propose to estimate $f$ nonparametrically using the method of sieves. We provide conditions for consistency of this estimator when either $n$ or $T$ or both diverge to infinity, and when the answers of each subject are correlated and this correlation differs across subjects. We further demonstrate that the rate of convergence depends upon the covariance structure of the error term taken across individuals. A convergence rate is also obtained for derivatives. These results allow us to derive the optimal divergence rate of the dimension of the sieve basis with both $n$ and $T$ and thus provide guidance about the optimal balance between the number of subjects and the number of questions in a laboratory experiment. We argue that in general a large value of $n$ is better than a large value of $T$. Conditions for asymptotic normality of linear and nonlinear functionals of the estimated function of interest are derived. These results are further applied to obtain the asymptotic distribution of the Wald test when the number of constraints under the null is finite and when it diverges to infinity along with other asymptotic parameters. Lastly, we investigate the properties of the previous test when the conditional covariance matrix is replaced by a consistent estimator. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.11156&r=all |
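As a loose illustration of the method of sieves used above, the sketch below fits an unknown univariate function by series least squares on a polynomial basis of dimension K. The polynomial basis and the toy data are assumptions made purely for illustration; how K should grow with n and T, and the treatment of correlated responses within and across subjects, are exactly what the paper's theory addresses and are not captured here.

```python
import numpy as np

def sieve_fit(x, y, K):
    """Series (sieve) least-squares estimate of an unknown function f,
    using the polynomial basis 1, x, ..., x^K."""
    B = np.vander(np.asarray(x, float), K + 1, increasing=True)
    coef, *_ = np.linalg.lstsq(B, np.asarray(y, float), rcond=None)
    return lambda z: np.vander(np.atleast_1d(np.asarray(z, float)),
                               K + 1, increasing=True) @ coef

# toy usage: recover f(x) = sin(x) from noisy responses
rng = np.random.default_rng(0)
x = rng.uniform(0, 3, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)
f_hat = sieve_fit(x, y, K=5)
print(f_hat([0.5, 1.5, 2.5]))
```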
By: | Hiroyuki Kasahara; Katsumi Shimotsu |
Abstract: | We study identification in nonparametric regression models with a misclassified and endogenous binary regressor when an instrument is correlated with misclassification error. We show that the regression function is nonparametrically identified if one binary instrumental variable and one binary covariate that satisfy the following conditions are present. The instrumental variable (IV) corrects endogeneity; the IV must be correlated with the unobserved true underlying binary variable, must be uncorrelated with the error term in the outcome equation, and is allowed to be correlated with the misclassification error. The covariate corrects misclassification; this variable can be one of the regressors in the outcome equation, must be correlated with the unobserved true underlying binary variable, and must be uncorrelated with the misclassification error. We also propose a mixture-based framework for modeling unobserved heterogeneous treatment effects with a misclassified and endogenous binary regressor and show that treatment effects can be identified if the true treatment effect is related to an observed regressor and another observable variable. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.11143&r=all |
By: | Bertille Antoine (Simon Fraser University); Pascal Lavergne (Toulouse School of Economics)
Abstract: | For a linear IV regression, we propose two new inference procedures on parameters of endogenous variables that are robust to any identification pattern, do not rely on a linear first-stage equation, and account for heteroskedasticity of unknown form. Building on Bierens (1982), we first propose an Integrated Conditional Moment (ICM) type statistic constructed by setting the parameters to the value under the null hypothesis. The ICM procedure tests at the same time the value of the coefficient and the specification of the model. We then adopt the conditionality principle used by Moreira (2003) to condition on a set of ICM statistics that informs on identification strength. Our two procedures uniformly control size irrespective of identification strength. They are powerful irrespective of the nonlinear form of the link between instruments and endogenous variables and are competitive with existing procedures in simulations and applications. |
Keywords: | Weak Instruments; Hypothesis Testing; Semiparametric Model |
JEL: | C13 C12 |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:sfu:sfudps:dp19-02&r=all |
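For readers unfamiliar with integrated conditional moment (ICM) statistics, the snippet below computes a textbook Bierens-style ICM quantity: residuals evaluated at the hypothesized coefficient value, paired through a Gaussian weight over instrument differences. This is only a generic illustration of the ICM idea; the paper's actual statistic, its heteroskedasticity-robust studentization, and the conditioning step that delivers uniform size control are not reproduced, and the function name and normalization are assumptions.

```python
import numpy as np

def icm_stat(y, x_endog, z_instr, beta0):
    """Generic Bierens-style ICM statistic evaluated at the hypothesized
    value beta0, with a Gaussian weight over instrument differences."""
    y = np.asarray(y, float)
    X = np.asarray(x_endog, float).reshape(len(y), -1)
    Z = np.asarray(z_instr, float).reshape(len(y), -1)
    u = y - X @ np.atleast_1d(beta0)                  # residuals under H0
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-0.5 * d2)                             # weight function
    return (u @ W @ u) / len(y)                       # one common normalization
```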
By: | Arnab Chakrabarti; Rituparna Sen |
Abstract: | The copula is a powerful tool for modelling multivariate data, and owing to its many merits copula modelling has become one of the most widely used methods for modelling financial data. We discuss the problem of modelling intraday financial data through copulas. The problem arises because intraday financial data are nonsynchronous, whereas estimating the copula requires synchronous observations. We show that this problem may lead to serious underestimation of the copula parameter. We propose a modification that yields a consistent estimator in the case of elliptical copulas, and that reduces the bias significantly in the case of general copulas. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.10182&r=all |
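The bias described above arises at the synchronisation step, before any copula is fitted. The sketch below shows the two standard ingredients only: previous-tick synchronisation of irregularly observed series onto a common grid, and a rank-based (Kendall's tau) estimate of an elliptical-copula correlation. The paper's proposed modification that removes or reduces the resulting bias is not reproduced here, and the function names are illustrative.

```python
import numpy as np
from scipy.stats import kendalltau

def previous_tick(times, values, grid):
    """Previous-tick synchronisation: for each grid time, take the last
    observation at or before it (times assumed sorted ascending)."""
    times, values = np.asarray(times), np.asarray(values)
    idx = np.searchsorted(times, grid, side='right') - 1
    return values[np.clip(idx, 0, None)]

def elliptical_copula_rho(u, v):
    """Rank-based correlation estimate for an elliptical copula via
    Kendall's tau: rho = sin(pi * tau / 2)."""
    tau, _ = kendalltau(u, v)
    return np.sin(np.pi * tau / 2)
```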
By: | Yang, Bill Huajian |
Abstract: | Given a risk outcome y over a rating system $\{R_i\}_{i=1}^{k}$ for a portfolio, we show in this paper that the maximum likelihood estimates with monotonic constraints, when y is binary (the Bernoulli likelihood) or takes values in the interval $0\le y\le 1$ (the quasi-Bernoulli likelihood), are each given by the average of the observed outcomes for some consecutive rating indexes. These estimates are on average equal to the sample average risk over the portfolio and coincide with the estimates by least squares with the same monotonic constraints. These results are the exact solution of the corresponding constrained optimization. A non-parametric algorithm for the exact solution is proposed. For the least squares estimates, this algorithm is compared with the “pool adjacent violators” algorithm for isotonic regression. The proposed approaches provide a resolution to flip-over credit risk and a tool to determine the fair risk scales over a rating system. |
Keywords: | risk scale, maximum likelihood, least squares, isotonic regression, flip-over credit risk |
JEL: | C10 C13 C14 C18 C6 C61 C63 C65 C67 C8 C80 G12 G17 G18 G3 G32 G35 |
Date: | 2019–03–18 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:93389&r=all |
By: | Junyao Chen; Tony Sit; Hoi Ying Wong |
Abstract: | Value-at-risk (VaR) has served as a standard risk measure since its introduction. In practice, the delta-normal approach is usually adopted to approximate the VaR of portfolios with option positions. Its effectiveness, however, diminishes substantially when the portfolios concerned involve a large number of derivative positions with nonlinear payoffs; the lack of closed-form pricing solutions for these potentially highly correlated, American-style derivatives further complicates the problem. This paper proposes a generic simulation-based algorithm for VaR estimation that can easily be applied to any existing procedure. Our proposal leverages cross-sectional information and applies variable selection techniques to simplify the existing simulation framework. Asymptotic properties of the new approach demonstrate faster convergence due to the additional model selection component introduced. We also present numerical results that verify the effectiveness of our approach in comparison with some existing strategies. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.09088&r=all |
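For context, the sketch below contrasts the two standard building blocks mentioned above: the delta-normal approximation and a full-revaluation Monte Carlo quantile. The paper's actual contribution, the use of cross-sectional information and variable selection inside the simulation step, is not reproduced here, and the function names and inputs are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def delta_normal_var(delta, cov, alpha=0.01, horizon=1.0):
    """Delta-normal VaR: portfolio P&L treated as linear in the risk
    factors with covariance `cov` per unit time; returns the loss that
    is exceeded with probability alpha."""
    sigma = np.sqrt(delta @ cov @ delta * horizon)
    return norm.ppf(1 - alpha) * sigma

def simulated_var(simulated_losses, alpha=0.01):
    """Full-revaluation Monte Carlo VaR: the empirical (1 - alpha)-quantile
    of simulated portfolio losses."""
    return np.quantile(simulated_losses, 1 - alpha)
```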
By: | Beyhum, Jad; Gautier, Eric |
Abstract: | This paper considers a nuclear norm penalized estimator for panel data models with interactive effects. The low-rank interactive effects can be an approximation, with the rank of the best approximation unknown and allowed to grow with the sample size. The estimator is the solution of a well-structured convex optimization problem and can be computed in polynomial time. We derive rates of convergence, study the low-rank properties of the estimator, and consider estimation of the rank and of annihilator matrices when the number of time periods grows with the sample size. Two-stage estimators can be asymptotically normal. None of the procedures requires knowledge of the variance of the errors. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:122931&r=all |
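A minimal sketch of the main computational ingredient behind nuclear norm penalized estimation is given below: singular-value soft-thresholding, the proximal operator of the nuclear norm. In the stylised problem shown (no regressors), it solves the penalized least-squares problem exactly; the paper's full estimator, its rates, and the treatment of the regressor part are not reproduced here.

```python
import numpy as np

def svt(M, lam):
    """Singular-value soft-thresholding: solves
    min_G 0.5*||M - G||_F^2 + lam*||G||_*  exactly, returning a low-rank G."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# toy usage: thresholding a pure-noise matrix yields a reduced-rank fit
Y = np.random.default_rng(0).standard_normal((50, 20))
print(np.linalg.matrix_rank(svt(Y, lam=4.0)))
```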
By: | Bonino-Gayoso, Nicolás; García-Hiernaux, Alfredo |
Abstract: | This paper tackles the mixed-frequency modeling problem from a new perspective. Instead of drawing upon the common distributed lag polynomial model, we use a transfer function representation to develop a new type of model, named TF-MIDAS. We derive the theoretical TF-MIDAS implied by high-frequency VARMA family models, as a function of the aggregation scheme (flow or stock). This exact correspondence leads to potential gains in nowcasting and forecasting performance over the current alternatives. A Monte Carlo simulation exercise confirms that TF-MIDAS beats U-MIDAS models in terms of out-of-sample nowcasting performance for several high-frequency data generating processes. |
Keywords: | Mixed-Frequency models, TF-MIDAS, U-MIDAS, Nowcasting, Forecasting |
JEL: | C18 C51 C53 |
Date: | 2019–03–30 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:93366&r=all |
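The benchmark in the abstract above, U-MIDAS, simply lets the within-period high-frequency observations enter a low-frequency regression as unrestricted regressors. The sketch below builds that design matrix; it is only the benchmark, assuming no additional lags, and the paper's TF-MIDAS transfer-function representation is not reproduced here.

```python
import numpy as np

def umidas_design(x_high, m):
    """U-MIDAS design matrix: the m high-frequency observations of each
    low-frequency period enter as unrestricted regressors (most recent
    first).  x_high is ordered oldest to newest with length T * m."""
    x = np.asarray(x_high, float)
    T = len(x) // m
    lags = x[:T * m].reshape(T, m)[:, ::-1]   # period by period, newest first
    return np.column_stack([np.ones(T), lags])

# usage with one low-frequency observation per period:
#   X = umidas_design(x_monthly, m=3)
#   beta, *_ = np.linalg.lstsq(X, y_quarterly, rcond=None)
```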
By: | Mnasri, Ayman; Nechi, Salem |
Abstract: | This paper proposes new estimation techniques for gravity models with zero trade values and heteroscedasticity. We revisit the standard PPML estimator and propose an improved version. We also propose various Heckman estimators with different distributions of the residuals, nonlinear forms of both the selection and measurement equations, and various processes for the variance. We thus add to the existing literature alternative estimation methods that account for the non-linearity of both the variance and the selection equation. Moreover, because no ready-made package is available in standard econometric software (Stata, EViews, Matlab, etc.) to estimate the above-mentioned Heckman versions, we code the estimation in Matlab using a combination of the fminsearch and fminunc functions. Using the numerical gradient matrix G, we report standard errors based on the BHHH technique. The proposed new Heckman version could be used in other applications. Our results suggest that previous empirical studies might overestimate the contribution of the GDPs of both importing and exporting countries in determining bilateral trade. |
Keywords: | Gravity model; Heteroscedasticity; Zero trade values; New Heckman; New PPML |
JEL: | C01 C10 C13 C15 C60 F10 F14 |
Date: | 2019–04–06 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:93426&r=all |
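As a point of reference for the estimators revisited above, the sketch below implements the standard PPML (Poisson pseudo-maximum-likelihood) gravity estimator by Newton iterations; zero trade values pose no difficulty because the dependent variable enters the score only linearly. The paper's improved PPML version and its Heckman-type estimators are not reproduced here.

```python
import numpy as np

def ppml(X, y, iters=50, tol=1e-10):
    """Standard Poisson pseudo-maximum-likelihood estimator.  X should
    include a constant; y may contain zeros (e.g. zero trade flows)."""
    X = np.asarray(X, float); y = np.asarray(y, float)
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)            # score of the Poisson pseudo-likelihood
        hess = X.T @ (X * mu[:, None])   # expected Hessian
        step = np.linalg.solve(hess, grad)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```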
By: | Badi Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Qu Feng (School of Social Sciences, Nanyang Technological University, Singapore); Chihwa Kao (Department of Economics, University of Connecticut)
Abstract: | This paper extends Pesaran’s (2006) common correlated effects (CCE) approach by allowing for endogenous regressors in large heterogeneous panels with unknown common structural changes in slopes and error factor structure. Since endogenous regressors and structural breaks are often encountered in empirical studies with large panels, this extension makes Pesaran’s (2006) CCE approach empirically more appealing. In addition to allowing for slope heterogeneity and cross-sectional dependence, we find that Pesaran’s CCE approach remains valid when dealing with unobservable factors in the presence of endogenous regressors and structural changes in slopes and error factor loadings. This is supported by Monte Carlo experiments. |
Keywords: | Structural Changes, Heterogeneous Panels, Common Correlated Effects, Endogeneity |
JEL: | C23 C33 |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:max:cprwps:214&r=all |
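For readers new to the CCE approach extended above, the sketch below shows the baseline Pesaran (2006) common correlated effects mean-group estimator with exogenous regressors: each unit's regression is augmented with cross-sectional averages of the dependent variable and the regressors to proxy the unobserved factors, and the unit-specific slopes are averaged. The paper's extension to endogenous regressors and structural breaks is not reproduced here.

```python
import numpy as np

def cce_mean_group(Y, X):
    """Baseline CCE mean-group estimator.  Y is (N, T); X is (N, T, k).
    Cross-sectional averages of y and x proxy the unobserved common
    factors in each unit-level regression."""
    N, T, k = X.shape
    ybar = Y.mean(axis=0)                      # (T,)
    xbar = X.mean(axis=0)                      # (T, k)
    slopes = []
    for i in range(N):
        Z = np.column_stack([X[i], np.ones(T), ybar, xbar])  # augmented regressors
        coef, *_ = np.linalg.lstsq(Z, Y[i], rcond=None)
        slopes.append(coef[:k])                # slopes on the unit's own regressors
    return np.mean(slopes, axis=0)             # mean-group estimate
```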
By: | Rogelio A. Mancisidor; Michael Kampffmeyer; Kjersti Aas; Robert Jenssen |
Abstract: | Credit scoring models based only on accepted applications may be biased, and the consequences can have a statistical and economic impact. Reject inference is the process of attempting to infer the creditworthiness status of the rejected applications. In this research, we use deep generative models to develop two new semi-supervised Bayesian models for reject inference in credit scoring, in which we model the data generating process to be dependent on a Gaussian mixture. The goal is to improve the classification accuracy in credit scoring models by adding the rejected applications. Our proposed models infer the unknown creditworthiness of the rejected applications by exact enumeration of the two possible outcomes of the loan (default or non-default). The efficient stochastic gradient optimization technique used in deep generative models makes our models suitable for large data sets. Finally, the experiments in this research show that our proposed models perform better than classical and alternative machine learning models for reject inference in credit scoring. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.11376&r=all |
By: | Matias D. Cattaneo; Michael Jansson |
Abstract: | This paper highlights a tension between semiparametric efficiency and bootstrap consistency in the context of a canonical semiparametric estimation problem. It is shown that although simple plug-in estimators suffer from bias problems preventing them from achieving semiparametric efficiency under minimal smoothness conditions, the nonparametric bootstrap automatically corrects for this bias and that, as a result, these seemingly inferior estimators achieve bootstrap consistency under minimal smoothness conditions. In contrast, "debiased" estimators that achieve semiparametric efficiency under minimal smoothness conditions do not achieve bootstrap consistency under those same conditions. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.09372&r=all |
By: | Michael P. Leung; Hyungsik Roger Moon |
Abstract: | We prove central limit theorems for models of network formation and network processes with homophilous agents. The results hold under large-network asymptotics, enabling inference in the typical setting where the sample consists of a small set of large networks. We first establish a general central limit theorem under high-level `stabilization' conditions that provide a useful formulation of weak dependence, particularly in models with strategic interactions. The result delivers a square root n rate of convergence and a closed-form expression for the asymptotic variance. Then using techniques in branching process theory, we derive primitive conditions for stabilization in the following applications: static and dynamic models of strategic network formation, network regressions, and treatment effects with network spillovers. Finally, we suggest some practical methods for inference. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.11060&r=all |
By: | Ali Habibnia (Virginia Tech); Esfandiar Maasoumi (Emory University) |
Abstract: | This paper considers improved forecasting in possibly nonlinear dynamic settings with high-dimensional predictors ("big data" environments). To overcome the curse of dimensionality and manage data and model complexity, we examine shrinkage estimation of a back-propagation algorithm for a deep neural net with skip-layer connections. We expressly include both linear and nonlinear components. This is a high-dimensional learning approach including both sparsity (L1) and smoothness (L2) penalties, allowing high-dimensionality and nonlinearity to be accommodated in one step. The approach selects significant predictors as well as the topology of the neural network. We estimate optimal values of the shrinkage hyperparameters by incorporating a gradient-based optimization technique, resulting in robust predictions with improved reproducibility, which has been an issue in some approaches. The approach is statistically interpretable and unravels some of the network structure that is commonly left to a black box. An additional advantage is that the nonlinear part tends to get pruned if the underlying process is linear. In an application to forecasting equity returns, the proposed approach captures nonlinear dynamics between equities to enhance forecast performance. It offers an appreciable improvement over current univariate and multivariate models in terms of RMSE and actual portfolio performance. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.11145&r=all |
By: | Yang, Bill Huajian |
Abstract: | Monotonic estimation of the survival probability of a loan in a risk-rated portfolio is based on the observation, arising for example from loan pricing, that a loan with a lower credit risk rating is more likely to survive than a loan with a higher credit risk rating, given the same additional risk covariates. Two probit-type discrete-time hazard rate models that generate monotonic survival probabilities are proposed in this paper. The first model calculates the discrete-time hazard rate conditional on systematic risk factors. As with the Cox proportional hazards model, this model formulates the discrete-time hazard rate with a baseline component. The baseline component can be estimated outside the model, in the absence of model covariates, using the long-run average discrete-time hazard rate. This results in a significant reduction in the number of parameters that would otherwise have to be estimated inside the model. The second model is a general form model in which loan-level factors can be included. Parameter estimation algorithms are also proposed. The models and algorithms proposed in this paper can be used for loan pricing, stress testing, expected credit loss estimation, and modeling of the probability of default term structure. |
Keywords: | loan pricing, survival probability, Cox proportional hazards model, baseline hazard rate, forward probability of default, probability of default term structure |
JEL: | C02 C13 C18 C40 C44 C51 C52 C53 C58 C61 C63 |
Date: | 2019–03–18 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:93398&r=all |
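To fix ideas for the first model described above, the snippet below computes survival probabilities from a generic probit-type discrete-time hazard with a baseline component plus a covariate index. The baseline values used are an illustrative assumption, and the paper's actual specification and estimation algorithms are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def survival_probabilities(baseline, beta, x):
    """Probit discrete-time hazard: in period t the hazard is
    Phi(baseline[t] + x @ beta); survival to the end of period t is the
    running product of (1 - hazard)."""
    hazard = norm.cdf(np.asarray(baseline, float) + float(np.asarray(x) @ beta))
    return np.cumprod(1.0 - hazard)

# illustrative baseline implying roughly a 1% hazard per period
base = norm.ppf(np.full(8, 0.01))
print(survival_probabilities(base, beta=np.array([0.3]), x=np.array([0.5])))
```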
By: | Nicolás Salamanca (Melbourne Institute: Applied Economic & Social Research, The University of Melbourne) |
Abstract: | The time-stability of preferences is a crucial and ubiquitous assumption in economics, yet to date there is no method to test its validity. Based on a model of the dynamics of individual preferences, I develop a simple method to test this assumption. Time-persistence in preferences is captured via an autoregressive parameter that accounts for observable characteristics and is unattenuated by measurement error, which forms the basis of the test. The method also estimates the variance of persistent shocks to latent preferences, which measures unobserved heterogeneity, and preference measurement error. I illustrate the use of this method by testing the stability of risk aversion and patience using micro-level data, and find that patience is time-stable but risk aversion is not. Risk aversion, however, changes very slowly over time. This method provides researchers with a simple tool to properly test the assumption of preference stability, and to measure the degree of preference change due to observable and unobservable factors. |
Keywords: | stability of preferences, risk aversion, patience, shock persistence, measurement error |
JEL: | D01 D03 C18 |
Date: | 2018–04 |
URL: | http://d.repec.org/n?u=RePEc:iae:iaewps:wp2018n04&r=all |
By: | Yusuke Narita (Massachusetts Institute of Technology) |
Abstract: | Randomized Controlled Trials (RCTs) enroll hundreds of millions of subjects and involve many human lives. To improve subjects’ welfare, I propose a design of RCTs that I call Experiment-as-Market (EXAM). EXAM produces a Pareto efficient allocation of treatment assignment probabilities, is asymptotically incentive compatible for preference elicitation, and unbiasedly estimates any causal effect estimable with standard RCTs. I quantify these properties by applying EXAM to a water cleaning experiment in Kenya (Kremer et al., 2011). In this empirical setting, compared to standard RCTs, EXAM substantially improves subjects’ predicted well-being while reaching similar treatment effect estimates with similar precision. |
Keywords: | clinical trial, social experiments, A/B test, market design, competitive equilibrium from equal income, Pareto efficiency, causal inference, development economics |
JEL: | C93 O15 |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:hka:wpaper:2019-025&r=all |