on Econometrics |
By: | Sainan Jin (Singapore Management University); Valentina Corradi (University of Surrey); Norman Swanson (Rutgers University) |
Abstract: | Forecast accuracy is typically measured in terms of a given loss function. However, as a consequence of the use of misspecified models in multiple model comparisons, relative forecast rankings are loss function dependent. This paper addresses this issue by using a novel criterion for forecast evaluation which is based on the entire distribution of forecast errors. We introduce the concepts of general-loss (GL) forecast superiority and convex-loss (CL) forecast superiority, and we establish a mapping between GL (CL) superiority and first (second) order stochastic dominance. This allows us to develop a forecast evaluation procedure based on an out-of-sample generalization of the tests introduced by Linton, Maasoumi and Whang (2005). The asymptotic null distributions of our test statistics are nonstandard, and resampling procedures are used to obtain the critical values. Additionally, the tests are consistent and have nontrivial local power under a sequence of local alternatives. In addition to the stationary case, we outline theory extending our tests to the case of heterogeneity induced by distributional change over time. Monte Carlo simulations suggest that the tests perform reasonably well in finite samples; and an application to exchange rate data indicates that our tests can help identify superior forecasting models, regardless of loss function. |
Keywords: | convex loss function, empirical processes, forecast superiority, general loss function |
JEL: | C12 C22 |
Date: | 2015–05–13 |
URL: | http://d.repec.org/n?u=RePEc:rut:rutres:201502&r=ecm |
By: | Jin Seo Cho (School of Economics, Yonsei University); Myung-Ho Park (Korea Institute of Public Finance); Peter C. B. Phillips (Cowles Foundation, Yale University) |
Abstract: | We study Kolmogorov-Smirnov goodness of fit tests for evaluating distributional hypotheses where unknown parameters need to be fitted. Following work of Pollard (1979), our approach uses a Cramér-von Mises minimum distance estimator for parameter estimation. The asymptotic null distribution of the resulting test statistic is represented by invariance principle arguments as a functional of a Brownian bridge in a simple regression format for which asymptotic critical values are readily delivered by simulations. Asymptotic power is examined under fixed and local alternatives and finite sample performance of the test is evaluated in simulations. The test is applied to measure top income shares using Korean income tax return data over 2007 to 2012. When the data relate to the upper 0.1% or higher tail of the income distribution, the conventional assumption of a Pareto tail distribution cannot be rejected. But the Pareto tail hypothesis is rejected for the top 1.0% or 0.5% incomes at the 5% significance level. |
Keywords: | Brownian bridge, Cramér-von Mises statistic, Distribution-free asymptotics, Null distribution, Minimum distance estimator, Empirical distribution, goodness-of-fit test, Cramér-von Mises distance, Top income shares, Pareto interpolation |
JEL: | C12 C13 D31 E01 O15 |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2007&r=ecm |
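To make the mechanics concrete — this is an illustrative stdlib sketch, not the authors' implementation or data — the following fits a Pareto tail index by Cramér-von Mises minimum distance (via a coarse grid search) and then evaluates the Kolmogorov-Smirnov statistic at the fitted value. The grid range, sample size, and seed are arbitrary choices:

```python
import math, random

def pareto_cdf(x, alpha, xm):
    return 1.0 - (xm / x) ** alpha

def cvm_fit_alpha(sample, xm):
    # Cramér-von Mises minimum-distance estimate of the tail index,
    # found by a coarse grid search (a sketch, not the paper's estimator)
    xs = sorted(sample)
    n = len(xs)
    def w2(alpha):
        return 1.0 / (12 * n) + sum(
            (pareto_cdf(x, alpha, xm) - (2 * i - 1) / (2 * n)) ** 2
            for i, x in enumerate(xs, 1))
    grid = [0.5 + 0.01 * k for k in range(400)]
    return min(grid, key=w2)

def ks_stat(sample, alpha, xm):
    # Kolmogorov-Smirnov distance between empirical and fitted CDF
    xs = sorted(sample)
    n = len(xs)
    return max(max(i / n - pareto_cdf(x, alpha, xm),
                   pareto_cdf(x, alpha, xm) - (i - 1) / n)
               for i, x in enumerate(xs, 1))

random.seed(0)
xm, alpha0 = 1.0, 2.0
# simulate a Pareto(2) sample by inverse-CDF transform
data = [xm * (1.0 - random.random()) ** (-1.0 / alpha0) for _ in range(500)]
alpha_hat = cvm_fit_alpha(data, xm)
d_n = ks_stat(data, alpha_hat, xm)
```

Because the parameter is estimated, the null distribution of `d_n` is nonstandard — the paper's point — so in practice critical values would come from simulating the limiting functional, not from standard KS tables.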
By: | Lok, Thomas M.; Tabri, Rami V. |
Abstract: | This paper proposes a uniformly asymptotically valid method of testing for restricted stochastic dominance based on the bootstrap test of Linton et al. (2010). The method reformulates their bootstrap test statistics using a constrained estimator of the contact set that imposes the restrictions of the null hypothesis. As our simulation results show, this characteristic of our test makes it noticeably less conservative than the test of Linton et al. (2010) and improves its power against alternatives that have some non-violated inequalities. |
Keywords: | Empirical Likelihood, Constrained Estimation, Restricted Stochastic Dominance, Bootstrap Test |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:syd:wpaper:2015-15.&r=ecm |
By: | Nam-Hyun Kim (Department of Economics, University of Konstanz, Germany); Winfried Pohlmeier (Department of Economics, University of Konstanz, Germany; The Rimini Centre for Economic Analysis, Italy) |
Abstract: | We propose to apply ℓ2-norm regularization to address the problem of weak and/or many instruments. We observe that the presence of weak instruments, or of weak and many instruments, translates into a nearly singular problem in a control function representation. We then show that mean squared error-optimal ℓ2-norm regularization in small samples reduces the bias and variance of the regularized 2SLS estimator in the presence of weak and/or many instruments. A number of different strategies for choosing the regularization parameter are introduced and compared in a Monte Carlo study. |
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:15-22&r=ecm |
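As a rough illustration of the idea — a ridge-penalized first stage followed by ordinary 2SLS, under an assumed simple design with one endogenous regressor and ten weak-ish instruments; the penalty value, coefficients, and data-generating process are all invented for the example, not taken from the paper:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting; b is a vector
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

random.seed(1)
n, m, lam = 400, 10, 1.0
Z = [[random.gauss(0, 1) for _ in range(m)] for _ in range(n)]
x, y = [], []
for i in range(n):
    v = random.gauss(0, 1)
    u = 0.8 * v + random.gauss(0, 0.6)     # endogeneity: u correlated with v
    xi = 0.1 * sum(Z[i]) + v               # weak first-stage signal
    x.append(xi)
    y.append(1.0 * xi + u)                 # true beta = 1
# ridge first stage: pi_hat = (Z'Z + lam I)^{-1} Z'x
Zt = [list(col) for col in zip(*Z)]
ZtZ = matmul(Zt, Z)
for j in range(m):
    ZtZ[j][j] += lam
Ztx = [sum(Zt[j][i] * x[i] for i in range(n)) for j in range(m)]
pi_hat = solve(ZtZ, Ztx)
xhat = [sum(Z[i][j] * pi_hat[j] for j in range(m)) for i in range(n)]
# second stage: beta_hat = (xhat'x)^{-1} xhat'y
beta_hat = sum(a * b for a, b in zip(xhat, y)) \
         / sum(a * b for a, b in zip(xhat, x))
```

With `lam = 0` this is exactly 2SLS; the penalty stabilizes the near-singular first stage that weak/many instruments induce.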
By: | Marcel Wollschläger; Rudi Schäfer |
Abstract: | All too often, measuring statistical dependencies between financial time series is reduced to a linear correlation coefficient. However, this may not capture all facets of reality. We study empirical dependencies of daily stock returns via their pairwise copulas. In particular, we investigate to what extent the non-stationarity of financial time series affects both the estimation and the modeling of empirical copulas. We estimate empirical copulas from the non-stationary, original return time series and from stationary, locally normalized ones. We are thereby able to explore the empirical dependence structure on two different scales: a global and a local one. Additionally, the asymmetry of the empirical copulas is emphasized as a fundamental characteristic. We compare our empirical findings with a single Gaussian copula, with a correlation-weighted average of Gaussian copulas, with the K-copula, which directly addresses the non-stationarity of dependencies as a model parameter, and with the skewed Student's t-copula. The K-copula covers the empirical dependence structure on the local scale most adequately, whereas the skewed Student's t-copula best captures the asymmetry of the empirical copula on the global scale. |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1506.08054&r=ecm |
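A minimal sketch of the basic object involved: the empirical copula is obtained from rank-transformed (pseudo-)observations, and tail asymmetry can be read off by comparing lower-left and upper-right corner masses. The simulated pair, the corner level `q`, and the asymmetry measure are illustrative assumptions, not the paper's procedure:

```python
import random

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order, 1):
        r[i] = pos
    return r

def empirical_copula(x, y):
    # pseudo-observations u_i = rank_i / (n+1), and the empirical copula C
    n = len(x)
    u = [r / (n + 1) for r in ranks(x)]
    v = [r / (n + 1) for r in ranks(y)]
    def C(a, b):
        return sum(1 for ui, vi in zip(u, v) if ui <= a and vi <= b) / n
    return C, u, v

random.seed(2)
n = 2000
z = [random.gauss(0, 1) for _ in range(n)]          # common factor
x = [0.7 * zi + 0.7 * random.gauss(0, 1) for zi in z]
y = [0.7 * zi + 0.7 * random.gauss(0, 1) for zi in z]
C, u, v = empirical_copula(x, y)
q = 0.1
lower = C(q, q)                                     # lower-left corner mass
upper = sum(1 for ui, vi in zip(u, v)
            if ui > 1 - q and vi > 1 - q) / n       # upper-right corner mass
asym = lower - upper                                # crude asymmetry measure
```

For this symmetric Gaussian-factor pair `asym` hovers near zero; for empirical returns the paper's point is precisely that it need not.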
By: | Bensalma, Ahmed |
Abstract: | This paper is motivated by the following question: “If a series were best characterized by a fractional process, would a researcher be able to detect that fact by using the conventional Dickey-Fuller (1979) test?” To answer this question, in a simple framework, we propose a new fractional Dickey-Fuller (F-DF) test, different from the test of Dolado, Gonzalo and Mayoral (2002). |
Keywords: | Fractional unit root, Dickey-Fuller Test, Fractional integration parameter. |
JEL: | C1 C22 C4 C51 C58 E2 E5 |
Date: | 2015–05–27 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:65282&r=ecm |
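For context, the conventional Dickey-Fuller regression the paper starts from (not the proposed F-DF test) can be sketched in a few lines: regress the first difference on the lagged level and form the t-ratio on that coefficient. The simulated AR(1) and random-walk examples below are illustrative:

```python
import math, random

def df_tstat(y):
    # Dickey-Fuller regression without drift: dy_t = gamma * y_{t-1} + e_t
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ylag = y[:-1]
    sxx = sum(v * v for v in ylag)
    gamma = sum(a * b for a, b in zip(ylag, dy)) / sxx
    resid = [a - gamma * b for a, b in zip(dy, ylag)]
    s2 = sum(e * e for e in resid) / (len(dy) - 1)
    return gamma / math.sqrt(s2 / sxx)

random.seed(3)
n = 500
ar = [0.0]                 # stationary AR(1): null should be firmly rejected
for _ in range(n):
    ar.append(0.5 * ar[-1] + random.gauss(0, 1))
rw = [0.0]                 # random walk: t-stat sits in the DF null range
for _ in range(n):
    rw.append(rw[-1] + random.gauss(0, 1))
t_ar, t_rw = df_tstat(ar), df_tstat(rw)
```

The paper's question is what this statistic does when the truth is fractionally integrated, a case between these two extremes.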
By: | James A. Duffy (Institute for New Economic Thinking, Oxford Martin School, and Economics Department, University of Oxford) |
Abstract: | This paper presents uniform convergence rates for kernel regression estimators, in the setting of a structural nonlinear cointegrating regression model. We generalise the existing literature in three ways. First, the domain to which these rates apply is much wider than has been previously considered, and can be chosen so as to contain as large a fraction of the sample as desired in the limit. Second, our results allow the regression disturbance to be serially correlated, and cross-correlated with the regressor; previous work on this problem (of obtaining uniform rates) has been confined entirely to the setting of an exogenous regressor. Third, we permit the bandwidth to be data-dependent, requiring only that it satisfy certain weak asymptotic shrinkage conditions. Our assumptions on the regressor process are consistent with a very broad range of departures from the standard unit root autoregressive model, allowing the regressor to be fractionally integrated, and to have an infinite variance (and even infinite lower-order moments). |
Date: | 2015–05–05 |
URL: | http://d.repec.org/n?u=RePEc:nuf:econwp:1503&r=ecm |
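The estimator whose uniform rates are studied is the standard kernel (Nadaraya-Watson) regression estimator; a minimal sketch with a Gaussian kernel follows. The regression function, bandwidth, and i.i.d. design are illustrative simplifications — the paper's setting has a nonstationary (e.g. integrated) regressor:

```python
import math, random

def nw_estimate(xs, ys, x0, h):
    # Nadaraya-Watson estimator: kernel-weighted local average at x0
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

random.seed(4)
n, h = 1000, 0.2
xs = [random.uniform(-2, 2) for _ in range(n)]
ys = [math.sin(x) + random.gauss(0, 0.1) for x in xs]
m_hat = nw_estimate(xs, ys, 1.0, h)    # estimate m(1) = sin(1)
```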
By: | Jens Stange (Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany); Thorsten Dickhaus (University of Bremen, Germany); Arcadi Navarro (Universitat Pompeu Fabra and Institucio Catalana de Recerca i Estudis Avancats (ICREA) and Center for Genomic Regulation (CRG), Barcelona, Spain); Daniel Schunk (Department of Economics, Johannes Gutenberg-Universitaet Mainz, Germany) |
Abstract: | We are concerned with the problem of testing multiple hypotheses simultaneously based on the same data and controlling the family-wise error rate. The multiplicity- and dependency-adjustment method (MADAM) is proposed which transforms test statistics into multiplicity- and dependency adjusted p-values. The MADAM is closely connected with the concept of the "effective number of tests", but avoids certain inconveniences of the latter. For demonstration, we apply the MADAM to data from a genetic association study by exploiting computational methods for evaluating multivariate chi-square distribution functions. |
Keywords: | Bonferroni correction, dependency structure, effective number of tests, genetic epidemiology, multiple testing, probability approximations, Sidak correction |
Date: | 2015–06–29 |
URL: | http://d.repec.org/n?u=RePEc:jgu:wpaper:1505&r=ecm |
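The MADAM itself rests on multivariate chi-square computations, but the classical single-step baselines it is measured against are easy to state; a sketch of the Bonferroni and Šidák p-value adjustments (the raw p-values below are made up for illustration):

```python
def bonferroni(pvals):
    # Bonferroni-adjusted p-values: m * p, capped at 1
    m = len(pvals)
    return [min(1.0, m * p) for p in pvals]

def sidak(pvals):
    # Sidak-adjusted p-values: 1 - (1 - p)^m, valid under independence
    m = len(pvals)
    return [1.0 - (1.0 - p) ** m for p in pvals]

raw = [0.001, 0.02, 0.2]
adj_b = bonferroni(raw)
adj_s = sidak(raw)
```

The Šidák adjustment is never more conservative than Bonferroni; replacing the raw count `m` by an "effective number of tests" is the dependency adjustment the MADAM refines.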
By: | Guillaume Carlier (CEntre de REcherches en MAthématiques de la DEcision); Victor Chernozhukov (MIT Department of Economics); Alfred Galichon (Département d'économie) |
Abstract: | We propose a notion of conditional vector quantile function and a vector quantile regression. A conditional vector quantile function (CVQF) of a random vector Y, taking values in ℝd, given covariates Z=z, taking values in ℝk, is a map u↦QY∣Z(u,z) that is monotone, in the sense of being the gradient of a convex function, and such that, given that the vector U follows a reference non-atomic distribution FU (for instance, the uniform distribution on the unit cube in ℝd), the random vector QY∣Z(U,z) has the distribution of Y conditional on Z=z. Moreover, we have a strong representation, Y=QY∣Z(U,Z) almost surely, for some version of U. The vector quantile regression (VQR) is a linear model for the CVQF of Y given Z. Under correct specification, the notion produces a strong representation, Y=β(U)⊤f(Z), where f(Z) denotes a known set of transformations of Z, u↦β(u)⊤f(Z) is a monotone map (the gradient of a convex function), and the quantile regression coefficients u↦β(u) have an interpretation analogous to that of standard scalar quantile regression. As f(Z) becomes a richer class of transformations of Z, the model becomes nonparametric, as in series modelling. A key property of VQR is that it embeds the classical Monge-Kantorovich optimal transportation problem at its core as a special case. In the classical case, where Y is scalar, VQR reduces to a version of classical QR, and the CVQF reduces to the scalar conditional quantile function. Several applications to diverse problems, such as multiple Engel curve estimation and measurement of financial risk, are considered. |
Keywords: | Vector Quantile Regression; Vector Conditional Quantile Function; Monge-Kantorovich; Brenier |
JEL: | C14 C21 |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/6rign1j2jd9c69im80po26g4nt&r=ecm |
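In the scalar case the Monge-Kantorovich connection is easy to see directly: with squared-distance cost, the optimal assignment of a sorted uniform grid to a sample is the monotone (sorted) matching, which is exactly the empirical quantile function. A brute-force check on a tiny made-up sample:

```python
import itertools

def transport_cost(y, u, perm):
    # squared-distance cost of assigning u[i] -> y[perm[i]]
    return sum((y[perm[i]] - u[i]) ** 2 for i in range(len(u)))

u = [0.1, 0.3, 0.5, 0.7, 0.9]           # reference uniform grid
y = [2.0, -1.0, 0.5, 3.0, 1.0]          # sample of Y
best = min(itertools.permutations(range(5)),
           key=lambda p: transport_cost(y, u, p))
quantile_map = [y[i] for i in best]     # Q_Y(u) along the grid
```

The optimal `quantile_map` comes out sorted — the monotone rearrangement that the multivariate theory generalizes via gradients of convex functions.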
By: | Jaroslaw Kwapien; Pawel Oswiecimka; Stanislaw Drozdz |
Abstract: | The detrended cross-correlation coefficient $\rho_{\rm DCCA}$ has recently been proposed to quantify the strength of cross-correlations on different temporal scales in bivariate, non-stationary time series. It is based on the detrended cross-correlation and detrended fluctuation analyses (DCCA and DFA, respectively) and can be viewed as an analogue of the Pearson coefficient in the case of the fluctuation analysis. The coefficient $\rho_{\rm DCCA}$ works well in many practical situations, but by construction its applicability is limited to detecting whether two signals are generally cross-correlated, without the possibility of obtaining information on the amplitude of the fluctuations that are responsible for those cross-correlations. In order to introduce some related flexibility, here we propose an extension of $\rho_{\rm DCCA}$ that exploits the multifractal versions of DFA and DCCA: MFDFA and MFCCA, respectively. The resulting new coefficient $\rho_q$ not only quantifies the strength of correlations but also allows one to identify the range of detrended fluctuation amplitudes that are correlated in the two signals under study. We show how the coefficient $\rho_q$ works in practical situations by applying it to stochastic time series representing processes with long memory: autoregressive and multiplicative ones. Such processes are often used to model signals recorded from complex systems and complex physical phenomena like turbulence, so we are convinced that this new measure can successfully be applied in time series analysis. In particular, we present an example of such application to highly complex empirical data from financial markets. The present formulation can straightforwardly be extended to multivariate data in terms of the $q$-dependent counterpart of the correlation matrices and then to the network representation. |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1506.08692&r=ecm |
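A sketch of the base quantity, the plain $\rho_{\rm DCCA}$ (i.e. the $q=2$ case, not the new $\rho_q$): integrate each series into a profile, remove a linear trend in non-overlapping windows of length `s`, and form the ratio of the detrended covariance to the geometric mean of the detrended variances. The series length, scale, and factor structure below are illustrative:

```python
import random

def detrended_cov(x, y, s):
    # mean detrended covariance of the profiles over windows of length s
    X = [sum(x[:i + 1]) for i in range(len(x))]   # profile (cumulative sum)
    Y = [sum(y[:i + 1]) for i in range(len(y))]
    total, nwin = 0.0, 0
    for start in range(0, len(x) - s + 1, s):
        xs, ys = X[start:start + s], Y[start:start + s]
        t = list(range(s))
        tb = (s - 1) / 2
        def detrend(z):
            zb = sum(z) / s
            beta = sum((ti - tb) * (zi - zb) for ti, zi in zip(t, z)) \
                 / sum((ti - tb) ** 2 for ti in t)
            return [zi - zb - beta * (ti - tb) for ti, zi in zip(t, z)]
        dx, dy = detrend(xs), detrend(ys)
        total += sum(a * b for a, b in zip(dx, dy)) / s
        nwin += 1
    return total / nwin

def rho_dcca(x, y, s):
    fxy = detrended_cov(x, y, s)
    return fxy / (detrended_cov(x, x, s) * detrended_cov(y, y, s)) ** 0.5

random.seed(5)
n, s = 1024, 32
common = [random.gauss(0, 1) for _ in range(n)]    # shared driver
a = [c + 0.5 * random.gauss(0, 1) for c in common]
b = [c + 0.5 * random.gauss(0, 1) for c in common]
rho = rho_dcca(a, b, s)
```

The paper's $\rho_q$ replaces the $q=2$ averages with $q$-weighted ones, so that different fluctuation amplitudes can be probed separately.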
By: | Grote, Claudia; Bertram, Philip |
Abstract: | In this paper we evaluate the performance of several structural break tests under various DGPs. Concretely, we examine the size and power properties of CUSUM-based, LM and Wald volatility break tests. In a simulation study we derive the properties of the tests under shifts in the unconditional and conditional variance, as well as for smooth shifts in the volatility process. Our results indicate that Wald tests have more power to detect a change in the volatility than CUSUM and LM tests. This, however, comes with the disadvantage of being slightly oversized. We further show that with large outliers in the data the tests may exhibit non-monotonic power functions, as the long-run variance of the squared return process is no longer finite. In an empirical example we determine the number and timing of volatility breaks in four equity and three exchange rate series. We find that in some situations the outcomes of the tests may vary substantially. Further, we find fewer volatility breaks in the currency series than in the equity series. |
Keywords: | Structural Breaks, Variance Shifts, Non-Monotonic Power |
JEL: | C22 C52 C53 |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:han:dpaper:dp-558&r=ecm |
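To illustrate the CUSUM-type ingredient — a centered cumulative sum of squares in the style of Inclán-Tiao, which is one sketchable representative of the test family compared, not the authors' exact statistics — the following flags a variance shift and locates its most likely position. The simulated series and the shift size are invented for the example:

```python
import random

def variance_break_stat(x):
    # centered cumulative-sum-of-squares statistic and break-point estimate
    n = len(x)
    csum, c = [], 0.0
    for v in x:
        c += v * v
        csum.append(c)
    total = csum[-1]
    dks = [csum[k] / total - (k + 1) / n for k in range(n)]
    k_hat = max(range(n), key=lambda k: abs(dks[k]))
    return (n / 2) ** 0.5 * abs(dks[k_hat]), k_hat

random.seed(7)
stable = [random.gauss(0, 1) for _ in range(500)]
shifted = [random.gauss(0, 1) for _ in range(250)] + \
          [random.gauss(0, 2) for _ in range(250)]   # variance quadruples
stat_s, _ = variance_break_stat(stable)
stat_b, k_b = variance_break_stat(shifted)
```

Under stability the statistic stays near its Brownian-bridge-type null range; the large-outlier caveat in the abstract arises because this normalization presumes a finite long-run variance of the squares.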
By: | Khai Chiong; Alfred Galichon (Département d'économie); Matt Shum |
Abstract: | Using results from convex analysis, we characterize the identification and estimation of dynamic discrete-choice models based on the random utility framework. We show that the conditional choice probabilities and the choice specific payoffs in these models are related in the sense of conjugate duality. Based on this, we propose a new two-step estimator for these models; interestingly, the first step of our estimator involves solving a linear program which is identical to the classic assignment (two-sided matching) game of Shapley and Shubik (1971). The application of convex-analytic tools to dynamic discrete choice models, and the connection with two-sided matching models, is new in the literature. |
Keywords: | Discrete choice model; Mass Transport Approach (MTA); Conjugate duality |
Date: | 2015–05 |
URL: | http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/7svo6civd6959qvmn4965cth1d&r=ecm |
By: | Chia-Lin Chang (Department of Applied Economics, Department of Finance, National Chung Hsing University, Taiwan); Yiying Li (Department of Quantitative Finance National Tsing Hua University, Taiwan); Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute, The Netherlands, Department of Quantitative Economics, Complutense University of Madrid, and Institute of Economic Research, Kyoto University.) |
Abstract: | Energy and agricultural commodities and markets have been examined extensively, albeit separately, for a number of years. In the energy literature, the returns, volatility and volatility spillovers (namely, the delayed effect of a returns shock in one asset on the subsequent volatility or covolatility in another asset), among alternative energy commodities, such as oil, gasoline and ethanol across different markets, have been analysed using a variety of univariate and multivariate models, estimation techniques, data sets, and time frequencies. A similar comment applies to the separate theoretical and empirical analysis of a wide range of agricultural commodities and markets. Given the recent interest and emphasis in bio-fuels and green energy, especially bio-ethanol, which is derived from a range of agricultural products, it is not surprising that there is a topical and developing literature on the spillovers between energy and agricultural markets. Modelling and testing spillovers between the energy and agricultural markets has typically been based on estimating multivariate conditional volatility models, specifically the BEKK and DCC models. A serious technical deficiency is that the Quasi-Maximum Likelihood Estimates (QMLE) of a full BEKK matrix, which is typically estimated in examining volatility spillover effects, have no asymptotic properties, except by assumption, so that no statistical test of volatility spillovers is possible. Some papers in the literature have used the DCC model to test for volatility spillovers. However, it is well known in the financial econometrics literature that the DCC model has no regularity conditions, and that the QMLE of the parameters of DCC has no asymptotic properties, so that there is no valid statistical testing of volatility spillovers. 
The purpose of the paper is to evaluate the theory and practice in testing for volatility spillovers between energy and agricultural markets using the multivariate BEKK and DCC models, and to make recommendations as to how such spillovers might be tested using valid statistical techniques. Three new definitions of volatility and covolatility spillovers are given, and the different models used in empirical applications are evaluated in terms of the new definitions and statistical criteria. |
Keywords: | Energy markets, Agricultural markets, Volatility and covolatility spillovers, Univariate and multivariate conditional volatility models, BEKK, DCC, Definitions of spillovers. |
JEL: | C22 C32 C58 G32 O13 Q42 |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:ucm:doicae:1508&r=ecm |
By: | Peter C. B. Phillips (Cowles Foundation, Yale University); Sainan Jin (Singapore Management University) |
Abstract: | We analyze trend elimination methods and business cycle estimation by data filtering of the type introduced by Whittaker (1923) and popularized in economics in a particular form by Hodrick and Prescott (1980/1997; HP). A limit theory is developed for the HP filter for various classes of stochastic trend, trend break, and trend stationary data. Properties of the filtered series are shown to depend closely on the choice of the smoothing parameter (lambda). For instance, when lambda = O(n^4), where n is the sample size, and the HP filter is applied to an I(1) process, the filter does not remove the stochastic trend in the limit as n approaches infinity. Instead, the filter produces a smoothed Gaussian limit process that is differentiable to the fourth order. The residual 'cyclical' process has the random wandering non-differentiable characteristics of Brownian motion, thereby explaining the frequently observed 'spurious cycle' effect of the HP filter. On the other hand, when lambda = o(n), the filter reproduces the limit Brownian motion and eliminates the stochastic trend, giving a zero 'cyclical' process. Simulations reveal that the lambda = O(n^4) limit theory provides a good approximation to the actual HP filter for sample sizes common in practical work. When it is used as a trend removal device, the HP filter therefore typically fails to eliminate stochastic trends, contrary to what is now standard belief in applied macroeconomics. The findings are related to recent public debates about the long run effects of the global financial crisis. |
Keywords: | Detrending, Graduation, Hodrick Prescott filter, Integrated process, Limit theory, Smoothing, Trend break, Whittaker filter |
JEL: | C32 |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2005&r=ecm |
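The HP filter itself is just penalized least squares: the trend solves min over tau of sum (y_t - tau_t)^2 + lambda * sum (second differences of tau)^2, i.e. the linear system (I + lambda K) tau = y with K built from second differences. A dense stdlib sketch (real implementations exploit the banded structure); the sample size and the textbook lambda = 1600 are illustrative:

```python
def hp_filter(y, lam):
    n = len(y)
    # A = I + lam * D'D, where D takes second differences
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0
    for r in range(n - 2):
        idx, coef = (r, r + 1, r + 2), (1.0, -2.0, 1.0)
        for i, ci in zip(idx, coef):
            for j, cj in zip(idx, coef):
                A[i][j] += lam * ci * cj
    # Gauss-Jordan elimination with partial pivoting
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

y = [0.5 * t for t in range(40)]        # exact linear trend
trend = hp_filter(y, 1600.0)
cycle = [yi - ti for yi, ti in zip(y, trend)]
```

A linear trend incurs zero penalty, so the filter reproduces it exactly for any lambda; the paper's limit theory concerns how lambda must grow with n for stochastic trends, where no such exact reproduction holds.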
By: | Bessec, Marie |
Abstract: | This paper introduces a Markov-Switching model where transition probabilities depend on higher frequency indicators and their lags, through polynomial weighting schemes. The MSV-MIDAS model is estimated via maximum likelihood methods. The estimation relies on a slightly modified version of Hamilton’s recursive filter. We use Monte Carlo simulations to assess the robustness of the estimation procedure and related test statistics. The results show that ML provides accurate estimates, but they suggest some caution in the tests on the parameters involved in the transition probabilities. We apply this new model to the detection and forecast of business cycle turning points. We properly detect recessions in the United States and the United Kingdom by exploiting the link between GDP growth and higher frequency variables from financial and energy markets. The term spread is a particularly useful indicator for predicting recessions in the United States, while stock returns have the strongest explanatory power around British turning points. |
Keywords: | Markov-Switching; mixed frequency data; business cycles; |
JEL: | C22 E32 E37 |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:dau:papers:123456789/15246&r=ecm |
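A sketch of the underlying Hamilton recursive filter for a two-state Gaussian Markov-switching model. For brevity the transition matrix `P` is held fixed here; in the MSV-MIDAS model those probabilities would instead be functions of MIDAS-weighted high-frequency indicators. States, parameters, and the simulated regime path are all invented for the example:

```python
import math, random

def hamilton_filter(y, mu, sigma, P):
    # P[i][j] = Pr(S_t = j | S_{t-1} = i); returns filtered state probs
    xi = [0.5, 0.5]                     # initial state distribution
    out = []
    for yt in y:
        # prediction step
        pred = [sum(xi[i] * P[i][j] for i in range(2)) for j in range(2)]
        # update step with Gaussian likelihoods
        lik = [math.exp(-0.5 * ((yt - mu[j]) / sigma[j]) ** 2) / sigma[j]
               for j in range(2)]
        post = [pred[j] * lik[j] for j in range(2)]
        tot = sum(post)
        xi = [p / tot for p in post]
        out.append(xi[:])
    return out

random.seed(6)
P = [[0.95, 0.05], [0.10, 0.90]]        # sticky expansion/recession states
mu, sigma = [0.8, -0.5], [0.5, 0.5]
# simulate: 60 periods in state 0, then 40 in state 1
y = [random.gauss(mu[0], sigma[0]) for _ in range(60)] + \
    [random.gauss(mu[1], sigma[1]) for _ in range(40)]
probs = hamilton_filter(y, mu, sigma, P)
```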
By: | Hamidi, Benjamin; Hurlin, Christophe; Kouontchou, Patrick; Maillet, Bertrand |
Abstract: | This paper introduces a new class of models for the Value-at-Risk (VaR) and Expected Shortfall (ES), called the Dynamic AutoRegressive Expectiles (DARE) models. Our approach is based on a weighted average of expectile-based VaR and ES models, i.e. the Conditional Autoregressive Expectile (CARE) models introduced by Taylor (2008a) and Kuan et al. (2009). First, we briefly present the main non-parametric, parametric and semi-parametric estimation methods for VaR and ES. Secondly, we detail the DARE approach and show how expectiles can be used to estimate quantile risk measures. Thirdly, we use various backtesting procedures to compare the DARE approach to other traditional methods for computing VaR forecasts on the French stock market. Finally, we evaluate the impact of several conditional weighting functions and determine the optimal weights in order to dynamically select the most relevant global quantile model. |
Keywords: | Expected Shortfall; Value-at-Risk; Expectile; Risk Measures; Backtests; |
JEL: | C14 C15 C50 C61 G11 |
Date: | 2015 |
URL: | http://d.repec.org/n?u=RePEc:dau:papers:123456789/15232&r=ecm |
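The building block of the CARE/DARE approach is the expectile, the asymmetric-least-squares analogue of a quantile. A minimal sketch of a sample expectile via fixed-point iteration (the toy data are invented; this is not the conditional, autoregressive model of the paper):

```python
def expectile(xs, tau, tol=1e-10):
    # tau-expectile: solves sum_i w_i (x_i - e) = 0 with
    # w_i = tau if x_i > e else 1 - tau (asymmetric least squares)
    e = sum(xs) / len(xs)
    while True:
        num = den = 0.0
        for x in xs:
            w = tau if x > e else 1.0 - tau
            num += w * x
            den += w
        e_new = num / den
        if abs(e_new - e) < tol:
            return e_new
        e = e_new

data = [-2.1, -0.3, 0.4, 0.9, 1.5, 2.2, -1.0, 0.1]
e50 = expectile(data, 0.5)     # the 0.5-expectile is the mean
e05 = expectile(data, 0.05)    # deep left tail, expectile-based risk proxy
```

A low-tau expectile tracks the left tail, and the CARE literature maps such expectiles into VaR and ES estimates.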
By: | Ian Crawford; Matthew Polisson |
Abstract: | In empirical demand, industrial organization, and labor economics, prices are often unobserved or unobservable since they may only be recorded when an agent transacts. In the absence of any additional information, this partial observability of prices is known to lead to a number of identification problems. However, in this paper, we show that theory-consistent demand analysis remains feasible in the presence of partially observed prices, and hence partially observed implied budget sets, even if we are agnostic about the nature of the missing prices. Our revealed preference approach is empirically meaningful and easy to implement. We illustrate using simple examples. |
Keywords: | Demand; missing prices; partial identification; revealed preference |
JEL: | D11 D12 |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:lec:leecon:15/12&r=ecm |
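As background for the revealed preference machinery the paper extends to partially observed prices, here is a minimal full-information GARP check: build the direct revealed-preference relation, take its transitive closure, and look for cycles with a strict reversal. The two-observation data sets are contrived for illustration:

```python
def garp_violations(prices, bundles):
    # direct revealed preference: i R0 j iff p_i . x_i >= p_i . x_j
    n = len(bundles)
    dot = lambda p, x: sum(a * b for a, b in zip(p, x))
    R = [[dot(prices[i], bundles[i]) >= dot(prices[i], bundles[j])
          for j in range(n)] for i in range(n)]
    # transitive closure (Warshall)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    # GARP violation: x_i revealed preferred to x_j, yet x_i was strictly
    # cheaper than x_j at prices p_j
    return [(i, j) for i in range(n) for j in range(n)
            if R[i][j] and dot(prices[j], bundles[j]) > dot(prices[j], bundles[i])]

# consistent data: no violations expected
ok = garp_violations([(1.0, 2.0), (2.0, 1.0)], [(2.0, 1.0), (1.0, 2.0)])
# a classic two-observation GARP violation
bad = garp_violations([(1.0, 2.0), (2.0, 1.0)], [(4.0, 1.0), (5.0, 0.0)])
```

With missing prices some of the budget comparisons above become unknown, which is exactly where the paper's partial-identification approach takes over.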
By: | Violetta Dalla (National and Kapodistrian University of Athens); Liudas Giraitis (Queen Mary, London University); Peter C. B. Phillips (Cowles Foundation, Yale University) |
Abstract: | Time series models are often fitted to the data without preliminary checks for stability of the mean and variance, conditions that may not hold in many economic and financial data sets, particularly over long periods. Ignoring such shifts may result in fitting models with spurious dynamics that lead to unsupported and controversial conclusions about time dependence, causality, and the effects of unanticipated shocks. In spite of what may seem like obvious differences between a time series of independent variates with changing variance and a stationary conditionally heteroskedastic (GARCH) process, such processes may be hard to distinguish in applied work using basic time series diagnostic tools. We develop and study some practical and easily implemented statistical procedures to test the mean and variance stability of uncorrelated and serially dependent time series. Application of the new methods to analyze the volatility properties of stock market returns leads to some surprising findings concerning the advantages of modeling time-varying changes in unconditional variance. |
Keywords: | Heteroskedasticity, KPSS test, Mean stability, Variance stability, VS test |
JEL: | C22 C23 |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2006&r=ecm |
By: | Victor Chernozhukov (MIT Department of Economics); Alfred Galichon (Département d'économie); Marc Henry (Département de sciences économiques); Brendan Pass (Department of Mathematics and Statistical Sciences) |
Abstract: | This paper derives conditions under which preferences and technology are nonparametrically identified in hedonic equilibrium models, where products are differentiated along more than one dimension and agents are characterized by several dimensions of unobserved heterogeneity. With products differentiated along a quality index and agents characterized by scalar unobserved heterogeneity, single crossing conditions on preferences and technology provide identifying restrictions. We develop similar shape restrictions in the multi-attribute case and we provide identification results from the observation of a single market. We thereby extend identification results in Matzkin (2003) and Heckman, Matzkin, and Nesheim (2010) to accommodate multiple dimensions of unobserved heterogeneity. |
Keywords: | Hedonic Equilibrium; Nonparametric Identification; Multidimensional Unobserved Heterogeneity; Optimal Transport |
JEL: | C14 C61 C78 |
Date: | 2014–12 |
URL: | http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/4kovgv3hs883bok2tvdkibejb6&r=ecm |
By: | Peter C. B. Phillips (Cowles Foundation, Yale University) |
Abstract: | Financial theory and econometric methodology both struggle in formulating models that are logically sound in reconciling short run martingale behavior for financial assets with predictable long run behavior, leaving much of the research to be empirically driven. The present paper overviews recent contributions to this subject, focusing on the main pitfalls in conducting predictive regression and on some of the possibilities offered by modern econometric methods. The latter options include indirect inference and techniques of endogenous instrumentation that use convenient temporal transforms of persistent regressors. Some additional suggestions are made for bias elimination, quantile crossing amelioration, and control of predictive model misspecification. |
Keywords: | Bias, Endogenous instrumentation, Indirect inference, IVX estimation, Local unit roots, Mild integration, Prediction, Quantile crossing, Unit roots, Zero coverage probability |
JEL: | C22 C23 |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2003&r=ecm |
By: | Peter C. B. Phillips (Cowles Foundation, Yale University) |
Abstract: | This paper provides a tribute to Edmond Malinvaud's contributions to econometrics. We overview the primary original contributions in Edmond Malinvaud's masterful work The Statistical Methods of Econometrics. This advanced text developed a complete treatment of linear estimation theory using geometric methods and, for the first time, provided rigorous nonlinear regression asymptotics, which it used as the basis for developing the limit theory of simultaneous equations estimation. Malinvaud's treatise remained the most complete textbook study of econometric methods for several decades. |
Keywords: | Edmond Malinvaud, Statistical Methods of Econometrics, Linear Estimation, Nonlinear regression |
JEL: | A19 C10 |
Date: | 2015–06 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2002&r=ecm |