
on Econometrics 
By:  Monica Billio (Department of Economics, Ca' Foscari University of Venice); Roberto Casarin; Sylvia Kaufmann; Matteo Iacopini 
Abstract:  Multidimensional arrays (i.e. tensors) of data are becoming increasingly available and call for suitable econometric tools. We propose a new dynamic linear regression model for tensor-valued response variables and covariates that encompasses some well-known multivariate models, such as SUR, VAR, VECM, panel VAR and matrix regression models, as special cases. To deal with the over-parametrization and overfitting issues arising from the curse of dimensionality, we exploit a suitable parametrization based on the parallel factor (PARAFAC) decomposition, which achieves parameter parsimony and incorporates sparsity effects. Our contribution is twofold: first, we extend multivariate econometric models to account for both tensor-variate responses and covariates; second, we show the effectiveness of the proposed methodology in defining an autoregressive process for time-varying real economic networks. Inference is carried out in the Bayesian framework via Markov chain Monte Carlo (MCMC). We show the efficiency of the MCMC procedure on simulated datasets with different sizes of the response and independent variables, demonstrating computational efficiency even in high-dimensional parameter spaces. Finally, we apply the model to study the temporal evolution of real economic networks. 
Keywords:  Tensor calculus, tensor decomposition, Bayesian statistics, hierarchical prior, networks, autoregressive model, time series, international trade. 
JEL:  C13 C33 C51 C53 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:ven:wpaper:2018:13&r=ecm 
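The parsimony of the PARAFAC parametrization mentioned above can be sketched numerically. The dimensions, rank, and variable names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hedged sketch: reconstruct a 3-way coefficient tensor from a rank-R
# PARAFAC (CP) decomposition, G[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r].
def parafac_reconstruct(A, B, C):
    """A, B, C: factor matrices of shapes (d1, R), (d2, R), (d3, R)."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
d1, d2, d3, R = 4, 5, 6, 2
A = rng.normal(size=(d1, R))
B = rng.normal(size=(d2, R))
C = rng.normal(size=(d3, R))
G = parafac_reconstruct(A, B, C)

# Parsimony: the decomposition stores R*(d1+d2+d3) numbers
# instead of d1*d2*d3 for the unrestricted tensor.
full_params = d1 * d2 * d3        # 120
cp_params = R * (d1 + d2 + d3)    # 30
```

The parameter-count comparison is the point: the rank R caps the number of free parameters linearly in the tensor's dimensions rather than multiplicatively.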
By:  Bruce E. Hansen (Department of Economics, University of Wisconsin-Madison); Seojeong Jay Lee (School of Economics, UNSW Business School, UNSW Sydney) 
Abstract:  This paper develops a new distribution theory and inference methods for overidentified Generalized Method of Moments (GMM) estimation focusing on the iterated GMM estimator, allowing for moment misspecification, and for clustered dependence with heterogeneous and growing cluster sizes. This paper is the first to provide a rigorous theory for the iterated GMM estimator. We provide conditions for its existence by demonstrating that the iteration sequence is a contraction mapping. Our asymptotic theory allows the moments to be possibly misspecified, which is a general feature of approximate overidentified models. This form of moment misspecification causes bias in conventional standard error estimation. Our results show how to correct for this standard error bias. Our paper is also the first to provide a rigorous distribution theory for the GMM estimator under cluster dependence. Our distribution theory is asymptotic, and allows for heterogeneous and growing cluster sizes. Our results cover standard smooth moment condition models, including dynamic panels, which is a common application for GMM with cluster dependence. Our simulation results show that conventional heteroskedasticity-robust standard errors are highly biased under moment misspecification, severely understating estimation uncertainty and resulting in severely oversized hypothesis tests. In contrast, our misspecification-robust standard errors are approximately unbiased and properly sized under both correct specification and misspecification. We illustrate the method by extending the empirical work reported in Acemoglu, Johnson, Robinson, and Yared (2008, American Economic Review) and Cervellati, Jung, Sunde, and Vischer (2014, American Economic Review). Our results reveal an enormous effect of iterating the GMM estimator, demonstrating the arbitrariness of using one-step and two-step estimators. 
Our results also show a large effect of using misspecification-robust standard errors instead of the Arellano-Bond standard errors. Our results support Acemoglu, Johnson, Robinson, and Yared’s conclusion of an insignificant effect of income on democracy, but reveal that the heterogeneous effects documented by Cervellati, Jung, Sunde, and Vischer are less statistically significant than previously claimed. 
Keywords:  generalized method of moments, misspecification, clustering, robust inference, contraction mapping 
JEL:  C12 C13 C31 C33 C36 
Date:  2018–04 
URL:  http://d.repec.org/n?u=RePEc:swe:wpaper:201807&r=ecm 
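The fixed-point iteration at the heart of the estimator can be sketched for the textbook linear case: re-estimate the weight matrix at the current coefficient, re-solve, and repeat until convergence. The DGP below is invented for the illustration, and the iteration shown is the plain (unclustered, correctly specified) linear GMM, not the paper's clustered setting:

```python
import numpy as np

# Illustrative-only DGP: one regressor, two instruments, true coefficient 2.
rng = np.random.default_rng(1)
n = 500
Z = rng.normal(size=(n, 2))
x = Z @ np.array([1.0, 0.5]) + rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
X = x[:, None]

def gmm_update(theta):
    """One iteration: weight matrix estimated at the current theta, re-solve."""
    u = y - X @ theta
    g = Z * u[:, None]                 # n x m moment contributions z_i * u_i
    W = g.T @ g / n                    # weight matrix at the current theta
    Winv = np.linalg.inv(W)
    A = X.T @ Z                        # k x m
    return np.linalg.solve(A @ Winv @ A.T, A @ Winv @ (Z.T @ y))

# Iterate to (approximately) the fixed point of the update map.
theta = np.zeros(1)
for it in range(200):
    theta_new = gmm_update(theta)
    if np.max(np.abs(theta_new - theta)) < 1e-10:
        break
    theta = theta_new
theta = theta_new
```

When the update map is a contraction, as the paper establishes under its conditions, this loop converges regardless of the starting value.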
By:  Monica Billio (Department of Economics, Ca' Foscari University of Venice); Roberto Casarin; Matteo Iacopini 
Abstract:  We propose a new Bayesian Markov switching regression model for multidimensional arrays (tensors) of binary time series. We assume a zero-inflated logit dynamics with time-varying parameters and apply it to multilayer temporal networks. The original contribution is threefold. First, in order to avoid overfitting we propose a parsimonious parametrization of the model, based on a low-rank decomposition of the tensor of regression coefficients. Second, the parameters of the tensor model are driven by a hidden Markov chain, thus allowing for structural changes. The regimes are identified through prior constraints on the mixing probability of the zero-inflated model. Finally, we jointly model the dynamics of the network and of a set of variables of interest. We follow a Bayesian approach to inference, exploiting the Pólya-Gamma data augmentation scheme for logit models to provide an efficient Gibbs sampler for posterior approximation. We show the effectiveness of the sampler on simulated datasets of medium-to-large size; finally, we apply the methodology to a real dataset of financial networks. 
Keywords:  Tensor calculus, tensor decomposition, latent variables, Bayesian statistics, hierarchical prior, networks, zero-inflated model, time series, financial networks 
JEL:  C13 C33 C51 C53 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:ven:wpaper:2018:14&r=ecm 
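The zero-inflation mechanics the abstract refers to can be sketched in a few lines: an extra point mass at zero is mixed with an ordinary logit regime, so zeros can arise from either source. The mixing probability and linear index below are hypothetical inputs; the paper's Markov-switching regimes, tensor structure, and Pólya-Gamma sampler are not reproduced here:

```python
from math import exp

def zi_logit_probs(pi0, xb):
    """Zero-inflated logit: with probability pi0 the outcome is zero by
    construction; otherwise it follows a logit with linear index xb."""
    p_logit = 1.0 / (1.0 + exp(-xb))
    return {0: pi0 + (1.0 - pi0) * (1.0 - p_logit),
            1: (1.0 - pi0) * p_logit}

probs = zi_logit_probs(0.3, 0.8)   # illustrative parameter values
```

Setting pi0 = 0 recovers the plain logit, which is what the prior constraints on the mixing probability exploit to identify the regimes.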
By:  Victor Chernozhukov; Denis Nekipelov; Vira Semenova; Vasilis Syrgkanis 
Abstract:  We develop a theory for estimation of a high-dimensional sparse parameter $\theta$ defined as a minimizer of a population loss function $L_D(\theta,g_0)$ which, in addition to $\theta$, depends on a potentially infinite-dimensional nuisance parameter $g_0$. Our approach is based on estimating $\theta$ via an $\ell_1$-regularized minimization of a sample analog of $L_S(\theta, \hat{g})$, plugging in a first-stage estimate $\hat{g}$ computed on a hold-out sample. We define a population loss to be (Neyman) orthogonal if the gradient of the loss with respect to $\theta$ has a pathwise derivative with respect to $g$ equal to zero when evaluated at the true parameter and nuisance component. We show that orthogonality implies a second-order impact of the first-stage nuisance error on the second-stage target parameter estimate. Our approach applies to both convex and non-convex losses, albeit the latter case requires a small adaptation of our method with a preliminary estimation step for the target parameter. Our result enables oracle convergence rates for $\theta$ under assumptions on the first-stage rates, typically of the order of $n^{-1/4}$. We show how such an orthogonal loss can be constructed via a novel orthogonalization process for a general model defined by conditional moment restrictions. We apply our theory to high-dimensional versions of standard estimation problems in statistics and econometrics, such as: estimation of conditional moment models with missing data, estimation of structural utilities in games of incomplete information, and estimation of treatment effects in regression models with nonlinear link functions. 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1806.04823&r=ecm 
By:  Yuya Sasaki; Takuya Ura 
Abstract:  This paper presents a practical method of estimation and inference for policy-relevant treatment effects. The policy-relevant treatment effects may not be estimable at the regular parametric $\sqrt{n}$ rate in general (cf. Carneiro, Heckman, and Vytlacil, 2010). In this light, we propose a regularized estimator. The regularization improves the convergence rate when the rate is slower than $\sqrt{n}$. We also develop a method of valid inference based on the regularized estimator. Our proposed method is fully data-driven and yields point estimates with smaller standard errors. Simulation studies based on the data generating process of Heckman and Vytlacil (2005, p. 683) demonstrate that our regularized estimator achieves root mean squared errors 10 to 300 times smaller than the estimator without regularization. At the same time, our confidence intervals achieve the desired coverage probability in the simulations. 
Date:  2018–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1805.11503&r=ecm 
By:  Benjamin Williams (The George Washington University) 
Abstract:  This paper provides several new results on identification of the linear factor model. The model allows for correlated latent factors and dependence among the idiosyncratic errors. I also illustrate identification under a dedicated measurement structure and other reduced rank restrictions. I use these results to study identification in a model with both observed covariates and latent factors. The analysis emphasizes the different roles played by restrictions on the error covariance matrix, restrictions on the factor loadings and the factor covariance matrix, and restrictions on the coefficients on covariates. The identification results are simple, intuitive, and directly applicable to many settings. 
Keywords:  Latent variables, factor analysis 
JEL:  C38 C31 C36 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:gwc:wpaper:2018002&r=ecm 
By:  Andreas Masuhr 
Abstract:  This paper proposes two (Metropolis-Hastings) algorithms to estimate Generalized Partition of Unity Copulas (GPUC), a new class of nonparametric copulas that includes the versatile Bernstein copula as a special case. Additionally, a prior distribution for the parameter matrix of GPUCs is established via importance sampling, and an algorithm to sample such matrices is introduced. Finally, simulation studies show the effectiveness of the presented algorithms. 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:cqe:wpaper:7318&r=ecm 
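The "partition of unity" in the family's name can be seen directly in the Bernstein polynomial basis that underlies the Bernstein copula special case: the basis functions are nonnegative and sum to one at every point. A minimal sketch, with the degree chosen arbitrarily:

```python
from math import comb

def bernstein_basis(m, u):
    """Degree-m Bernstein polynomials B_{j,m}(u) = C(m,j) u^j (1-u)^(m-j),
    evaluated at u in [0, 1]."""
    return [comb(m, j) * u**j * (1 - u)**(m - j) for j in range(m + 1)]

# Partition of unity: the m+1 basis functions sum to one at any u.
vals = bernstein_basis(5, 0.37)
```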
By:  Alexandre Belloni; Victor Chernozhukov; Denis Chetverikov; Christian Hansen; Kengo Kato 
Abstract:  This chapter presents key concepts and theoretical results for analyzing estimation and inference in high-dimensional models. High-dimensional models are characterized by having a number of unknown parameters that is not vanishingly small relative to the sample size. We first present results in a framework where estimators of parameters of interest may be represented directly as approximate means. Within this context, we review fundamental results including high-dimensional central limit theorems, bootstrap approximation of high-dimensional limit distributions, and moderate deviation theory. We also review key concepts underlying inference when many parameters are of interest, such as multiple testing with familywise error rate or false discovery rate control. We then turn to a general high-dimensional minimum distance framework with a special focus on generalized method of moments problems, where we present results for estimation and inference about model parameters. The presented results cover a wide array of econometric applications, and we discuss several leading special cases including high-dimensional linear regression and linear instrumental variables models to illustrate the general results. 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1806.01888&r=ecm 
By:  Leschinski, Christian; Sibbertsen, Philipp 
Abstract:  We derive the properties of the periodogram local to the zero frequency for a large class of spurious long-memory processes. The periodogram is of crucial importance in this context, since it forms the basis for most commonly used estimation methods for the memory parameter. The class considered nests a wide range of processes such as deterministic or stochastic structural breaks and smooth trends as special cases. Several previous results on these special cases are generalized and extended. All of the spurious long-memory processes considered share the property that their impact on the periodogram at the Fourier frequencies local to the origin is different from that of true long-memory processes. Both types of processes therefore exhibit clearly distinct empirical features. 
Keywords:  Long Memory; Spurious Long Memory; Structural Change 
JEL:  C18 C32 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp632&r=ecm 
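The low-frequency behaviour at issue can be sketched numerically. For a deterministic level shift (an invented example of the structural-break processes the class nests), the periodogram ordinates nearest the origin dominate and decay roughly like 1/j^2 in the frequency index j, mimicking long memory:

```python
import numpy as np

# Sketch: periodogram of a pure level shift, 0 in the first half of the
# sample and 1 in the second. Series and sample size are illustrative.
n = 256
x = np.zeros(n)
x[n // 2:] = 1.0
dft = np.fft.fft(x - x.mean())            # demeaning only changes the j=0 bin
I = (np.abs(dft) ** 2) / (2 * np.pi * n)  # periodogram at frequencies 2*pi*j/n

# The ordinates nearest the origin dominate: the break masquerades
# as long memory when the memory parameter is estimated from them.
```

For this particular half-sample break, the even-indexed ordinates vanish exactly and the odd-indexed ones fall off at roughly the 1/j^2 rate.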
By:  Victor Chernozhukov; Wolfgang K. Härdle; Chen Huang; Weining Wang 
Abstract:  We consider estimation and inference in a system of high-dimensional regression equations allowing for temporal and cross-sectional dependency in covariates and error processes, covering rather general forms of weak dependence. A sequence of large-scale regressions with LASSO is applied to reduce the dimensionality, and an overall penalty level is carefully chosen by a block multiplier bootstrap procedure to account for multiplicity of the equations and dependencies in the data. Correspondingly, oracle properties with a jointly selected tuning parameter are derived. We further provide high-quality debiased simultaneous inference on the many target parameters of the system. We provide bootstrap consistency results for the test procedure, which are based on a general Bahadur representation for $Z$-estimators with dependent data. Simulations demonstrate good performance of the proposed inference procedure. Finally, we apply the method to quantify spillover effects of textual sentiment indices in a financial market and to test the connectedness among sectors. 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1806.05081&r=ecm 
By:  Matteo Iacopini (Department of Economics, Ca' Foscari University of Venice); Dominique Guégan 
Abstract:  The study of dependence between random variables is the core of theoretical and applied statistics. Static and dynamic copula models are useful for describing the dependence structure, which is fully encoded in the copula probability density function. However, these models are not always able to describe the temporal change of the dependence patterns, which is a key characteristic of financial data. We propose a novel nonparametric framework for modelling a time series of copula probability density functions, which allows forecasting the entire function without the need for post-processing procedures to guarantee positivity and unit integral. We exploit a suitable isometry that allows us to transfer the analysis to a subset of the space of square-integrable functions, where we build on nonparametric functional data analysis techniques. The framework does not assume the densities to belong to any parametric family, and it can also be successfully applied to general multivariate probability density functions with bounded or unbounded support. Finally, a noteworthy field of application pertains to the study of time-varying networks represented through vine copula models. We apply the proposed methodology to estimate and forecast the time-varying dependence structure between the S&P500 and NASDAQ indices. 
Keywords:  Functional data analysis, functional PCA, functional time series, time-varying dependence, time-varying copula, clr transform, compositional data analysis 
JEL:  C13 C33 C51 C53 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:ven:wpaper:2018:15&r=ecm 
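The clr (centered log-ratio) transform in the keywords is the kind of isometry the abstract invokes: it maps a discretized density (positive, unit-sum) to an unconstrained vector and back, so forecasts made in the transformed linear space need no post-processing to be valid densities. A sketch on an invented discretized density:

```python
import numpy as np

def clr(p):
    """Centered log-ratio of a strictly positive composition p."""
    logp = np.log(p)
    return logp - logp.mean()

def clr_inverse(z):
    """Back-transform: exponentiate and renormalize to unit sum."""
    w = np.exp(z - z.max())          # subtract the max for numerical stability
    return w / w.sum()

p = np.array([0.1, 0.2, 0.3, 0.4])  # an invented density on 4 bins
z = clr(p)                          # unconstrained, zero-mean vector
p_back = clr_inverse(z)             # recovers p exactly
```

Any vector z, including a forecast produced by a linear model, maps back to a valid composition, which is the point of working in the transformed space.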
By:  Knaus, Michael C. (University of St. Gallen) 
Abstract:  This study investigates the dose-response effects of making music on youth development. Identification is based on the conditional independence assumption and estimation is implemented using a recent double machine learning estimator. The study proposes solutions to two questions of high practical relevance that arise for these new methods: (i) how to investigate the sensitivity of estimates to tuning parameter choices in the machine learning part? (ii) how to assess covariate balancing in high-dimensional settings? The results show that improvements in objectively measured cognitive skills require at least medium intensity, while improvements in school grades are already observed for low intensity of practice. 
Keywords:  double machine learning, extracurricular activities, music, cognitive and noncognitive skills, youth development 
JEL:  J24 Z11 C21 C31 
Date:  2018–05 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp11547&r=ecm 
By:  Vanessa Berenguer Rico; Ines Wilms 
Abstract:  Given the effect that outliers can have on regression and specification testing, a widely used robustification strategy among practitioners consists of (i) starting the empirical analysis with an outlier detection procedure to deselect atypical data values; and then (ii) continuing the analysis with the selected non-outlying observations. The repercussions of such a robustifying procedure on the asymptotic properties of subsequent specification tests are, however, underexplored. We study the effects of this strategy on the White test for heteroscedasticity. Using the theory of weighted and marked empirical processes of residuals, we show that the White test implemented after outlier detection and removal is asymptotically chi-square if the underlying errors are symmetric. Under asymmetric errors, the standard chi-square distribution will not always be asymptotically valid. In a simulation study, we show that, depending on the type of data contamination, the standard White test can be either severely undersized or oversized, as well as have trivial power. The statistic applied after deselecting outliers has good finite sample properties under symmetry but can suffer from size distortions under asymmetric errors. 
Keywords:  Asymptotic theory, Empirical processes, Heteroscedasticity, Marked and Weighted Empirical processes, Outlier detection, Robust Statistics, White test 
JEL:  C01 C10 
Date:  2018–06–19 
URL:  http://d.repec.org/n?u=RePEc:oxf:wpaper:853&r=ecm 
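For reference, the White statistic studied above is n times the R-squared of an auxiliary regression of squared residuals on the regressors, their squares, and cross-products. A plain-numpy sketch on invented homoskedastic data, without any outlier-removal step:

```python
import numpy as np

# Invented DGP with homoskedastic errors; all names are illustrative.
rng = np.random.default_rng(2)
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(size=n)

def ols_resid(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

X = np.column_stack([np.ones(n), x1, x2])
u2 = ols_resid(y, X) ** 2                        # squared OLS residuals

# Auxiliary regressors: levels, squares, and the cross-product.
A = np.column_stack([np.ones(n), x1, x2, x1**2, x2**2, x1 * x2])
e = ols_resid(u2, A)
r2 = 1.0 - (e @ e) / ((u2 - u2.mean()) @ (u2 - u2.mean()))
white_stat = n * r2   # asymptotically chi-square with 5 df under the null
```

The paper's question is precisely whether this chi-square reference distribution survives a preliminary outlier-removal step, which it does only under symmetric errors.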
By:  Benjamin Williams (The George Washington University) 
Abstract:  In this paper I study identification of a nonseparable model with endogeneity arising due to unobserved heterogeneity. Identification relies on the availability of binary proxies that can be used to control for the unobserved heterogeneity. I show that the model is identified in the limit as the number of proxies increases. The argument does not require an instrumental variable that is excluded from the outcome equation nor does it require the support of the unobserved heterogeneity to be finite. I then propose a nonparametric estimator that is consistent as the number of proxies increases with the sample size. I also show that, for a fixed number of proxies, nontrivial bounds on objects of interest can be obtained. Finally, I study two real data applications that illustrate computation of the bounds and estimation with a large number of items. 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:gwc:wpaper:2018003&r=ecm 
By:  Nicky L. Grant; Richard J. Smith 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:man:sespap:1802&r=ecm 
By:  Andreas Tryphonides 
Abstract:  Economists are often confronted with the choice between a completely specified, yet approximate model and an incomplete model that only imposes a set of behavioral restrictions that are believed to be true. We offer a reconciliation of these approaches and demonstrate its usefulness for estimation and economic inference. The approximate model, which can be structural or statistical, is distorted so as to satisfy the equilibrium conditions that are deemed credible. We provide the relevant asymptotic theory and supportive simulation evidence on the MSE performance in small samples. We illustrate that it is feasible to conduct counterfactual experiments without explicitly solving for the equilibrium law of motion. We apply the methodology to the model of long run risks in aggregate consumption (Bansal and Yaron, 2004), where the auxiliary model is generated using the Campbell and Shiller (1988) approximation. Using US data, we investigate the empirical importance of the neglected nonlinearity. We find that the distorted model is strongly preferred by the data and substantially improves the identification of the structural parameters. More importantly, it completely overturns key qualitative predictions of the linear model, such as the absence of endogenous time variation in risk premia and level effects, which is crucial for understanding the link between asset prices and macroeconomic risk. 
Date:  2018–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1805.10869&r=ecm 
By:  Michael Pfarrhofer; Philipp Piribauer 
Abstract:  This article introduces two absolutely continuous global-local shrinkage priors to enable stochastic variable selection in the context of high-dimensional matrix exponential spatial specifications. Existing approaches to dealing with overparameterization problems in spatial autoregressive specifications typically rely on computationally demanding Bayesian model-averaging techniques. The proposed shrinkage priors can be implemented using Markov chain Monte Carlo methods in a flexible and efficient way. A simulation study is conducted to evaluate the performance of each of the shrinkage priors. Results suggest that they perform particularly well in high-dimensional environments, especially when the number of parameters to estimate exceeds the number of observations. For an empirical illustration we use pan-European regional economic growth data. 
Date:  2018–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1805.10822&r=ecm 
By:  Max Tabord-Meehan 
Abstract:  This paper proposes a two-stage adaptive randomization procedure for randomized controlled trials. The method uses data from a first-stage pilot experiment to determine how to stratify in a second wave of the experiment, where the objective is to minimize the variance of an estimator for the average treatment effect (ATE). We consider selection from a class of stratified randomization procedures which we call stratification trees: these are procedures whose strata can be represented as decision trees, with differing treatment assignment probabilities across strata. By using the pilot to estimate a stratification tree, we simultaneously select which covariates to use for stratification, how to stratify over these covariates, as well as the assignment probabilities within these strata. Our main result shows that using this randomization procedure with an appropriate estimator results in an asymptotic variance which minimizes the variance bound for estimating the ATE, over an optimal stratification of the covariate space. Moreover, by extending techniques developed in Bugni et al. (2018), the results we present are able to accommodate a large class of assignment mechanisms within strata, including stratified block randomization. We also present extensions of the procedure to the setting of multiple treatments, and to the targeting of subgroup-specific effects. In a simulation study, we find that our method is most effective when the response model exhibits some amount of "sparsity" with respect to the covariates, but can be effective in other contexts as well, as long as the pilot sample size used to estimate the stratification tree is not prohibitively small. We conclude by applying our method to the study in Karlan and Wood (2017), where we estimate a stratification tree using the first wave of their experiment. 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1806.05127&r=ecm 
By:  Igor L. Kheifets; Pentti J. Saikkonen 
Abstract:  Smooth transition autoregressive models are widely used to capture nonlinearities in univariate and multivariate time series. The existence of a stationary solution is typically assumed, implicitly or explicitly. In this paper we describe conditions for the stationarity and ergodicity of vector STAR models. The key condition is that the joint spectral radius of certain matrices is below 1, which is not guaranteed if only the separate spectral radii are below 1. Our result allows the use of recently introduced toolboxes from computational mathematics to verify the stationarity and ergodicity of vector STAR models. 
Date:  2018–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1805.11311&r=ecm 
By:  Aronsson, Thomas (Department of Economics, Umeå University); Jenderny, Katharina (Department of Economics, Umeå University); Lanot, Gauthier (Department of Economics, Umeå University) 
Abstract:  We propose a maximum likelihood (ML) based method to improve the bunching approach of measuring the elasticity of taxable income (ETI), and derive the estimator for several model settings that are prevalent in the literature, such as perfect bunching, bunching with optimization frictions, notches, and heterogeneity in the ETI. We show that the ML estimator is more precise and likely less biased than the ad hoc bunching estimators that are typically used in the literature. In the case of optimization frictions in the form of random shocks to earnings, the ML estimation requires a prior on the average size of such shocks. The results obtained in the presence of a notch can differ substantially from those obtained using ad hoc approaches. If there is heterogeneity in the ETI, the elasticity of the individuals who bunch exceeds the average elasticity in the population. 
Keywords:  Bunching Estimators; Elasticity of Taxable Income; Income Tax 
JEL:  C51 H24 H31 
Date:  2018–06–19 
URL:  http://d.repec.org/n?u=RePEc:hhs:umnees:0956&r=ecm 
By:  David Dale (Yandex); Andrei Sirchenko (National Research University Higher School of Economics) 
Abstract:  We introduce three new Stata commands, nop, ziop2 and ziop3, for the estimation of a three-part nested ordered probit model, the two-part zero-inflated ordered probit models of Harris and Zhao (2007, Journal of Econometrics 141: 1073-1099) and Brooks, Harris and Spencer (2012, Economics Letters 117: 683-686), and a three-part zero-inflated ordered probit model for ordinal outcomes, with both exogenous and endogenous switching. The three-part models allow the probabilities of positive, neutral (zero) and negative outcomes to be generated by distinct processes. The zero-inflated models address a preponderance of zeros and allow them to emerge in different latent regimes. We provide postestimation commands to compute probabilistic predictions and various measures of their accuracy, to assess the goodness of fit, and to perform model comparison using the Vuong test (Vuong 1989, Econometrica 57: 307-333) with the corrections based on the Akaike and Schwarz information criteria. We investigate the finite-sample performance of the maximum likelihood estimators by Monte Carlo simulations, discuss the relations among the models, and illustrate the new commands with an empirical application to the U.S. federal funds rate target. 
Keywords:  ordinal outcomes, zero inflation, nested ordered probit, zero-inflated ordered probit, endogenous switching, Vuong test, nop, ziop2, ziop3, federal funds rate target. 
JEL:  Z 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:hig:wpaper:193/ec/2018&r=ecm 
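The two-part zero-inflated ordered probit choice probabilities have a simple closed form, sketched below in plain Python. The linear indices, cutpoints, and the convention that the zero category is the lowest outcome are assumptions made for the illustration, not the commands' exact parametrization:

```python
from math import erf, sqrt

def ncdf(v):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(v / sqrt(2.0)))

def ziop2_probs(xb_part, zb_out, cuts):
    """P(y = j), j = 0..len(cuts): y = 0 can arise either from
    non-participation (probit index xb_part) or from the zero category
    of the ordered-probit regime (index zb_out, sorted cutpoints cuts)."""
    p_active = ncdf(xb_part)
    cdf = [ncdf(c - zb_out) for c in cuts]
    op = [cdf[0]] + [cdf[i] - cdf[i - 1] for i in range(1, len(cuts))] \
         + [1.0 - cdf[-1]]
    probs = [p_active * q for q in op]
    probs[0] += 1.0 - p_active       # inflate the zero category
    return probs

p = ziop2_probs(0.5, 0.2, [-1.0, 0.0, 1.0])   # 4 outcome categories
```

Sending the participation index to infinity switches the inflation off and recovers the plain ordered probit, which is how the two parts nest.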
By:  Brigham R. Frandsen; Lars J. Lefgren 
Abstract:  We bound the distribution of treatment effects under plausible and testable assumptions on the joint distribution of potential outcomes, namely that potential outcomes are mutually stochastically increasing. We show how to test the empirical restrictions implied by those assumptions. The resulting bounds substantially sharpen bounds based on classical inequalities. We apply our method to estimate bounds on the distribution of effects of attending a Knowledge Is Power Program (KIPP) charter school on student academic achievement, and find that a substantial majority of students benefited from attendance in terms of math achievement, especially those who would have fared poorly in a traditional classroom. 
JEL:  C01 I21 
Date:  2018–05 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:24616&r=ecm 
By:  Golubev, Yuri; Safarian, Mher M. 
Abstract:  Let X1, X2, … be independent random variables observed sequentially, such that X1, …, Xθ−1 have a common probability density p0, while Xθ, Xθ+1, … are all distributed according to p1 ≠ p0. It is assumed that p0 and p1 are known, but the change point θ ∈ Z+ is unknown, and the goal is to construct a stopping time τ that detects the change point θ as soon as possible. Existing approaches to this problem rely essentially on some a priori information about θ. For instance, in Bayes approaches it is assumed that θ is a random variable with a known probability distribution. In methods related to hypothesis testing, this a priori information is hidden in the so-called average run length. The main goal of this paper is to construct stopping times which do not make use of a priori information about θ but have nearly Bayesian detection delays. More precisely, we propose stopping times solving approximately the following problem: Δ(θ; τ_α) → min over τ_α subject to α(θ; τ_α) ≤ α for any θ ≥ 1, where α(θ; τ) = Pθ{τ < θ} denotes the false alarm probability. 
Keywords:  stopping time, false alarm probability, average detection delay, Bayes stopping time, CUSUM method, multiple hypothesis testing 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:zbw:kitwps:116&r=ecm 
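The classical CUSUM recursion the keywords point to can be sketched for known pre- and post-change normal densities. The densities, change point, and threshold below are invented for the example; the paper's contribution is precisely to control the false alarm probability without such a priori knowledge:

```python
import numpy as np

# Invented setup: N(0,1) before the change, N(3,1) after it.
rng = np.random.default_rng(0)
theta = 50                                   # change point, unknown in practice
x = np.concatenate([rng.normal(0.0, 1.0, theta),
                    rng.normal(3.0, 1.0, 150)])

# Log likelihood ratio log(p1/p0) for N(3,1) vs N(0,1) is 3x - 4.5:
# negative drift before the change, positive drift after it.
llr = 3.0 * x - 4.5

w, tau = 0.0, None
for t, inc in enumerate(llr, start=1):
    w = max(0.0, w + inc)                    # CUSUM recursion
    if w >= 10.0:                            # illustrative stopping threshold
        tau = t
        break
```

The recursion resets to zero while the data look like p0 and climbs steadily once they look like p1, so the stopping time lands shortly after the true change point.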
By:  Anastasiou, Andreas 
Abstract:  The asymptotic normality of the Maximum Likelihood Estimator (MLE) is a long-established result. Explicit bounds on the distributional distance between the distribution of the MLE and the normal distribution have recently been obtained for the case of independent random variables. In this paper, a local dependence structure is introduced between the random variables, and we give upper bounds in the Wasserstein metric. 
Keywords:  Maximum likelihood estimator; Dependent random variables; Normal approximation; Stein’s method 
JEL:  C1 
Date:  2017–06–09 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:83635&r=ecm 
By:  Zhu, Yajing; Steele, Fiona; Moustaki, Irini 
Abstract:  The 3-step approach has recently been advocated over the simultaneous 1-step approach for modelling a distal outcome predicted by a latent categorical variable. We generalize the 3-step approach to situations where the distal outcome is predicted by multiple, and possibly associated, latent categorical variables. Although the simultaneous 1-step approach has been criticized, simulation studies have found that the performance of the two approaches is similar in most situations (Bakk & Vermunt, 2016). This is consistent with our findings for a two-latent-variable extension when all model assumptions are satisfied. Results also indicate that, under various degrees of violation of the normality and conditional independence assumptions for the distal outcome and indicators, both approaches are subject to bias, but the 3-step approach is less sensitive. The differences in estimates from the two approaches are illustrated in an analysis of the effects of various childhood socioeconomic circumstances on body mass index at age 50. 
Keywords:  latent class analysis; multiple latent variables; robustness; 3-step approach 
JEL:  C1 
Date:  2017–06–06 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:81850&r=ecm 
By:  Jackson, Emerson Abraham 
Abstract:  This research article provides a discourse on the concept of the "Critical Realism (CR)" approach, which is among many other competing postmodern philosophical concepts, for engaging in dialogical discourse on established econometric methodologies for effective policy prescription in the discipline of economic science. On the whole, there is no doubt surrounding the value of empirical endeavours in econometrics to address real-world economic problems; but equally, the heavy reliance on mathematical content as a way of justifying the discipline's scientific base seems to lose sight of the intended focus of economics when it comes to confronting real-world problems in the domain of social interaction. In this vein, the construction of mixed-methods discourses, favouring the CR philosophy, is suggested in this article as a way forward in confronting the issues raised by critics of mainstream economics and other professionals in the postmodern era. 
Keywords:  Theoretical, Methodological Intervention, Postmodern, Critical Realism, Econometrics 
JEL:  A12 B50 C18 
Date:  2018–05–18 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:86796&r=ecm 
By:  Karol Gellert (Finance Discipline Group, UTS Business School, University of Technology Sydney); Erik Schlögl (Finance Discipline Group, UTS Business School, University of Technology Sydney) 
Abstract:  This paper presents the construction of a particle filter, which incorporates elements inspired by genetic algorithms, in order to achieve accelerated adaptation of the estimated posterior distribution to changes in model parameters. Specifically, the filter is designed for the situation where subsequent data in on-line sequential filtering do not match the posterior filtered from data up to the current point in time. The examples considered encompass parameter regime shifts and stochastic volatility. The filter adapts to regime shifts extremely rapidly and delivers a clear heuristic for distinguishing between regime shifts and stochastic volatility, even though the model dynamics assumed by the filter exhibit neither of those features. 
Date:  2018–06–01 
URL:  http://d.repec.org/n?u=RePEc:uts:rpaper:392&r=ecm 
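One building block shared by particle filters and the genetic-algorithm analogy (selection proportional to fitness) is the resampling step. A systematic-resampling sketch, with the weights and the single uniform draw fixed for reproducibility; the paper's genetic operators are not reproduced here:

```python
import numpy as np

def systematic_resample(weights, u):
    """Map normalized weights to resampled particle indices using one
    uniform draw u in [0, 1): evenly spaced positions (u + i)/N are
    matched against the cumulative weight function."""
    n = len(weights)
    positions = (u + np.arange(n)) / n
    cdf = np.cumsum(weights)
    return np.searchsorted(cdf, positions)

w = np.array([0.5, 0.5, 0.0, 0.0])     # all mass on the first two particles
idx = systematic_resample(w, 0.3)      # zero-weight particles die out
```

Particles with zero weight are never selected, while high-weight particles are duplicated, which is the "survival of the fittest" analogy in a filtering context.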