
on Econometrics 
By:  Bartolucci, Francesco; Pennoni, Fulvia; Vittadini, Giorgio 
Abstract:  We extend to the longitudinal setting a latent class approach that was recently introduced by Lanza et al. (2013) to estimate the causal effect of a treatment. The proposed approach permits the evaluation of the effect of multiple treatments on subpopulations of individuals from a dynamic perspective, as it relies on a Latent Markov (LM) model that is estimated taking into account propensity score weights based on individual pretreatment covariates. These weights enter the expression of the likelihood function of the LM model and allow us to balance the groups receiving different treatments. This likelihood function is maximized through a modified version of the traditional expectation-maximization algorithm, while standard errors for the parameter estimates are obtained by a nonparametric bootstrap method. We study in detail the asymptotic properties of the causal effect estimator based on the maximization of this likelihood function, and we illustrate its finite sample properties through a series of simulations showing that the estimator has the expected behavior. As an illustration, we consider an application aimed at assessing the relative effectiveness of certain degree programs on the basis of three ordinal response variables, where the work path of a graduate is considered as the manifestation of his/her human capital level across time. 
Keywords:  Causal inference, Expectation-Maximization algorithm, Hidden Markov models, Multiple treatments, Policy evaluation, Propensity score. 
JEL:  C1 C52 C53 C54 I23 J44 
Date:  2015–08 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:66492&r=all 
By:  Ulrich K. Müller; Mark W. Watson 
Abstract:  Many questions in economics involve long-run or trend variation and covariation in time series. Yet, time series of typical lengths contain only limited information about this long-run variation. This paper suggests that long-run sample information can be isolated using a small number of low-frequency trigonometric weighted averages, which in turn can be used to conduct inference about long-run variability and covariability. Because the low-frequency weighted averages have large sample normal distributions, large sample valid inference can often be conducted using familiar small sample normal inference procedures. Moreover, the general approach is applicable to a wide range of persistent stochastic processes that go beyond the familiar I(0) and I(1) models. 
JEL:  C12 C22 C32 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:21564&r=all 
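The cosine-weighted averages described in the abstract are simple to compute. The sketch below is a minimal illustration (the function name is hypothetical, and the exact normalization may differ from the paper's): the j-th weighted average applies the weight sqrt(2)*cos(j*pi*(t-1/2)/T), so the first q averages isolate variation at periods longer than roughly 2T/q.

```python
import numpy as np

def low_frequency_averages(x, q):
    """Project a series onto the first q cosine weights.

    The j-th weighted average uses the weight sqrt(2)*cos(j*pi*(t - 1/2)/T),
    so the q resulting averages capture only low-frequency (long-run)
    variation in the series.
    """
    x = np.asarray(x, dtype=float)
    T = len(x)
    t = (np.arange(1, T + 1) - 0.5) / T                       # mid-point grid on (0, 1)
    psi = np.sqrt(2.0) * np.cos(np.pi * np.outer(np.arange(1, q + 1), t))
    return psi @ x / T                                        # shape (q,)
```

Because the cosine weights each sum to zero over the sample, a constant series maps to zero: the averages respond only to trend-like movement, not to the level.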
By:  Keita, Moussa 
Abstract:  This manuscript provides an introduction to econometric analysis. It is organized in two parts comprising six chapters. The first part, devoted to the study of the linear model, consists of four chapters. The first presents some statistical concepts useful in econometrics, while the second and third chapters are devoted to the simple and multiple linear models. The fourth chapter focuses on the generalized linear model (used in case of violation of certain standard assumptions of the linear model). In this first part, particular emphasis is put on estimation methods such as ordinary least squares and maximum likelihood. A broad discussion is also conducted on inference techniques and approaches to hypothesis testing. The second part of the work is devoted to qualitative dependent variable models. Here, two classes of models are considered: dichotomous dependent variable models (standard probit and logit models) and polytomous dependent variable models (ordered and unordered multinomial probit and logit models). 
Keywords:  Econometrics, Linear model, qualitative variable, logit, probit, OLS, Maximum Likelihood 
JEL:  C1 C2 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:66840&r=all 
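The two estimation methods the manuscript emphasizes coincide for the linear model with normal errors: the maximum likelihood estimator of the slope equals the OLS estimator. A minimal numerical sketch of the closed form (function name hypothetical):

```python
import numpy as np

def ols(y, X):
    """Ordinary least squares: beta_hat solves min ||y - X*beta||^2.

    Under normal errors, the ML estimator of beta coincides with OLS,
    which is why the two methods are treated together in the text.
    """
    X = np.column_stack([np.ones(len(X)), X])          # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # (X'X)^{-1} X'y
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])     # unbiased error variance
    return beta, sigma2
```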
By:  E. Otranto 
Abstract:  Very often time series are subject to abrupt changes in level, which are generally represented by Markov Switching (MS) models under the hypothesis that the level is constant within a certain state (regime). This is not a realistic framework, because within the same regime the level could change with minor jumps relative to a change of state; this is a typical situation in many economic time series, such as Gross Domestic Product or the volatility of financial markets. We propose to make the state flexible by introducing a very general model in which the level of the time series oscillates within each state of the MS model; these movements are driven by a forcing variable. The flexibility of the model allows extreme jumps to be accommodated in a parsimonious way (even in the simplest 2-state case), without adopting a larger number of regimes; moreover, this model improves the interpretability and the fit of the data with respect to the analogous MS model. The approach can be applied in several fields, including with unobservable data. We show its advantages in three distinct applications, involving macroeconomic variables, volatilities of financial markets and conditional correlations. 
Keywords:  abrupt changes, goodness of fit, Hamilton filter, smoothed changes, time–varying parameters 
JEL:  C22 C32 C5 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:cns:cnscwp:201509&r=all 
By:  Marmer, Vadim; Yu, Zhengfei 
Abstract:  This paper considers efficient inference for the coefficient on the endogenous variable in linear regression models with weak instrumental variables (weak IV) and homoskedastic errors. We focus on the alternative hypothesis determined by an arbitrarily large deviation from the null hypothesis. The efficient rotation-invariant and asymptotically similar test turns out to be infeasible, as it depends on the unknown correlation between structural and first-stage errors (the degree of endogeneity). We compare the asymptotic power properties of popular weak-IV-robust tests, focusing on the Anderson-Rubin (AR) and the Conditional Likelihood Ratio (CLR) tests. We find that their relative power performance depends on the degree of endogeneity in the model and the number of IVs. Unexpectedly, the AR test outperforms the CLR test when the degree of endogeneity is small and the number of IVs is large. We also describe a test that is optimal when IVs are strong and, when IVs are weak, has the same asymptotic power as the AR test against arbitrarily large deviations from the null. 
Keywords:  weak instruments; arbitrarily large deviations; power envelope; power comparisons 
Date:  2015–09–01 
URL:  http://d.repec.org/n?u=RePEc:ubc:pmicro:vadim_marmer201517&r=all 
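The AR test discussed above has a simple closed form. As a minimal sketch (function name hypothetical, scalar endogenous regressor, homoskedastic errors): under H0: beta = beta0, the statistic compares the part of the null residual explained by the instruments to the unexplained part, and its distribution does not depend on instrument strength.

```python
import numpy as np

def anderson_rubin(y, x, Z, beta0):
    """Anderson-Rubin statistic for H0: beta = beta0 in y = x*beta + u,
    with x instrumented by the columns of Z. Under H0 with homoskedastic
    errors the statistic is approximately F(k, n-k), k = Z.shape[1],
    regardless of how weak the instruments are.
    """
    n, k = Z.shape
    e0 = y - x * beta0                                   # residual under the null
    Pe = Z @ np.linalg.solve(Z.T @ Z, Z.T @ e0)          # projection onto span(Z)
    ar = ((n - k) / k) * (Pe @ Pe) / (e0 @ e0 - Pe @ Pe)
    return ar, k
```

When the null residual is exactly orthogonal to the instruments, the statistic is zero; large values indicate that beta0 is incompatible with the data.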
By:  Yoonseok Lee (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Yu Zhou (School of Economics, Fudan University, 600 Guoquan Road, Shanghai 200433) 
Abstract:  We develop averaged instrumental variables estimators as a way to deal with many weak instruments. We propose a weighted average of preliminary k-class estimators, where each estimator is obtained using a different subset of the available instrumental variables. The averaged estimators are shown to be consistent and asymptotically normal. Furthermore, their approximate mean squared error reveals that using a small number of instruments for each preliminary k-class estimator reduces the finite sample bias, while averaging prevents the variance from inflating. Monte Carlo simulations find that the averaged estimators compare favorably with alternative instrumental-variable-selection approaches when the strength levels of the individual IVs are similar to one another. 
Keywords:  Averaged estimator, many weak instruments, k-class estimator 
JEL:  C26 C36 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:180&r=all 
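A minimal sketch of the averaging idea (function name hypothetical): each preliminary estimator here is 2SLS, the k-class member with kappa = 1, computed from a different instrument subset, and equal weights stand in for the paper's MSE-optimal weights.

```python
import numpy as np

def averaged_2sls(y, x, Z, subsets, weights=None):
    """Weighted average of 2SLS estimates, each computed from a different
    subset of the columns of Z. Small subsets keep each preliminary
    estimator's many-instrument bias low; averaging keeps the variance
    from inflating. Equal weights are a simple placeholder default.
    """
    estimates = []
    for cols in subsets:
        Zs = Z[:, cols]
        xhat = Zs @ np.linalg.solve(Zs.T @ Zs, Zs.T @ x)   # first-stage fitted values
        estimates.append((xhat @ y) / (xhat @ x))          # 2SLS slope for this subset
    w = np.full(len(subsets), 1.0 / len(subsets)) if weights is None else np.asarray(weights)
    return float(w @ np.array(estimates))
```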
By:  Bernard M.S. van Praag (University of Amsterdam, the Netherlands) 
Abstract:  Mundlak (1978) proposed the addition of time averages to the usual panel equation in order to remove the fixed effects bias. We extend this Mundlak equation further by replacing the time-varying explanatory variables by the corresponding deviations from their averages over time, while keeping the time averages in the equation. It appears that regression on this extended equation provides simultaneously the within and the between estimator, while the pooled data estimator is a weighted average of the within and between estimators. In Section 3 we introduce observed and unobserved fixed effects. In Section 4 we demonstrate that in this extended setup probit estimation on panel data sets does not pose a specific problem; the usual software will do. In Section 5 we give an empirical example. 
Keywords:  Panel data estimation techniques; ordered probit; fixed effects estimator; within estimator; pooled regression; between estimator 
JEL:  C23 C25 
Date:  2015–09–18 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20150112&r=all 
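The extended Mundlak equation can be verified numerically in a few lines. A minimal sketch (function name hypothetical, one regressor): regressing y jointly on the within-deviation (x - x_bar_i) and the unit time-average x_bar_i yields the within and between estimators as the two slopes.

```python
import numpy as np

def within_between(y, x, ids):
    """Regress y on the within-deviation (x - x_bar_i) and the unit mean
    x_bar_i jointly; the two slopes reproduce the within and the between
    estimator, as in the extended Mundlak equation.
    """
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    ids = np.asarray(ids)
    xbar = np.array([x[ids == i].mean() for i in ids])   # unit time-average, per observation
    X = np.column_stack([np.ones_like(x), x - xbar, xbar])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1], beta[2]                              # (within slope, between slope)
```

Because the deviation column is orthogonal to both the intercept and the time-average column, the two slopes can be estimated jointly without interfering with each other.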
By:  Karlsson, Sune (Örebro University School of Business); Temesgen, Asrat (Örebro University School of Business) 
Abstract:  This paper considers Bayesian inference procedures for regression models with ordinally observed explanatory variables. Taking advantage of a latent variable interpretation of the ordinally observed variable we develop an efficient Bayesian inference procedure that estimates the regression model of interest jointly with an auxiliary ordered probit model for the unobserved latent variable. The properties of the inference procedure and associated MCMC algorithm are assessed using simulated data. We illustrate our approach in an investigation of gender based wage discrimination in the Swedish labor market and find evidence of wage discrimination. 
Keywords:  Markov Chain Monte Carlo; latent variables; ordered probit; wage discrimination 
JEL:  C11 C25 C35 J31 
Date:  2015–09–18 
URL:  http://d.repec.org/n?u=RePEc:hhs:oruesi:2015_009&r=all 
By:  Yoichi Arai (National Graduate Institute for Policy Studies (GRIPS)); Hidehiko Ichimura (Faculty of Economics, The University of Tokyo) 
Abstract:  A new bandwidth selection method for the fuzzy regression discontinuity estimator is proposed. The method chooses two bandwidths simultaneously, one for each side of the cutoff point, by using a criterion based on the estimated asymptotic mean squared error that takes into account a second-order bias term. A simulation study demonstrates the usefulness of the proposed method. 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2015cf990&r=all 
By:  Yoosoon Chang (Department of Economics, Indiana University); Chang Sik Kim (Department of Economics, Sungkyunkwan University); J. Isaac Miller (Department of Economics, University of Missouri); Joon Y. Park (Department of Economics, Indiana University and Sungkyunkwan University); Sungkeun Park (Korea Institute for Industrial Economics and Trade) 
Abstract:  We analyze a time series of global temperature anomaly distributions to identify and estimate persistent features in climate change. In our study, temperature densities, obtained from globally distributed data over the period from 1850 to 2012, are regarded as a time series of functional observations that are changing over time. We employ a formal test for the existence of functional unit roots in the time series of these densities. Further, we develop a new test to distinguish functional unit roots from functional deterministic trends or explosive behavior. We find some persistent features in global temperature anomalies, which are attributed in particular to significant portions of mean and variance changes in their cross-sectional distributions. We detect persistence that is characteristic of a unit root process, but none of the persistence appears to be deterministic or explosive. 
Keywords:  climate change, temperature distribution, global temperature trends, functional unit roots 
JEL:  C13 C23 Q54 
Date:  2015–09–09 
URL:  http://d.repec.org/n?u=RePEc:umc:wpaper:1513&r=all 
By:  Quiroz, Matias (Research Department, Central Bank of Sweden) 
Abstract:  The complexity of Markov Chain Monte Carlo (MCMC) algorithms arises from the requirement of a likelihood evaluation for the full data set in each iteration. Payne and Mallick (2014) propose to speed up the Metropolis-Hastings algorithm by a delayed acceptance approach where the acceptance decision proceeds in two stages. In the first stage, an estimate of the likelihood based on a random subsample determines if it is likely that the draw will be accepted and, if so, the second stage uses the full data likelihood to decide upon final acceptance. Evaluating the full data likelihood is thus avoided for draws that are unlikely to be accepted. We propose a more precise likelihood estimator which incorporates auxiliary information about the full data likelihood while only operating on a sparse set of the data. It is proved that the resulting delayed acceptance MCMC is asymptotically more efficient compared to that of Payne and Mallick (2014). Furthermore, we adapt the method to handle data sets that are too large to fit in random-access memory (RAM). This adaptation results in an algorithm that samples from an approximate posterior with well-studied theoretical properties in the literature. 
Keywords:  Bayesian inference; Markov chain Monte Carlo; Delayed acceptance MCMC; Large data; Survey sampling 
JEL:  C11 
Date:  2015–08–01 
URL:  http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0307&r=all 
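The two-stage acceptance scheme described above can be sketched generically. This is a minimal illustration (function name hypothetical, symmetric random-walk proposal, not the paper's estimator): stage 1 screens the proposal with a cheap surrogate log-posterior, such as a subsample-based estimate, and only survivors pay for the full evaluation in stage 2, whose correction factor keeps the exact target invariant.

```python
import numpy as np

def delayed_acceptance_mh(logpost, logpost_cheap, x0, n_iter, step, rng):
    """Delayed-acceptance Metropolis-Hastings with a symmetric proposal.

    Stage 1 accepts with min(1, pi_cheap(y)/pi_cheap(x)); stage 2 then
    accepts with min(1, [pi(y)/pi(x)] * [pi_cheap(x)/pi_cheap(y)]),
    so the product of the two stages targets the exact posterior while
    rejecting most bad proposals before the expensive evaluation.
    """
    x = x0
    chain = np.empty(n_iter)
    for i in range(n_iter):
        y = x + step * rng.standard_normal()
        log_r1 = logpost_cheap(y) - logpost_cheap(x)      # cheap screen (stage 1)
        if np.log(rng.uniform()) < log_r1:
            log_r2 = (logpost(y) - logpost(x)) - log_r1   # exact correction (stage 2)
            if np.log(rng.uniform()) < log_r2:
                x = y
        chain[i] = x
    return chain
```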
By:  João M. C. Santos Silva; Silvana Tenreyro; Frank Windmeijer 
Abstract:  In economic applications it is often the case that the variate of interest is nonnegative and its distribution has a mass point at zero. Many regression strategies have been proposed to deal with data of this type but, although there has been a long debate in the literature on the appropriateness of different models, formal statistical tests to choose between the competing specifications are not often used in practice. We use the non-nested hypothesis testing framework of Davidson and MacKinnon (1981, "Several Tests for Model Specification in the Presence of Alternative Hypotheses," Econometrica 49: 781–793) to develop a novel and simple regression-based specification test that can be used to discriminate between these models. 
Keywords:  health economics; international trade; non-nested hypotheses; C test; P test 
JEL:  C12 C52 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:63663&r=all 
By:  Christian Westermeier; Markus M. Grabka 
Abstract:  Statistical analysis of survey data generally has to deal with missing data. In longitudinal studies, past or future data points may be available for some missing values. The question arises how to turn this advantage into improved imputation strategies. In a simulation study we compare six combinations of cross-sectional and longitudinal imputation strategies for German wealth panel data. We create simulation data sets by blanking out observed data points, inducing item nonresponse by a missing at random (MAR) and two differential nonresponse (DNR) mechanisms. We test the performance of multiple imputation using chained equations (MICE), an imputation procedure for panel data known as the row-and-column method, and a regression prediction with correction for sample selection. The regression and MICE approaches serve as fallback methods when only cross-sectional data are available. The row-and-column method performs surprisingly well on the cross-sectional evaluation criteria. For trend estimates and the measurement of inequality, combining MICE with the row-and-column technique regularly improves the results based on a catalogue of six evaluation criteria, including three separate inequality indices. As for wealth mobility, two additional criteria show that a model-based approach such as MICE might be the preferable choice. Overall, the results show that if the variables to be imputed are highly skewed, the row-and-column technique should not be dismissed beforehand. 
Keywords:  Panel data, SOEP survey, evaluation, simulation, missing at random, item nonresponse 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:diw:diwsop:diw_sp790&r=all 
By:  Riccardo Junior Buonocore; Tomaso Aste; Tiziana Di Matteo 
Abstract:  We discuss the origin of multiscaling in financial time series and investigate how best to quantify it. Our methodology consists of separating the different sources of measured multifractality by analysing the multi/uniscaling behaviour of synthetic time series with known properties. We use the results from the synthetic time series to interpret the measured multifractality of real log-return time series. The main finding is that the aggregation horizon of the returns can introduce a strong bias on the measure of multifractality. This effect becomes especially important when the return distributions have power law tails with exponents in the range [2,5]. We discuss the appropriate aggregation horizon to mitigate this bias. 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1509.05471&r=all 
By:  Gautier Marti (Ecole Polytechnique [Palaiseau]; Hellebore Capital Management); Philippe Very (Hellebore Capital Management); Philippe Donnat (Hellebore Capital Management) 
Abstract:  This paper presents a preprocessing step and a distance which improve the performance of machine learning algorithms working on independent and identically distributed stochastic processes. We introduce a novel nonparametric approach to represent random variables which splits apart dependency and distribution without losing any information. We also propound an associated metric leveraging this representation and its statistical estimate. Besides experiments on synthetic datasets, the benefits of our contribution are illustrated through the example of clustering financial time series, for instance prices from the credit default swaps market. Results are available on the website www.datagrapple.com and an IPython Notebook tutorial is available at www.datagrapple.com/Tech for reproducible research. 
Date:  2015–09–14 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal01196883&r=all 
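The "splits apart dependency and distribution without losing any information" idea can be sketched with ranks and order statistics. This is a minimal illustration of that kind of representation, not necessarily the paper's exact construction (function name hypothetical): the normalized ranks carry the dependence (copula) information, the sorted values carry the marginal distribution, and together they reconstruct the data exactly.

```python
import numpy as np

def split_dependence_distribution(X):
    """Split each row of X (one series per row) into a 'dependence' part,
    the normalized ranks (an empirical copula observation), and a
    'distribution' part, the sorted values (the empirical quantile
    function). No information is lost: indexing the sorted values by the
    ranks recovers the original row.
    """
    X = np.asarray(X, float)
    n = X.shape[1]
    ranks = (np.argsort(np.argsort(X, axis=1), axis=1) + 1) / n   # values in (0, 1]
    margins = np.sort(X, axis=1)                                  # marginal part
    return ranks, margins
```

A metric can then weight the two parts separately, e.g. a distance between rank vectors plus a distance between quantile functions.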
By:  Laura Coroneo; Fabrizio Iacone 
Abstract:  We consider fixed-b and fixed-m asymptotics for the Diebold and Mariano (1995) test of predictive accuracy. We show that this approach yields predictive accuracy tests that are correctly sized even in small samples. We apply the alternative asymptotics for the Diebold and Mariano (1995) test to evaluate the predictive accuracy of the Survey of Professional Forecasters (SPF) against a simple random walk. Our results show that the predictive ability of the SPF was partially spurious, especially in the last decade. 
Keywords:  Diebold and Mariano test, long run variance estimation, fixed-b and fixed-m asymptotic theory, SPF. 
JEL:  C12 C32 C53 E17 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:yor:yorken:15/15&r=all 
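For reference, the standard Diebold-Mariano statistic that the fixed-b/fixed-m refinements build on can be sketched as follows (function name hypothetical, squared-error loss, Bartlett kernel): the refinements in the paper change the reference distribution of this statistic, not the loss differential itself.

```python
import numpy as np

def diebold_mariano(e1, e2, h=1):
    """Diebold-Mariano statistic for equal squared-error forecast accuracy.

    The loss differential is d_t = e1_t^2 - e2_t^2; the statistic is the
    mean of d_t over a HAC (Bartlett kernel) estimate of its long-run
    variance with h-1 autocovariance lags. Positive values indicate the
    first forecast has larger errors.
    """
    d = np.asarray(e1, float) ** 2 - np.asarray(e2, float) ** 2
    T = len(d)
    dbar = d.mean()
    dc = d - dbar
    lrv = dc @ dc / T                           # gamma_0
    for lag in range(1, h):
        w = 1.0 - lag / h                       # Bartlett weight
        gamma = dc[lag:] @ dc[:-lag] / T
        lrv += 2.0 * w * gamma
    return dbar / np.sqrt(lrv / T)
```

Swapping the forecasts only flips the sign of the statistic, since the loss differential changes sign while its long-run variance is unchanged.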
By:  Marjolein Fokkema; Niels Smits; Achim Zeileis; Torsten Hothorn; Henk Kelderman 
Abstract:  Identification of subgroups of patients for which treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Several tree-based algorithms have been developed for the detection of such treatment-subgroup interactions. In many instances, however, datasets may have a clustered structure, where observations are clustered within, for example, research centers, studies or persons. In the current paper we propose a new algorithm, generalized linear mixed-effects model (GLMM) trees, that allows for detection of treatment-subgroup interactions, as well as estimation of cluster-specific random effects. The algorithm uses model-based recursive partitioning (MOB) to detect treatment-subgroup interactions, and a GLMM for the estimation of random-effects parameters. In a simulation study, we evaluate the performance of GLMM trees and compare it with that of MOB without random-effects estimation. GLMM trees were found to have a much lower Type I error rate than MOB trees without random effects (4% and 33%, respectively). Furthermore, in datasets with treatment-subgroup interactions, GLMM trees recovered the true treatment subgroups much more often than MOB without random effects (in 90% and 61% of the datasets, respectively). Also, GLMM trees predicted treatment outcome differences more accurately than MOB without random effects (average predictive accuracy of .94 and .88, respectively). We illustrate the application of GLMM trees on a patient-level dataset of a meta-analysis on the effects of psycho- and pharmacotherapy for depression. We conclude that GLMM trees are a promising algorithm for the detection of treatment-subgroup interactions in clustered datasets. 
Keywords:  model-based recursive partitioning, treatment-subgroup interactions, random effects, generalized linear mixed-effects model, classification and regression trees 
JEL:  C14 C45 C87 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:inn:wpaper:201510&r=all 
By:  Arellano, Manuel (CEMFI, Madrid); Blundell, Richard (University College London); Bonhomme, Stephane (University of Chicago) 
Abstract:  We develop a new quantile-based panel data framework to study the nature of income persistence and the transmission of income shocks to consumption. Log-earnings are the sum of a general Markovian persistent component and a transitory innovation. The persistence of past shocks to earnings is allowed to vary according to the size and sign of the current shock. Consumption is modeled as an age-dependent nonlinear function of assets and the two earnings components. We establish the nonparametric identification of the nonlinear earnings process and the consumption policy rule. Exploiting the enhanced consumption and asset data in recent waves of the Panel Study of Income Dynamics, we find nonlinear persistence and conditional skewness to be key features of the earnings process. We show that the impact of earnings shocks varies substantially across earnings histories, and that this nonlinearity drives heterogeneous consumption responses. The transmission of shocks is found to vary systematically with assets. 
Keywords:  earnings dynamics, consumption, panel data, quantile regression, latent variables 
JEL:  C23 D31 D91 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp9344&r=all 
By:  Philipp Gschöpf; Wolfgang Karl Härdle; Andrija Mihoci 
Abstract:  A flexible framework for the analysis of tail events is proposed. The framework contains tail moment measures that allow for Expected Shortfall (ES) estimation. Connecting the implied tail thickness of a family of distributions with quantile and expectile estimation, a platform for risk assessment is provided. ES and the implications for tail events under different distributional scenarios are investigated; in particular, we discuss the implications of increased tail risk for mixture distributions. Empirical results from the US, German and UK stock markets, as well as for selected currencies, indicate that ES can be successfully estimated on a daily basis using a one-year time horizon across different risk levels. 
Keywords:  Expected Shortfall, expectiles, tail risk, risk management, tail events, tail moments 
JEL:  C13 C16 G20 G28 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2015047&r=all 
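The expectiles the framework connects to quantiles are defined by asymmetrically weighted least squares. A minimal sketch (function name hypothetical): the tau-expectile m solves tau*E[(x-m)+] = (1-tau)*E[(m-x)+], which an iteratively reweighted mean finds directly.

```python
import numpy as np

def expectile(x, tau, n_iter=50):
    """Sample tau-expectile via iteratively reweighted least squares:
    observations above the current value get weight tau, those below get
    weight 1-tau, and the weighted mean is recomputed until it settles.
    tau = 0.5 recovers the ordinary mean.
    """
    x = np.asarray(x, float)
    m = x.mean()
    for _ in range(n_iter):
        w = np.where(x > m, tau, 1.0 - tau)
        m = (w * x).sum() / w.sum()
    return m
```

Unlike quantiles, expectiles depend on the magnitude of tail observations, which is what makes them a natural bridge to tail moment measures such as ES.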
By:  Qian, Hang 
Abstract:  The standard Kalman filter cannot handle inequality constraints imposed on the state variables, as state truncation induces a nonlinear and non-Gaussian model. We propose a Rao-Blackwellised particle filter with the optimal importance function for forward filtering and the likelihood function evaluation. The particle filter effectively enforces the state constraints when the Kalman filter violates them. We find substantial Monte Carlo variance reduction by using the optimal importance function and Rao-Blackwellisation, in which the Gaussian linear substructure is exploited at both the cross-sectional and temporal levels. 
Keywords:  Rao-Blackwellisation, Kalman filter, Particle filter, Sequential Monte Carlo 
JEL:  C32 C53 
Date:  2015–09–03 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:66447&r=all 
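How a particle filter enforces a state constraint can be seen in the plain bootstrap variant, sketched below for a local-level model with a nonnegativity constraint (function name hypothetical; the paper's Rao-Blackwellised filter with the optimal importance function is considerably more efficient than this):

```python
import numpy as np

def constrained_bootstrap_pf(y, n_part, sig_state, sig_obs, rng):
    """Bootstrap particle filter for a constrained local-level model:
    x_t = x_{t-1} + v_t subject to x_t >= 0, and y_t = x_t + w_t.

    Particles that violate the constraint simply receive zero weight,
    which is how a particle filter handles the truncation that makes
    the Kalman filter inapplicable.
    """
    x = np.abs(rng.standard_normal(n_part))                 # nonnegative initial cloud
    means = []
    for yt in y:
        x = x + sig_state * rng.standard_normal(n_part)     # propagate
        w = np.where(x >= 0.0,
                     np.exp(-0.5 * ((yt - x) / sig_obs) ** 2),   # Gaussian likelihood
                     0.0)                                        # constraint violated
        w /= w.sum()
        means.append(float(w @ x))                          # filtered mean E[x_t | y_1:t]
        idx = rng.choice(n_part, n_part, p=w)               # multinomial resampling
        x = x[idx]
    return np.array(means)
```

Even when the observations pull strongly into the infeasible region, the filtered mean stays inside the constraint set, exactly the behavior the abstract contrasts with the unconstrained Kalman filter.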
By:  João Santos Silva (University of Surrey) 
Abstract:  Quantile regression is increasingly used by practitioners, but there are still some misconceptions about how difficult it is to obtain valid standard errors in this context. In this presentation I discuss the estimation of the covariance matrix of the quantile regression estimator, focusing special attention on the case where the regression errors may be heteroskedastic and/or "clustered". Specification tests to detect heteroskedasticity and intra-cluster correlation are discussed, and small simulation studies illustrate the finite sample performance of the tests and of the covariance matrix estimators. The presentation concludes with a brief description of qreg2, which is a wrapper for qreg that implements all the methods discussed in the presentation. 
Date:  2015–09–16 
URL:  http://d.repec.org/n?u=RePEc:boc:usug15:10&r=all 
By:  Canova, Fabio; Ferroni, Filippo; Matthes, Christian 
Abstract:  The paper studies how parameter variation affects the decision rules of a DSGE model and structural inference. We provide diagnostics to detect parameter variations and to ascertain whether they are exogenous or endogenous. Identification and inferential distortions that arise when a constant parameter model is incorrectly assumed are examined. Likelihood and VAR-based estimates of the structural dynamics when parameter variations are neglected are compared. Time variations in the financial frictions of Gertler and Karadi's (2010) model are studied. 
Keywords:  endogenous variations; misspecification; Structural model; time varying coefficients 
JEL:  C10 E27 E32 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:10803&r=all 
By:  WenJie Xie; ZhiQiang Jiang; GaoFeng Gu; Xiong Xiong; WeiXing Zhou 
Abstract:  Many complex systems generate multifractal time series which are long-range cross-correlated. Numerous methods have been proposed to characterize the multifractal nature of these long-range cross-correlations. However, several important issues about these methods are not well understood, and most methods consider only one moment order. We study the joint multifractal analysis based on the partition function with two moment orders, which was initially invented to investigate fluid fields, and derive analytically several important properties. We apply the method numerically to binomial measures with multifractal cross-correlations and to bivariate fractional Brownian motions without multifractal cross-correlations. For binomial multifractal measures, the explicit expressions of the mass function, the singularity strength and the multifractal spectrum of the cross-correlations are derived, and they agree excellently with the numerical results. We also apply the method to stock market indexes and unveil intriguing multifractality in the cross-correlations of index volatilities. 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1509.05952&r=all 
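The joint partition function with two moment orders can be sketched directly (function name hypothetical, one-dimensional measures on a common grid): chi(q1, q2; s) sums mu1(box)^q1 * mu2(box)^q2 over boxes of size s, and the joint scaling exponent tau(q1, q2) is the log-log slope across scales.

```python
import numpy as np

def joint_partition(mu1, mu2, q1, q2, scales):
    """Joint partition function chi(q1, q2; s) = sum over boxes of size s
    of mu1(box)^q1 * mu2(box)^q2, for two measures sampled on a common
    grid (each scale s must divide the grid length). The joint scaling
    exponent tau(q1, q2) is estimated as the log-log slope.
    """
    mu1 = np.asarray(mu1, float)
    mu2 = np.asarray(mu2, float)
    mu1 = mu1 / mu1.sum()                         # normalize to probability measures
    mu2 = mu2 / mu2.sum()
    chi = []
    for s in scales:
        b1 = mu1.reshape(-1, s).sum(axis=1)       # box masses of measure 1
        b2 = mu2.reshape(-1, s).sum(axis=1)       # box masses of measure 2
        chi.append((b1 ** q1 * b2 ** q2).sum())
    slope = np.polyfit(np.log(scales), np.log(chi), 1)[0]   # tau(q1, q2)
    return np.array(chi), slope
```

For two uniform measures, tau(q1, q2) = q1 + q2 - 1, so the (1, 1) joint exponent is exactly 1; a spectrum of exponents that is nonlinear in (q1, q2) signals multifractal cross-correlations.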
By:  Black, Dan A. (University of Chicago); Joo, Joonhwi (University of Chicago); LaLonde, Robert J. (Harris School, University of Chicago); Smith, Jeffrey A. (University of Michigan); Taylor, Evan J. (University of Michigan) 
Abstract:  We provide simple tests for selection on unobserved variables in the Vytlacil-Imbens-Angrist framework for Local Average Treatment Effects. The tests allow researchers not only to test for selection on either or both of the treated and untreated outcomes, but also to assess the magnitude of the selection effect. The tests are quite simple; undergraduates who have taken an introductory econometrics class should be able to implement them. We illustrate our tests with two empirical applications: the impact of children on female labor supply from Angrist and Evans (1998) and the training of adult women from the Job Training Partnership Act (JTPA) experiment. 
Keywords:  local average treatment effects, selection, instrumental variables 
JEL:  C10 C18 J01 J08 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp9346&r=all 
By:  LIFEI HUANG (Ming Chuan University) 
Abstract:  In the wood industry, it is common practice to compare lumber of two different dimensions, grades or species in terms of the ratio of the same strength property. Because United States lumber standards are given in terms of the population fifth percentile, and strength problems arise from the weaker fifth percentile rather than the stronger mean, the ratio should be expressed in terms of the fifth percentiles rather than the means of the two strength distributions. Percentiles are estimated by order statistics. This paper uses small samples and large samples to construct Jackknife1 and Jackknife2 confidence intervals with coverage rate 1 
Keywords:  Strength of lumber, Ratio of percentiles, Jackknife1 method, Jackknife2 method, Confidence region. 
JEL:  C14 C15 
URL:  http://d.repec.org/n?u=RePEc:sek:iacpro:2704968&r=all 
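A delete-one jackknife interval for a ratio of fifth percentiles can be sketched as follows. This is a minimal illustration of the "Jackknife1" (delete-one) idea only, with hypothetical function name and a simple pooled variance over the leave-one-out replications; the paper's constructions, and its delete-two variant, may differ in detail.

```python
import numpy as np

def jackknife_ratio_ci(x, y, p=5):
    """Delete-one jackknife 95% confidence interval for the ratio of the
    p-th percentiles of two independent strength samples x and y.

    Replications delete one observation at a time from each sample in
    turn; their variability gives a standard error for the ratio.
    """
    theta = np.percentile(x, p) / np.percentile(y, p)
    reps = [np.percentile(np.delete(x, i), p) / np.percentile(y, p)
            for i in range(len(x))]
    reps += [np.percentile(x, p) / np.percentile(np.delete(y, j), p)
             for j in range(len(y))]
    reps = np.array(reps)
    n = len(reps)
    se = np.sqrt((n - 1) / n * ((reps - reps.mean()) ** 2).sum())  # jackknife SE
    z = 1.959963984540054                                          # 95% normal quantile
    return theta - z * se, theta + z * se
```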