nep-ecm New Economics Papers
on Econometrics
Issue of 2025–05–26
24 papers chosen by
Sune Karlsson, Örebro universitet


  1. Identification and estimation of treatment effects in a linear factor model with fixed number of time periods By Koki Fusejima; Takuya Ishihara
  2. Finite-Sample Properties of Generalized Ridge Estimators for Nonlinear Models By Masamune Iwasawa
  3. Inference in High-Dimensional Panel Models: Two-Way Dependence and Unobserved Heterogeneity By Kaicheng Chen
  4. Partial Identification of Heteroskedastic Structural Vector Autoregressions: Theory and Bayesian Inference By Helmut Lütkepohl; Fei Shang; Luis Uzeda; Tomasz Woźniak
  5. Identification of Average Treatment Effects in Nonparametric Panel Models By Susan Athey; Guido Imbens
  6. Inference with few treated units By Luis Alvarez; Bruno Ferman; Kaspar Wüthrich
  7. Real-time Program Evaluation using Anytime-valid Rank Tests By Sam van Meer; Nick W. Koning
  8. Automatic Inference for Value-Added Regressions By Tian Xie
  9. Pre-Training Estimators for Structural Models: Application to Consumer Search By Yanhao 'Max' Wei; Zhenling Jiang
  10. A test for instrumental variable validity using a correlation restriction By Ratbek Dzhumashev; Ainura Tursunalieva
  11. Empirical Bayes shrinkage (mostly) does not correct the measurement error in regression By Jiafeng Chen; Jiaying Gu; Soonwoo Kwon
  12. Regularized Generalized Covariance (RGCov) Estimator By Francesco Giancaterini; Alain Hecq; Joann Jasiak; Aryan Manafi Neyazi
  13. Large Structural VARs with Multiple Sign and Ranking Restrictions By Joshua Chan; Christian Matthes; Xuewen Yu
  14. An Axiomatic Approach to Comparing Sensitivity Parameters By Paul Diegert; Matthew A. Masten; Alexandre Poirier
  15. A Powerful Bootstrap Test of Independence in High Dimensions By Mauricio Olivares; Tomasz Olma; Daniel Wilhelm
  16. Local Projections or VARs? A Primer for Macroeconomists By José Luis Montiel Olea; Mikkel Plagborg-Møller; Eric Qian; Christian K. Wolf
  17. Estimating the housing production function with unobserved land heterogeneity By Yusuke Adachi
  18. Policy Learning with $\alpha$-Expected Welfare By Yanqin Fan; Yuan Qi; Gaoqian Xu
  19. A Unifying Framework for Robust and Efficient Inference with Unstructured Data By Jacob Carlson; Melissa Dell
  20. On the Robustness of Mixture Models in the Presence of Hidden Markov Regimes with Covariate-Dependent Transition Probabilities By Demian Pouzo; Martin Sola; Zacharias Psaradakis
  21. Identifying the Frontier Structural Function and Bounding Mean Deviations By Dan Ben-Moshe; David Genesove
  22. Demand Estimation with Text and Image Data By Giovanni Compiani; Ilya Morozov; Stephan Seiler
  23. Identification of social effects through variations in network structures By Ryota Ishikawa
  24. A Decision-Theoretic Method for Analyzing Crossing Survival Curves in Healthcare By Appelbaum, Elie; Leshno, Moshe; Prisman, Eitan; Prisman, Eliezer Z.

  1. By: Koki Fusejima; Takuya Ishihara
    Abstract: This paper provides a new approach for identifying and estimating the Average Treatment Effect on the Treated under a linear factor model that allows for multiple time-varying unobservables. Unlike the majority of the literature on treatment effects in linear factor models, our approach does not require the number of pre-treatment periods to go to infinity to obtain a valid estimator. Our identification approach employs certain nonlinear transformations of the time-invariant observed covariates that are sufficiently correlated with the unobserved variables. This relevance condition can be checked with the available data on pre-treatment periods by examining the correlation between the transformed covariates and the pre-treatment outcomes. Based on our identification approach, we provide an asymptotically unbiased estimator of the effect of participating in the treatment when there is only one treated unit and the number of control units is large.
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.21763
  2. By: Masamune Iwasawa
    Abstract: Parameter estimation can result in substantial mean squared error (MSE), even when consistent estimators are used and the sample size is large. This paper addresses the longstanding statistical challenge of analyzing the bias and MSE of ridge-type estimators in nonlinear models, including duration, Poisson, and multinomial choice models, where theoretical results have been scarce. Employing a finite-sample approximation technique developed in the econometrics literature, this study derives new theoretical results showing that the generalized ridge maximum likelihood estimator (MLE) achieves lower finite-sample MSE than the conventional MLE across a broad class of nonlinear models. Importantly, the analysis extends beyond parameter estimation to model-based prediction, demonstrating that the generalized ridge estimator improves predictive accuracy relative to the generic MLE for sufficiently small penalty terms, regardless of the validity of the incorporated hypotheses. Extensive simulation studies and an empirical application involving the estimation of marginal mean and quantile treatment effects further support the superior performance and practical applicability of the proposed method.
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2504.19018
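To fix ideas, here is a minimal sketch (ours, not the paper's implementation) of a generalized ridge MLE for one of the nonlinear models mentioned, a Poisson model: the penalty shrinks the coefficients toward a hypothesized value beta0, and lam = 0 recovers the ordinary MLE. The Newton iteration and all names are illustrative assumptions.

```python
import numpy as np

def ridge_poisson_mle(X, y, beta0, lam, n_iter=25):
    """Newton iterations on the ridge-penalized negative Poisson
    log-likelihood: sum(exp(X@b) - y*(X@b)) + lam * ||b - beta0||^2."""
    beta = beta0.astype(float).copy()
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                              # Poisson mean
        grad = X.T @ (mu - y) + 2 * lam * (beta - beta0)   # penalized score
        hess = (X * mu[:, None]).T @ X + 2 * lam * np.eye(len(beta))
        beta -= np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
beta_true = np.array([0.5, -0.3])
y = rng.poisson(np.exp(X @ beta_true))
beta_hat = ridge_poisson_mle(X, y, beta0=np.zeros(2), lam=1.0)
```

With lam small relative to the likelihood curvature, the pull toward beta0 is mild; the paper's point is that such penalization can reduce finite-sample MSE even when the incorporated hypothesis beta0 is wrong.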
  3. By: Kaicheng Chen
    Abstract: Panel data allows for the modeling of unobserved heterogeneity, significantly raising the number of nuisance parameters and making high dimensionality a practical issue. Meanwhile, temporal and cross-sectional dependence in panel data further complicates high-dimensional estimation and inference. This paper proposes a toolkit for high-dimensional panel models with large cross-sectional and time sample sizes. To reduce the dimensionality, I propose a weighted LASSO using two-way cluster-robust penalty weights. Although the LASSO estimator is consistent, its convergence rate is slow due to the cluster dependence, rendering inference challenging in general. Nevertheless, asymptotic normality can be established in a semiparametric moment-restriction model by leveraging a clustered-panel cross-fitting approach and, as a special case, in a partial linear model using the full sample. In a panel estimation of the government spending multiplier, I demonstrate how high dimensionality could be hidden and how the proposed toolkit enables flexible modeling and robust inference.
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2504.18772
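The penalty-weighting idea can be sketched with a plain coordinate-descent LASSO in which each coefficient carries its own penalty weight. The two-way cluster-robust construction of those weights is the paper's contribution and is not reproduced here; in this sketch w is simply an input.

```python
import numpy as np

def weighted_lasso(X, y, lam, w, n_iter=200):
    """Coordinate descent for (1/(2n))*||y - Xb||^2 + lam * sum_j w[j]*|b_j|."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ r_j
            t = lam * w[j] * n                       # per-coefficient threshold
            beta[j] = np.sign(z) * max(abs(z) - t, 0.0) / col_ss[j]
    return beta

rng = np.random.default_rng(6)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]                          # sparse truth
y = X @ beta_true + rng.normal(size=n)
beta_hat = weighted_lasso(X, y, lam=0.1, w=np.ones(p))
```

Larger weights on a coordinate push it harder toward zero, which is how data-driven penalty weights adapt the amount of shrinkage to the dependence structure.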
  4. By: Helmut Lütkepohl; Fei Shang; Luis Uzeda; Tomasz Woźniak
    Abstract: We consider structural vector autoregressions that are identified through stochastic volatility. Our analysis focuses on whether a particular structural shock can be identified through heteroskedasticity without imposing any sign or exclusion restrictions. Three contributions emerge from our exercise: (i) a set of conditions that ensures the matrix containing structural parameters is either partially or globally unique; (ii) a shrinkage prior distribution for the conditional variance of structural shocks, centred on the hypothesis of homoskedasticity; and (iii) a statistical procedure for assessing the validity of the conditions outlined in (i). Our shrinkage prior ensures that the evidence for identifying a structural shock relies predominantly on the data and is less influenced by the prior distribution. We demonstrate the usefulness of our framework through a fiscal structural model and a series of simulation exercises.
    Keywords: Econometric and statistical methods; Fiscal policy
    JEL: C11 C12 C32 E62
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:bca:bocawp:25-14
  5. By: Susan Athey; Guido Imbens
    Abstract: This paper studies identification of average treatment effects in a panel data setting. It introduces a novel nonparametric factor model and proves identification of average treatment effects. The identification proof is based on the introduction of a consistent estimator. Underlying the proof is a result that there is a consistent estimator for the expected outcome in the absence of the treatment for each unit and time period; this result can be applied more broadly, for example in problems of decompositions of group-level differences in outcomes, such as the much-studied gender wage gap.
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.19873
  6. By: Luis Alvarez; Bruno Ferman; Kaspar Wüthrich
    Abstract: In many causal inference applications, only one or a few units (or clusters of units) are treated. An important challenge in such settings is that standard inference methods that rely on asymptotic theory may be unreliable, even when the total number of units is large. This survey reviews and categorizes inference methods that are designed to accommodate few treated units, considering both cross-sectional and panel data methods. We discuss trade-offs and connections between different approaches. In doing so, we propose slight modifications to improve the finite-sample validity of some methods, and we also provide theoretical justifications for existing heuristic approaches that have been proposed in the literature.
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2504.19841
  7. By: Sam van Meer; Nick W. Koning
    Abstract: Counterfactual mean estimators such as difference-in-differences and synthetic control have grown into workhorse tools for program evaluation. Inference for these estimators is well-developed in settings where all post-treatment data is available at the time of analysis. However, in settings where data arrives sequentially, these tests do not permit real-time inference, as they require a pre-specified sample size T. We introduce real-time inference for program evaluation through anytime-valid rank tests. Our methodology relies on interpreting the absence of a treatment effect as exchangeability of the treatment estimates. We then convert these treatment estimates into sequential ranks, and construct optimal finite-sample valid sequential tests for exchangeability. We illustrate our methods in the context of difference-in-differences and synthetic control. In simulations, they control size even under mild exchangeability violations. While our methods suffer slight power loss at T, they allow for early rejection (before T) and preserve the ability to reject later (after T).
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2504.21595
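The exchangeability-to-ranks step can be illustrated in a few lines (a sketch of the device only, not of the paper's optimal sequential test): under no treatment effect the treatment estimates are exchangeable, so the rank of each new estimate among those seen so far is uniformly distributed, which is the ingredient an anytime-valid test builds on.

```python
import numpy as np

def sequential_ranks(z):
    """Rank of z[t] among z[0..t]; under exchangeability (continuous case),
    r[t] is uniform on {1, ..., t+1}, independently across t."""
    return np.array([1 + int(np.sum(z[:t] < z[t])) for t in range(len(z))])

rng = np.random.default_rng(5)
z_null = rng.normal(size=8)                    # exchangeable: no effect
z_shift = np.concatenate([z_null[:4], z_null[4:] + 10.0])  # effect after t=4
r_null = sequential_ranks(z_null)
r_shift = sequential_ranks(z_shift)            # late ranks pile up at the top
```

A sustained treatment effect pushes the post-treatment estimates to the top of the running ranking, which is the signal the sequential test accumulates.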
  8. By: Tian Xie
    Abstract: It is common to use shrinkage methods such as empirical Bayes to improve estimates of teacher value-added. However, when the goal is to perform inference on coefficients in the regression of long-term outcomes on value-added, it is unclear whether shrinking the value-added estimators can help or hurt. In this paper, we consider a general class of value-added estimators and the properties of their corresponding regression coefficients. Our main finding is that regressing long-term outcomes on shrinkage estimates of value-added performs an automatic bias correction: the associated regression estimator is asymptotically unbiased, asymptotically normal, and efficient in the sense that it is asymptotically equivalent to regressing on the true (latent) value-added. Further, OLS standard errors from regressing on shrinkage estimates are consistent. As such, efficient inference is easy for practitioners to implement: simply regress outcomes on shrinkage estimates of value added.
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.19178
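The headline result can be illustrated with a small simulation (ours, not the paper's setup; the signal and noise variances are treated as known here, whereas in practice they are estimated): regressing a long-term outcome on the noisy value-added estimates yields an attenuated slope, while regressing on linear shrinkage estimates recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)
n, b_true = 20_000, 2.0
theta = rng.normal(size=n)                        # latent value-added
theta_hat = theta + rng.normal(scale=0.8, size=n) # noisy estimate
y = b_true * theta + rng.normal(size=n)           # long-term outcome

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

lam = np.var(theta) / (np.var(theta) + 0.8 ** 2)  # reliability (signal share)
b_naive = ols_slope(theta_hat, y)                 # attenuated toward zero
b_shrink = ols_slope(lam * theta_hat, y)          # automatic bias correction
```

Multiplying the regressor by the reliability lam rescales the slope by exactly the attenuation factor, which is the mechanism behind the automatic bias correction described above.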
  9. By: Yanhao 'Max' Wei; Zhenling Jiang
    Abstract: We explore pretraining estimators for structural econometric models. The estimator is "pretrained" in the sense that the bulk of the computational cost and researcher effort occur during the construction of the estimator. Subsequent applications of the estimator to different datasets require little computational cost or researcher effort. The estimation leverages a neural net to recognize the structural model's parameter from data patterns. As an initial trial, this paper builds a pretrained estimator for a sequential search model that is known to be difficult to estimate. We evaluate the pretrained estimator on 12 real datasets. The estimation takes seconds to run and shows high accuracy. We provide the estimator at pnnehome.github.io. More generally, pretrained, off-the-shelf estimators can make structural models more accessible to researchers and practitioners.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.00526
  10. By: Ratbek Dzhumashev (Department of Economics, Monash University); Ainura Tursunalieva (Data61, CSIRO)
    Abstract: DiTraglia and García-Jimeno (2021) demonstrate that the correlation coefficients between an IV, an endogenous regressor, and the outcome variable must satisfy a specific joint constraint determined by their relationships with the structural error term. We exploit this constraint to develop a novel Correlation Restriction test that becomes feasible when the direction of endogeneity bias is known. Our test quantifies the probability of instrument orthogonality to the structural error across the plausible range of endogeneity magnitudes, providing researchers with a previously unavailable diagnostic tool in the frequentist setting. Through simulations and applications to diverse empirical settings including returns to education, criminal recidivism, and development economics, we establish that our method reliably identifies invalid instruments and characterizes the endogeneity range over which valid instruments maintain their exogeneity. This approach contributes to instrumental variable methods by transforming a key identification assumption from an untestable assertion into an empirically verifiable condition.
    Keywords: endogeneity, validity of instrumental variable, linear regression
    JEL: C18 C26 C36 C52
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:mos:moswps:2025-06
  11. By: Jiafeng Chen; Jiaying Gu; Soonwoo Kwon
    Abstract: In the value-added literature, it is often claimed that regressing on empirical Bayes shrinkage estimates corrects for the measurement error problem in linear regression. We clarify the conditions needed; we argue that these conditions are stronger than those needed for classical measurement error correction, which we advocate for instead. Moreover, we show that the classical estimator cannot be improved without stronger assumptions. We extend these results to regressions on nonlinear transformations of the latent attribute and find generically slow minimax estimation rates.
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.19095
  12. By: Francesco Giancaterini; Alain Hecq; Joann Jasiak; Aryan Manafi Neyazi
    Abstract: We introduce a regularized Generalized Covariance (RGCov) estimator as an extension of the GCov estimator to a high-dimensional setting that results either from high-dimensional data or a large number of nonlinear transformations used in the objective function. The approach relies on a ridge-type regularization for high-dimensional matrix inversion in the objective function of the GCov. The RGCov estimator is consistent and asymptotically normally distributed. We provide the conditions under which it can reach semiparametric efficiency and discuss the selection of the optimal regularization parameter. We also examine the diagonal GCov estimator, which simplifies the computation of the objective function. The GCov-based specification test and the test for nonlinear serial dependence (NLSD) are extended to the regularized RGCov specification and RNLSD tests with asymptotic Chi-square distributions. Simulation studies show that the RGCov estimator and the regularized tests perform well in the high-dimensional setting. We apply the RGCov estimator to the mixed causal and noncausal VAR model of stock prices of green energy companies.
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2504.18678
  13. By: Joshua Chan; Christian Matthes; Xuewen Yu
    Abstract: Large VARs are increasingly used in structural analysis as a unified framework to study the impacts of multiple structural shocks simultaneously. However, the concurrent identification of multiple shocks using sign and ranking restrictions poses significant practical challenges to the point where existing algorithms cannot be used with such large VARs. To address this, we introduce a new numerically efficient algorithm that facilitates the estimation of impulse responses and related measures in large structural VARs identified with a large number of structural restrictions on impulse responses. The methodology is illustrated using a 35-variable VAR with over 100 sign and ranking restrictions to identify 8 structural shocks.
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.20668
  14. By: Paul Diegert; Matthew A. Masten; Alexandre Poirier
    Abstract: Many methods are available for assessing the importance of omitted variables. These methods typically make different, non-falsifiable assumptions. Hence the data alone cannot tell us which method is most appropriate. Since it is unreasonable to expect results to be robust against all possible robustness checks, researchers often use methods deemed "interpretable", a subjective criterion with no formal definition. In contrast, we develop the first formal, axiomatic framework for comparing and selecting among these methods. Our framework is analogous to the standard approach for comparing estimators based on their sampling distributions. We propose that sensitivity parameters be selected based on their covariate sampling distributions, a design distribution of parameter values induced by an assumption on how covariates are assigned to be observed or unobserved. Using this idea, we define a new concept of parameter consistency, and argue that a reasonable sensitivity parameter should be consistent. We prove that the literature's most popular approach is inconsistent, while several alternatives are consistent.
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2504.21106
  15. By: Mauricio Olivares; Tomasz Olma; Daniel Wilhelm
    Abstract: This paper proposes a nonparametric test of pairwise independence of one random variable from a large pool of other random variables. The test statistic is the maximum of several Chatterjee's rank correlations and critical values are computed via a block multiplier bootstrap. The test is shown to asymptotically control size uniformly over a large class of data-generating processes, even when the number of variables is much larger than sample size. The test is consistent against any fixed alternative. It can be combined with a stepwise procedure for selecting those variables from the pool that violate independence, while controlling the family-wise error rate. All formal results leave the dependence among variables in the pool completely unrestricted. In simulations, we find that our test is very powerful, outperforming existing tests in most scenarios considered, particularly in high dimensions and/or when the variables in the pool are dependent.
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.21715
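The building block is easy to state in code. Below is a minimal sketch of Chatterjee's rank correlation and the maximum statistic over a pool of two variables; the block multiplier bootstrap that delivers critical values is omitted, and the no-ties formula is assumed.

```python
import numpy as np

def chatterjee_xi(x, y):
    """Chatterjee's rank correlation xi_n (no-ties formula):
    near 0 under independence, near 1 when y is a function of x."""
    order = np.argsort(x)                       # sort the pairs by x
    r = np.argsort(np.argsort(y[order])) + 1    # ranks of y in that order
    n = len(x)
    return 1 - 3 * np.sum(np.abs(np.diff(r))) / (n ** 2 - 1)

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
pool = np.column_stack([
    x ** 2 + 0.1 * rng.normal(size=n),  # nonlinearly dependent on x
    rng.normal(size=n),                 # independent of x
])
stats = np.array([chatterjee_xi(x, pool[:, j]) for j in range(pool.shape[1])])
t_max = stats.max()                     # max-type test statistic
```

Note that the dependent column has near-zero Pearson correlation with x; detecting exactly this kind of nonlinear alternative is what the rank-based statistic is for.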
  16. By: José Luis Montiel Olea; Mikkel Plagborg-Møller; Eric Qian; Christian K. Wolf
    Abstract: What should applied macroeconomists know about local projection (LP) and vector autoregression (VAR) impulse response estimators? The two methods share the same estimand, but in finite samples lie on opposite ends of a bias-variance trade-off. While the low bias of LPs comes at a quite steep variance cost, this cost must be paid to achieve robust uncertainty assessments. VARs should thus only be used with long lag lengths, ensuring equivalence with LP. For LP estimation, we provide guidance on selection of lag length and controls, bias correction, and confidence interval construction.
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.17144
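As a minimal illustration of the LP side (a textbook-style sketch under an assumed setting with a directly observed i.i.d. shock, not the paper's recommended procedure, which also covers lag choice, bias correction, and confidence intervals): the horizon-h response is the OLS slope of y_{t+h} on the shock, here with y_{t-1} as a control, and in an AR(1) the true response is rho**h.

```python
import numpy as np

rng = np.random.default_rng(3)
T, rho, H = 5000, 0.7, 4
eps = rng.normal(size=T)                 # observed structural shock
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + eps[t]       # AR(1) outcome

def lp_irf(y, shock, h):
    """OLS slope of y_{t+h} on shock_t, controlling for y_{t-1}."""
    Y = y[h + 1:]                                        # y_{t+h}, t = 1..T-h-1
    X = np.column_stack([np.ones(T - h - 1),
                         shock[1:T - h],                 # shock_t
                         y[:T - h - 1]])                 # y_{t-1}
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    return beta[1]

irf = [lp_irf(y, eps, h) for h in range(H)]              # ~ [1, 0.7, 0.49, 0.343]
```

Each horizon is a separate regression, which is what makes LPs low-bias but relatively high-variance compared with iterating a fitted VAR.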
  17. By: Yusuke Adachi
    Abstract: This paper develops a novel method for estimating the housing production function that addresses transmission bias caused by unobserved heterogeneity in land productivity. The approach builds on the nonparametric identification strategy of Gandhi et al. (2020) and exploits the zero-profit condition to allow consistent estimation even when either capital input or housing value is unobserved, under the assumption that land productivity follows a Markov process. Monte Carlo simulations demonstrate that the estimator performs well across a variety of production technologies.
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2504.20429
  18. By: Yanqin Fan; Yuan Qi; Gaoqian Xu
    Abstract: This paper proposes an optimal policy that targets the average welfare of the worst-off $\alpha$-fraction of the post-treatment outcome distribution. We refer to this policy as the $\alpha$-Expected Welfare Maximization ($\alpha$-EWM) rule, where $\alpha \in (0, 1]$ denotes the size of the subpopulation of interest. The $\alpha$-EWM rule interpolates between the expected welfare ($\alpha=1$) and the Rawlsian welfare ($\alpha\rightarrow 0$). For $\alpha\in (0, 1)$, an $\alpha$-EWM rule can be interpreted as a distributionally robust EWM rule that allows the target population to have a different distribution than the study population. Using the dual formulation of our $\alpha$-expected welfare function, we propose a debiased estimator for the optimal policy and establish its asymptotic upper regret bounds. In addition, we develop asymptotically valid inference for the optimal welfare based on the proposed debiased estimator. We examine the finite sample performance of the debiased estimator and inference via both real and synthetic data.
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.00256
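The welfare target itself is simple to compute on a sample (a discrete sketch of the object, not of the paper's debiased policy-learning estimator): take the mean over the worst-off alpha-fraction of outcomes.

```python
import numpy as np

def alpha_expected_welfare(outcomes, alpha):
    """Mean outcome over the worst-off alpha-fraction of the sample."""
    outcomes = np.sort(np.asarray(outcomes, dtype=float))
    k = max(1, int(np.ceil(alpha * len(outcomes))))  # size of the worst-off group
    return outcomes[:k].mean()

y = np.array([1.0, 2.0, 3.0, 4.0])
w_half = alpha_expected_welfare(y, 0.5)   # mean of the worst half
w_full = alpha_expected_welfare(y, 1.0)   # ordinary mean
```

As alpha shrinks the criterion approaches the worst observed outcome (the Rawlsian end), and at alpha = 1 it is the plain expected welfare, matching the interpolation described in the abstract.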
  19. By: Jacob Carlson; Melissa Dell
    Abstract: This paper presents a general framework for conducting efficient and robust inference on parameters derived from unstructured data, which include text, images, audio, and video. Economists have long incorporated data extracted from texts and images into their analyses, a practice that has accelerated with advancements in deep neural networks. However, neural networks do not generically produce unbiased predictions, potentially propagating bias to estimators that use their outputs. To address this challenge, we reframe inference with unstructured data as a missing structured data problem, where structured data are imputed from unstructured inputs using deep neural networks. This perspective allows us to apply classic results from semiparametric inference, yielding valid, efficient, and robust estimators based on unstructured data. We formalize this approach with MARS (Missing At Random Structured Data), a unifying framework that integrates and extends existing methods for debiased inference using machine learning predictions, linking them to a variety of older, familiar problems such as causal inference. We develop robust and efficient estimators for both descriptive and causal estimands and address challenges such as inference using aggregated and transformed predictions from unstructured data. Importantly, MARS applies to common empirical settings that have received limited attention in the existing literature. Finally, we reanalyze prominent studies that use unstructured data, demonstrating the practical value of MARS.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.00282
  20. By: Demian Pouzo; Martin Sola; Zacharias Psaradakis
    Abstract: This paper studies the robustness of quasi-maximum-likelihood (QML) estimation in hidden Markov models (HMMs) when the regime-switching structure is misspecified. Specifically, we examine the case where the true data-generating process features a hidden Markov regime sequence with covariate-dependent transition probabilities, but estimation proceeds under a simplified mixture model that assumes regimes are independent and identically distributed. We show that the parameters governing the conditional distribution of the observables can still be consistently estimated under this misspecification, provided certain regularity conditions hold. Our results highlight a practical benefit of using computationally simpler mixture models in settings where regime dependence is complex or difficult to model directly.
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2504.21669
  21. By: Dan Ben-Moshe; David Genesove
    Abstract: This paper analyzes a model in which an outcome variable equals the difference between a frontier function of inputs and a nonnegative unobserved deviation. If zero is in the support of the deviation at a given input value, then the frontier function is identified by the maximum outcome there. This obviates the need for instrumental variables. Implementation requires allowing for the distribution of deviations to depend on inputs, thus not ruling out endogenous inputs and ensuring the estimated frontier is not merely a constant shift of a biased conditional expectation. Including random errors results in a stochastic frontier analysis model generalized to allow the joint distribution of deviations and errors to depend on inputs. If the minimum deviation is a function of inputs, then we derive a lower bound for the mean deviation using variance and skewness, without making parametric distributional assumptions. We apply our results to a frontier production function, with deviations representing inefficiencies.
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2504.19832
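The conditional-maximum idea can be seen in a simulation (illustrative only; the binned maximum is our simplification, not the paper's estimator): because the nonnegative deviation has zero in its support at every input value, the largest observed outcome near each input value traces out the frontier.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
x = rng.uniform(0, 1, size=n)                  # input
frontier = 1 + 2 * x                           # true frontier function
y = frontier - rng.exponential(0.5, size=n)    # nonnegative deviation, inf = 0

bins = np.linspace(0, 1, 21)                   # 20 input bins
mids = 0.5 * (bins[:-1] + bins[1:])
f_hat = np.array([y[(x >= lo) & (x < hi)].max()
                  for lo, hi in zip(bins[:-1], bins[1:])])
max_err = np.max(np.abs(f_hat - (1 + 2 * mids)))
```

No instrument is needed: the binned maximum recovers the frontier even though the deviation distribution here could be made to depend on x.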
  22. By: Giovanni Compiani; Ilya Morozov; Stephan Seiler
    Abstract: We propose a demand estimation method that leverages unstructured text and image data to infer substitution patterns. Using pre-trained deep learning models, we extract embeddings from product images and textual descriptions and incorporate them into a random coefficients logit model. This approach enables researchers to estimate demand even when they lack data on product attributes or when consumers value hard-to-quantify attributes, such as visual design or functional benefits. Using data from a choice experiment, we show that our approach outperforms standard attribute-based models in counterfactual predictions of consumers' second choices. We also apply it across 40 product categories on Amazon and consistently find that text and image data help identify close substitutes within each category.
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.20711
  23. By: Ryota Ishikawa (Graduate School of Economics, Waseda University)
    Abstract: Bramoullé et al. (2009) provided identification conditions for linear social interaction models through network structures. Despite the importance of their results, the authors omitted detailed mathematical discussions. Moreover, they consider only cases where many identical networks are observed simultaneously within the same dataset. In reality, multiple networks with different structures, such as classrooms or villages, are repeatedly observed within the same dataset. The purpose of this paper is to fill in the mathematical gaps in their arguments and to establish identification conditions for networks with different structures. In addition, we find the smallest network size as a necessary condition for identifying social effects. We also discuss the identification conditions of network models with a fixed network effect.
    Keywords: identification, network model, social interactions, network size
    JEL: C31 D85
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:wap:wpaper:2509
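The flavor of such conditions can be checked numerically. The sketch below is our illustrative reading of the Bramoullé-et-al.-type condition for the model without fixed effects: social effects are identified when I, G, and G^2 are linearly independent.

```python
import numpy as np

def identified(G, tol=1e-10):
    """Linear independence of I, G, G^2, checked via the rank of their
    vectorizations (identification condition, model without fixed effects)."""
    I = np.eye(G.shape[0])
    M = np.column_stack([I.ravel(), G.ravel(), (G @ G).ravel()])
    return np.linalg.matrix_rank(M, tol=tol) == 3

# Row-normalized complete network on 3 nodes: G @ G = (I + G) / 2, so the
# condition fails and social effects are not identified.
n = 3
G_complete = (np.ones((n, n)) - np.eye(n)) / (n - 1)

# A 3-node line network breaks the collinearity, restoring identification.
G_line = np.array([[0, 1, 0], [0.5, 0, 0.5], [0, 1, 0]])
```

The complete network fails because everyone's peer group is everyone else, so the instrument-like variation from friends-of-friends collapses; any asymmetry in the network restores it.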
  24. By: Appelbaum, Elie; Leshno, Moshe; Prisman, Eitan; Prisman, Eliezer Z.
    Abstract: The problem of crossing Kaplan-Meier curves has not been solved in the medical research literature to date. This paper integrates survival curve comparisons into decision theory, providing a theoretical framework and a solution to the problem of crossing Kaplan-Meier curves. The application of decision theory allows us to apply stochastic dominance concepts and risk preference attributes to compare treatments even when standard Kaplan-Meier curves cross. The paper shows that as additional risk preference attributes are adopted, Kaplan-Meier curves can be ranked under weaker restrictions, namely with higher orders of stochastic dominance. Consequently, even Kaplan-Meier curves that cross may be ranked. The method we present allows us to extract all possible information from survival functions; hence, superior treatments that cannot be identified using standard Kaplan-Meier curves may become identifiable. Our methodology is applied to two examples of published empirical medical studies. We show that treatments deemed non-comparable because their Kaplan-Meier curves intersect can be compared using our method.
    Keywords: Survival Curve Analysis; Decision Theory; Risk Preference Modelling; Stochastic Dominance; Medical Treatment Comparison; Healthcare Data Interpretation
    JEL: C18 C65 D81 I10 I12 I19
    Date: 2025–03–20
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:124419
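The dominance logic can be sketched on a grid (an illustrative construction, not the paper's procedure; "second-order" is checked only over the observation window, and longer survival is treated as better): the two curves below cross, so neither first-order dominates, yet A second-order dominates B because the cumulative survival difference stays nonnegative.

```python
import numpy as np

t = np.linspace(0, 10, 1001)
dt = t[1] - t[0]
S_A = np.exp(-0.2 * t)                                   # treatment A
S_B = 0.7 * np.exp(-0.4 * t) + 0.3 * np.exp(-0.05 * t)   # B: worse early, heavier tail

# First-order dominance: one curve everywhere above the other. Fails here.
first_order = bool(np.all(S_A >= S_B) or np.all(S_B >= S_A))

# Second-order dominance of A over B: the running integral of (S_A - S_B)
# stays nonnegative, so A is preferred in the increasing-concave-utility sense.
cum_diff = np.cumsum((S_A - S_B) * dt)
second_order_A = bool(np.all(cum_diff >= -1e-9))
```

This is the abstract's point in miniature: imposing a mild risk-preference attribute (second order rather than first) ranks a pair of Kaplan-Meier curves that cross and would otherwise be deemed non-comparable.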

This nep-ecm issue is ©2025 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.