New Economics Papers
on Law and Economics
Issue of 2006‒06‒03
twenty-two papers chosen by
Jeong-Joon Lee, Towson University

  2. An International Multi-Level System of Competition Laws: Federalism in Antitrust By Wolfgang Kerber
  3. The Normativity of Law in Law and Economics By Péter Cserne
  4. Procedural Justice By Lawrence Solum
  5. American Insurance Association v. Garamendi and Executive Preemption in Foreign Affairs By Brannon Denning; Michael Ramsey
  6. Judicial Selection: Ideology versus Character By Lawrence Solum
  7. Against 'Individual Risk': A Sympathetic Critique of Risk Assessment By Matthew Adler
  8. Fear Assessment: Cost-Benefit Analysis and the Pricing of Fear and Anxiety By Matthew Adler
  9. Does Criminal Law Deter? A Behavioral Science Investigation By Paul Robinson
  10. Criminal Law Scholarship: Three Illusions By Paul Robinson
  11. Legal Protection for Conversational and Communication Privacy in Family, Marriage and Domestic Disputes: An Examination of Federal and State Wiretap and Stored Communications Acts and the Common Law Privacy Intrusion Tort By Richard Turkington
  12. Trust, Honesty, and Corruption: Reflection on the State-Building Process By Susan Rose-Ackerman
  13. Sales and Elections as Methods for Transferring Corporate Control By Ronald Gilson; Alan Schwartz
  14. Construction Contracts (or How to Get the Right Building at the Right Price?) By Surajeet Chakravarty; W. Bentley MacLeod
  15. Financial Systems and Economic Growth: An Evaluation Framework for Policy By Iris Claus; Veronica Jacobsen; Brock Jera
  16. Institutions, Firms and Economic Growth By Jane Frances
  17. Fair Housing Enforcement and Changes in Discrimination between 1989 and 2000: An Exploratory Study By Stephen L. Ross; George C. Galster
  18. Property Condition Disclosure Law: Does 'Seller Tell All' Matter in Property Values? By Anupam Nanda
  19. The Complexity of Corruption: Nature and Ethical Suggestions By Reyes Calderón; José Luis Alvarez
  20. The Judge as a Fly on the Wall: Interpretive Lessons from Positive Theories of Communication and Legislation By Cheryl Boudreau; Arthur Lupia; Mathew D. McCubbins; Daniel B. Rodriguez
  21. EU Merger Remedies: A Preliminary Empirical Assessment By Tomaso Duso; Klaus Gugler; Burcin Yurtoglu
  22. The Effect of Reputation on Selling Prices in Auctions By Oliver Gürtler; Christian Grund

  1. By: Mariusz Golecki (University of Lódz)
    Abstract: Economics of law is thought to be a relatively new discipline, yet explaining what economics of law really is seems a Herculean effort. Some authors trace its origins to Ronald Coase's well-known article and his theorem: it was Coase who showed how much the economy depends on a sound legal system, especially on the recognition of private rights and liabilities. Others regard Gary Becker's efforts to provide a solid and objective basis for social theory and legal reform as the origin of contemporary law and economics. In this essay I claim that the methodology of law and economics should be changed from the application of price theory and welfare economics (the economisation of law) to an interdisciplinary project embracing jurisprudence. The basis for such a project is provided by the Coase theorem, but may also be found in the writings of Hayek and the 'old institutionalists', such as Veblen, Hale and Commons. These efforts are visible in new institutional economics and transaction cost economics. The aim of this essay is thus threefold: firstly, to present briefly the most powerful and popular approach to economics of law - at the same time an influential legal theory - as presented by the Chicago school, predominantly by Judge Richard Posner, and to point out the limits of this approach from a jurisprudential, economic and methodological point of view. The second aim is to analyse the existing alternative approaches to economics of law, related to the Austrian school (Hayek), 'old institutional' economics (Commons) and transaction cost economics (Coase), as well as social systems theory (Parsons, Luhmann and Teubner). The first three theories I call foundationalist because they regard law as a foundation of the economic order. Foundationalism also seems to admit the existence of universally accepted foundations of law, as well as of the economy, regarded as human activity concentrated on the management of resources. The last theory, systems theory, which emphasises the autonomy of both economy and law as social systems, is thus antifoundationalist. This division is significant in the context of the present discussion within jurisprudence, especially concerning the difference between modern and post-modern legal theories. The third objective is to present an alternative point of view on both economy and law from a jurisprudential perspective. I claim that only an interdisciplinary project on law and economics can supersede the dichotomy in contemporary jurisprudence between foundationalism and antifoundationalism. In this paper the term economics of law is used in the same sense in which most scholars use the term law and economics. I would like, however, to avoid associating the term with the theory propounded by Posner; Posner's approach will therefore be called the economic analysis of law. Economics of law, like law and economics, certainly has a broader meaning, associated both with a methodological approach - the economic analysis of law - and with a revision within economics itself. I prefer the name economics of law to law and economics because it seems more realistic at the moment: the insight law offers into economics is either poor or redefined in economic terms. The impact of economics on law is enormous, and a realistic approach cannot neglect this fact. At the same time, while the impact of law on the economy is essential, it is not reflected in theory. I use the term jurisprudence to refer to general reflection upon law and justice; the philosophy of law is its synonym. The particular type of reflection within jurisprudence I call jurisprudential theory.
  2. By: Wolfgang Kerber (Philipps-Universität Marburg)
    Abstract: Since the 1990s, an intensive discussion on the necessity and the potential design of international competition policy has developed. As a preliminary result, some general tendencies can be observed: Many states (including the U.S. and the EU) and most antitrust experts hold the opinion that the traditional system of national competition laws (including their extra-territorial application) is not sufficient for the protection of competition in the new millennium. Therefore, some kind of international arrangement in regard to competition rules seems to be necessary. The introduction of substantive international competition rules with an international competition authority and a corresponding court (in analogy to the supranational European competition law) is not seen as feasible and/or desirable. Thus the solution should not be sought in centralised global competition rules but be based primarily upon national competition laws and authorities. Consequently, the main thrust of the discussion has shifted from the idea of a larger harmonisation and convergence of national competition laws to the problem of better international enforcement of these laws. Although bilateral cooperation between national competition authorities has become an increasingly important issue, bilateral cooperation agreements are considered only a first step to a more preferable multilateral (or plurilateral) solution (e.g. within the WTO). Generally, the path to international competition rules is seen as a pragmatic, step-by-step approach, which can achieve its aim only in the long run. The currently favoured informal network approach, which remains without commitment and emphasizes primarily the gathering, discussion and exchange of information between national competition authorities, is in line with such a pragmatic approach to the incremental evolution of international competition rules. How can we describe the present situation from a global perspective? 
We have a multitude of national competition laws and enforcement agencies (competition authorities, courts) with more or less different substantive and procedural rules. Different competition laws and enforcement agencies can also exist within a (kind of) federal system, as to some degree within the U.S. and to a larger extent within the EU, where European competition rules and national competition laws coexist on two different levels. Since the competencies of these competition laws and enforcement agencies overlap, many external effects and conflicts can emerge. Up to now we cannot reasonably argue that this complex structure of competition laws forms an integrated system for protecting competition in international markets. The establishment of international competition rules (as well as the less ambitious international network approach), which on the one hand should help to solve the problems of the current situation, can, on the other hand, increase the complexity of the system, because an additional vertical regulatory level in regard to competition rules would be introduced - including new potential conflicts of competencies. But what are the long-term perspectives of this situation? What can an international system for protecting competition look like in the long run? Two basic perspectives can be outlined: One perspective is that such a pragmatic approach, which fosters the discussion between different countries and their competition authorities, eventually will lead to a uniform global competition law or - at least - to a quasi-harmonisation of national competition laws. If the differences between the competition laws disappeared, many of the current problems would vanish. From this perspective, the current situation with many different competition laws on two or three different levels constitutes only an intermediate phase, which in the long run would be replaced by one quasi-uniform set of global competition rules. 
Another perspective proceeds from the more sceptical assumption that it will not be possible for all countries to agree on one uniform set of competition rules, even in the long run. There will always be different objectives of competition laws and different theories about what competition is and what rules are necessary for the protection of competition. Therefore, the coexistence of different competition laws should be seen as a permanent feature of an international system of competition laws, implying that substantial decentralisation and variety will remain a major characteristic of such an international system, also in the long run. This paper will focus on the second perspective, which can be characterised as an evolutionary one: The objectives of competition policy in different countries might change and remain different; competition theories might evolve through academic progress; the rules for the protection of competition might have to change due to new anticompetitive business practices or new technology (such as the Internet). From this evolutionary perspective, it is crucial that an international system for the protection of competition should also include the long-term capability of adapting quickly to new competition problems, particularly by fostering legal innovations for improving the protection of competition. One important argument for a more decentralised international system of competition laws will be that decentralisation will increase the capability of the system for innovation and learning in regard to the development of effective legal rules for the protection of competition. But what can a workable international system with different competition laws and enforcement agencies on different levels, i.e., a decentralised international system of competition laws, look like? This paper can only present some considerations about this problem. 
But its goal is to outline an analytical framework, which can be used for designing a workable multi-level system of competition laws. The main idea is that we should apply economic theories about federalism and the advantages and disadvantages of centralisation and decentralisation to develop arguments about the appropriate institutional structure of an international multi-level system of competition laws. The theories that are used in this paper are the economic theory of federalism, the attempts to apply the concept of federalism to legal rules as well (legal federalism), and the theories of interjurisdictional and regulatory competition. The paper is structured as follows. In section II it is shown that the present situation can be interpreted as being already rather close to a kind of three-level system of competition laws and that many current issues in European and international competition policy can be interpreted as discussions about problems of the horizontal and vertical delimitation of competencies within such a three-level system. In the main section III, an analytical framework concerning the potential advantages and disadvantages of centralisation and decentralisation of competition policy will be developed on the basis of economic theories of federalism and regulatory competition. This will include a (still incomplete) set of criteria for regulatory federalism in competition law. Some conclusions for reconstructing international competition policy as a multi-level system of competition laws are presented in section IV.
  3. By: Péter Cserne (Universität Hamburg)
    Abstract: This paper is about some theoretical and methodological problems of law and economics (economic analysis of law, EAL). More specifically, I will use game-theoretical insights to answer a question relevant both for law and economics and for legal philosophy: how should a social scientific analysis of law account for the normativity of law (the non-instrumental reasons for rule-following) while retaining the observer's (explanatory or descriptive) perspective? The goal is to offer a constructive critique of both traditional law and economics scholarship and mainstream analytical legal philosophy (the "Jurisprudence of Orthodoxy", see Leith and Ingham, 1977) in this respect. I will try to find out what EAL has to do with the "internal aspect of law", i.e. the fact or the claim that law provides specific reasons for action, in order to successfully challenge mainstream legal theory. EAL can be conceived either as a (consequentialist) normative legal philosophy, as an explanatory/descriptive theory about law (rational choice theory applied to law), or as a set of propositions for legal reform (legal policy). In this paper I will concentrate on the second, explanatory branch. In this second sense, EAL seeks, first, to explain how law influences human behaviour by changing incentives (law as explanans) and, second, to analyse legal (and possibly non-legal) rules as the outcome of individual actions (law as explanandum). This explanatory/descriptive approach has to confront a clear and central problem, often raised as a (self-)critique of standard EAL: its inability or inadequacy to deal with the internal perspective on law. In fact, even if this approach has several more or less sophisticated versions, what seems common to all of them is that they treat legal rules (rule-following) instrumentally. 
Thus the case of rule-guided behaviour is either included in these theories in an ad hoc manner or is missing altogether. On the other side, contemporary analytical legal philosophy, which is (at least in the English-speaking world) generally considered a branch of practical philosophy, usually treats legal rules as specific non-instrumental reasons for action. On this view, even if empirically there are different motives for obeying the law (including conformism, fear of sanctions, etc.), the nature of law is defined by this specific reason, while the further motives are not reasons in a genuine sense for compliance with the law. Now, in order to be taken seriously as an explanatory legal theory, EAL has to account for this feature, i.e. that law offers reasons for action, and to answer (or at least take sides in the current philosophical debate on) some fundamental questions about the normativity of law. These questions are both conceptual/analytical ('What is the conceptual difference between regularity of behaviour and rule-following?', 'What does it mean to follow a rule?') and explanatory ('Why do people obey the law, if they do?'). At the same time, in order to be taken seriously as sound social science, EAL has to stick to the methodological principles of rational choice theory as explanatory social science. In the following I shall enquire whether EAL can meet this double challenge. One consequence of these methodological principles should be emphasised right at the beginning. The normative or justificatory question central to mainstream analytical legal philosophy conceived as a part of normative practical philosophy, 'Is there a (moral) duty to obey the law?', remains outside the scope of this paper (and, in general, of explanatory/descriptive EAL). But the moral or prudential standpoint of the participants who face this question in some form should, of course, be recorded and included in the analysis as an object of explanation. 
To repeat, I shall be speaking about EAL throughout only in the second sense, as an explanatory enterprise. As a different enterprise, it might be possible to work out a full-fledged normative legal philosophy as a version of EAL, based roughly on welfarist (consequentialist) principles, which would have to answer that justificatory question. But this prospect does not concern me here. In recent decades serious efforts have been made within rational choice theory (especially game theory) to deal with norms both as explananda and as explanantia. In these analyses norms are often denoted more specifically as 'social norms' and considered explicitly as non-legal, i.e. in contradistinction to legal norms. As will become clear, these models are still highly relevant for my purposes, in part (but not only) because the mechanisms exposed in these rational choice models are general enough to be applicable to legal rules too. My question now is whether the incorporation of these results of rational choice theory into EAL makes it possible to approach the above-mentioned basic problems of legal theory in a new way. In a broader perspective, the gap between explanatory social science and normative practical philosophy might also be bridged via evolutionary game theory, especially the indirect evolutionary approach. The structure of the paper is the following. Section 2 presents how rule-following is modelled in standard EAL scholarship. Section 3 is about the jurisprudential meaning, importance and explanations of the normativity of law; instead of a detailed analysis of the jurisprudential and legal-philosophical issues related to the normativity of law, I restrict myself to sketching the most characteristic standpoints. Section 4 overviews rational choice models of norms and normativity and discusses some features of the legal system in view of the previous insights. This section is intended to be systematic (perhaps at some cost in detail and originality) but is evidently far from exhaustive. Section 5 concludes.
  4. By: Lawrence Solum (University of San Diego)
    Abstract: Procedural Justice offers a theory of procedural fairness for civil dispute resolution. The Article begins in Part I, Introduction, with two observations. First, the function of procedure is to particularize general substantive norms so that they can guide action. Second, the hard problem of procedural justice corresponds to the following question: How can we regard ourselves as obligated by legitimate authority to comply with a judgment that we believe (or even know) to be in error with respect to the substantive merits? This Article responds to the challenge posed by the hard question of procedural justice with a theory developed in several stages, beginning with some preliminary questions and problems. The first question - what is procedure? - is the most difficult and requires an extensive answer: Part II, Substance and Procedure, defines the subject of the inquiry by offering a new theory of the distinction between substance and procedure that acknowledges the entanglement of the action-guiding roles of substantive and procedural rules while preserving the distinction between two ideal types of rules. Part III, The Foundations of Procedural Justice, lays out the premises of general jurisprudence that ground the theory and answers a series of objections to the notion that the search for a theory of procedural justice is a worthwhile enterprise. These two sections set the stage for the more difficult work of constructing a theory of procedural legitimacy. Part IV, Views of Procedural Justice, investigates the theories of procedural fairness found explicitly or implicitly in case law and commentary. After a preliminary inquiry that distinguishes procedural justice from other forms of justice, Part IV focuses on three models or theories. The first theory, the accuracy model, assumes that the aim of civil dispute resolution is correct application of the law to the facts. 
The second theory, the balancing model, assumes that the aim of civil procedure is to strike a fair balance between the costs and benefits of adjudication. The third theory, the participation model, assumes that the very idea of a correct outcome must be understood as a function of process that guarantees fair and equal participation. In Part V, The Value of Participation, the lessons learned from analysis and critique of the three models are then applied to the question whether a right of participation can be justified for reasons that are not reducible to either its effect on the accuracy or its effect on the cost of adjudication. The most important result of Part V is the Participatory Legitimacy Thesis: it is (usually) a condition for the fairness of a procedure that those who are to be finally bound shall have a reasonable opportunity to participate in the proceedings. The central normative thrust of Procedural Justice is developed in Part VI, Principles of Procedural Justice. The first principle, the Participation Principle, stipulates a minimum (and minimal) right of participation, in the form of notice and an opportunity to be heard, that must be satisfied (if feasible) in order for a procedure to be considered fair. The second principle, the Accuracy Principle, specifies the achievement of legally correct outcomes as the criterion for measuring procedural fairness, subject to four provisos, each of which sets out circumstances under which a departure from the goal of accuracy is justified by procedural fairness itself. In Part VII, The Problem of Aggregation, the Participation Principle and the Accuracy Principle are applied to the central problem of contemporary civil procedure - the aggregation of claims in mass litigation. Part VIII offers some concluding observations about the point and significance of Procedural Justice.
  5. By: Brannon Denning (Cumberland School of Law); Michael Ramsey (University of San Diego)
    Abstract: In American Insurance Association v. Garamendi, the U.S. Supreme Court invalidated California's Holocaust Victim Insurance Relief Act (HVIRA), which required insurance companies doing business in California to disclose all policies they or their affiliates sold in Europe between 1920 and 1945. According to the Court, the state's law unconstitutionally interfered with the foreign affairs power of the national government. The decision was easily overlooked in a Term filled with landmark cases dealing with affirmative action and sexual privacy. What coverage the case did receive emphasized its federalism aspects, and excited little reaction because the result seemed intuitively appropriate given the federal government's interest in conducting foreign affairs. We argue in this paper, however, that Garamendi is more important - and problematic - when seen as a case about separation of powers. In particular, we argue that the decision expands presidential control over foreign affairs, not only at the expense of the states, but also and more critically at the expense of Congress and the Senate. This arises from the Court's invention of a novel constitutional power of executive preemption - that is, an independent ability of the President to override state laws that interfere with executive branch policies in foreign affairs. Until Garamendi, no one had thought that a mere executive branch policy, unsupported by the formal or even tacit approval of any other branch, could have the effect of preemptive law. As a result, one need not be a defender of foreign policy federalism, nor a critic of executive foreign affairs powers, to have grave reservations about the decision's implications for separation of powers, federalism and constitutional theory. It is uncontroversial that state laws and policies must give way to the foreign affairs objectives of the national government. 
The critical question, though, is how these overriding federal goals are developed and identified. We argue that the Garamendi decision has at least three separate and substantial ill-effects upon this process. First, executive preemption conveys to the President the power to decide which state laws affecting foreign affairs survive and which do not. This concentrates foreign affairs power in the President in a way not contemplated by the Constitution's Framers, who sought to separate executive power from legislative power. Second, Garamendi seemed to make executive agreements the functional equivalents of congressional statutes; this functional equivalency may hasten the decline of the treaty as a foreign policy-making tool, with a concomitant decline in the opportunities for Congress - the Senate, in particular - to shape foreign policy. Third, the decision implicated the relationship between the states and the federal government in foreign affairs, but did so in a way that provided essentially no guidance for the future. Part I of this Article discusses the factual setting of the Holocaust insurance claims that formed the background of the case. Part II outlines the constitutional law of federal-state relations in foreign affairs as it stood before the Garamendi decision. Part III describes the Supreme Court's decision, and points out its discontinuity with prior decisions. In Part IV we turn to the troubling structural implications of Garamendi, which we regard as occurring primarily in the field of separation of powers. We conclude that the Court ended up far from the text, structure and history of the Constitution. In Part V we address the decision's implications for federalism, particularly the dangers of concentrating preemptive power in the executive branch. Part VI relates the Garamendi case to the wider theoretical debates of modern foreign affairs law and constitutional interpretation. 
In contrast to other federalism and separation of powers cases, the Garamendi Court paid little attention to text or structure in analyzing the constitutional questions presented. More surprising, perhaps, is the Court's complete lack of interest in what light history might shed on the foreign affairs issues before it. But neither is Garamendi an exercise in common law doctrinal evolution, because it owes essentially nothing to prior cases or practice, except as rhetorical cover. Garamendi's near-exclusive attention to loose interpretations of prior case law and its lack of sensitivity to text, history, and structure, suggest to us a danger in common law constitutional interpretation as a preferred approach to constitutional interpretation and adjudication in foreign affairs controversies.
  6. By: Lawrence Solum (University of San Diego)
    Abstract: Part I of Judicial Selection: Ideology versus Character sets the stage for an argument that character and not political ideology should be the primary factor in the selection of judges. Political ideology has played an important role in judicial selection, from John Adams's entrenchment of Federalists as judges after the election of 1800, to Roosevelt's selection of progressives, liberals, and New Dealers, and, in the contemporary era, from the failed nominations of Fortas, Haynsworth, and Carswell to the defeat of Robert Bork and the narrow confirmation of Clarence Thomas. But until recently, political ideology has played its role behind the scenes - mostly off the official record of the judicial nomination and confirmation process. Perhaps the most important evidence of the new emphasis on political ideology in judicial selection is Senator Charles Schumer's op/ed Judging by Ideology, which argued for the proposition that political ideology and not character or competence should be the explicit on-the-record basis for Democratic opposition to Republican judicial nominees. Part II investigates the case for the ideological selection of judges. This investigation begins with Senator Schumer's argument for explicit consideration of political ideology in the confirmation process and then proceeds to the development of a two-dimensional model of judicial attitudes. The first dimension is a simple left-right measure of political ideology. The second dimension represents judicial philosophy as a position on a continuous real line, the origin of which is perfect instrumentalism (decisions are entirely a function of ideology) and the endpoint of which is perfect formalism (decisions are entirely a function of the legal materials). Given a scenario in which Democrats can block Republican nominees (or vice versa), the simple model yields a confirmation space, defined as the set of judges whose positions in the two-dimensional attitude space are acceptable to both parties. 
Part III presents the case for the primacy of character in judicial selection. The argument begins with the uncontroversial observation that almost every theorist of judicial decision can accept a thin theory of judicial vice. No one believes that cowardly, stupid, foolish, or corrupt characters are suitable for the position of judge. The next move is to argue that similar agreement can be reached on a thin theory of judicial virtue, the characteristics of mind and will that are necessary for excellent judging given any reasonable theory as to what constitutes a good judicial decision. Part IV moves beyond a theory of judicial virtue by investigating the particular virtue of justice. The paper argues that justice is best understood as lawfulness. A good judge is nominos; she grasps and respects the nomos, the laws, norms, and customs generally accepted by her community. Part V answers a series of objections to character-driven judicial selection. These include the objections (1) that judicial selectors lack sufficient evidence of character, (2) that there are no objective criteria for good character, (3) that character is a private matter, and (4) that selection on the basis of character is not politically feasible. In each case, the objection, while it might be apropos of some character-driven theory of judicial selection, is inapplicable to the kind of aretaic theory developed in Parts III and IV of the paper. Part V concludes by noting that when ideological struggle is intense, nonideological judging becomes all the more necessary to realize the rule of law.
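The two-dimensional attitude model described in Part II lends itself to a small illustration. Below is a minimal sketch, not taken from the paper: the acceptance rule (a party accepts a judge whose ideology falls within its tolerated range, or whose formalism is high enough that ideology matters little) and all numeric thresholds are hypothetical assumptions chosen only to make the geometry concrete.

```python
# Sketch of the two-dimensional judicial-attitude model.
# Dimension 1: ideology on a left-right axis (-1 = far left, +1 = far right).
# Dimension 2: judicial philosophy from 0 (perfect instrumentalism) to
# 1 (perfect formalism). The "confirmation space" is the set of judges
# acceptable to both parties. The acceptance rule and all thresholds
# here are hypothetical illustrations, not the paper's own.

def acceptable_to_party(judge, ideology_range, min_formalism):
    """A party accepts a judge whose ideology lies within its tolerated
    range, or whose formalism is so high that ideology matters little."""
    ideology, formalism = judge
    lo, hi = ideology_range
    return (lo <= ideology <= hi) or (formalism >= min_formalism)

def confirmation_space(judges, dem_range, rep_range, min_formalism):
    """Judges acceptable to both parties under the stylized rule above."""
    return [j for j in judges
            if acceptable_to_party(j, dem_range, min_formalism)
            and acceptable_to_party(j, rep_range, min_formalism)]

# Five candidate judges as (ideology, formalism) points.
judges = [(-0.8, 0.2), (-0.1, 0.3), (0.1, 0.4), (0.7, 0.95), (0.9, 0.1)]
confirmable = confirmation_space(judges,
                                 dem_range=(-1.0, 0.2),
                                 rep_range=(-0.2, 1.0),
                                 min_formalism=0.9)
```

Under these toy parameters the strongly formalist judge at (0.7, 0.95) is confirmable despite lying outside the Democrats' ideological range, which mirrors the paper's closing point: high formalism (nonideological judging) widens the confirmation space precisely when ideological struggle narrows it.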
  7. By: Matthew Adler (University of Pennsylvania Law School)
    Abstract: "Individual risk" currently plays a major role in risk assessment and in the regulatory practices of the health and safety agencies that employ risk assessment, such as EPA, FDA, OSHA, NRC, CPSC, and others. Risk assessors use the term "population risk" to mean the number of deaths caused by some hazard. By contrast, "individual risk" is the incremental probability of death that the hazard imposes on some particular person. Regulatory decision procedures keyed to individual risk are widespread. This is true both for the regulation of toxic chemicals (the heartland of risk assessment), and for other health hazards, such as radiation and pathogens; and regulatory agencies are now beginning to employ individual risk criteria for evaluating safety threats, such as occupational injuries. Sometimes, agencies look to the risk imposed on the maximally exposed individual; in other contexts, the regulatory focus is on the average individual's risk, or perhaps the risk of a person incurring an above-average but nonmaximal exposure. Sometimes, agencies seek to regulate hazards so as to reduce the individual risk level (to the maximally exposed, high-end, or average individual) below 1 in 1 million. Sometimes, instead, a risk level of 1 in 100,000 or 1 in 10,000 or even 1 in 1000 is seen as de minimis. In short, the construct of individual risk plays a variety of decisional roles, but the construct itself is quite pervasive. This Article launches a systematic critique of agency decisionmaking keyed to individual risk. Part I unpacks the construct, and shows how it invokes a frequentist rather than Bayesian conception of probability. 
Part II surveys agency practice, describing the wide range of regulatory contexts where individual risk levels are wholly or partly determinative of agency choice: these include most of the EPA's major programs for regulating toxins (air pollutants under the Clean Air Act, water pollutants under the Clean Water Act and Safe Drinking Water Act, toxic waste dumps under the Superfund statute, hazardous wastes under RCRA, and pesticides under FIFRA) as well as the FDA's regulation of food safety, OSHA's regulation of workplace health and safety risks, NRC licensing of nuclear reactors, and the CPSC's regulation of risky consumer products. In the remainder of the Article, I demonstrate that frequentist individual risk is a problematic basis for regulatory choice, across a range of moral views. Part III focuses on welfare consequentialism: the moral view underlying welfare economics and cost-benefit analysis. I argue that the sort of risk relevant to welfare consequentialism is Bayesian, not frequentist. Part IV explores the subtle but crucial difference between frequentist and Bayesian risk. Part V moves beyond cost-benefit analysis and examines nonwelfarist moral views: specifically, safety-focused, deontological, contractualist, and democratic views. Here too, I suggest, regulatory reliance on frequentist individual risk should be seen as problematic. Part VI argues that current practices (as described at length in Part II) are doubly misguided: not only do they focus on frequentist rather than Bayesian risk, but they are also insensitive to population size. In short, the Article provides a wide-ranging, critical analysis of contemporary risk assessment and risk regulation. The perspective offered here is that of the sympathetic critic. Risk assessment itself - the enterprise of quantifying health and safety threats - represents a great leap forward for public rationality, and should not be abandoned.
Rather, the current conception of risk assessment needs to be reworked. Risk needs to be seen in Bayesian rather than frequentist terms. And regulatory choice procedures must be driven by population risk or some other measure of the seriousness of health and safety hazards that is sensitive to the size of the exposed population - not the risk that some particular person (whatever her place in the exposure distribution) incurs.
    Keywords: Risk,
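The population-insensitivity critique in the abstract's Part VI reduces to simple arithmetic: population risk is per-person risk times the exposed population, so a fixed individual-risk cutoff treats very different hazards alike. A minimal sketch of that arithmetic (the function name and all numbers are invented for illustration, not taken from the paper):

```python
# Hypothetical illustration (all numbers invented): why a fixed individual-risk
# threshold is insensitive to the size of the exposed population.

def expected_deaths(individual_risk: float, population: int) -> float:
    """Population risk: expected deaths = per-person risk x exposed population."""
    return individual_risk * population

DE_MINIMIS = 1e-6  # a common individual-risk cutoff: 1 in 1 million

# Two hazards with identical individual risk but very different exposed populations.
small_site = expected_deaths(9e-7, 1_000)        # well under one expected death
large_site = expected_deaths(9e-7, 100_000_000)  # roughly ninety expected deaths

# Both hazards pass the individual-risk test, yet their population risks
# differ by five orders of magnitude.
assert 9e-7 < DE_MINIMIS
print(f"small site: {small_site:.4f} expected deaths")
print(f"large site: {large_site:.1f} expected deaths")
```

The sketch illustrates only the insensitivity point; the abstract's separate frequentist-versus-Bayesian critique concerns how the per-person probability itself is interpreted.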
  8. By: Matthew Adler (University of Pennsylvania Law School)
    Abstract: Risk assessment is now a common feature of regulatory practice, but fear assessment is not. In particular, environmental, health and safety agencies such as EPA, FDA, OSHA, NHTSA, and CPSC commonly count death, illness and injury as costs for purposes of cost-benefit analysis, but almost never incorporate fear, anxiety or other welfare-reducing mental states into the analysis. This is puzzling, since fear and anxiety are welfare setbacks, and since the very hazards regulated by these agencies - air or water pollutants, toxic waste dumps, food additives and contaminants, workplace toxins and safety threats, automobiles, dangerous consumer products, radiation, and so on - are often the focus of popular fears. Even more puzzling is the virtual absence of economics scholarship on the pricing of fear and anxiety, by contrast with the vast literature in environmental economics on pricing other intangible benefits such as the existence of species, wilderness preservation, the enjoyment of hunters and fishermen, and good visibility, and the large literature in health economics on pricing health states. This Article makes the case for fear assessment, and explains in detail how fear and anxiety should be quantified and monetized as part of a formal, regulatory cost-benefit analysis. I propose, concretely, that the methodology currently used to quantify and monetize light physical morbidities, such as headaches, coughs, sneezes, nausea, or shortness of breath, should be extended to fear. The change in total fear-days resulting from regulatory intervention to remove or mitigate some hazard - like the change in total headache-days, cough-days, etc. - should be predicted and then monetized at a standard dollar cost per fear-day determined using contingent-valuation interviews. Part I of the Article rebuts various objections to fear assessment.
Deliberation costs are a worry, but this is not unique to fear, and can be handled through threshold rules specifying when the expected benefits of fear assessment appear to outweigh the incremental deliberation costs of quantifying and monetizing fear. Other objections reduce to the deliberation-cost objection, or are misconceived: irrational as well as rational fears are real harms for those who experience them; fear can be quantified; worries about uncertainty and causality reduce to deliberation costs; the possibility of reducing fear through information rather than prescription means that agencies should look at a wider range of policy options, not that they should evaluate options without considering fear costs; the concern that the very practice of fear assessment will, on balance, increase fear by creating stronger incentives for fear entrepreneurs seems overblown; and fear is a welfare setback, whether or not it flows from political views. Part II argues for what I call the unbundled valuation of fear. A given hazard may cause a package of harms for each individual exposed to it: the fear of the hazard, an incremental risk of death, and perhaps other harms. Why not, then, price the harms as a package? In particular, why not predict the deaths caused by a given hazard and avoided by regulatory intervention, and multiply those deaths by a value of statistical life (VOSL) number designed to incorporate a fear premium? For reasons explored in Part II, the bundled valuation of fear and death, through fear premia attached to VOSLs or in other ways, is misguided. Rather, these two kinds of harms should be separately quantified and monetized. Parts III and IV consider how agencies should monetize fear states. How should the price of a fear-day be determined? Part III argues that contingent-valuation techniques are more appropriate, here, than revealed-preference techniques. Part IV discusses the design of contingent-valuation interviews for pricing fear.
The instrumental as well as intrinsic costs of fear may need to be accounted for; the respondents to these interviews should, optimally, be calm rather than fearful; and interviews designed to secure a QALY valuation of fear can also be useful, but only if the QALY scale is understood as a welfare scale and calibrated in an inclusive way. In arguing for fear assessment as a component of cost-benefit analysis, this Article contributes to the literature on cost-benefit analysis and also stakes out a novel position in scholarly debates about risk regulation. One standard view (call it the simple technocratic view) argues that popular fear should play no role in determining regulatory choice; instead, regulators should focus on minimizing or achieving technologically feasible or cost-justified reductions in death, illness and injury. A standard opposing view (call it the populist view) is that popular perceptions of the riskiness of hazards, in turn substantially influenced by how feared or dreaded the hazards are, should be determinative. The account presented in this Article is technocratic, not populist; risk regulators should seek to maximize social welfare, and cost-benefit analysis is a technocratic tool for doing just that. Yet technocratic risk regulation need not focus narrowly on mortality and morbidity. It should focus (prima facie) on all constituents of welfare, including fear and anxiety. Although popularly perceived risk should not determine risk regulation, since the fear and anxiety that drives popular risk perception is simply one welfare impact among the multitude of costs and benefits flowing from hazards, neither should risk regulation reduce to counting deaths or injuries - to a crude minimization of physical impacts or a simplistic balancing in which death- and injury-reduction are the sole regulatory benefits that are seen to counterbalance compliance costs.
    Keywords: risk, fear, health, safety, environment,
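The fear-day proposal in the abstract is an accounting exercise: predict the change in total fear-days from an intervention, then multiply by a per-fear-day price elicited through contingent-valuation interviews. A minimal sketch of that calculation (the function name and every figure below are invented for illustration; the paper does not supply numbers):

```python
# Hypothetical sketch of the proposed fear-day accounting (all figures invented):
# predict the change in total fear-days, then monetize at a standard price per
# fear-day, by analogy with headache-days or cough-days.

def fear_benefit(fear_days_before: float, fear_days_after: float,
                 price_per_fear_day: float) -> float:
    """Monetized benefit of reduced fear from a regulatory intervention."""
    return (fear_days_before - fear_days_after) * price_per_fear_day

# Suppose regulating a hazard cuts predicted fear-days in the exposed
# population from 2,000,000 to 500,000, and contingent-valuation interviews
# price a fear-day at $15.
benefit = fear_benefit(2_000_000, 500_000, 15.0)
print(f"fear-reduction benefit: ${benefit:,.0f}")
```

On the abstract's unbundled-valuation argument, this figure would enter the cost-benefit ledger alongside, not folded into, the separately monetized mortality benefit.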
  9. By: Paul Robinson (University of Pennsylvania Law School)
    Abstract: Does criminal law deter? Given available behavioral science data, the short answer is: generally, no. Having a criminal justice system that imposes liability and punishment for violations deters. Allocation of police resources or the use of enforcement methods that dramatically increase the capture rate can deter. But criminal law itself, the substantive rules governing the distribution of criminal liability and punishment, does not materially affect deterrence, we will argue, contrary to what law- and policy-makers have assumed for decades. Our claim is not that criminal law formulation can never influence behavior but rather that the conditions under which it can do so are not typical. By contrast, criminal law makers and adjudicators formulate and apply criminal law rules on the assumption that they nearly always influence conduct. And it is that working assumption that we find so disturbing and so dangerous. Our skepticism of criminal law's deterrent effect is derived in large part from a behavioral science research critique of the alleged path of influence from doctrine to behavioral response. That critique finds that the transmission of influence faces so many hurdles and is so unlikely to clear them all that it will be the unusual instance in which the doctrine can ultimately influence conduct. Yet this is a startling conclusion because it contradicts the common wisdom and standard practice of law makers and scholars. If, as appears to be the case, doctrinal formulation does not affect conduct, then most of the criminal law analysis of the past forty years has been misguided. Where doctrine has been formulated to maximize deterrence, overriding other goals, such as doing justice, such deterrence analysis has frustrated those other goals for no apparent benefit. Let us briefly sketch our line of argument: The behavioral sciences increasingly call into question the assumption of criminal law's ex ante influence on conduct.
Potential offenders commonly do not know the legal rules, either directly or indirectly, even those rules that have been explicitly formulated to produce a behavioral effect. Even if they know the rules, the cost-benefit analysis potential offenders perceive, which is the only cost-benefit analysis that matters, commonly leads to a conclusion suggesting violation rather than compliance, either because the perceived likelihood of punishment is so small, or because it is so distant as to be highly discounted, or for a variety of other reasons, or some combination of them. And, even if they know the legal rules and perceive a cost-benefit analysis that urges compliance, potential offenders commonly cannot or will not bring such knowledge to bear to guide their conduct in their own best interests, such failure stemming from a variety of social, situational, or chemical influences. Even if no one of these three hurdles is fatal to law's behavioral influence, their cumulative effect typically is. Part I reviews the behavioral science evidence. But some might argue that, although a behavioral science analysis of criminal law's action path says doctrinal formulation can rarely influence conduct, it might in fact do so in some mysterious way presently beyond the understanding of human knowledge. We can test this argument by looking at the effect of specific doctrinal formulations on the crime rates they are intended to lower. The available studies of what one might call 'aggregated effects' -- that is, studies that do not concern themselves with how a deterrent effect might come about but look strictly to whether an effect of doctrine on crime rate can be found -- seem consistent with our conclusion above. A majority of these studies find no discernible deterrent effect of doctrinal formulation, which does not surprise us. But others claim to find such an effect and we must explain these results.
Even if the mechanism of transmission from doctrinal formulation to behavioral influence is unknown, the finding of such a connection may be inconsistent with some of our claims and must be dealt with, especially since many deterrence advocates will speculate that the causal mechanism in the 'black box' is deterrence. We find that some of the aggregated-effect studies are simply poorly done and cannot reliably support a conclusion that doctrine affects crime rates. Others seem undeniably to have found an effect on crime rate, but we suspect that much if not most of this is the result of incapacitative rather than deterrent effects. Increasing prison terms, for example, could be taken as providing a greater deterrent threat, but a resulting reduction in crime may be the result of the isolating effect of longer incarcerations rather than their deterrent effect. But even if one concludes that some of these studies show a deterrent effect from doctrinal formulation, which we do, the specific circumstances of those studies serve generally to affirm our points about the prerequisites of deterrence. That is, these studies involve rules and target audiences that do what is rarely done: satisfy the prerequisites of deterrence. The circumstances of these studies only serve to illustrate that such prerequisites are not typically satisfied. Part II reviews these aggregated-effect studies.
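The perceived cost-benefit hurdle described above can be made concrete with a toy expected-value calculation (every number and name below is invented for illustration; the paper offers no such formula): a nominally severe sanction shrinks dramatically once the offender's low perceived probability of punishment and steep discounting of its delay are applied.

```python
# Hypothetical numbers: the offender's perceived, time-discounted expected cost
# of punishment can fall far below the perceived gain from the offense even
# when the nominal sanction is severe.

def perceived_cost(sanction: float, p_punishment: float,
                   annual_discount: float, years_delay: float) -> float:
    """Perceived expected cost = sanction x perceived probability, discounted for delay."""
    return sanction * p_punishment / (1 + annual_discount) ** years_delay

# A sanction valued at 1,000 units of disutility, a 3% perceived chance of
# punishment, a 40% annual discount rate, and a two-year expected delay.
cost = perceived_cost(1_000.0, 0.03, 0.40, 2.0)
gain_from_offense = 50.0

assert cost < gain_from_offense  # the perceived calculus urges violation
print(f"perceived cost: {cost:.1f} vs perceived gain: {gain_from_offense:.1f}")
```

This captures only the second of the abstract's three hurdles; the first (ignorance of the rules) and third (failure to act on the calculus) operate before and after it.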
  10. By: Paul Robinson (University of Pennsylvania Law School)
    Abstract: The paper criticizes criminal law scholarship for helping to construct and failing to expose analytic structures that falsely claim a higher level of rationality and coherence than current criminal law theory deserves. It offers illustrations of three such illusions of rationality. First, it is common in criminal law discourse for scholars and judges to cite any of the standard litany of "the purposes of punishment" -- just deserts, deterrence, incapacitation of the dangerous, rehabilitation, and sometimes other purposes -- as a justification for one or another liability rule or sentencing practice. The cited "purpose" gives the rules an aura of rationality, but one that is, in large part, illusory. Without a principle defining the interrelation of the "purposes," nearly any rule can be justified by some "purpose of punishment." Thus, a decision maker can switch among distributive principles as needed to provide an apparent rationale for whichever rule the person prefers, even if that preference is not based on rational criteria. A second example is found in a central mechanism for determining an offender's blameworthiness: the use of an individualized objective standard. The widely used mechanism avoids the problems acknowledged to attend a strictly objective standard. A person's situation and capacities are central to an assessment of whether a person can be fairly blamed for a violation, and the individualized objective standard allows the decision maker to take these into account. At the same time, the mechanism avoids reversion to a completely subjective standard, which might exempt many blameworthy cases from liability. In reality, however, the mechanism only shifts the form of the problem. Codes that use the individualized objective standard fail to provide a principle by which one can determine those characteristics of an offender with which the objective standard ought to be individualized and those with which it ought not.
Without a governing principle, the issue again is left to the discretion of decision makers, with no guidance as to how that discretion is to be exercised. A final illusion obscures whether the criminal justice system is, in fact, in the business of "doing justice." The "criminal justice" system imposes "punishment" and encourages moral condemnation of those found "guilty" of "crimes." But while the system cultivates its doing-justice image, it increasingly shifts to a system of essentially preventive detention, where a violator's sanction is derived more from what is needed to incapacitate him from committing future offenses than from what is needed to punish him for a past offense. There are great advantages to the deception, but also serious costs and inefficiencies. The paper discusses why some illusions are more objectionable than others and what the existence of such illusions says about modern criminal law scholarship. INTRODUCTION We criminal law theorists like to think that we are moving existing criminal law theory to a higher plateau of rationality, furthering current understanding beyond that which was given to us by our predecessors in criminal law theory. But the truth is that our advances in rationality often are less than they appear; they are sometimes advances only in their appearance of rationality. This article gives illustrations of three such illusions. As will become apparent, these illustrations are of different sorts: one demonstrates the inevitable limitation of any theoretical advance; one is an example of an inevitable limitation that has been unnecessarily retained over time; and one is a case of possibly cynical deception. The focus here is upon American criminal law and its development, but there is nothing special about American criminal law or theory on this point. These illusions are likely to have some form of counterpart in the criminal law theory of most legal systems.
    Keywords: Legal Scholarship,
  11. By: Richard Turkington (Villanova University School of Law)
    Abstract: In the article I examine the legality of the not uncommon practice of surreptitiously recording telephone conversations, videotaping activities and accessing e-mail or voicemail communications by parties in domestic disputes. First, I examine the important values that are implicated by such activities. These values include conversation, communication and physical privacy. Conversation (and communication) privacy are valued on both intrinsic and instrumentalist grounds. These values run into countervailing values in domestic conflict cases. These include parental autonomy in child rearing and the best interests of the child. I argue that the pervasiveness of electronic surveillance and the emerging tradition in our legal system to grant "mature minors" self-determination in respect to decisions traditionally left to parents need to count more in accommodating values in parental electronic surveillance cases. Section II examines the legality of electronic surveillance in domestic disputes under federal and state wiretap and stored communications acts and the common law privacy intrusion tort. Wiretap and stored communications acts are notorious for their lack of clarity. I endeavor in part of this section to lay foundations about the basic concepts and structure of these laws and identify areas where there is some clarity. Wiretap acts generally prohibit surreptitious electronic surveillance of conversations. However, electronic surveillance in domestic disputes may be legal if the surveillance is sanctioned under three exceptions. These are: (1) the marital conflict exception; (2) the telephone extension exception; and (3) the vicarious consent exception. I join other commentators in their criticism of the first two exceptions. The vicarious consent exception is of recent vintage and I argue that the exception ought to be junked for several reasons. 
These include inherent problems with the parental motive tests, the incompatibility of vicarious consent with the modern law of joint custody, and the non-identity of interests in parental electronic surveillance cases. I also suggest that the problems with the self-minimization role granted to parents under the vicarious consent exception are another reason to junk the defense. Access to e-mail and voicemail is regulated under federal and state stored communications acts. Unlike wiretap acts, these statutes do not contain exclusionary rules, and the fruits of violations of stored communications acts are still admissible in civil and criminal proceedings. Courts have construed stored communications acts not to apply to surreptitious access of e-mail and voicemail from computers in the home. In addition, silent video surveillance is not regulated under wiretap or stored communications acts. This development has elevated the role of the common law privacy intrusion tort in legal evaluation of access to e-mail in home computers and surreptitious video surveillance in the home. It is clear that surreptitious audio and video surveillance in domestic conflicts may constitute tortious conduct even if the conduct does not violate wiretap or stored communications acts. I examine the extent to which parties may have reasonable expectations of privacy in conversations and communications within the meaning of the privacy intrusion tort. I conclude that it would be tortious conduct for a spouse to access e-mail stored in a home computer if the e-mail is stored in a segregated account and the parties have maintained separate passwords. Much evidence that is obtained by illegal electronic surveillance may be admissible in marriage and custody proceedings because violations of stored communications acts and the privacy intrusion tort do not provide a basis for excluding evidence in civil court proceedings.
I suggest that protective orders based upon discovery rules and constitutional privacy rights may provide a way to protect privacy by excluding some communications or images from admissibility in judicial records.
  12. By: Susan Rose-Ackerman (Yale Law School)
    Abstract: Trust implies confidence, but not certainty, that some person or institution will behave in an expected way. A trusting person decides to act in spite of uncertainty about the future and doubts about the reliability of others' promises. The need for trust arises from human freedom. As Piotr Sztompka (1999: 22) writes, "facing other people we often remain in the condition of uncertainty, bafflement, and surprise." Honesty is an important substantive value with a close connection to trust. Honesty implies both truth-telling and responsible behavior that seeks to abide by the rules. One may trust another person to behave honestly, but honesty is not identical to trustworthiness. A person may be honest but incompetent and so not worthy of trust. Nevertheless, interpersonal relationships are facilitated by the belief that the other person has a moral commitment to honesty or has an incentive to tell the truth. Corruption is dishonest behavior that violates the trust placed in a public official. It involves the use of a public position for private gain. I focus on honesty and trust as they affect the functioning of the democratic state and the market. I am interested in informal interactions based on affect-based trust only insofar as they substitute for, conflict with, or complement the institutions of state and market. The relationship between informal connections and formal rules and institutions is my central concern. The institutions of interest are democratic political structures, bureaucracies, law and the courts, and market institutions. As Mark Warren points out, governments are needed in just those situations in which people cannot trust each other voluntarily to take others' interests into account. The state is a way of managing inter-personal conflicts without resorting to civil war.
Yet, this task is much more manageable if the citizenry has a degree of interpersonal trust and if the state is organized so that it is trusted by its citizens, at least along some dimensions. The state may be able to limit its regulatory reach if interpersonal trust vitiates the need for certain kinds of state action (Offe 1999). Conversely, if the state is reliable and even-handed in applying its rules, that is, if people trust it to be fair, state legitimacy is likely to be enhanced (Offe 1999, Sztompka 1999: 135-136). Thus, there are three interrelated issues. First, do trust and reliability help democracy to function, and if so, how can they be produced? Second, do democratic governments help create a society in which trustworthiness and honesty flourish? Third, given the difficulty of producing trustworthiness and honesty, how can institutional reform be used to limit the need for these virtues? This paper provides a framework for thinking about these broad questions. Section I organizes the research on trust especially as it applies to the relationship between trust and government functioning. With this background, section II discusses the mutual interaction between trust and democracy. The alternative of limiting the need for trust leads, in section III, to a discussion of corruption in government and commercial dealings. Corruption occurs when dishonest politicians and public officials help others in return for payoffs. Because their actions are illegal, they need to trust their beneficiaries not to reveal their actions. Corrupt officials are also, of course, betraying the public trust insofar as their superiors are concerned. Reforms here can involve a reorganization of government to limit the scope for lucrative discretionary actions.
Conversely, one might focus on changing the attitudes of both officials and private actors so that existing discretion is exercised in a fairer and more impartial manner. This paper analyzes the interactions between trust and democracy at a general level. However, its initial aim was to provide a context for a workshop at the Collegium Budapest on honesty, trust, and corruption in post-socialist countries. My companion paper in Kyklos makes that link explicit by bringing in survey evidence on public attitudes and behavior. Here, I conclude in section IV with some thoughts on the special character of the transition process. I highlight the tensions between interpersonal trust and trust in public institutions in the context of the transition to democracy and a market economy.
  13. By: Ronald Gilson (Stanford Law School); Alan Schwartz (Yale Law School)
    Abstract: Under standard accounts of corporate governance, capital markets play a significant role in monitoring management performance and, where appropriate, replacing management whose performance does not measure up. While the concept of a market for corporate control was once controversial, now even the American Law Institute acknowledges that "transactions in control and tender offers are mechanisms through which market review of the effectiveness of management's delegated discretion can operate." Recent case law in Delaware, however, appears to have altered dramatically the mechanisms through which the market for corporate control must operate. In particular, the interaction of the poison pill and the Delaware Supreme Court's development of the legal standard governing defensive tactics in response to tender offers has resulted in a decided, but as yet unexplained, preference for control changes mediated by means of an election rather than by a market. In this paper, we begin the evaluation of the preference for elections over markets that the Delaware Supreme Court has not yet attempted. We apply to this effort both doctrinal analysis and insights derived from an interesting but complex formal literature that has developed to understand how voting structures work in political contests and jury deliberations. Since these contexts differ substantially from transfers of corporate control, our analysis raises a question of fit: are voting models suitable for analyzing the question asked here? In our view, the models do illuminate the takeover institution, but if this view is ultimately rejected, then we will have eliminated what at least superficially appears to be a useful set of tools. Part 1 provides a very brief account of the doctrinal development that has given us the current bias for elections, focusing on the last step in the process: the Delaware Supreme Court's decision in Unitrin, Inc. v. American General Corp.
Part 2 then argues that economic efficiency, to be made precise in this context below, is the appropriate normative criterion for directing the choice between markets and elections as mechanisms for effecting a change in control that is resisted by management. Parts 3 and 4 next develop two models which show that elections can perform badly in proxy contests in which the principal issue is whether the target company should be sold or not. The first model assumes that shareholder voters are well informed about the economic variables of interest and the second supposes uncertainty about these variables. Market sales apparently lack the defects that these models show can affect elections. Current regulation, which facilitates competing bids, and current takeover technologies, which permit making them, would eliminate much of the inefficiency in takeover bidding that prior models have identified if bidders could make proposals directly to target shareholders. Then the target would be an auction seller. A standard result in auction theory is that if the seller chooses a revenue maximizing auction form it is a dominant strategy for bidders -- here potential acquirers -- to bid their true valuations. The dominant strategy for a maximizing seller then is to accept the winning bid. Therefore, target shareholders would not be in a strategic situation in an auction world. As a consequence, we focus on the possible inefficiencies arising from a judicial preference for elections (in which it is optimal for shareholders to act strategically) over markets as a takeover mechanism. In Part 5, we return to doctrine to show how Unitrin's preference for elections over markets may be eliminated without requiring the Delaware Supreme Court to confess error. We also suggest that, for jurisdictions with courts less influential than those in Delaware, a statutory change to permit more sales of control would be best.
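The dominant-strategy result the abstract invokes can be illustrated with the textbook truthful mechanism, the second-price sealed-bid auction (the paper does not specify which auction form it has in mind; this sketch, with invented names and numbers, shows only the general truthful-bidding logic):

```python
# Illustrative sketch (not the paper's model): in a second-price sealed-bid
# auction, no bidder gains by misreporting its valuation, which is the sense
# in which target shareholders face no strategic problem in an auction sale.

def second_price_payoff(my_bid: float, my_value: float,
                        rival_bids: list[float]) -> float:
    """Payoff = value minus highest rival bid if I win; zero otherwise."""
    best_rival = max(rival_bids)
    return my_value - best_rival if my_bid > best_rival else 0.0

value, rivals = 100.0, [80.0, 95.0]
truthful = second_price_payoff(value, value, rivals)  # win, pay 95: payoff 5
shaded = second_price_payoff(90.0, value, rivals)     # lose: payoff 0
inflated = second_price_payoff(120.0, value, rivals)  # win, still pay 95: payoff 5

# Bidding one's true value is never worse than shading or inflating the bid.
assert truthful >= shaded and truthful >= inflated
```

Because the price paid does not depend on the winner's own bid, misreporting can only change whether the bidder wins, never improve the terms, which is why the abstract treats an auction sale as free of the strategic voting problems the election models exhibit.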
  14. By: Surajeet Chakravarty; W. Bentley MacLeod
    Abstract: Most contracts that individuals enter into are not written from scratch; rather, they depend upon forms and terms that have been successful in the past. In this paper, we study the structure of form construction contracts published by the American Institute of Architects (AIA). We show that these contracts are an efficient solution to the problem of procuring large, complex projects when unforeseen contingencies are inevitable. This is achieved by carefully structuring the ex post bargaining game between the Principal and the Agent. The optimal mechanism corresponding to the AIA construction form is consistent with decisions of the courts in several prominent but controversial cases, and hence it provides an economic foundation for a number of the common-law excuses from performance. Finally, the case of form contracts for construction is an example of how markets, as opposed to private negotiations, can be used to determine efficient contract terms.
    JEL: D80 K20 L70
    Date: 2006
  15. By: Iris Claus; Veronica Jacobsen; Brock Jera (New Zealand Treasury)
    Abstract: The purpose of this paper is to develop an analytical framework for discussing the link between financial systems and economic growth. Financial systems help overcome an information asymmetry between borrowers and lenders. If they do not function well, economic growth will be negatively affected. Three policy implications follow. First, the analysis underscores the importance of maintaining solid legal foundations because the financial system relies on these. Second, it demonstrates the necessity for reforming tax policy as it applies to investment, as this is demonstrated to significantly affect the operation of the financial system. Finally, given the importance of financial development for economic growth, a more in-depth review of New Zealand’s financial system in the context of financial regulation and supervision would be valuable.
    Keywords: Economic growth; financial development; financial systems; financial regulation; legal system; institutions; tax
    JEL: G10 G20 G38 H25 K20 K34 O16
    Date: 2004–09
  16. By: Jane Frances (New Zealand Treasury)
    Abstract: This paper reviews the literature on institutions and explores the ways in which institutions can influence economic growth, with a particular focus on how institutions affect the use that firms make of human capital to improve their productivity. It discusses the influence of underlying institutions, such as law and order and secure property rights, on the general environment within which the economic activities of production and exchange take place. It also explores the influence of activity-specific institutions, such as labour market institutions, on firm decisions about resource use and innovation, and through these on economic activity and economic growth.
    Keywords: institutions; human capital; regulation; norms; firms; economic growth; New Zealand
    JEL: D00 D20 J24 K00 L51 O40 P00 Z13
    Date: 2004–09
  17. By: Stephen L. Ross (University of Connecticut); George C. Galster (Wayne State University)
    Abstract: Using paired testing data from the 1989 and 2000 Housing Discrimination Studies (HDS) and data on fair housing enforcement activities during the 1990s in the corresponding metro areas, we investigate whether 1989-2000 changes in the metropolitan incidence of racial/ethnic discrimination correlate with fair housing enforcement activity during the 1990s. We find that higher amounts of state and local enforcement activity supported by HUD through its FHIP and FHAP programs (especially the amount of dollars awarded by the courts) were consistently associated with greater declines in discrimination against black apartment-seekers and home-seekers. The evidence does not support similar conclusions for housing market discrimination against Hispanics, where the level of enforcement is much lower.
    Keywords: Housing Discrimination, Fair Housing Enforcement, Paired Testing
    JEL: J15 K42 L85 R30
    Date: 2005–05
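The net-incidence measure underlying such paired-testing comparisons can be sketched in a few lines. All figures below are invented for illustration; the actual HDS tests record many treatment dimensions, not just units shown.

```python
# Toy paired-testing tally (all outcomes invented; HDS data are far richer).
# Each pair records (units shown to the white tester, units shown to the
# minority tester) on otherwise matched visits to the same housing agent.
pairs_1989 = [(3, 1), (2, 2), (4, 2), (3, 3), (2, 1)]
pairs_2000 = [(2, 2), (3, 3), (3, 2), (2, 2), (4, 4)]

def net_incidence(pairs):
    """Share of tests favoring the white tester minus share favoring the minority tester."""
    favor_white = sum(w > m for w, m in pairs)
    favor_minority = sum(m > w for w, m in pairs)
    return (favor_white - favor_minority) / len(pairs)

# A decline in net incidence between waves is the kind of metro-level change
# the study relates to enforcement activity during the 1990s.
decline = net_incidence(pairs_1989) - net_incidence(pairs_2000)
```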
  18. By: Anupam Nanda (University of Connecticut)
    Abstract: At a time when at least two-thirds of US states have already mandated some form of seller's property condition disclosure statement, and there is a movement in this direction nationally, this paper examines the impact of seller's property condition disclosure law on residential real estate values, on the information asymmetry in housing transactions, and on the shift of risk from buyers and brokers to sellers, and attempts to ascertain the factors that lead to adoption of the disclosure law. The analytical structure employs parametric panel data models, semi-parametric propensity score matching models, and an event study framework, using a unique set of economic and institutional attributes for a quarterly panel of 291 US Metropolitan Statistical Areas (MSAs) and 50 US states spanning 21 years, from 1984 to 2004. Exploiting MSA-level variation in house prices, the study finds that the average seller may be able to fetch a higher price (about three to four percent) for the house if she furnishes a state-mandated seller's property condition disclosure statement to the buyer. The proportional hazard analysis of law adoption reveals that the number of disciplinary actions taken against real estate licensees, along with other institutional attributes, leads to adoption of the property condition disclosure law in a state.
    Keywords: Property Condition Disclosure, Housing Price Index, Propensity Score Matching, Event Study
    JEL: C14 K11 L85 R21
    Date: 2005–11
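The propensity-score matching step mentioned in the abstract can be roughly illustrated as below: each "treated" observation (an MSA under a disclosure law) is matched to the control observation with the nearest score, and the outcome gaps are averaged. The numbers and the simple one-nearest-neighbour rule are assumptions for illustration only; the paper's panel specifications are far more elaborate.

```python
# Hypothetical MSA-level data (scores and log price indices are made up):
# each tuple is (estimated propensity of adopting the law, log house price).
treated = [(0.62, 5.10), (0.71, 5.25), (0.55, 5.02)]
control = [(0.60, 5.05), (0.70, 5.18), (0.50, 4.99), (0.30, 4.80)]

def att_nearest_neighbor(treated, control):
    """Average treatment effect on the treated, via one-nearest-neighbour
    matching on the propensity score."""
    effects = []
    for score, outcome in treated:
        match = min(control, key=lambda c: abs(c[0] - score))  # closest score
        effects.append(outcome - match[1])
    return sum(effects) / len(effects)
```

With these made-up numbers the matched price gap is positive, the direction of the three-to-four-percent premium the paper reports.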
  19. By: Reyes Calderón (Facultad de Económicas, Universidad de Navarra); José Luis Alvarez (Facultad de Económicas, Universidad de Navarra)
    Abstract: Corruption is a well-established research topic which increasingly attracts interest, as attested by a growing body of literature. Nevertheless, disagreements persist not only about how to curb it, but even about its definition, causes and consequences. Such a lack of consensus reflects the complexity of the problem, a feature which is often cited but rarely analyzed. This paper aims to fill that gap. In particular, we first address the nature of corruption’s complexity by offering and analyzing an inventory of “generators of complexity” compiled from the available literature. Secondly, our paper draws from the key conclusions of that analysis to shed some light on the complex role played by corporations in corruption. Finally, we suggest that ethical aspects have to be considered in order to clarify many complex dilemmas around corruption and illuminate the corporate role in both domestic and foreign business activity.
    JEL: M21 K42
  20. By: Cheryl Boudreau (University of California, San Diego); Arthur Lupia (University of Michigan); Mathew D. McCubbins (University of California, San Diego); Daniel B. Rodriguez (University of San Diego)
    Abstract: How should judges interpret statutes? Like many others, we begin with the premise that statutory interpretation is a quest by judges to use the best available theory and information to determine “what statutes mean.” When seen in this light, two attributes of statutes merit attention:
    - Statutes are a form of communication.
    - Statutes contain a constitutionally-privileged command of the form “If you are in situation X, then you must do Y.”
    In other words, statutes are manufactured by a constitutionally authorized legislative body and are directed towards those who are constitutionally obligated to implement, enforce, or follow the law. We contend that the purpose of statutory interpretation is to produce a constitutionally legitimate decoding of statutory commands in cases where the meaning of X and/or Y is contested. This perspective leads us to a unique conclusion about the conditions under which judges can use legislative records to more accurately decode a statute’s X’s and Y’s. Our attention to communication leads to these conditions because it clarifies how legislators compress the ideas in their heads, and in the collective understandings they reach, into the descriptions of X’s and Y’s that appear on statutory parchment. Many prominent claims about statutory interpretation are based on unrealistic (or unrecognizable) theories of how people decide which words to use when attempting to convey ideas to others. The consequences of proceeding in such a manner include opaque interpretive prescriptions that are difficult to apply uniformly or to reconcile with constitutional imperatives. We argue that importing a few basic scientific propositions about human communication dynamics can aid those who seek to determine what a statute’s authors meant when they chose to include (or not to include) particular words in a piece of legislation. To this end, we build from well-known communication theories.
Their key insight is that successful inference about meaning requires that the manner in which the communication is decoded (i.e., the expansion of the signal into information) relate to aspects of its manufacture (i.e., the compression of information into a signal) in particular ways. This insight highlights the importance, for interpretive attempts, of understanding the procedures by which legislators choose their words. It also provides important clues about the kinds of informational sources that can be useful to those who want to clarify the meaning of a statute’s X’s and Y’s. Better understanding the relationship between legislative rules and communicative incentives provides an improved framework for sorting credible sources of information about a statute’s meaning from sources that should be ignored. To this end, we use a positive political theory of communication-based legislative decision-making to help readers differentiate conditions for communicative sincerity from conditions for grandstanding and dissembling. The theory clarifies the conditions under which particular kinds of legislative records can be useful in decoding a statute’s X’s and Y’s (e.g., when they include detailed testimony about the meaning of an X or Y by either constitutionally empowered actors or by actors to whom constitutional authority was rightly delegated) by examining when legislative rules do, and do not, induce sincere communication. These conditions provide a template for better understanding when judges should ignore claims about a statute’s meaning and when legislative records can aid their search for meaning.
    Keywords: statutory interpretation, strategic communication
    JEL: K
    Date: 2005–10–05
  21. By: Tomaso Duso; Klaus Gugler; Burcin Yurtoglu
    Abstract: Mergers that substantially lessen competition are challenged by antitrust authorities. Instead of blocking anticompetitive transactions outright, authorities may choose to negotiate with the merging parties and allow the transactions to proceed with modifications that restore or preserve competition in the affected markets. We study a sample of 167 mergers that were under the European Commission’s scrutiny from 1990 to 2002. We use an event study methodology to identify the potential anticompetitive effects of the mergers as well as the effects of the remedial provisions imposed on these transactions. Stock market reactions around the day of the merger’s announcement provide information on the first question, whereas stock market reactions around the Commission’s final decision day convey information about the outcome of the bargaining process between the authority and the merging parties. We first classify mergers according to their effects on competition and then develop hypotheses about the effects that remedies are supposed to achieve depending on the merger’s competitive outcome. We isolate several stylized facts. First, we find that remedies were not always appropriately imposed. Second, the market seems able to predict remedies’ effectiveness when they are applied in Phase I. Third, the market also seems able to form a good prior regarding Phase II clearances and prohibitions, but not regarding remedies. This can be due either to a measurement problem or to the merging firms’ increased bargaining power during the second phase of the merger review.
    SUMMARY (Remedies in EU merger control proceedings: a first empirical assessment): Mergers that reduce or prevent competition in a market are challenged by antitrust authorities. Instead of blocking anticompetitive mergers outright, the authorities may decide to negotiate with the parties and approve the merger subject to remedies through which competition in the relevant markets is restored or maintained. We analyze a sample of 167 mergers reviewed by the European Commission between 1990 and 2002. We use an event study methodology to examine both the potential anticompetitive effects of mergers and the effect of the remedies imposed by the authority. The reaction of the stock prices of the firms involved - both the merging parties and their competitors - around the day of the merger announcement provides information on the first question, while stock price reactions around the day of the Commission’s decision provide information on the outcome of the confidential negotiations between the authority and the parties involved. We first classify mergers according to their effects on competition and then develop hypotheses about the effects that remedies should achieve depending on the merger’s competitive impact. Our analysis yields several stylized facts. First, we find that the Commission did not always apply remedies adequately. Remedies do, however, appear to have an effect on the merging firms; they are particularly effective when imposed already in Phase I of the review. The market, by contrast, seems unable to produce a good prediction of the effect of remedies in Phase II. This result may stem either from a measurement problem or from the merging firms’ increased bargaining power during the second phase of merger control.
    Keywords: Merger Control, Remedies, European Commission, Event Studies.
    JEL: L4 K21 C12 C13
    Date: 2005–09
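A minimal market-model event study of the kind the authors describe can be sketched as follows. All returns are invented, and the single-regressor OLS stands in for the richer specifications an actual study would use.

```python
# Estimation window: daily returns well before the merger announcement.
market = [0.010, -0.020, 0.005, 0.015, -0.010, 0.020]
firm   = [0.012, -0.018, 0.004, 0.017, -0.008, 0.021]

def market_model(x, y):
    """OLS fit of the market model R_firm = alpha + beta * R_market."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    return my - beta * mx, beta

alpha, beta = market_model(market, firm)

# Event window around the (hypothetical) announcement day: the cumulative
# abnormal return (CAR) is the firm's return minus the market-model prediction,
# summed over the window.
event_market = [0.004, -0.003, 0.006]
event_firm   = [0.030,  0.002, 0.011]   # positive jump at announcement
car = sum(rf - (alpha + beta * rm) for rf, rm in zip(event_firm, event_market))
```

A positive CAR at announcement is the sort of signal the paper reads for competitive effects; the same computation around the Commission's decision day speaks to the remedies question.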
  22. By: Oliver Gürtler (Department of Economics, BWL II, University of Bonn, Adenauerallee 24-42, D-53113 Bonn, Germany. Tel.: +49-228-739214, Fax: +49-228-739210); Christian Grund (Department of Economics and Business, RWTH Aachen University, Templergraben 59, D-52056 Aachen, Germany. Tel.: +49-241-8096381)
    Abstract: Economic approaches often argue that reputation considerations influence the behavior of individuals or firms and that reputation influences market outcomes. Empirical evidence is rare, though. In this contribution we argue that a positive seller reputation should have a positive effect on selling prices. Analyzing auctions of popular DVDs at eBay, we indeed find support for this hypothesis. Second, we debunk the myth that it is promising for eBay sellers to let their auctions end in the evening, when many potential buyers may be online.
    Keywords: Reputation, eBay feedback system, auction
    JEL: D44 D82 K12 L81
    Date: 2006–05
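The reputation-price relationship can be illustrated with a toy regression of price on log feedback score. The auctions below are fabricated, and a real analysis would control for the item sold, auction timing, and shipping terms.

```python
import math

# Fabricated eBay DVD auctions: (seller feedback score, final price in EUR).
auctions = [(10, 8.50), (120, 9.40), (450, 9.90), (900, 10.30), (2000, 10.80)]

def ols_slope_intercept(x, y):
    """Simple OLS of y on x; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

log_rep = [math.log(score) for score, _ in auctions]
price = [p for _, p in auctions]
slope, intercept = ols_slope_intercept(log_rep, price)
# A positive slope says better-rated sellers fetch higher prices, which is
# the hypothesis the paper tests on real auction data.
```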

This issue is ©2006 by Jeong-Joon Lee. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.