Scientific Controversies: A Socio-Historical Perspective on the Advancement of Science


The Association for Computing Machinery (ACM), too, is making changes to encourage researchers to consider societal impacts: on 17 July, it published an updated version of its ethics code, last redrafted in 1992. The guidelines call on researchers to be alert to how their work can influence society, to take steps to protect privacy, and to continually reassess technologies whose impact will change over time, such as those based on machine learning.

[Photo: Brent Hecht. Credit: Thomas Mildner]

The idea is not to try to predict the future but, on the basis of the literature, to identify the expected side effects or unintended uses of a given technology.


A sizeable population in computer science thinks that this is not our problem. But while that perspective was common ten years ago, I hear it less and less these days. Critics of our proposal mostly had an issue with the mechanism: a worry was that papers might be unfairly rejected because an author and a reviewer might disagree over what counts as a negative impact. There are some more difficult cases to consider, for instance theory papers that are far from practice. We need to be able to say, based on existing evidence, what the confidence is that a given innovation will have a given side effect.

We believe that in most cases, no changes are necessary for peer reviewers to adopt our recommendations: it is already in their existing mandate to ensure intellectual rigour in all parts of a paper. So researchers can begin to implement it immediately. But a team from the ACM's Future of Computing Academy (FCA) is also working on more top-down ways of getting reviewers across the field to adopt the proposal, and we hope to have an announcement on this front shortly.

How will you reach them? A couple of months after we released our post, Google came out with its AI principles for research, and we were really heartened to see that those principles echo a tonne of what we put in the post. If the peer-review policy only prompts authors to discuss negative consequences, how will it improve society? Disclosing negative impacts is not just an end in itself, but a public statement of new problems that need to be solved.


Two concepts of criticism are found in Popper's works, and these can be described as the logical and practical senses of falsification. The logical sense of falsification is just the structure of a modus tollens argument, in which a hypothesis is falsified by the demonstration that one of its logical consequences is false.
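
Schematically, where H is a hypothesis and O one of its observable consequences, the logical sense of falsification is just:

\[
H \rightarrow O, \qquad \neg O \ \vdash\ \neg H
\]

If, for example, H entails that all swans are white, then the observation of a single black swan is, as a matter of logic alone, sufficient to refute H.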

This is one notion of criticism, but it is a matter of formal relations between statements. The practical sense of falsification, by contrast, refers to the efforts of scientists to demonstrate the inadequacies of one another's theories, for instance by exposing failed predictions or internal inconsistencies. This is a social activity. For Popper the methodology of science is falsificationist in both its logical and practical senses, and science progresses through the demonstration by falsification of the untenability of theories and hypotheses.

The work of Mill, Peirce, and Popper is a resource for philosophers presently exploring the social dimensions of scientific knowledge. However, the current debates are framed in the context of developments in both the philosophy of science and the history and social studies of science following the collapse of the logical empiricist consensus. The philosophers of the Vienna Circle are conventionally associated with an uncritical form of positivism and with the logical empiricism that replaced American pragmatism in the mid-twentieth century.

According to some recent scholars, however, they saw natural science as a potent force for progressive social change (Cartwright, Cat, and Chang; Giere and Richardson, eds.). One development of this point of view leads to scientism, the view that any meaningful question can be answered by the methods of science; another leads to inquiry into what social conditions promote the growth of scientific knowledge. Logical empiricism, the version of Vienna Circle philosophy that developed in the United States, focused on the logical, internal aspects of scientific knowledge and discouraged philosophical inquiry into the social dimensions of science.

The new sociology of science provoked a counter-response among philosophers. These responses are marked by an effort to acknowledge some social dimensions to scientific knowledge while at the same time maintaining its epistemological legitimacy, which they take to be undermined by the new sociology. At the same time, features of the organization of scientific inquiry compel philosophers to consider their implications for the normative analysis of scientific practices. The second half of the twentieth century saw the emergence of what has come to be known as Big Science: the organization of large numbers of scientists bringing different bodies of expertise to a common research project.

The Manhattan Project, which developed the first atomic weapons, is an early and famous example. Theoretical and experimental physicists located at various sites across the country, though principally at Los Alamos, New Mexico, worked on sub-problems of the project under the overall direction of J. Robert Oppenheimer. While academic and military research have since been to some degree separated, much experimental research in physics, especially high-energy particle physics, continues to be pursued by large teams of researchers. Research in other areas of science as well, for example the work comprehended under the umbrella of the Human Genome Project, has taken on some of the properties of Big Science, requiring multiple forms of expertise.

In addition to the emergence of Big Science, the transition from small-scale university (or even amateur) science to institutionalized research with major economic impacts, supported by national funding bodies and connected across international borders, has seemed to call for new ethical and epistemological thinking. Moreover, the consequent dependence of research on central funding bodies and, increasingly, on private foundations or commercial entities prompts questions about the degree of independence of contemporary scientific knowledge from its social and economic context.

John Hardwig articulated one philosophical dilemma posed by large teams of researchers. Each member or subgroup participating in such a project is required because each has a crucial bit of expertise not possessed by any other member or subgroup. This may be knowledge of a part of the instrumentation, the ability to perform a certain kind of calculation, or the ability to make a certain kind of measurement or observation. The consequence is an experimental result (for example, the measurement of a property such as the decay rate or spin of a given particle) the evidence for which is not fully understood by any single participant in the experiment.

This leads Hardwig to ask two questions, one about the evidential status of testimony and one about the nature of the knowing subject in these cases. With respect to the latter, Hardwig says that either the group as a whole, but no single member, knows, or it is possible to know vicariously.

Neither of these is palatable to him.


Talking about the group or the community knowing smacks of superorganisms and transcendent entities, and Hardwig shrinks from that solution. Vicarious knowledge, knowing without oneself possessing the evidence for the truth of what one knows, requires, according to Hardwig, too much of a departure from our ordinary concepts of knowledge. The first question is, as Hardwig notes, part of a more general discussion about the epistemic value of testimony. Much of what passes for common knowledge is acquired from others.

We depend on experts to tell us what is wrong or right with our appliances, our cars, our bodies. Indeed, much of what we later come to know depends on what we previously learned as children from our parents and teachers. We acquire knowledge of the world through the institutions of education, journalism, and scientific inquiry. Philosophers disagree about the status of beliefs acquired in this way. Some philosophers, as Locke and Hume seem to have, argue that only what one has observed oneself can count as a good reason for belief, and that the testimony of another is therefore never on its own sufficient warrant for belief.

A number of philosophers have recently offered alternative analyses focusing on one or another element in the problem.


In principle, the results reported in any scientific paper can be checked by others repeating the relevant observations or experiments. In practice, however, only some results are so checked, and many are simply accepted on trust. Not only must positive results be accepted on trust, but so must claims of failure to replicate, as well as other critiques. Thus, just as in the non-scientific world information is accepted on trust, so in science knowledge grows by depending on the testimony of others.

What are the implications of accepting this fact for our conceptions of the reliability of scientific knowledge? The philosopher of biology David Hull argued in Science as a Process (1988) that because the overall structure of reward and punishment in the sciences is a powerful incentive not to cheat, further epistemological analysis of the sciences is unnecessary. What scientists have to lose is their reputation, which is crucial to their access to grants, collaborations, prizes, and so on.

So the structure itself guarantees the veridicality of research reports. The record, however, suggests otherwise. The advocates of cold fusion were sincerely convinced that their experiments had produced the phenomenon, and there have also been cases of outright fraud. Thus, even if the structure of reward and punishment is an incentive not to cheat, it does not guarantee the veridicality of every research report. Credit, or recognition, accrues to individuals to the extent that they are perceived as having contributed to the community's goal of advancing scientific knowledge.

That is, individual scientists seek reputation and recognition: to have their work cited as important and as necessary to further scientific progress. Cheating, whether by misreporting experimental results or by other misconduct, is punished by loss of reputation.


But this depends on strong guarantees of detection. Absent such guarantees, there is as strong an incentive to cheat (to try to obtain credit without necessarily having done the work) as not to cheat. Both Alvin Goldman and Philip Kitcher have treated the potential for premature, or otherwise improperly interested, reporting of results to corrupt the sciences as a question to be answered by means of decision-theoretic models. The decision-theoretic approach to problems of trust and authority treats both credit and truth as utilities. The challenge then is to devise formulas showing that actions designed to maximize credit also maximize truth.

Kitcher, in particular, develops formulas intended to show that even in situations peopled by non-epistemically motivated individuals (that is, individuals motivated more by a desire for credit than by a desire for truth), the reward structure of the community can be organized in such a way as to maximize truth and foster scientific progress. One consequence of this approach is to treat scientific fraud and value- or interest-infused science as the same problem. One advantage is that it incorporates the motivation to cheat into the solution to the problem of cheating.

But one may wonder how effective this solution really is. Increasingly, we learn of problematic behavior in science-based industries, such as the pharmaceutical industry: results are withheld or distorted, and authorship is manipulated. Hot areas, such as stem-cell research, cloning, and gene modification, have been marred by fraudulent research. Thus, even if the structure of reward and punishment is, in principle, an incentive not to cheat, it does not guarantee the reliability of every research report. The decision-theoretic model needs to include at least one more parameter: the anticipated likelihood of detection within a relevant timeframe.
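
As a minimal sketch of how such a parameter might figure (an illustrative toy model, not Goldman's or Kitcher's actual formalism): let c be the credit a published result earns, w the cost of producing it honestly, s the reputational sanction if fabrication is detected, and p the anticipated probability of detection within the relevant timeframe. Fabrication then has the higher expected payoff whenever

\[
(1-p)\,c - p\,s \;>\; c - w, \quad\text{that is, whenever}\quad p < \frac{w}{c+s}.
\]

Unless the anticipated likelihood of detection is high relative to the cost of honest work, the credit economy by itself rewards cheating.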

Community issues have also been addressed under the banners of research ethics and of peer review. One might think that the only ethical requirements on scientists are to protect their research subjects from harm and, as professional scientists, to seek truth above any other goals. This presupposes that seeking truth is a sufficient guide to scientific decision-making. Heather Douglas, in her critical study of the ideal of value-freedom, rejects this notion. Douglas draws on her earlier study of inductive risk to press the point that the countless methodological decisions required in the course of carrying out a single piece of research are underdetermined by the factual elements of the situation and must be guided by an assessment of the consequences of being wrong.

Science is not value-free, but it can be protected from the deleterious effects of values if scientists take steps to mitigate the influence of inappropriate values. One step is to distinguish between direct and indirect roles of values; another is the articulation of guidelines for individual scientists. Values play a direct role when they provide direct motivation to accept or reject a theory; they play an indirect role when they bear on evaluating the consequences of accepting or rejecting a claim, thus influencing what will count as sufficient evidence to accept or reject it. The responsibility of scientists is to make sure that values do not play a direct role in their work and to be transparent about the indirect roles of values.

Steel and Whyte examine testing guidelines developed by pharmaceutical companies to point out that the very same decision may be motivated by values playing a direct role or an indirect role. Elliott questions whether only harmful consequences should be considered.


If science is to be useful to policy-makers, then questions of relative social benefit should also be permitted to play a role. This point will be pursued below. Torsten Wilholt argues that the research situation is more complicated than the epistemic vs. non-epistemic dichotomy suggests. Most research involves both results expressed statistically, which requires choosing a significance threshold and balancing the chances of Type I vs. Type II error, and multiple steps each requiring methodological decisions. These decisions, Wilholt argues, represent trade-offs among the reliability of positive results, the reliability of negative results, and the power of the investigation. In making these trade-offs, the investigator is perforce guided by an evaluation of the consequences of the various possible outcomes of the study.
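
A short numerical illustration of the first of these trade-offs (a hypothetical one-sided z-test with an assumed effect size, not an example from Wilholt's work):

    from scipy.stats import norm

    # Toy one-sided z-test: the statistic is N(0, 1) under H0 and N(delta, 1) under H1.
    delta = 1.5  # assumed (hypothetical) true effect size

    for alpha in (0.10, 0.05, 0.01):
        z_crit = norm.ppf(1 - alpha)     # rejection threshold fixed by the Type I error rate
        beta = norm.cdf(z_crit - delta)  # resulting Type II error rate
        print(f"alpha={alpha:.2f}  beta={beta:.2f}  power={1 - beta:.2f}")

Tightening the significance threshold (lowering alpha) makes positive results more reliable but raises beta, making negative results less reliable and the study less powerful. Choosing among such thresholds is exactly the kind of consequence-weighing Wilholt describes.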