Ethics Codes: History, Context, and Challenges

This document provides background on the history and development of ethics codes, focused on three fields connected to the ethical issues in big data: computing ethics, biomedical ethics and journalism ethics.

Executive Summary

This document provides background on the history and development of ethics codes, focused on three fields connected to the ethical issues in big data: computing ethics, biomedical ethics and journalism ethics. It considers how codes were developed to guide research practice and shape professional obligations. We note that the ACM and the IEEE both have ethics guidelines that are over 20 years old, predating the popularization of the internet and the challenges that come with big data research. This survey of ethics codes is not an exhaustive look at scholarship about bioethics, computing ethics or journalism ethics, but is designed to prompt the Council to think about how ‘data ethics’ processes could be established for NSF projects. Could a ‘data ethics plan’ be built into grant applications, similar to the existing requirement of a ‘data management plan’? If so, what would it address?

History and trends in ethics codes/policies

Ethics codes are written in response to contemporary conditions, and by attending to their history we can see why they became necessary and consider the need for new or revised codes. In general, we note that the most influential ethics codes are hard-won responses to major disruptions, especially medical and behavioral research scandals. Such disruptions re-open questions of responsibility, trust and institutional legitimacy, and thus call for codification of new social and political arrangements.

In the mid-20th century there was a proliferation of important ethics codes that still guide professional behavior and research activities, even for organizations that do not conduct research. Prior to this time, there were relatively few professional ethics codes; today they are widespread and seem nearly obligatory. In the 1940s and 1950s researchers struggled to respond to the scientific and medical atrocities of the Nazi regime. Defense attorneys in the “Nazi doctor trials” at Nuremberg argued that their clients could not be held accountable for war crimes because there were no widely recognized research ethics standards that would have prohibited their experiments. Although the doctors were still found guilty, the judges found enough merit in that claim that they offered the 10-point Nuremberg Code, which set the stage for all subsequent research codes and policies (including the World Medical Association’s 1948 Geneva Declaration and 1964 Helsinki Declaration, see below).

The major social disruptions of the 1960s and 1970s in the US and Western Europe also coincided with continued research scandals in the US (e.g., Tuskegee, Willowbrook, Milgram, Stanford Prison, among others), indicating that the Nuremberg Code and subsequent codes were inadequate without more legal codification and enforcement mechanisms. Particularly in the US, the public was substantially less trusting of inherited institutional authority, and the subsequent ethics codes (particularly the Belmont Report and the formation of IRBs) responded to a need for routinized skepticism and critical assessment (Cassell, 2000; Jasanoff, 2005).

There are several principles that can be found at the core of contemporary ethics codes across many domains:

  • respect for persons (autonomy, privacy, informed consent)
  • balancing of risk to individuals with benefit to society
  • careful selection of participants
  • independent review of research proposals
  • self-regulating communities of professionals
  • funding dependent on adherence to ethical standards

In biomedicine, ethics codes and policies have tended to follow scandals (a.k.a. “tombstone policy”). For example, ethical reforms that followed the distribution of dangerous and untrustworthy medicines (e.g., sulfanilamide and thalidomide) coincided with more rigorous, standardized controls for demonstrating safety and efficacy. The formalized protocols for clinical trials are a hybrid of ethical policies and standards of evidence for the efficacy and safety of proposed drugs. Similarly, in journalism ethics codes we can often find claims about journalistic virtue twinned with claims about the proper way of handling evidence and truth-telling. In the Society of Professional Journalists’ code (detailed below), the section “Seek Truth and Report It” instructs journalists to, “Boldly tell the story of the diversity and magnitude of the human experience. Seek sources whose voices we seldom hear.” In the same principle we see an injunction to cultivate the virtue of “boldness” and instructions about what types of sources and evidence are necessary to conduct bold journalism.

Notably, many of the most pressing ethical issues in biomedicine today are related to the rise of data-intensive medicine, such as the return of results to study participants (Fullerton et al., 2012; Appelbaum et al., 2014), the inclusion of genomics results in medical records (Hazin et al., 2013), and how researchers ought to respect the rights of indigenous participants whose materials and data were collected under dubious circumstances (TallBear and Reardon, 2013; Radin, 2014). Similarly, researchers and ethicists are finding that ‘re-consent’ and/or open-ended consent models are a needed response to the new-found capability of reusing and repurposing biomedical tissues and data collected for many different purposes (Suver et al., 2013; Koenig, 2014; McGuire et al., 2011). In each of these cases, data-intensive research is pushing the limits of established ethics conventions, such as long-standing informed consent practices. As big data techniques allow biomedicine to draw new connections between previously disparate databases, collections and phenomena, it is reasonable to ask whether ethical conundrums may proliferate in ways that current ethics codes and practices cannot easily accommodate.[1] This is not to say that the core principles of bioethics are not adequate guides to handling data-intensive biomedicine—that remains to be seen. Rather, the institutionalized practices, policies and codes have become an object of concern and experimentation in light of big data techniques.

Ethics codes in computing have followed a somewhat different trajectory. Rather than reacting to scandals, major policies in computing ethics have presaged many of the issues that are now experienced as more urgent in the context of big data. A series of federal reports from 1973 to 2000 (detailed below in the examples of ethics codes) identified many of the fundamental issues currently active in big data ethics, such as the need for protection against intrusions into citizens’ privacy, risks arising from dual use of records, and the need for effective measures to correct false data. Yet despite this strong start, the major computing societies now have ethics codes that are two decades old, dating from the start of the internet age (Anderson et al., 1993; Oz, 1993). Even as early as the 1990s, critics were noting that the major ethics codes of computing societies, such as the ACM, IEEE, and DPMA (now AITP), were out of date in terms of their ability to address the quickly shifting norms and technical capacity of the Internet and data-intensive society, particularly because the advice offered by the codes is largely generic (Martin and Martin, 1990; Oz, 1993). Many of the principles expressed by these codes, such as honesty and accuracy, apply to ethical professionals broadly. However, there is no specific reference to or guidance about the pressing challenges of the profession, such as informed consent, how to manage potential harms, the role of third parties accessing data, and the threats to privacy.

Question: Does “big data” constitute a disruption that calls for revisions of existing codes and research practices?

Journalism ethics codes differ from those of biomedicine and computing by virtue of the emphasis placed on individual character and independent action. Journalism and modern science co-evolved as practices of objective truth-telling (Ward, 2005). As journalism became ever more important to the rising model of liberal citizenship, journalists developed a model of professional ethics that emphasized individual virtue and service to society (Meyers, 2010). This loose model of identity-based ethics can be classified as an ideology rather than a code (Deuze, 2005). Indeed, the ethics codes for journalists are comparatively thin and mostly focus on identity and character. The business model and ethics codes of professional journalism are largely built around an ethos of truth tellers serving a social good as independent actors. For example, both the Society of Professional Journalists and the UK-based National Union of Journalists (both detailed below) are structured around a statement that “A journalist should:” followed by a set of principles and practices. The National Union of Journalists, for instance, states that “A journalist: Strives to ensure that information disseminated is honestly conveyed, accurate and fair.”

Such identity-driven imperatives are not found in most professional ethics codes, which tend to assert that membership in an organization obligates members with a particular set of duties (an exception is the American Medical Association, which also emphasizes personal virtue). The rise of new models of journalism on the Internet has put pressure on the established economic and ethical model of the industry, especially by allowing many more people (and algorithms) to participate in news-making.

In conclusion, the histories of ethics codes indicate that major social and technological disruptions initiate important rounds of critical ethical reflection or reformation.

What do professional/institutional/disciplinary ethics codes attempt to accomplish?

Professional organizations that have ethics codes for members can have different purposes for those codes. In the US, four major computing professional societies have substantially different codes for their members due to their different missions (Oz, 1993). Analyses of ethics codes note a wide range of purposes for such codes (Frankel, 1989; Gaumnitz and Lere, 2002; Kaptein and Wempe, 1998). These purposes can be classified as ‘inward facing’ and ‘outward facing’:

Inward facing goals:

  • provide guidance when existing implicit norms and values are not sufficient, that is, guidance for a novel situation
  • reduce internal conflicts and strengthen the sense of common purpose among members of the organization
  • satisfy internal criticism from members of the profession
  • create generalized rules for individuals and organizations that have responsibilities for important human goods
  • establish role-specific guidelines that instantiate general principles as particular duties
  • establish standards of behavior toward colleagues, students/trainees, employees, employers, and clients
  • deter unethical behavior by identifying sanctions and by creating an environment in which reporting unethical behavior is affirmed
  • provide support for individuals when faced with pressures to behave in an unethical manner

Outward facing goals:

  • protect vulnerable populations who could be harmed by the profession’s activities
  • protect/enhance the good reputation of and trust for the profession
  • establish the profession as a distinct moral community worthy of autonomy from external control and regulation
  • provide a basis for public expectations and evaluation of the profession
  • serve as a basis for adjudicating disputes among members of the profession and between members and non-members
  • create institutions resilient in the face of external pressures
  • respond to past harms done by the profession

Frankel (1989) notes that all ethics codes serve multiple interests and therefore have multiple, sometimes conflicting, dimensions. He offers a taxonomy of aspirational, educational, and regulatory codes, with varying levels of scope and detail. Frankel argues that the process is just as important as the final product and provides opportunities for critical reflexivity: “This process of self-criticism, codification, and consciousness-raising reinforces or redefines the profession’s collective responsibility and is an important learning and maturing experience for both individual members and the profession.” Given that need for self-reflexivity, it is important that ethics codes do not remain static, and perhaps that they specify methods and timelines for updating.

Points of leverage: where do ethics codes look for opportunities to enforce compliance? Is enforcement relevant?

Organizations, institutions and communities tend to develop methods of enforcement that reflect their mission.

Computing professional organizations have developed a range of enforcement options. Those that provide certifications have developed methods for revoking certifications on the basis of unethical behavior; others that have an academic membership have procedures for revoking the privileges of membership (Oz, 1993). Given that many of the ethical challenges relevant to big data are emanating from the private sector, any effort to generate an ethics code will need to consider how best to reach private actors. In some professions (e.g., US petroleum geologists) it is nearly obligatory to belong to the professional society in order to participate in the industry. There is no similar expectation for computing professionals as of yet.

In the US the biggest instrument available for enforcement is the provisioning of federal research funding, and it has been used most widely in biomedical and behavioral research. Following a series of highly publicized medical and behavioral research scandals, the 1974 National Research Act and 1979 Belmont Report set a new agenda for research ethics in the US that rested on principles of beneficence, respect for persons and justice. Subsequent regulations through the Department of Health and Human Services require all institutions that receive federal research funds to have an Institutional Review Board that reviews most human subjects research proposals for adherence to ethical standards. Institutions without internal or outsourced review boards lose access to by far the largest source of funding for basic science and biomedical research.

The NIH and NSF have used similar tactics to require ethics education (as opposed to mere compliance) at institutions receiving their funds.[2] The responsibility for such programs has largely fallen on the shoulders of biology and medicine (in part because bioethics is such an entrenched player), but the physical sciences are starting to see similar pressures. US medical professionals and institutions face similar enforcement of clinical ethics through Medicaid and Medicare dollars. Because the US government is the largest medical spender, it is able to use its funding streams to enforce certain standards, like health records privacy. The letter of the law may not be the best way to protect a robust notion of privacy, but the threat of defunding is a substantial disincentive to violations.

Journalism ethics codes are notable for their complete lack of enforcement mechanisms. This could be due to several factors. There is arguably no central authority for journalistic membership; the profession is rather decentralized. Their ethics codes also place a very strong emphasis on personal character and independence, thus weakening whatever influence a professional society might have (Deuze, 2005). It could also be the case that enforcement of journalism ethics is done by social shunning and needs no formal mechanism. In other words, the threat of enforcement is implicit within the profession’s well-elaborated culture and need not be formally articulated.

Given the multiplicity of purposes fulfilled by ethics codes and policies, it is worth noting that enforcement may not be as important as it first appears. Many of the purposes of professional ethics codes are positive: codes can affirmatively create a community with common values. By defining the scope of acceptable activities, establishing expectations and articulating values, an ethics code defines the boundaries of a community. Given the distributed nature of some of these organizations, enforcement may simply be too challenging. Anderson et al. (1993) note that this was a primary reason for the only major revision to the ethics code of the Association for Computing Machinery (ACM), when they moved to change the policy from a punitive model that created too many judicial problems to an educational/affirmative model that built professional identity.

An important consideration for the role of data ethics is which organization or organizations represent data practitioners. Should such a code aim to function by virtue of membership in a professional society, or can it target “use cases”? In some careers certain professional memberships are considered necessary for employment, but this is rarely the case in computing. A plausible scenario might be a code that is managed by an independent, industry-wide association of companies/employers.

Why ethics codes now?

Looking at professional ethics codes it becomes apparent that nearly everyone has an ethics code or code of conduct. This raises a critical question: why is ethics the lingua franca of organizational and individual behavior? How do organizational/institutional ethics codes operate in relation to prominent discourses about social order?

Although in the broadest sense “ethics” can just mean acting rightly, ethics within Western philosophical tradition has a historical trajectory that delimits what counts as an ethical problem. Ethics as a philosophical discipline trends toward broad principles (which can apply everywhere) and detachment (the rules apply without regard to personal attachments). Ethics codes/policies/theories routinely adopt the universal, detached mode yet are written in response to moments of historical specificity. For the most part this makes sense—when faced with a scandal like Tuskegee, it is a perfectly reasonable response to identify inviolable, widely-applicable rules for research ethics that would have prevented those particular abuses. But detached, universal rules have a habit of becoming disassociated from the original moral urgency that led to their drafting. Ethics codes that have substantial effect on institutions can result in an infrastructure whose effective management demands more attention than the actual ethical commitments underlying the code. It is important to recognize the risk that the universalism and detachment favored by philosophical ethics can lead away from facing the most concrete ethical challenges and instead leave us with routinized obedience to an infrastructure.

Ethics codes also trend toward a focus on individual obligations. Discussions of institutional imperatives, broad social goals or collective responsibility are rarely concrete in ethics codes. Many ethics codes are structured such that broad principles are located in a preamble, introduction or list of general imperatives, and the bulk of the codes are narrower duties indexed to professional practice. For example, the ACM begins its code with a list of general moral imperatives. One imperative notes members’ duty to “be fair and take action not to discriminate” against the standard demographic list of vulnerable communities. However, it does not state that computing professionals have a positive duty to understand, rectify or otherwise address longstanding discrimination. Rather, it specifies that “these ideals [to not discriminate] do not justify unauthorized use of computer resources nor do they provide an adequate basis for violation of any other ethical imperatives of this code,” marking non-discrimination as the only imperative explicitly limited by reference to other imperatives. Thus we can see that professional ethics codes risk narrowing the scope of moral inquiry to clearly delineated professional duties and neglecting broader obligations. Frankel (1989) argues this is a reason for always involving outsiders in constructing and revising professional ethics codes.

One result of this common structure of ethics codes is that greater burdens are placed on individual members to carry out the profession’s ethical agenda.[3] Although individuals should absolutely be held accountable for ethical breaches, a focus on individuals is also a potential tool for maintaining institutional power because it distributes responsibility and sanctions downward, away from the institution, organization or profession. Planning a code of ethics should include a recognition that institutionalized ethics can have the perhaps unintended conservative effect of protecting organizations from criticism and militating against collective responsibility or action.

Conclusion

From this comparative analysis, we see four central dimensions for the Council to consider as we discuss the role of ethics codes:

  1. Target Population: Codes of ethics can target members of professional associations; practices/industries as a whole; or individual organizations (with a potential certification procedure). Who are we trying to reach?
  2. Revisability: Even in fast-moving fields, codes of ethics historically have tended to be established once and for all and become engraved in stone (or chips). Thus the ACM code has provisions for revision and yet was last revised in 1992. Are processes more pliable than principles?
  3. Universalism: Codes of conduct tend to assert universal principles. This can be a problematic discourse (consider the cultural and political intricacies of defining ‘universal human rights’). As Nissenbaum argues with respect to privacy, perhaps the best unit of analysis is information flow, and we should be concerned less with static principles than with mechanisms for due process that allow adaptation to genuine novelty.
  4. Reactive vs. Proactive: Historically, codes of ethics have tended to be developed in reaction to specific abuses (e.g., Nuremberg, the Belmont Report). Accordingly, they generally center on preventing abusive behavior rather than pursuing broader, proactive goals. We currently have a good opportunity to drive a discussion about data ethics that both responds to previous scandals and frames positive goals.

Each of these dimensions is relevant, we feel, both for the NSF and other large foundations supporting academic research in big data and for companies developing big data business models.

Examples of Ethics Codes and Policies Relevant to Big Data:

Records, Computers and the Rights of Citizens

1973 report from the US Department of Health, Education and Welfare that set early policies for the ethical use of “automated personal data systems.” Its core recommendations for “fair information practice principles” (or FIPPs, now a common term in the computing industry) presage challenges currently discussed regarding big data, such as the negative consequences of incorrect entries and a prohibition on secret systems that collect personal data.

Personal Privacy in an Information Society: The Report of the Privacy Protection Study Commission

1977 report mandated by the 1974 Privacy Act, which required establishing a register of federal systems that keep personal data records and preventing the release of individual data without meeting certain statutory requirements. The report recognizes the need to balance the power of the government to use personal records to help individuals and communities against the possibility that the same records can be used to “embarrass, harass, and injure the individual.” It categorizes risks now common to big data systems, such as the tremendous challenge of an individual attempting to remove inaccurate or stigmatizing data from cross-referenced data systems. It argues that effective privacy regulation or legislation must strive to i) minimize intrusiveness, ii) maximize fairness, and iii) create legitimate, enforceable expectations of confidentiality.

Privacy Online: A Report to Congress

A 1998 report from the Federal Trade Commission (FTC) that established Fair Information Practice Principles (FIPPs) for commercial sites that gather personal data: i) notice/awareness, ii) choice/consent, iii) access/participation, iv) integrity/security. It also outlined plausible enforcement mechanisms for ensuring that FIPPs are followed: self-regulation by the information collectors or an appointed regulatory body; private remedies that give individuals whose information has been misused a civil cause of action against violators; and government enforcement that can include civil and criminal penalties. The FTC launched a longer-term study to determine whether and how commercial sites would follow FIPPs.

Privacy Online: Fair Information Practices in the Electronic Marketplace: A Federal Trade Commission Report to Congress

A report from 2000 that outlines the need for enforceable information use and collection standards for internet marketplaces. It is considered part of a trajectory of federal standards including the 1973, 1977 and 1998 reports cited above. The commission reported that the self-regulation efforts outlined in the 1998 report had so far been ineffective overall, but that the largest and most popular Internet commerce sites were much more likely to participate in self-regulation schemes than smaller enterprises. These recommendations are still not enforceable by law in the US; however, the FTC does pursue enforcement actions against companies that violate their own published privacy and data use policies.

Association for Computing Machinery Code of Ethics and Professional Conduct

The ACM is the largest (~100,000 members) and oldest educational and scientific computing society. The ACM established guidelines for professional ethics in 1972 and substantially revised them in 1992; they have not been updated since. The current code is divided into General Moral Imperatives and Specific Responsibilities. The general moral imperatives situate computing professionals within broad responsibilities to human well-being, and the specific responsibilities refer to the duties of a computing professional in daily work life. The 1992 revisions occurred in part because the ACM leadership realized that the original code focused too heavily on enforcement, and they wanted a code that instead emphasized education and common cause (Anderson et al., 1993). Notably, the ACM Code of Ethics preamble makes reference to a set of supplementary Guidelines that would be regularly updated to keep up with technological and social changes. Those Guidelines have either never been written or are not available via the society’s publications. The ACM’s sub-society for software engineers has published a more detailed guide for programmers. It thus appears that the ACM has a provision for change but no process.

Institute of Electrical and Electronics Engineers

The IEEE is the largest professional society for electrical engineers with 400,000 members. Its code of ethics is the most minimal of the major societies of computing professionals. It makes little reference to the particular responsibilities of electrical engineers and largely reads as general advice for professional behavior.

Association of Information Technology Professionals

The AITP (until recently known as the Data Processing Management Association, and originally an accountants’ society) has a code that asks members to follow a broad Code of Ethics and a narrower Standards of Conduct that specifies conduct toward management, employers, society, and fellow members of the profession. It is decades old and has some anachronisms that clash with the globalized ethos of computing today, such as the principle stating “[I acknowledge] that I have an obligation to my country, therefore, in my personal, business, and social contacts, I shall uphold my nation and shall honor the chosen way of life of my fellow citizens.”

Institute for Certification of Computer Professionals

The ICCP offers certificates for core competencies in computing. It has an ethics code that all certificate holders are expected to follow, and it specifies procedures for revoking the certificates of those whom a review panel has found in violation of the code.

European Cloud

The European cloud computing industry group is currently drafting an ethics code for their sector. Their fairly detailed records offer an uncommon look inside a code as it is being drafted.

Computer Professionals for Social Responsibility: Ten Commandments

CPSR is an organization that promotes the ethical use of computing and information technology and has generated a number of critical projects about data and computing ethics. Its ethics code is a succinct statement of principles in the form of biblical commandments. Notably, it appears to be the only computing ethics code that requires members to proactively consider the broad societal consequences of their programming activities.

American Library Association

The ALA’s ethics code is notable for its recognition that information professionals play a substantial role in curating knowledge and thus have an epistemic and social responsibility to ensure that knowledge is broadly representative of society and accessible to as many people as possible. The other major codes for information science professionals do not place any emphasis on responsibility for producing or sharing knowledge.

The Asilomar Convention for Learning Research in Higher Education

Policies designed in 2014 for data practitioners in education reform and research. Notable for an emphasis on the need to share data for education best practices.

IMIA Code of Ethics for Health Information Professionals

Recognizing the unique, hybrid role of informaticians working with health data, the professional society for medical informatics has aimed to meld ethics codes for medical and informatics professionals into a coherent model.

American Medical Association Code of Medical Ethics

The AMA—one of the world’s oldest national medical societies—first instituted a code of ethics at its founding in 1847 and has revised that code several times since. The AMA states that the code “defines medicine’s integrity and the source of the profession’s authority to self-regulate,” and it is unique among the policies considered here in explicitly staking the profession’s independence on being an ethical enterprise. It states that the code is a living document meant to evolve with medical science and social mores.

The Nuremberg Code

Following the medical experimentation atrocities and eugenics policies of the Nazis, the tribunal prosecuting the Nazi doctors at Nuremberg identified the need for international standards for ethical human research practices. The 1947 Nuremberg Code is the first ethics code to establish basic standards for when human beings may be enrolled in scientific studies, including informed consent, a balancing of plausible benefits to humanity against harms to individuals, and the individual’s right to withdraw from a study at any time. The 10-point code was included in the legal decision condemning the ‘Nazi doctors.’

The World Medical Association Declaration of Helsinki

The Helsinki Declaration, first adopted in 1964, detailed and clarified the principles outlined in the Nuremberg Code. It clearly states that the well-being of research subjects must take precedence over the advance of knowledge or the well-being of society. The contemporary version, adopted in 2013, states that the rights of human research subjects extend to identifiable tissues and data. The Helsinki Declaration and Nuremberg Code have the force of law in many nations; in the US, they have legal status through the Federal Register rules that govern spending by the Department of Health and Human Services.

Belmont Report on Ethical Principles and Guidelines for the Protection of Human Subjects of Research

1979 guidelines for human subjects research that ultimately established Institutional Review Boards as ubiquitous entities at all institutions receiving US federal research funds. Following a series of well-publicized medical and behavioral research scandals, the federal government convened a panel that established core ethical principles to guide human subjects research: i) respect for persons, ii) beneficence, and iii) justice. It also established guidelines for carrying out these principles in practice through i) informed consent, ii) assessments of risks and benefits and iii) selection of subjects. This is arguably the most widely influential contemporary professional or research ethics policy. The concepts and codes developed in this report are commonly replicated in subsequent information ethics codes. Notably, institutional ethics bodies (e.g., IRBs) and the major funders are often focused only on the respect for persons and beneficence criteria, and rarely require researchers to address matters of justice.

Society of Professional Journalists

The SPJ code, most recently updated in 2014, is notable compared to computing and medical ethics codes for its emphasis on personal character and civic duty. Whereas other ethics codes place an emphasis on organizational character, journalists are framed as fundamentally independent actors with individual obligations to the public good. Indeed, one of the subsections of the code is titled “Act Independently.” It is thus not surprising that there are no enforcement mechanisms specified. Many of the code’s specific duties are indexed to truth telling, such as reserving anonymity only for sources who face retribution.

National Union of Journalists

The NUJ code of ethics has a feature unique among the codes considered here. It simply states “A journalist:” and then proceeds to list the moral features and obligations of a journalist. No other ethics code makes such a strong link between professional identity and moral character, where to be a member of the profession requires commitment to certain virtues (as opposed to commitment to ethical acts).

New York Times Ethical Journalism Handbook

The NYT Handbook places substantial emphasis on the duty to maintain the good reputation of the organization as a place that does reporting “without fear or favor.” Following a consideration of the moral character of journalists, it identifies in great detail the specific procedures its reportorial and editorial staff must follow to maintain integrity and independence.

Further resources focused on Big Data ethics policies and codes:

  1. In September 2014, Stanford’s Center on Philanthropy and Civil Society hosted a workshop on the Ethics of Big Data in Civil Society. A core question posed at the workshop was whether ethics policies should be targeted at each industry or by broad categories of use case. How we answer that question shapes what organizations we believe should be setting and enforcing such policies.
    1. Provocation piece from the conference by Andrew K. Woods: “Do Civil Society’s Data Practices Call for New Ethical Guidelines?”
  2. Additional reading list compiled by conference organizers
  3. Case studies
  4. Outputs from the Responsible Data Forum’s data ethics events
  5. National EthicsCORE Computer & Information Sciences and Engineering Resources

The EthicsCORE site is run by the National Center for Professional and Research Ethics, a resource funded substantially by the NSF. The computer science and information science resources are notably thin, and none focus on big data.

  1. Center for the Study of Ethics in the Professions
  2. Reflections on the process of authoring an ethics code
  3. Collection of computer science and information technology ethics codes

 

Works Cited

Anderson, Ronald E., Deborah G. Johnson, Donald Gotterbarn, and Judith Perrolle. 1993. “Using the New ACM Code of Ethics in Decision Making.” Communications of the ACM 36 (2): 98–107.

Appelbaum, Paul S., Erik Parens, Cameron R. Waldman, Robert Klitzman, Abby Fyer, Josue Martinez, W. Nicholson Price, and Wendy K. Chung. 2014. “Models of Consent to Return of Incidental Findings in Genomic Research.” Hastings Center Report 44 (4): 22–32. doi:10.1002/hast.328.

Cassell, Eric J. 2000. “The Principles of the Belmont Report Revisited: How Have Respect for Persons, Beneficence, and Justice Been Applied to Clinical Medicine?” The Hastings Center Report 30 (4): 12. doi:10.2307/3527640.

Deuze, M. 2005. “What Is Journalism?: Professional Identity and Ideology of Journalists Reconsidered.” Journalism 6 (4): 442–64. doi:10.1177/1464884905056815.

Frankel, Mark S. 1989. “Professional Codes: Why, How, and with What Impact?” Journal of Business Ethics 8 (2-3): 109–15.

Fullerton, Stephanie M., Wendy A. Wolf, Kyle B. Brothers, Ellen Wright Clayton, Dana C. Crawford, Joshua C. Denny, Philip Greenland, et al. 2012. “Return of Individual Research Results from Genome-Wide Association Studies: Experience of the Electronic Medical Records and Genomics (eMERGE) Network.” Genetics in Medicine: Official Journal of the American College of Medical Genetics 14 (4): 424–31. doi:10.1038/gim.2012.15.

Garrison, Nanibaa’ A., and David Magnus. 2012. “The Instrumental Role of Hospital Ethics Committees in Policy Work.” The American Journal of Bioethics 12 (11): 1–2. doi:10.1080/15265161.2012.729935.

Gaumnitz, Bruce R., and John C. Lere. 2002. “Contents of Codes of Ethics of Professional Business Organizations in the United States.” Journal of Business Ethics 35 (1): 35–49.

———. 2004. “A Classification Scheme for Codes of Business Ethics.” Journal of Business Ethics 49 (4): 329–35.

Science & Justice Research Center (Collaborations Group), and others. 2013. “Experiments in Collaboration.” PLoS Biology: e1001619. http://scijust.ucsc.edu/wp-content/uploads/2011/12/journal.pbio_.1001619-1.pdf.

Hazin, Ribhi, Kyle B. Brothers, Bradley A. Malin, Barbara A. Koenig, Saskia C. Sanderson, Mark A. Rothstein, Marc S. Williams, Ellen W. Clayton, and Iftikhar J. Kullo. 2013. “Ethical, Legal, and Social Implications of Incorporating Genomic Information into Electronic Health Records.” Genetics in Medicine: Official Journal of the American College of Medical Genetics 15 (10): 810–16. doi:10.1038/gim.2013.117.

Jonsen, Albert R. 2003. The Birth of Bioethics. Oxford University Press.

Jonsen, Albert R., Shana Alexander, Judith P. Swazey, Warren T. Reich, Robert M. Veatch, Daniel Callahan, Tom L. Beauchamp, et al. 1993. “Special Supplement: The Birth of Bioethics.” The Hastings Center Report 23 (6): S1. doi:10.2307/3562928.

Joyner, Brenda E., and Dinah Payne. 2002. “Evolution and Implementation: A Study of Values, Business Ethics and Corporate Social Responsibility.” Journal of Business Ethics 41 (4): 297–311.

Kaptein, Muel, and Johan Wempe. 1998. “Twelve Gordian Knots When Developing an Organizational Code of Ethics.” Journal of Business Ethics 17 (8): 853–69.

Koenig, Barbara A. 2014. “Have We Asked Too Much of Consent?” The Hastings Center Report 44 (4): 33–34. doi:10.1002/hast.329.

Laudon, Kenneth C. 1995. “Ethical Concepts and Information Technology.” Communications of the ACM 38 (12): 33–39.

Martin, Dianne C., and David H. Martin. 1990. “Professional Codes of Conduct and Computer Ethics Education.” ACM SIGCAS Computers and Society 20 (2): 18–29.

McBride, Kelly, and Tom Rosenstiel. 2013. The New Ethics of Journalism: Principles for the 21st Century. CQ Press.

McGee, Glenn, Joshua P. Spanogle, Arthur L. Caplan, Dina Penny, and David A. Asch. 2002. “Successes and Failures of Hospital Ethics Committees: A National Survey of Ethics Committee Chairs.” Cambridge Quarterly of Healthcare Ethics 11 (01): 87–93.

McGuire, A. L., M. Basford, L. G. Dressler, S. M. Fullerton, B. A. Koenig, R. Li, C. A. McCarty, et al. 2011. “Ethical and Practical Challenges of Sharing Data from Genome-Wide Association Studies: The eMERGE Consortium Experience.” Genome Research 21 (7): 1001–7. doi:10.1101/gr.120329.111.

Meyers, Christopher. 2010. Journalism Ethics: A Philosophical Approach. Oxford University Press.

Oz, Effy. 1993. “Ethical Standards for Computer Professionals: A Comparative Analysis of Four Major Codes.” Journal of Business Ethics 12 (9): 709–26.

Payne, Dinah, and Brett J. L. Landry. 2005. “Similarities in Business and IT Professional Ethics: The Need for and Development of A Comprehensive Code of Ethics.” Journal of Business Ethics 62 (1): 73–85. doi:10.1007/s10551-005-3439-3.

Radin, Joanna. 2014. “Collecting Human Subjects: Ethics and the Archive in the History of Science and the Historical Life Sciences.” Curator: The Museum Journal 57 (2): 249–58. doi:10.1111/cura.12065.

Reardon, Jenny. 2013. “On the Emergence of Science and Justice.” Science, Technology & Human Values 38 (2): 176–200. doi:10.1177/0162243912473161.

Rothman, David J. 1992. Strangers at the Bedside: A History of How Law and Bioethics Transformed Medical Decision Making. Basic Books.

Scheirton, Linda S. 1992. “Determinants of Hospital Ethics Committee Success.” HEC Forum 4 (6): 342–59. doi:10.1007/BF02217981.

Suver, Christine, John Wilbanks, and Stephen H. Friend. 2013. “US-EU Scientific Research Collaborations: Sage Bionetworks’ Experience Navigating the Complex Regulatory Landscape.” Available at SSRN 2234133. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2234133.

Secretary’s Advisory Committee on Automated Personal Data Systems. 1973. Records, Computers, and the Rights of Citizens: Report. US Department of Health, Education & Welfare.

Tene, Omer, and Jules Polonetsky. 2013. “A Theory of Creepy: Technology, Privacy and Shifting Social Norms.” Yale JL & Tech. 16: 59–134.

Tucker, Lewis R., Vlasis Stathakopolous, and Charles H. Patti. 1999. “A Multidimensional Assessment of Ethical Codes: The Professional Business Association Perspective.” Journal of Business Ethics 19 (3): 287–300.

Ward, Stephen J. A. 2005. Invention of Journalism Ethics: The Path to Objectivity and Beyond. McGill-Queen’s Press – MQUP.

Woods, Andrew K. 2014. “Do Civil Society’s Data Practices Call for New Ethical Guidelines?” Provocation piece, Ethics of Big Data in Civil Society workshop, Stanford University.

 

[1] An example of ‘emergent ethical breaches’ is the risk of stringing together ethically sourced databases with dubiously sourced databases.

[2] In many cases the mandate for more ethics education is funded but the content is not specified. This is an odd situation—a funded mandate that is essentially an empty vessel. Science and engineering departments have struggled to produce content and this has created new opportunities for ethicists and science studies scholars (Science and Justice Research Center Collaborative Writing Group, 2013). This may be a good opportunity for D&S to explore.

[3] Notably, the ACM’s code of ethics explicitly recognizes this dynamic under the heading “Organizational Leadership Imperatives.”

*Funding for this Council was provided by the National Science Foundation (#IIS-1413864).