Ethical health research and privacy protections both provide valuable benefits to society. Health research is vital to improving human health and health care. Protecting patients involved in research from harm and preserving their rights is essential to ethical research. The primary justification for protecting personal privacy is to protect the interests of individuals. In contrast, the primary justification for collecting personally identifiable health information for health research is to benefit society. But it is important to stress that privacy also has value at the societal level, because it permits complex activities, including research and public health activities, to be carried out in ways that protect individuals’ dignity. At the same time, health research can benefit individuals, for example, when it facilitates access to new therapies, improved diagnostics, and more effective ways to prevent illness and deliver care.

The intent of this chapter is to define privacy and to delineate its importance to individuals and society as a whole. The value and importance of health research will be addressed in Chapter 3.

CONCEPTS AND VALUE OF PRIVACY

Definitions

Privacy has deep historical roots (reviewed by Pritts, 2008; Westin, 1967), but because of its complexity, privacy has proven difficult to define and has been the subject of extensive, and often heated, debate by philosophers, sociologists, and legal scholars. The term “privacy” is used frequently, yet there is no universally accepted definition, and confusion persists over the meaning, value, and scope of the concept. At its core, privacy is experienced on a personal level and often means different things to different people (reviewed by Lowrance, 1997; Pritts, 2008). In modern society, the term is used to denote different, but overlapping, concepts such as the right to bodily integrity or to be free from intrusive searches or surveillance. The concept of privacy is also context specific, acquiring a different meaning depending on the stated reasons for gathering the information, the intentions of the parties involved, and the prevailing politics, conventions, and cultural expectations (Nissenbaum, 2004; NRC, 2007b).

Our report, and the Privacy Rule itself, are concerned with health informational privacy. In the context of personal information, concepts of privacy are closely intertwined with those of confidentiality and security. However, although privacy is often used interchangeably with the terms “confidentiality” and “security,” they have distinct meanings. Privacy addresses the question of who has access to personal information and under what conditions. Privacy is concerned with the collection, storage, and use of personal information, and examines whether data can be collected in the first place, as well as the justifications, if any, under which data collected for one purpose can be used for another (secondary) purpose. An important issue in privacy analysis is whether the individual has authorized particular uses of his or her personal information (Westin, 1967).

Confidentiality safeguards information that is gathered in the context of an intimate relationship. It addresses the issue of how to keep information exchanged in that relationship from being disclosed to third parties (Westin, 1976). Confidentiality, for example, prevents physicians from disclosing information shared with them by a patient in the course of a physician–patient relationship. Unauthorized or inadvertent disclosures of data gained as part of an intimate relationship are breaches of confidentiality (Gostin and Hodge, 2002; NBAC, 2001).

Security can be defined as “the procedural and technical measures required (a) to prevent unauthorized access, modification, use, and dissemination of data stored or processed in a computer system, (b) to prevent any deliberate denial of service, and (c) to protect the system in its entirety from physical harm” (Turn and Ware, 1976). Security helps keep health records safe from unauthorized use. When someone hacks into a computer system, there is a breach of security (and also potentially, a breach of confidentiality). No security measure, however, can prevent invasion of privacy by those who have authority to access the record (Gostin, 1995).

The Importance of Privacy

There are a variety of reasons for placing a high value on protecting the privacy, confidentiality, and security of health information (reviewed by Pritts, 2008). Some theorists depict privacy as a basic human good or right with intrinsic value (Fried, 1968; Moore, 2005; NRC, 2007a; Terry and Francis, 2007). They see privacy as being objectively valuable in itself, as an essential component of human well-being. They believe that respecting privacy (and autonomy) is a form of recognition of the attributes that give humans their moral uniqueness.

The more common view is that privacy is valuable because it facilitates or promotes other fundamental values, including ideals of personhood (Bloustein, 1967; Gavison, 1980; Post, 2001; Solove, 2006; Taylor, 1989; Westin, 1966) such as:

  • Personal autonomy (the ability to make personal decisions)

  • Individuality

  • Respect

  • Dignity and worth as human beings

The bioethics principle of nonmaleficence requires safeguarding personal privacy. Breaches of privacy and confidentiality not only may affect a person’s dignity, but can cause harm. When personally identifiable health information, for example, is disclosed to an employer, insurer, or family member, it can result in stigma, embarrassment, and discrimination. Thus, without some assurance of privacy, people may be reluctant to provide candid and complete disclosures of sensitive information even to their physicians. Ensuring privacy can promote more effective communication between physician and patient, which is essential for quality of care, enhanced autonomy, and preventing economic harm, embarrassment, and discrimination (Gostin, 2001; NBAC, 1999; Pritts, 2002). However, it should also be noted that perceptions of privacy vary among individuals and various groups. Data that are considered intensely private by one person may not be by others (Lowrance, 2002).

But privacy has value even in the absence of any embarrassment or tangible harm. Privacy is also required for developing interpersonal relationships with others. Although some emphasize the need for privacy to establish intimate relationships (Allen, 1997), others take a broader view of privacy as being necessary to maintain a variety of social relationships (Rachels, 1975). By giving us the ability to control who knows what about us and who has access to us, privacy allows us to alter our behavior with different people so that we may maintain and control our various social relationships (Rachels, 1975). For example, people may share different information with their boss than they would with their doctor.

Most discussions on the value of privacy focus on its importance to the individual. Privacy can be seen, however, as also having value to society as a whole (Regan, 1995). Privacy furthers the existence of a free society (Gavison, 1980). For example, preserving privacy from widespread surveillance can be seen as protecting not only the individual’s private sphere, but also society as a whole: Privacy contributes to the maintenance of the type of society in which we want to live (Gavison, 1980; Regan, 1995).

Privacy can foster socially beneficial activities like health research. Individuals are more likely to participate in and support research if they believe their privacy is being protected. Protecting privacy is also seen by some as enhancing data quality for research and quality improvement initiatives. When individuals avoid health care or engage in other privacy-protective behaviors, such as withholding information, inaccurate and incomplete data are entered into the health care system. These data, which are subsequently used for research, public health reporting, and outcomes analysis, carry with them the same vulnerabilities (Goldman, 1998).

The bioethics principle of respect for persons also places importance on individual autonomy, which allows individuals to make decisions for themselves, free from coercion, about matters that are important to their own well-being. U.S. society also places a high value on individual autonomy, and one way to respect persons and enhance individual autonomy is to ensure that people can make the choice about when, and whether, personal information (particularly sensitive information) can be shared with others.

Public Views of Health Information Privacy

American society places a high value on individual rights, personal choice, and a private sphere protected from intrusion. Medical records can include some of the most intimate details about a person’s life. They document a patient’s physical and mental health, and can include information on social behaviors, personal relationships, and financial status (Gostin and Hodge, 2002). Accordingly, surveys show that medical privacy is a major concern for many Americans, as outlined below (reviewed by Pritts, 2008; Westin, 2007). As noted in Chapter 1, however, there are some limits to what can be learned from surveys (Tourangeau et al., 2000; Wentland, 1993; Westin, 2007). For example, how the questions and responses are worded and framed can significantly influence the results and their interpretation. Also, responses are biased when respondents self-report measures of attitudes, behavior, and feelings in such a way as to represent themselves favorably.

In a 1999 survey of consumer attitudes toward health privacy, three out of four people reported that they had significant concerns about the privacy and confidentiality of their medical records (Forrester Research, 1999). In a more recent survey, conducted in 2005 after the implementation of the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule, 67 percent of respondents still said they were concerned about the privacy of their medical records, suggesting that the Privacy Rule had not effectively alleviated public concern about health privacy. Ethnic and racial minorities showed the greatest concern among the respondents. Moreover, the survey showed that many consumers were unfamiliar with the HIPAA privacy protections. Only 59 percent of respondents recalled receiving a HIPAA privacy notice, and only 27 percent believed they had more rights than they had before receiving the notice (Forrester Research, 2005). One out of eight respondents also admitted to engaging in behaviors intended to protect their privacy, even at the expense of risking dangerous health effects. These behaviors included lying to their doctors about symptoms or behaviors, refusing to provide information or providing inaccurate information, paying out of pocket for care that is covered by insurance, and avoiding care altogether (Forrester Research, 2005).

A series of polls conducted by Harris Interactive suggest, however, that the privacy of health information has improved since implementation of the Privacy Rule. Prior to its creation, a 1993 survey by Harris Interactive showed that 27 percent of Americans believed their personal medical information had been released improperly in the past 3 years. In contrast, 14 percent and 12 percent of respondents believed this had happened to them in 2005 and 2007, respectively (Harris Interactive, 2005; Harris Interactive, 2007). In the 2005 survey, about two-thirds of respondents reported having received a HIPAA privacy notice, and of these people, 67 percent said the privacy notice increased their confidence that their medical information is being handled properly (Harris Interactive, 2005).

Responses to other questions on recent public opinion polls conducted by Harris Interactive only partially corroborate these findings. In one survey, 70 percent of respondents indicated that they are generally satisfied with how their personal health information is handled with regard to privacy protections and security. Nearly 60 percent of the respondents reported that they believe the existing federal and state health privacy protection laws provide a reasonable level of privacy protection for their health information (Harris Interactive, 2005). Nonetheless, half of the respondents also believed that “[P]atients have lost all control today over how their medical records are obtained and used by organizations outside the direct patient health care such as life insurers, employers, and government health agencies.” In another survey, 83 percent of respondents reported that they trust health care providers to protect the privacy and confidentiality of their personal medical records and health information (Westin, 2007). However, in that survey, 58 percent of respondents believed the privacy of personal medical records and health information is not protected well enough today by federal and state laws and organizational practices.

A number of studies suggest that the relative strength of privacy, confidentiality, and security protections can play an important role in people’s concerns about privacy (reviewed by Pritts, 2008). When presented with the possibility that there would be a nationwide system of electronic medical records, one survey found 70 percent of respondents were concerned that sensitive personal medical record information might be leaked because of weak data security, 69 percent expressed concern that there could be more sharing of medical information without the patient’s knowledge, and 69 percent were concerned that strong enough data security would not be installed in the new computer system.

Confidentiality is particularly important to adolescents who seek health care. When adolescents perceive that health services are not confidential, they report that they are less likely to seek care, particularly for reproductive health matters or substance abuse (Weddle and Kokotailo, 2005). In addition, the willingness of a person to make self-disclosures necessary to mental health and substance abuse treatment may decrease as the perceived negative consequences of a breach of confidentiality increase (Petrila, 1999; Roback and Shelton, 1995; Taube and Elwork, 1990). These studies show that protecting the privacy of health information is important for ensuring that individuals seek and obtain quality care.

The potential for economic harm resulting from discrimination in health insurance and employment is also a concern for many people (reviewed by Pritts, 2008). Polls consistently show that people are most concerned about insurers and employers accessing their health information without their permission (Forrester Research, 2005; PSRA, 1999). This concern arises from fears about employer and insurer discrimination. Concerns about employer discrimination based on health information, in particular, increased 16 percent between 1999 and 2005, with 52 percent of respondents in the later survey expressing concern that their information might be seen by an employer and used to limit job opportunities (Forrester Research, 2005; PSRA, 1999). Reports alleging that major employers such as Wal-Mart base some of their hiring decisions on the health of applicants suggest that these concerns may be justified (Greenhouse and Barbaro, 2005).

Studies show that individuals are especially concerned about genetic information being used inappropriately by their insurers and employers (reviewed by Pritts, 2008). Even health care providers appear to be affected by these concerns. In a survey of cancer-genetics specialists, more than half indicated that they would pay out of pocket rather than bill their insurance companies for genetic testing, for fear of genetic discrimination (Hudson, 2007). Although surveys do not reveal a significant percentage of individuals who have experienced such discrimination, geneticists have reported that approximately 550 individuals were refused employment, fired, or denied life insurance based on their genetic constitution (NBAC, 1999). In addition, a study in the United Kingdom suggested that life insurers in that country do not have a full grasp on the meaning of genetic information and do not assess or act in accord with the actuarial risks presented by the information (Low et al., 1998). There is, therefore, some legitimate basis to individuals’ concerns about potential economic harm and the need to protect the privacy of their genetic information. Recent passage of the Genetic Information Nondiscrimination Act in the United States will hopefully begin to address some of these concerns.

Patient Attitudes About Privacy in Health Research

Ideally, there would be empirical evidence regarding the privacy value of all the specific Privacy Rule provisions that impact researchers, but there are only limited data on this topic from the consumer/patient perspective. A few studies have attempted to examine the public’s attitudes about the use of health information in research. However, few have attempted to do so with respect to the intricacies of the protections afforded by the Privacy Rule or the Common Rule, which are likely not well known to the public.

A review by Westin of 43 national surveys with health privacy questions fielded between 1993 and September 2007 identified 9 surveys with one or more questions about health research and privacy (Westin, 2007). In some, the majority of respondents were not comfortable with their health information being provided for health research except with notice and express consent. But in others, a majority of respondents were willing to forgo notice and consent if various safeguards and specific types of research were offered. For example, a recent Harris Poll found that 63 percent of respondents would give general consent to the use of their medical records for research, as long as there were guarantees that no personally identifiable health information would be released from such studies (Harris Interactive, 2007). This is similar to the percentage of people willing to participate in a “clinical research study” (Research!America, 2007; Woolley and Propst, 2005) (see also Chapter 3). A 2006 British survey also found strong support for the use of personally identifiable information without consent for public health research and surveillance, via the National Cancer Registry (Barrett et al., 2007).

Westin noted that opinions varied in the surveys according to developments on the health care scene and with consumer privacy trends. He concluded from this review that the majority of consumers are positive about health research, and if asked in general terms, support their medical information being made available for research. However, he also noted that most of these surveys presented the choice in ways that did not articulate the key permission process, and that there was much ambiguity in who “researchers” are, what kind of “health research” is involved, and how the promised protection of personal identities would be ensured (Westin, 2007).

Reviewing the handful of detailed studies examining patient views of the use of their medical information in research through surveys, structured interviews, or focus groups, Pritts determined that a number of common themes emerge (reviewed by Pritts, 2008):

In studies where patients were able to provide unstructured comments, they expressed concern about the potential that anonymized data would be reidentified. They were also concerned that insurers, employers, or others who could discriminate against subjects could potentially access information maintained by researchers (Damschroder et al., 2007; Kass et al., 2003; Robling et al., 2004). Some feared that researchers would sell information to drug companies or other third parties (Damschroder et al., 2007).

Although supportive of research, the majority of patients in these studies expressed a desire to be consulted before their information was released for research (Damschroder et al., 2007; Kass et al., 2003; Robling et al., 2004; Westin, 2007; Whiddett et al., 2006; Willison et al., 2007). Some surveys also show that even if researchers would receive no directly identifying information (e.g., name, address, and health insurance number), the majority of respondents still wanted to have some input before their medical records were disclosed (Damschroder et al., 2007; Robling et al., 2004; Willison et al., 2007). For example, in a 2005 Australian survey, 67 percent of respondents indicated they would be willing to allow their deidentified health records to be used for medical research purposes, but 81 percent wanted to be asked first (Flannery and Tokley, 2005).

Studies indicate that public support for research and willingness to share health information can vary with the purpose or type of activity being conducted (reviewed by Pritts, 2008). Studies have found there was less support for activities that were primarily for a commercial purpose, or that might be used in a manner that would not help patients (Damschroder et al., 2007; Willison et al., 2007). Some participants expressed concern that some researchers were motivated by monetary rewards and that decision makers would act out of self-interest (Damschroder et al., 2007).

One recent study suggests that the biggest predictor of whether patients are willing to share their medical records with researchers is the patients’ trust that their information will be kept private and confidential (Damschroder et al., 2007). In this study, the patients who most trusted the Veterans Affairs system to keep their medical records private were more likely to accept less stringent requirements for informed consent. Thirty-four percent of veterans who participated in intensive focus groups using deliberative democracy were willing to allow researchers associated with the Veterans Health Administration to use their medical records without any procedures for patient input, subject to Institutional Review Board (IRB) approval, and another 17 percent reported that patients should have to ask for their medical records to be excluded from research studies (opt-out).

But participants in focus groups also have expressed a desire to be informed of how their health information was used for research. This desire was tied to a sense of altruism—they wanted to know that their information was useful and that they may have contributed to helping others by allowing their medical records to be used for research (Damschroder et al., 2007; Robling et al., 2004). The veterans also recommended methods to give research participants more control over how their medical records are used in research. These recommendations included requiring that participants are fully informed about how their medical records are being used in research; providing assurances that the research being conducted will benefit fellow veterans; updating research participants about findings and ongoing research; and setting out clear and consistent consequences for anyone who violates a patient’s privacy (Damschroder et al., 2007).

The recent Harris poll commissioned by the Institute of Medicine (IOM) committee for this study found that 8 percent of respondents had been asked to have their medical information used in research, but declined. When asked why, 30 percent indicated they were concerned about the privacy and confidentiality of their personal information, but many other reasons were also commonly cited (ranging from 5 to 24 percent of respondents), including worry that participation would be risky, painful, or unpleasant; lack of trust in the researchers; or belief that it would not help their condition or their family (Westin, 2007).

Some studies also suggest that individuals’ attitudes toward the use of their medical records in research may be influenced by a person’s state of health. Although the commissioned Harris Poll found that people who are in only fair health, who have a disability, or who had taken a genetic test were slightly more concerned than the public about health researchers seeing their medical records (55 percent versus 50 percent), other data suggest that people with health concerns may be more supportive of using medical records in research. For example, qualitative market research by the National Health Council showed that individuals with chronic conditions have a very favorable attitude toward the implementation of electronic personal health records (EPHRs). During the focus group discussions, participants noted that EPHRs could be very advantageous in medical research and were supportive of this use even though many had expressed concern about the privacy and confidentiality of EPHRs (Balch et al., 2005, 2006). Although the Council did not specifically ask about attitudes toward health research and privacy, these results suggest that individuals with chronic conditions may be more likely to grant researchers access to their medical records, and to place less emphasis on protecting privacy than members of the general population.

Also, a Johns Hopkins University survey of patients having, or at risk for, serious medical conditions examined these patients’ attitudes about the use of their medical records in research, and compared those results to polls from the general population. Thirty-one percent of respondents stated that medical researchers should have access to their medical records without their permission if it would help to advance medical knowledge.

In contrast, the recent Harris poll of the public found that 19 percent of respondents would be willing to forgo consent to use personal medical and health information, as long as the study never revealed their identity and it was supervised by an IRB (Westin, 2007). An additional 8 percent indicated they would be willing to give general consent in advance to have personally identifiable medical or health information used in future research projects without the researchers having to contact them, and 1 percent said researchers should be free to use their personal medical and health information without their consent at all. Thus, 28 percent of respondents would be willing to grant researchers access to their medical records without giving specific consent for each research project. Thirty-eight percent believed they should be asked to consent to each research study seeking to use their personally identifiable medical or health information, and 13 percent did not want researchers to contact them or to use their personal or health information under any circumstances. However, those who preferred not to be contacted at all were actually less likely than those who would grant conditional permission to have declined participating in a research study. Notably, 20 percent of respondents were unsure how to respond to the question about notice and consent for research.

Among the 38 percent who said they wanted notice and consent, 80 percent indicated that they would want to know the purpose of the research, and 46 percent wanted to know specifically whether the research could help their health condition or those of family members. Sixty-two percent indicated that knowing about the specific research study and who would be running it would allow the respondent to decide whether to trust the researchers. A little more than half of the respondents (54 percent) said they would be worried that their personally identifiable information may be disclosed outside the study. Among those 54 percent, three-quarters agreed with the statement “I would feel violated and my trust in the researchers betrayed.” Between 39 and 67 percent were concerned about discrimination in a government program, by an employer, or in obtaining life or health insurance (Westin, 2007).

However, about 70 percent of all respondents indicated that they trusted health researchers to protect the privacy and confidentiality of the medical records and health information they obtain about research participants. Furthermore, among respondents who had participated in health research, only 2 percent reported that any of their personally identifiable medical information used in a study was given to anyone outside the research staff, and half of those disclosures were actually made to other researchers or research institutions (Westin, 2007).

In summary, very limited data are available to assess the privacy value of the Privacy Rule provisions that impact researchers. Surveys indicate that the public is deeply concerned about the privacy and security of personal health information, and that the HIPAA Privacy Rule has perhaps reduced—but not eliminated—those concerns. Patients were generally very supportive of research, provided safeguards were established to protect the privacy and security of their medical information, although some surveys indicate that a significant portion of the public would still prefer to control access to their medical records via consent, even if the information is anonymized. Studies indicate that public support for research and willingness to share health information varies with health status and the type of research conducted, and depends on the patients’ trust that their information will be kept private and confidential. Understanding the public’s attitude toward privacy is important throughout the rest of this report, because many of the IOM committee’s recommendations affect the nature of the privacy protections afforded by the federal health research regulations.

The medical community has long recognized the importance of protecting privacy in maintaining public trust in doctors and researchers, and codes of medical ethics reflect a desire to increase this public trust. Since the time of Hippocrates, physicians have pledged to keep information about their patients private and confidential (Feld and Feld, 2005). The Hippocratic Oath states, “What I may see or hear in the course of the treatment or even outside of the treatment in regard to the life of men, which on no account one must spread abroad, I will keep to myself….” This pledge to privacy has been included in the code of ethics of nearly all health care professionals in the United States. For example, the first Code of Ethics of the American Medical Association in 1847 included the concept of confidentiality (OTA, 1993).

The value of health information privacy has also been recognized by affording it protection under the law (reviewed by Pritts, 2008). The rules for protecting the privacy of health information in the clinical care and health research contexts developed along fairly distinct paths until the promulgation of the federal privacy regulations under HIPAA. Prior to HIPAA, health information in the clinical setting was protected primarily under a combination of federal and state constitutional law, as well as state common law and statutory protections (Box 2-1).

BOX 2-1

Overview of Privacy Protections in the Law. Constitutional Protections: Both federal and state constitutions generally afford citizens some protection for the privacy of their health information.

In contrast, research practices have been governed largely by federal regulations called the Common Rule, which have historically focused on protecting individuals from physical and mental harm in clinical trials (see subsequent sections of this chapter). Although the standards apply to research that uses personally identifiable health information, the protection of information is not their primary focus.

Principles of Fair Information Practice

The framework from which detailed statutory and regulatory protections of privacy originated was a 1973 report of an advisory committee to the U.S. Department of Health, Education, and Welfare (HEW), “designed to call attention to issues of recordkeeping practice in the computer age that may have profound significance for us all” (HEW, 1973). The principles were intended to “provide a basis for establishing procedures that assure the individual a right to participate in a meaningful way in decisions about what goes into records about him and how that information shall be used” (HEW, 1973). In addition to affording individuals the meaningful right to control the collection, use, and disclosure of their information, the fair information practices also impose affirmative responsibilities to safeguard information on those who collect it (reviewed by Pritts, 2008).

The fundamental principles of fair information practice articulated in the report have since been amplified and adopted in various forms at the international, federal, and state levels (Gelman, 2008). The fair information practices endorsed by the Organisation for Economic Co-operation and Development (OECD), which have been widely cited, include the following principles (OECD, 1980):

  • Collection Limitation

    There should be limits to the collection of personal data, and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject.

  • Data Quality

    Personal data should be relevant to the purposes for which they are to be used, and to the extent necessary for those purposes, should be accurate, complete, and kept up to date.

  • Purpose Specification

    The purposes for which personal data are collected should be specified not later than at the time of data collection, and the subsequent use limited to the fulfillment of those purposes or such others as are not incompatible with those purposes, and as are specified on each occasion of change of purpose.

  • Use Limitation

    Personal data should not be disclosed, made available, or otherwise used for purposes other than those specified in accordance with [the Purpose Specification] except:

    1. with the consent of the data subject; or

    2. by the authority of law.

  • Security Safeguards

    Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorized access, destruction, use, modification, or disclosure of data.

  • Openness

    There should be a general policy of openness about developments, practices, and policies with respect to personal data. Means should be readily available of establishing the existence and nature of personal data, and the main purposes of their use, as well as the identity and usual residence of the data controller.

  • Individual Participation

    An individual should have the right to know whether a data controller has data relating to him/her, to obtain a copy of the data within a reasonable time in a form that is intelligible to him/her, to obtain a reason if the request for access is denied, to challenge such a denial, to challenge data relating to him/her, and, if the challenge is successful, to have the data erased, rectified, completed, or amended.

  • Accountability

    A data controller should be accountable for complying with measures which give effect to the principles stated above.

These principles have been adopted at the federal and state levels to varying degrees. The United States has taken a sector-driven approach toward adopting the principles of fair information practices, with the federal and state governments promulgating statutes and regulations that apply only to specific classes of record keepers or categories of records.

At the federal level, the fair information practices were first incorporated into the Privacy Act of 1974, which governs the collection, use, and disclosure of personally identifiable data held by the federal government and some of its contractors. Hospitals operated by the federal government and health care or research institutions operated under federal contract are subject to the Privacy Act, while other health care entities remained outside its scope (Gostin, 1995). Nevertheless, the Privacy Act afforded perhaps the broadest protection for health information at the federal level until the promulgation of the HIPAA Privacy Rule.

For their part, states have adopted (and continue to adopt) laws that not only mirror the Privacy Act in protecting government-held records, but also afford broader protections for personally identifiable health information held by private parties. However, these principles have not been adopted uniformly among states, resulting in a patchwork of state health privacy laws that provide little consistency from entity to entity or from state to state.

For example, the states have enacted the fair information practice restriction on use and disclosure of information in varying ways (reviewed by Pritts, 2008). Some allow the disclosure of health information for research without the individual’s permission, and others require such permission. Still others require permission only for the release of certain types of information for research. Similarly, state statutes vary widely in how they have applied the accountability principle, both in the way they provide remedies for breaches of confidentiality and security and with respect to the standard imposed for initiating a suit. Also, only a few states have statutorily required providers to undertake security measures to ensure that health information is used and disclosed properly.

SECURITY OF HEALTH DATA

Protecting the security of data in health research is important because health research requires the collection, storage, and use of large amounts of personally identifiable health information, much of which may be sensitive and potentially embarrassing. If security is breached, the individuals whose health information was inappropriately accessed face a number of potential harms. The disclosure of personal information may cause intrinsic harm simply because that private information is known by others (Saver, 2006). Another potential danger is economic harm. Individuals could lose their job, health insurance, or housing if the wrong type of information becomes public knowledge. Individuals could also experience social or psychological harm. For example, the disclosure that an individual is infected with HIV or another type of sexually transmitted infection can cause social isolation and/or other psychologically harmful results (Gostin, 2008). Finally, security breaches could put individuals in danger of identity theft (Pritts, 2008).

Protecting the privacy of research participants and maintaining the confidentiality of their data have always been paramount and are a fundamental tenet of clinical research. However, several highly publicized examples of stolen or misplaced computers containing health data have heightened the public’s concerns about the security of health data (for a list of security breaches in health research, see Table 2-2). The extent to which these breaches have caused tangible harm to the individuals involved is difficult to quantify (Pritts, 2008). A Government Accountability Office (GAO) report studying major security breaches involving nonmedical personal information concluded that most security breaches do not result in identity theft (GAO, 2007). However, the lack of identity theft resulting from past breaches is no guarantee that future breaches will not result in more serious harm. A recent report from the Identity Theft Resources Center found that identity theft was up by 69 percent for the first half of 2008, compared to the same period in 2007 (ITRC, 2008). Also, regardless of actual harm, security breaches are problematic for health research because they undermine public trust, which is essential for patients to be willing to participate in research (Hodge et al., 1999). A recent study found patients believe that requiring researchers to have security plans encourages researchers to take additional precautions to protect data (Damschroder et al., 2007). Moreover, data security is important because it is a key component of comprehensive privacy practices.

TABLE 2-2

Research Security Breaches: 2006–2008.

The HIPAA Security Rule and Its Limitations

The goals of security are threefold: to ensure that (1) only authorized individuals see stored data; (2) they only see the data when they need to use it for an authorized purpose; and (3) what they see is accurate. Traditionally, these goals have been pursued through protections intended to make data processing safe from unauthorized access, alteration, deletion, or transmission. The HIPAA Security Rule employs this traditional solution to protecting security, and sets a floor for data security standards within covered entities (Box 2-2).
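
To make the first two of these goals concrete, the following minimal Python sketch (not drawn from the report; the roles, purposes, and logging scheme are hypothetical) pairs an authorization check with an audit-trail entry:

    import logging

    logging.basicConfig(level=logging.INFO)

    # Hypothetical (role, purpose) pairs permitted to view stored data.
    AUTHORIZED = {
        ("treating_clinician", "treatment"),
        ("researcher", "irb_approved_study"),
    }

    def access_record(role: str, purpose: str, record_id: str) -> bool:
        """Grant access only to authorized users for authorized purposes,
        and write an audit-trail entry either way."""
        allowed = (role, purpose) in AUTHORIZED
        logging.info("record=%s role=%s purpose=%s decision=%s",
                     record_id, role, purpose, "granted" if allowed else "denied")
        return allowed

    access_record("researcher", "marketing", "MRN-0001")  # denied, and logged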

BOX 2-2

The HIPAA Security Rule. The final HIPAA Security Standards were adopted on February 20, 2003. Covered entities were required to be in compliance with the regulation by April 21, 2005 (and by April 21, 2006, for small health plans).

The HIPAA Security Rule has several major gaps in security protection. First, like the HIPAA Privacy Rule, the HIPAA Security Rule only applies to covered entities. Many researchers who rely on protected health information (PHI) to conduct health research are not covered entities, and thus are not required to implement any of the security requirements outlined in the Security Rule. Although federal research regulations include protections of privacy, there are no other laws that specifically require researchers to implement security protections for research data. Second, the HIPAA Security Rule only protects electronic medical records; it does not require covered entities to implement any security protections for health information stored in paper records. There is an ongoing effort to implement electronic health records. However, many health records now exist only in paper form and may not be securely protected.

Third, many covered entities apparently are not yet in full compliance with all the requirements of the HIPAA Security Rule, based on surveys of health care privacy officers and other individuals responsible for implementing the HIPAA regulations conducted by the American Health Information Management Association (AHIMA). The surveys found that although the percentage of respondents who believe their facilities are in full compliance with the HIPAA Security Rule is increasing yearly, the number is still not 100 percent. In 2006, 1 year after implementation of the HIPAA security regulations, 25 percent of respondents described themselves as fully compliant with the Security Rule, and 50 percent described themselves as 85 to 95 percent compliant (compared to 17 percent of respondents in 2005 reporting they were fully compliant, and 43 percent describing themselves as 85 to 95 percent compliant). More than half—54 percent—of respondents reported that their covered entity had upgraded its electronic software system to comply with the HIPAA Security Rule. All the respondents reported that their covered entity has an individual responsible for assessing data protection needs and implementing solutions and staff training (compared to 89 percent in 2005), but the number of facilities reporting that they have an entire committee or task force dedicated to security decreased from 2005 (59 percent versus 78 percent) (AHIMA, 2006).

The Centers for Medicare & Medicaid Services (CMS) has the authority to enforce the HIPAA Security Rule, and has received 378 security complaints as of 2008 without issuing any fines or penalties. A recent report issued by the HHS Office of Inspector General evaluated CMS’s oversight and enforcement of the HIPAA Security Rule and “found that CMS had taken limited steps to ensure that covered entities adequately implement security protections” (OIG, 2008). However, a 2008 Resolution Agreement entered into by the U.S. Department of Health and Human Services (HHS) and CMS with Seattle-based Providence Health & Services for breaches of the HIPAA Privacy and Security Rules may indicate that CMS is starting to take a more affirmative approach to enforcement. The agreement requires Providence Health & Services to pay $100,000 and to implement a corrective action plan to ensure electronic patient information is appropriately safeguarded against future security breaches (OCR, 2008). In addition, CMS has recently partnered with PricewaterhouseCoopers to conduct security audits of covered entities to examine how well they are implementing the requirements of the HIPAA Security Rule. Ten to 20 assessments are planned for 2008 (Conn, 2008). Together these actions may have a positive effect on the percentage of covered entities fully compliant with the HIPAA Security Rule.

Regardless of whether the HIPAA Security Rule is actively enforced, the other gaps in the HIPAA Security Rule’s protection of personal health information are problematic because enhanced security is necessary to reduce the risk of data theft and to reinforce the public’s trust in the research community by diminishing anxiety about the potential for unintentional disclosure of information. Thus, the IOM committee recommends that all institutions (both covered entities and non-covered entities) in the health research community that are involved in the collection, use, and disclosure of personally identifiable health information take strong measures to safeguard the security of health data. Given the differences among the missions and activities of institutions in the health research community, some flexibility in the implementation of specific security measures will be necessary.

Examples of measures that institutions should implement include appointment of a security officer on IRBs and Privacy Boards to be responsible for assessing data protection needs and implementing solutions and staff training; use of encryption and encoding techniques, especially for laptops and removable media containing personally identifiable health information; and implementation of a breach notification requirement, so that patients may take steps to protect their identity in the event of a breach (IOM, 2000). More generally, institutions should implement layers of security protections, so that if security fails at one layer the breach will likely be stopped by another layer of security protection. The publication of best practices combined with a cooperative approach to compliance with security standards—such as self-evaluation, security audits, and certification programs—would also promote progress in this area. Research sponsors could play a role in the adoption of best practices in data security, by requiring researchers to implement appropriate security measures prior to providing funding. In addition, the federal government should support the development of technologies to enhance the security of health information.
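
As one illustration of the encryption measure suggested above, the following Python sketch encrypts a file bound for a laptop or removable drive using the third-party cryptography package; the file name is invented, and key management (the hard part in practice) is out of scope:

    from cryptography.fernet import Fernet  # pip install cryptography

    def encrypt_file(path: str, key: bytes) -> None:
        """Write an encrypted copy of a file using symmetric Fernet encryption."""
        cipher = Fernet(key)
        with open(path, "rb") as src:
            ciphertext = cipher.encrypt(src.read())
        with open(path + ".enc", "wb") as dst:
            dst.write(ciphertext)

    def decrypt_file(enc_path: str, key: bytes) -> bytes:
        """Recover the plaintext; raises InvalidToken if the file was altered."""
        cipher = Fernet(key)
        with open(enc_path, "rb") as src:
            return cipher.decrypt(src.read())

    key = Fernet.generate_key()  # in practice, held in a key-management system
    encrypt_file("research_extract.csv", key)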

Examples of security standards and guidelines already exist in some sectors, but they are not widely applied in health research. For instance, the National Institute of Standards and Technology has developed standards and guidance for the implementation of the Federal Information Security Management Act of 2002, which was meant to bolster computer and network security within the federal government and affiliated parties (e.g., government contractors). These include standards for minimum security requirements for information and information systems, as well as guidance for assessing and selecting appropriate security controls for information systems, for determining security control effectiveness, and for certifying and accrediting information systems (NIST, 2007). However, two recent GAO reports found that although the federal government is improving information security performance, a number of significant information security control deficiencies remain (GAO, 2008a). HHS, working through its Office of the National Coordinator for Health Information Technology, could play an important role in developing or adapting standards for health research applications, and then in encouraging and facilitating broader use of such standards in the health research community.

POTENTIAL TECHNICAL APPROACHES TO HEALTH DATA PRIVACY AND SECURITY

The security of data will continue to grow in importance as the health care industry moves toward greater implementation of electronic health records, and Congress has already proposed numerous bills to facilitate and regulate that transition (see also Chapter 6). Advances in information technology will likely make it easier to implement such measures as audit trails and access controls in the future. Although the committee does not recommend a specific technology solution, there are at least four technological approaches to enhancing data privacy and security that have been proposed by others as having the potential to be particularly influential in health research: (1) privacy-preserving data mining and statistical disclosure limitation, (2) personal electronic health record devices, (3) independent consent management tools, and (4) pseudonymization. Each seeks to minimize or eliminate the transfer of personally identifiable data (Burkert, 2001). The advantages, limitations, and current feasibility of each are described briefly below.

Privacy-preserving data mining and statistical disclosure limitation. In recent years, a number of techniques have been proposed for modifying or transforming data in such a way as to preserve privacy while still permitting statistical analysis (reviewed in Aggarwal and Yu, 2008; NRC, 2000, 2005, 2007b). Typically, such methods reduce the granularity of representation in order to protect confidentiality. There is, however, a natural trade-off between information loss and confidentiality protection, because this reduction in granularity results in diminished accuracy and utility of the data and of the methods used in their analysis. Thus, a key issue is to maintain maximum utility of the data without compromising the underlying privacy constraints. In addition, there are a very large number of definitions of privacy and its protection in the statistical disclosure limitation and privacy-preserving data mining literatures, in part because of their varying goals.

Examples of statistical disclosure limitation and privacy-preserving data mining methods include: perturbation methods, such as noise addition, which attempt to mask the identifiable attributes of individual records; aggregation methods, such as k-anonymity, which attempt to reduce the granularity of representation of the data so that a given record cannot be distinguished from at least (k – 1) other records; the release of summary statistics that can be used for actual statistical analyses, such as marginal totals from contingency tables; and various approaches to the generation of synthetic data. Several of these are reviewed in Aggarwal and Yu (2008).
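
As an illustration of the aggregation approach (a sketch, not a method prescribed by any of the sources cited here), the short Python fragment below checks whether a table satisfies k-anonymity over a chosen set of quasi-identifiers; the field names and records are invented:

    from collections import Counter

    def is_k_anonymous(records, quasi_identifiers, k):
        """True if every combination of quasi-identifier values is shared by
        at least k records, so no record stands out from (k - 1) others."""
        groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
        return all(count >= k for count in groups.values())

    records = [
        {"age_band": "40-49", "zip3": "021", "diagnosis": "diabetes"},
        {"age_band": "40-49", "zip3": "021", "diagnosis": "asthma"},
        {"age_band": "50-59", "zip3": "021", "diagnosis": "asthma"},
    ]
    # False: the third record is unique on (age_band, zip3), so k=2 fails.
    print(is_k_anonymous(records, ["age_band", "zip3"], k=2))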

Other technologies include cryptographic methods for distributive privacy protection, which operate by allowing researchers to query various databases online using cryptographic algorithms (Brands, 2007; reviewed in Aggarwal and Yu, 2008), query auditing techniques, and output perturbation using methodology known as differential privacy (many of these techniques are reviewed in Aggarwal and Yu, 2008, and Dwork, 2008). These technologies aim to protect privacy by minimizing the outflow of information to researchers, as the providers of the databases do not make any of the actual data available to the researchers. The principal drawback of many of these methods relates to the potentially limited utility of the released information, especially for secondary analyses not planned in advance.
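
To make the output-perturbation idea concrete, here is a minimal sketch in the style of the Laplace mechanism of differential privacy; the epsilon value and records are illustrative, and a counting query is chosen because its sensitivity (the most one person can change the answer) is 1:

    import math
    import random

    def laplace_noise(scale: float) -> float:
        """Sample Laplace(0, scale) noise by inverting its CDF."""
        u = random.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def private_count(records, predicate, epsilon: float) -> float:
        """Answer a counting query with noise scaled to sensitivity/epsilon,
        so the raw records never leave the database holder."""
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_noise(1.0 / epsilon)

    records = [{"dx": "asthma"}, {"dx": "diabetes"}, {"dx": "asthma"}]
    print(private_count(records, lambda r: r["dx"] == "asthma", epsilon=0.5))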

Each of the methods referred to above has strengths and weaknesses for specific kinds of statistical analyses. Precisely how this body of developing methodologies may be effectively used in the types of health research envisioned in this report remains an open question and an area of active research. Thus, alternative mechanisms for data protection, going beyond the removal of obvious identifiers and the application of limited modifications of data elements, are required. These mechanisms need to be backed up by legal penalties and sanctions.

Personal electronic health record devices. The use of personal electronic health record devices requires that all individuals possess a personal electronic device, such as a personal digital assistant (PDA) or personal computer, to manage their health information. The electronic device is intended to be used by individuals to aggregate all of their health information into one location (i.e., the electronic device). The infrastructure for implementing this privacy-enhancing technology exists, but there are several serious problems with relying on this technology in health research. First, it is unclear who would provide individuals with the devices, how they would be maintained, and who would bear the cost of the maintenance. Second, it is impossible for researchers to query every single individual for permission to access his/her personal electronic health record device in order to determine if he/she meets the criteria for the relevant study. Only individuals who are on the Internet and are involved in health research could easily be queried. Third, the use of personal electronic devices would make it almost impossible to aggregate data because of the difficulty of accessing data from multiple sources. These problems are sufficiently serious that the use of this technology is unlikely to offer a satisfactory solution to the privacy and security concerns in health research (Brands, 2007).

Independent consent management tools. The independent consent management tool (or infomediary) relies on a health trust to store all of an individual’s health data. When researchers are interested in accessing an individual’s health information for a study, the researchers must contact the health trust. The health trust then approaches the individual and asks whether he/she is willing to give consent for the research. Examples of this technology include Microsoft’s HealthVault, Google Health, and Revolution Health.

Independent consent management tools allow individuals to make blanket consents for their health information to be released for certain types of research or researchers. For example, an individual can have a standing consent that his/her information can be released to all researchers at the Mayo Clinic, or for all research on cancer, etc. Thus, the use of a health trust allows an individual to have the power of consent for all uses of his/her health information, but does not require a specific consent in all instances (Brands, 2007). Some privacy advocates view this technology very favorably because they see it as a way to give patients complete control over who can see and use their health information (PPR, 2008).
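
The blanket-consent idea can be pictured as a simple matching rule. In the hypothetical Python sketch below, a directive that leaves a field unspecified matches any request on that field; the institutions and topics are invented examples, not features of any actual product:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ConsentDirective:
        institution: Optional[str] = None  # None means any institution
        topic: Optional[str] = None        # None means any research topic

    def release_permitted(directives, institution, topic):
        """A request is permitted if any standing directive covers it."""
        return any(
            d.institution in (None, institution) and d.topic in (None, topic)
            for d in directives
        )

    directives = [ConsentDirective(institution="Mayo Clinic"),
                  ConsentDirective(topic="cancer")]
    print(release_permitted(directives, "Mayo Clinic", "cardiology"))    # True
    print(release_permitted(directives, "Elsewhere Univ.", "cancer"))    # True
    print(release_permitted(directives, "Elsewhere Univ.", "dentistry")) # False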

However, the use of this technology in health research has several major problems. The first problem is that the health trust in this system becomes a “honey pot” (i.e., the health trust holds ALL of an individual’s data). This creates serious trust and security issues because a person’s entire health record is held by a single entity (Brands, 2007). A 2006 survey of global financial services institutions found that respondents reported that nearly 50 percent of all security breaches were a result of an internal failure (e.g., a virus or worm originating inside the organization, insider fraud, or inadvertent leakage of consumer data) (Melek and MacKinnon, 2006). Many security breaches in health care are likely also a result of internal failures. In addition, these organizations are currently not regulated by the HIPAA Privacy Rule, so there are no federal legal privacy restrictions preventing these entities from releasing individuals’ data to the government, marketing companies, or others, and no mandatory data security requirements. New legislation or regulation making health trusts liable for security breaches may be necessary before the public is willing to trust these organizations to store personal health data (Metz, 2008).

The second major impediment to the widespread adoption of independent consent management tools is the difficulty of providing individuals with secure online access to view their health information. The companies marketing this technology need to develop a mechanism where individuals can access their medical information held by the health trust without endangering its security and privacy. The current methods for individual authentication online do not work well (NRC, 2003), but the use of a strong authentication system in a single domain may solve this problem. The companies will also need to address the fact that a significant portion of the population does not have online access at all (Brands, 2007).

The final problem with using independent consent management systems in health research is the inability to ensure the authenticity and integrity of responses. Health trusts have no existing method of guaranteeing to researchers that the information contained in their databases is accurate. If data are authenticated using existing methods, such as digital signing, then it is impossible to fully protect the privacy of the individuals whose information is disclosed (NRC, 2003). Cryptographic selective disclosure techniques may eventually solve this problem, but that technology does not yet exist (Brands, 2007).
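
To make the tension concrete, the following is a minimal sketch of conventional digital signing, the existing authentication method referred to above. It uses the third-party Python cryptography package; the record contents and key handling are illustrative assumptions. Note that the signature binds to the record exactly as signed, so a researcher can verify only the full record, not a redacted, privacy-preserving subset of it.

    # Minimal sketch of conventional digital signing with Ed25519, via the
    # third-party "cryptography" package. Key management is simplified and
    # the record is an illustrative assumption.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    record = b'{"pseudo_id": "a91f", "hba1c": 6.8}'

    signing_key = Ed25519PrivateKey.generate()  # held by the health trust
    signature = signing_key.sign(record)        # attests this exact record

    verify_key = signing_key.public_key()       # shared with the researcher
    verify_key.verify(signature, record)        # raises InvalidSignature if tampered

    # Removing or redacting any field invalidates the signature, which is
    # why selective disclosure requires different cryptographic techniques.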

Pseudonymization. Pseudonymization is a method “used to replace the true identities (nominative) of individuals or organizations in databases by pseudo-identities (pseudo-IDs) that cannot be linked directly to their corresponding nominative identities” (Claerhout and De Moor, 2005). The benefit of using pseudonymization in health research is that it protects individuals’ identities while allowing researchers to link personal data across time and place by relying on the pseudo-IDs.

Most pseudonymization methods use a trusted third party to perform the pseudonymization process, so at least three entities are involved in the creation of each database: the data source, which has access to nominative personal data (e.g., PHI); the trusted third party; and the data register, which uses the pseudonymized data for research.

Two methods of pseudonymization are batch data collection and interactive data collection. In batch data collection, the data supplier splits the data into two parts: (1) the identifiers that relate to a specific person (e.g., Social Security number, name), and (2) the payload data, which include all the nonidentifiable data associated with each individual. The data are prepseudonymized at the data source and transferred to the trusted third party, which converts the prepseudonymized identifiers into final pseudo-IDs. Both the final pseudo-IDs and the payload data are transferred to the data register, where they are stored and used for research; no data are stored with the trusted third party. Privacy concerns are minimized because the only version of the data available to researchers is the pseudonymized one.
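
A minimal sketch of this batch flow follows, assuming a keyed hash (HMAC) as the pseudonymization primitive; the keys, field names, and sample record are illustrative assumptions rather than the specific method described by Claerhout and De Moor (2005).

    # Hedged sketch of batch pseudonymization with two keyed-hash stages.
    # SOURCE_KEY and TTP_KEY are illustrative; a real system would use the
    # trusted third party's specified keying and transport mechanisms.
    import hashlib
    import hmac

    SOURCE_KEY = b"data-source-secret"  # known only to the data source
    TTP_KEY = b"ttp-secret"             # known only to the trusted third party (TTP)

    def keyed_hash(key: bytes, value: str) -> str:
        return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

    record = {"ssn": "123-45-6789", "name": "Jane Doe",
              "diagnosis": "E11.9", "age_band": "40-49"}

    # Step 1 (data source): split identifiers from payload, prepseudonymize.
    payload = {k: v for k, v in record.items() if k not in ("ssn", "name")}
    prepseudonym = keyed_hash(SOURCE_KEY, record["ssn"])

    # Step 2 (TTP): convert the prepseudonym into the final pseudo-ID.
    # The TTP never sees the payload and stores nothing.
    final_pseudo_id = keyed_hash(TTP_KEY, prepseudonym)

    # Step 3 (data register): store the payload under the final pseudo-ID.
    register_row = {"pseudo_id": final_pseudo_id, **payload}
    print(register_row)

Because each stage uses its own secret key, the data register cannot reverse a pseudo-ID back to a nominative identity, yet repeated submissions for the same person yield the same pseudo-ID, preserving linkability across time and place.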

Interactive data collection is used in situations where neither the data supplier nor the data register needs local storage of the data. All the data are stored by the trusted third party in pseudonymized form, and both the data supplier and the data register must query the trusted third party to access the data (Claerhout and De Moor, 2005; De Moor et al., 2003).
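
In the same hedged spirit, the interactive variant can be sketched as a single trusted-third-party service that both parties must call; the class, method names, and in-memory dictionary below are purely illustrative.

    # Hypothetical trusted third party (TTP) for interactive collection:
    # suppliers deposit data and registers query it, but only the TTP holds
    # storage, and only in pseudonymized form.
    import hashlib
    import hmac
    from typing import Dict, Optional

    class TrustedThirdParty:
        def __init__(self, key: bytes):
            self._key = key                    # pseudonymization secret
            self._store: Dict[str, dict] = {}  # pseudo-ID -> payload

        def _pseudo_id(self, nominative_id: str) -> str:
            return hmac.new(self._key, nominative_id.encode(),
                            hashlib.sha256).hexdigest()

        def deposit(self, nominative_id: str, payload: dict) -> None:
            # Called by the data supplier; the nominative identifier is
            # discarded as soon as the pseudo-ID is derived.
            self._store[self._pseudo_id(nominative_id)] = payload

        def query(self, pseudo_id: str) -> Optional[dict]:
            # Called by the data register; only pseudonymized data leave the TTP.
            return self._store.get(pseudo_id)

    ttp = TrustedThirdParty(key=b"ttp-secret")
    ttp.deposit("123-45-6789", {"diagnosis": "E11.9", "age_band": "40-49"})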

It is unclear how technologies relying on pseudonymization would be implemented under the requirements of the HIPAA Privacy Rule. In order for information to be considered deidentified, the HIPAA Privacy Rule specifically states that covered entities can assign a code or other means of record identification (such as a pseudo-ID), but the code cannot be derived from, or related to, information about the subject of the information.15 This means that any pseudo-IDs created using this technology must be based entirely on nonpersonal information. Alternatively, any researchers using the pseudonymized data must go through the normal IRB/Privacy Board review process.

CONCLUSIONS AND RECOMMENDATIONS

Based on its review of the information described in this chapter, the committee agreed on an overarching principle to guide the formation of recommendations. The committee affirms the importance of maintaining and improving the privacy of health information. In the context of health research, privacy includes the commitment to handle personal information of patients and research participants with meaningful privacy protections, including strong security measures, transparency, and accountability.16 These commitments extend to everyone who collects, uses, or has access to personally identifiable health information of patients and research participants. Practices of security, transparency, and accountability take on extraordinary importance in the health research setting: Researchers and other data users should disclose clearly how and why personal information is being collected, used, and secured, and should be subject to legally enforceable obligations to ensure that personally identifiable information is used appropriately and securely. In this manner, privacy protection will help to ensure research participation and public trust and confidence in medical research.

As part of the process of implementing this principle into the federal oversight regime of health research, the committee recommends that all institutions in the health research community that are involved in the collection, use, and disclosure of personally identifiable health information take strong measures to safeguard the security of health data. For example, institutions could:

  • Appoint a security officer responsible for assessing data protection needs and implementing solutions and staff training.

  • Make greater use of encryption and other techniques for data security.

  • Include data security experts on IRBs.

  • Implement a breach notification requirement, so that patients may take steps to protect their identity in the event of a breach.

  • Implement layers of security protection to eliminate single points of vulnerability to security breaches.

In addition, the federal government should support the development and use of:

  • Genuine privacy-enhancing techniques that minimize or eliminate the collection of personally identifiable data.

  • Standardized self-evaluations and security audits and certification programs to help institutions achieve the goal of safeguarding the security of personal health data.

Effective health privacy protections require effective data security measures. The HIPAA Security Rule (a set of regulatory provisions separate from the Privacy Rule) already sets a floor for data security standards within covered entities, but not all institutions that conduct health research are subject to HIPAA regulations. Moreover, the survey data presented in this chapter show that neither the HIPAA Privacy Rule nor the HIPAA Security Rule has directly improved public confidence that personal health information will be kept confidential. Therefore, all institutions conducting health research should undertake measures to strengthen data protections. For example, given the recent spate of lost or stolen laptops containing patient health information, encryption should be required for all laptops and removable media containing such data. In general, however, given the differences among the missions and activities of institutions in the health research community, some flexibility in the implementation of specific security measures will be necessary.
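
As one illustration of the kind of safeguard envisioned here, the sketch below encrypts a file destined for a laptop or removable media using authenticated symmetric encryption (Fernet) from the third-party Python cryptography package. The file name and sample contents are assumptions, and a real deployment would pair this with managed key storage rather than an in-script key.

    # Illustrative file-level encryption for data copied to laptops or
    # removable media, using Fernet from the third-party "cryptography"
    # package. File names and contents are assumptions; key escrow and
    # storage are out of scope here.
    from cryptography.fernet import Fernet

    # Sample extract standing in for real (pseudonymized) study data.
    with open("patient_extract.csv", "wb") as f:
        f.write(b"pseudo_id,diagnosis\na91f,E11.9\n")

    key = Fernet.generate_key()  # in practice, held in a managed key store
    cipher = Fernet(key)

    with open("patient_extract.csv", "rb") as f:
        ciphertext = cipher.encrypt(f.read())
    with open("patient_extract.csv.enc", "wb") as f:
        f.write(ciphertext)

    # Decryption also authenticates: a tampered file raises InvalidToken.
    with open("patient_extract.csv.enc", "rb") as f:
        restored = cipher.decrypt(f.read())
    assert restored == b"pseudo_id,diagnosis\na91f,E11.9\n"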

Enhanced security would reduce the risk of data theft and reinforce the public’s trust in the research community by diminishing anxiety about the potential for unintentional disclosure of information. The publication of best practices and outreach to all stakeholders by HHS, combined with a cooperative approach to compliance with security standards, such as self-evaluation and audit programs, would promote progress in this area. Research sponsors could also play a role in fostering the adoption of best practices in data security.

REFERENCES

  1. Aggarwal CC, Yu PS, editors. Privacy-preserving data mining: Models and algorithms. Boston, MA: Kluwer Academic Publishers; 2008.

  2. Allen A. Genetic privacy: Emerging concepts and values. In: Rothstein M, editor. Genetic secrets: Protecting privacy and confidentiality in the genetic era. New Haven, CT: Yale University Press; 1997. pp. 31–59.

  3. Balch GI, Doner L, Hoffman MK, Macario E. An exploration of how patients and family caregivers think about counterfeit drugs and the safety of prescription drug retail outlets for the National Health Council. Oak Park, IL: Balch Associates; 2005.

  4. Balch GI, Doner LMA, Hoffman MK, Merriman MP, Monroe-Cook E, Rathjen G. Concept and message development research on engaging communities to promote electronic personal health records for the National Health Council. Oak Park, IL: Balch Associates; 2006.

  5. Barrett G, Cassell JA, Peacock JL, Coleman MP. National survey of British public’s views on use of identifiable medical data by the National Cancer Registry. British Medical Journal. 2006;332(7549):1068–1072. [PMC free article: PMC1458550] [PubMed: 16648132]

  6. Bloustein E. Privacy as an aspect of human dignity: An answer to Dean Prosser. New York University Law Review. 1964;39:962.

  7. Bodger JA. Note, taking the sting out of reporting requirements: Reproductive health clinics and the constitutional right to informational privacy. Duke Law Journal. 2006;56:583–609. [PubMed: 17302004]

  8. Burkert H. Privacy-enhancing technologies: Typology, critique, vision. In: Agre PE, Rotenberg M, editors. Technology and privacy: The new landscape. Cambridge, MA: The MIT Press; 2001. pp. 125–142.

  9. Claerhout B, De Moor GJE. Privacy protection for clinical and genomic data: The use of privacy-enhancing techniques in medicine. International Journal of Medical Informatics. 2005;74:257–265. [PubMed: 15694632]

  10. Conn J. CMS’ HIPAA watchdog presents potential conflict. Modern Healthcare. 2008. [accessed July 28, 2008]. http://www.modernhealthcare.com.

  11. Damschroder LJ, Pritts JL, Neblo MA, Kalarickal RJ, Creswell JW, Hayward RA. Patients, privacy and trust: Patients’ willingness to allow researchers to access their medical records. Social Science & Medicine. 2007;64(1):223–235. [PubMed: 17045717]

  12. De Moor GJE, Claerhout B, De Meyer F. Privacy enhancing techniques: The key to secure communication and management of clinical and genomic data. Methods of Information in Medicine. 2003;42:148–153. [PubMed: 12743651]

  13. Dwork C. An ad omnia approach to defining and achieving private data analysis. Proceedings of the First SIGKDD International Workshop on Privacy, Security, and Trust in KDD (invited). Lecture Notes in Computer Science. 2008;4890.

  14. Feld AD. The Health Insurance Portability and Accountability Act (HIPAA): Its broad effect on practice. American Journal of Gastroenterology. 2005;100(7):1440–1443. [PubMed: 15984962]

  15. Flannery J, Tokley J. AMA poll shows patients are concerned about the privacy and security of their medical records. Australian Medical Association; 2005. [accessed December 10, 2007]. http://www.ama.com.au/web.nsf/doc/WEEN-6EG7LY.

  16. Forrester Research. National survey: Confidentiality of medical records. 1999. [accessed February 12, 2007]. http://www.chcf.org.

  17. Fried C. Privacy. Yale Law Journal. 1968;77:475–493.

  18. GAO (Government Accountability Office). Personal information: Data breaches are frequent, but evidence of resulting identity theft is limited. Washington, DC: GAO; 2007.

  19. GAO. Although progress reported, federal agencies need to resolve significant deficiencies: Statement of Gregory C. Wilshusen, Director, Information Security Issues. Washington, DC: GAO; 2008a.

  20. GAO. Information security: Progress reported, but weaknesses at federal agencies persist: Statement of Gregory C. Wilshusen, Director, Information Security Issues. Washington, DC: GAO; 2008b.

  21. Gavison R. Privacy and the limits of the law. Yale Law Journal. 1980;89:421–471.

  22. Goldman J. Protecting privacy to improve health care. Health Affairs. 1998;17(6):47–60. [PubMed: 9916354]

  23. Gostin LO. Health information privacy. Cornell Law Review. 1995;80:101–184. [PubMed: 11660159]

  24. Gostin L. Health information: Reconciling personal privacy with the public good of human health. Health Care Analysis. 2001;9:321. [PubMed: 11794835]

  25. Gostin L. Surveillance and public health research: Personal privacy and the “right to know.” In: Public health law: Power, duty, restraint. Berkeley, CA: University of California Press; 2008.

  26. Gostin LO, Hodge JG. Personal privacy and common goods: A framework for balancing under the national health information Privacy Rule. Minnesota Law Review. 2002;86:1439. [PubMed: 15174439]

  27. HEW (Department of Health, Education and Welfare). Records, computers and the rights of citizens: Report of the Secretary’s Advisory Committee on Automated Personal Data Systems. 1973. [accessed July 12, 2008]. http://aspe.hhs.gov/datacncl/1973privacy/tocprefacemembers.htm.

  28. Hodge JG Jr, Gostin LO, Jacobson PD. Legal issues concerning electronic health information: Privacy, quality, and liability. JAMA. 1999;282(15):1466–1471. [PubMed: 10535438]

  29. Hudson KL. Prohibiting genetic discrimination. New England Journal of Medicine. 2007;356:2021. [PubMed: 17507700]

  30. IOM (Institute of Medicine). Protecting data privacy in health services research. Washington, DC: National Academy Press; 2000. [PubMed: 25057723]

  31. Kass NE, Natowicz MR, Hull SC, Faden RR, Plantinga L, Gostin LO, Slutsman J. The use of medical records in research: What do patients want? Journal of Law, Medicine & Ethics. 2003;31:429–433. [PMC free article: PMC4816216] [PubMed: 14626550]

  32. Low L, King S, Wilkie T. Genetic discrimination in life insurance: Empirical evidence from a cross sectional survey of genetic support groups in the United Kingdom. British Medical Journal. 1998;317:1632–1635. [PMC free article: PMC28743] [PubMed: 9848905]

  33. Lowrance WW. Privacy and health research: A report to the U.S. Secretary of Health and Human Services. 1997. [accessed May 10, 2008]. http://aspe.hhs.gov/DATACNCL/PHR.htm.

  34. Lowrance WW. Learning from experience: Privacy and the secondary use of data in health research. London: The Nuffield Trust; 2002. [PubMed: 15072055]

  35. Magnussen R. The changing legal and conceptual shape of health care privacy. The Journal of Law, Medicine & Ethics. 2004;32:681. [PubMed: 15807356]

  36. Moore A. Intangible property: Privacy, power and information control. In: Moore A, editor. Information ethics: Privacy, property, and power. Seattle, WA: University of Washington Press; 2005.

  37. NBAC (National Bioethics Advisory Commission). Research involving human biological materials: Ethical issues and policy guidance, report and recommendations. Vol. 1. Rockville, MD: NBAC; 1999.

  38. NBAC. Ethical and policy issues in research involving human participants. Rockville, MD: NBAC; 2001.

  39. Nissenbaum H. Privacy as Contextual Integrity. Washington Law Review. 2004;79:101–139.

  40. NRC (National Research Council). Improving access to and confidentiality of research data: Report of a workshop. Washington, DC: National Academy Press; 2000.

  41. NRC. Who goes there?: Authentication through the lens of privacy. Washington, DC: The National Academies Press; 2003.

  42. NRC. Expanding access to research data: Reconciling risks and opportunities. Washington, DC: The National Academies Press; 2005.

  43. NRC. Engaging privacy and information technology in a digital age. Washington, DC: The National Academies Press; 2007a.

  44. NRC. Privacy and information technology in a digital age. Washington, DC: The National Academies Press; 2007b.

  45. NRC. Putting people on the map: Protecting confidentiality with linked social-spatial data. Washington, DC: The National Academies Press; 2007c.

  46. OIG (Office of Inspector General). Nationwide review of the Centers for Medicare & Medicaid Services Health Insurance Portability and Accountability Act of 1996 oversight. Washington, DC: Department of Health and Human Services; 2008.

  47. OTA (Office of Technology Assessment). Protecting privacy in computerized medical information. Washington, DC: OTA; 1993.

  48. Petrila J. Medical records confidentiality: Issues affecting the mental health and substance abuse systems. Drug Benefit Trends. 1999;11:6–10.

  49. Post R. Three concepts of privacy. Georgetown Law Journal. 2001;89:2087–2089.

  50. Pritts JL. Altered states: State health privacy laws and the impact of the federal health Privacy Rule. Yale Journal of Health Policy, Law & Ethics. 2002;2(2):327–364. [PubMed: 12669317]

  51. Pritts J. The importance and value of protecting the privacy of health information: Roles of the HIPAA Privacy Rule and the Common Rule in health research. 2008. [accessed March 15, 2008]. http://www.iom.edu/CMS/3740/43729/53160.aspx.

  52. Rachels J. Why privacy is important. Philosophy and Public Affairs. 1975;4:323–333.

  53. Regan P. Legislating privacy: Technology, social values, and public policy. Chapel Hill, NC: University of North Carolina Press; 1995.

  54. Research!America. America speaks: Poll summary. Vol. 7. Alexandria, VA: United Health Foundation; 2007.

  55. Richards NM, Solove DJ. Privacy’s other path: Recovering the law of confidentiality. Georgetown Law Journal. 2007;96:124.

  56. Roback H, Shelton M. Effects of confidentiality limitations on the psychotherapeutic process. Journal of Psychotherapy Practice and Research. 1995;4:185–193. [PMC free article: PMC3330397] [PubMed: 22700249]

  57. Robling MR, Hood K, Houston H, Pill R, Fay J, Evans HM. Public attitudes towards the use of primary care patient record data in medical research without consent: A qualitative study. Journal of Medical Ethics. 2004;30:104–109. [PMC free article: PMC1757117] [PubMed: 14872086]

  58. Saver R. Medical research and intangible harm. University of Cincinnati Law Review. 2006;74:941–1012.

  59. Solove DJ. A taxonomy of privacy. University of Pennsylvania Law Review. 2006;154:516–518.

  60. Taube DO, Elwork A. Researching the effects of confidentiality law on patients’ self-disclosures. Professional Psychology: Research and Practice. 1990;21:72–75. [PubMed: 12186093]

  61. Taylor C. Sources of the self: The making of modern identity. Cambridge, MA: Harvard University Press; 1989.

  62. Terry NP, Francis LP. Ensuring the privacy and confidentiality of electronic health records. University of Illinois Law Review. 2007;2007(2):681–736.

  63. Tourangeau R, Rips LJ, Rasinski K. The psychology of survey response. Cambridge, UK: Cambridge University Press; 2000.

  64. Turn R, Ware WH. Privacy and security issues in information systems. The RAND Paper Series. Santa Monica, CA: The RAND Corporation; 1976.

  65. Wentland EJ. Survey responses: An evaluation of their validity. San Diego, CA: Academic Press; 1993.

  66. Westin A. Science, privacy and freedom. Columbia Law Review. 1966;66(7):1205–1253.

  67. Westin A. Privacy and freedom. New York: Atheneum; 1967.

  68. Whiddett R, Hunter I, Engelbrecht J, Handy J. Patients’ attitudes towards sharing their health information. International Journal of Medical Informatics. 2006;75(7):530–541. [PubMed: 16198142]

  69. Willison DJ, Schwartz L, Abelson J, Charles C, Swinton M, Northrup D, Thabane L. Alternatives to project-specific consent for access to personal information for health research: What do Canadians think? Paper presented at the 29th International Conference of Data Protection and Privacy Commissioners; Montreal, Canada; September 25–28, 2007.

  70. Woolley M, Propst SM. Public attitudes and perceptions about health related research. JAMA. 2005;294:1380–1384. [PubMed: 16174697]

1

Sections of this chapter were adapted from a background paper by Pritts (2008).

2

The National Committee on Vital and Health Statistics has noted that the term “secondary uses” of health data is ill defined and therefore urged abandoning it in favor of precise description of each use. Consequently, the IOM committee has chosen to minimize use of the term in this report.

3

The ethical principle of nonmaleficence, or doing no harm, based on the Hippocratic maxim primum non nocere (“first, do no harm”).

4

The Genetic Information Nondiscrimination Act of 2008 establishes some protections to prevent discrimination based on a patient’s genetic background.

5

The “Common Rule” is the term used for the regulations governing the protection of human subjects of research that have been adopted by 18 federal agencies. See Chapter 3 for a detailed description of the rule.

6

These surveys were undertaken by a wide range of sponsors (Markle Foundation, Equifax, Institute for Health Freedom, Geneforum, Privacy Consulting Group) and a wide range of surveyors (Harris Interactive, Public Opinion Strategies, Genetics and Public Policy Center).

7

The survey was conducted online by Harris Interactive between September 11 and 18, 2007, with 2,392 respondents. The methodology for the survey is described in Appendix B.

8

Health Insurance Portability and Accountability Act, Public Law 104–191 (1996) (most relevant sections codified at 42 U.S.C. §§ 1320d–1320d-8).

9

The original 1973 HEW Advisory Committee contemplated and rejected the creation of a centralized, federal approach to regulating the use of all automated personal data systems (see HEW, 1973).

10

Europe, in contrast, has adopted fair information practices in a broader, more uniform fashion by incorporating them into the European Union (EU) Data Protection Directive, which protects individuals with regard to the processing of personal data and governs the free movement of such data. The EU Directive covers personal data of many types, including medical and financial, and applies widely to all who process such data, resulting in more uniform protections (Gelman, 2008).

11

Security Standards, 45 C.F.R. parts 160, 162, and 164 (2003). The final standards were adopted on February 20, 2003. Covered entities were required to be in compliance with the regulation by April 21, 2005 (April 21, 2006, for small health plans).

12

Protected health information (PHI) refers to all personally identifiable health information maintained by a HIPAA covered entity. 45 C.F.R. § 160.103 (2002).

13

Since 2004, the American Health Information Management Association has annually surveyed health care privacy officers and others whose jobs relate to the HIPAA privacy function to gain an understanding of where health care organizations stand with regard to implementing the Privacy and Security Rules required by HIPAA (AHIMA, 2006).

15

Standards for Privacy of Individually Identifiable Health Information: Final Rule, 67 Fed. Reg. 53182, 53232 (2002).

16

This is derived from the principles of fair information practices (see Chapter 2 for more detail).