Posted on February 5, 2020 at 2:29 PM
Technology, and technology-driven change, is pervasive in health care. Hospitals and technology companies realize that there is big money in mining patient and medical staff data for information, and companies are rushing to cash in. The Food and Drug Administration has approved more than 40 artificial intelligence-based products for use in medicine, and many more that don’t need FDA approval are deployed or in development. Tens of thousands of medical phone apps are tracking patients and gathering detailed medical information about them. These new technologies raise ethical questions that hospitals and other health care organizations are poorly equipped to answer.
The unique relationship between doctors and patients requires trust built by the ethical care of patients and family. One of the tools for protecting the doctor-patient relationship and the reputation of the health care industry is the hospital clinical ethics committee. Ethics committee members work with patients, families, and hospital staff to find ethical solutions to complex medical cases. Most ethics cases deal with clinical questions, however, and don’t address large-scale concerns about the effects of technology on medical care and hospital culture.
Technology has moved beyond life-sustaining treatments such as dialysis, which were the catalysts for creating the first ethics committees. More and more, the major technology-based ethical questions revolve around normative issues emerging from the gathering and analysis of data, and the use of AI. Those issues include concerns that technology is biased, increases wealth and power inequalities, and erodes the human bonds that create a worthwhile life. The impact of technology is not limited to patients. Technology has also changed the day-to-day experience of working in health care. An example of this is how electronic medical records have changed physician interactions and relationships with patients.
We have a choice. Technology in health care can continue to move fast and break things, including breaking the trust between patients, family, and staff. Or health care organizations can start considering technology holistically, including assessing its ethical impact.
I propose that the ethical analysis of technology be done by a technology ethics committee. This committee would not replace the clinical ethics committees or IRBs but would work with them as needed. Here are some examples of the questions a technology ethics committee might be called upon to examine:
- Should we use this? This is the first question to ask in considering any new technology. In other words, is the new app or algorithm ultimately beneficial to patients and clinicians?
- For a predictive algorithm, what type of patient consent is ethical? How can consent be gathered? Should opt-in or opt-out be the default?
- Is an algorithm that measures patient health biased?
- Who should have access to AI-generated data and patient identities? When and under what conditions?
- Does a project designed to help patients change unhealthy behaviors—one that uses psychological targeting (for example, “extracting people’s psychological profiles from their digital footprints”)—respect patient autonomy?
- Does a project using iPhone apps raise health equity concerns, given that its advantages are not equally available to low-income and high-income patients? Do the phone’s surveillance capabilities, combined with the data the app gathers, put patients at risk?
- Under what circumstances is tracking a staff member’s location within a hospital ethical and not just legal?
- How does the hospital or health care system detect unintended consequences of a technology? How should the organization respond?
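To make the bias question above less abstract, here is a minimal sketch of one quantitative check a technology ethics committee might request when reviewing a risk-scoring algorithm: comparing how often the algorithm flags patients in different demographic groups. The data, group names, and the 80% rule of thumb are illustrative assumptions, not a complete fairness audit or a method endorsed by the author.

```python
# Hypothetical sketch: a simple disparate-impact check on an
# algorithm's "high risk" flags, broken out by patient group.
# All data below is invented for illustration.

def positive_rate(predictions):
    """Fraction of patients the algorithm flags as high risk (1s)."""
    return sum(predictions) / len(predictions)

def disparate_impact(pred_by_group):
    """Return (ratio, per-group rates).

    The ratio compares the lowest group's flag rate to the highest's.
    A common rough rule of thumb treats ratios below 0.8 as a signal
    that the algorithm deserves closer ethical and statistical review.
    """
    rates = {g: positive_rate(p) for g, p in pred_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

# Illustrative predictions (1 = flagged high risk) for two groups.
predictions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # flag rate 0.625
    "group_b": [0, 1, 0, 0, 0, 1, 0, 0],  # flag rate 0.25
}

ratio, rates = disparate_impact(predictions)
print(rates)            # per-group flag rates
print(round(ratio, 2))  # 0.4 -- well below 0.8, so review is warranted
```

A check like this does not settle whether an algorithm is biased; it only surfaces disparities that the committee would then weigh against clinical context and the questions listed above.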
These questions are just a small sample of the ethical questions that arise when a technology becomes part of health care. Technology ethicists are starting to see patterns of risk, benefit, justice, and autonomy that impact patients and staff. If technology is not ethical, creating and deploying it can become a risk to the hospital, increase costs (including reputational costs), reduce the quality of the patient experience, and destroy trust and cooperation between patients and staff as well as within the medical team itself.
Addressing these issues requires an assessment of the ethical consequences of using a technology. That assessment demands an understanding of the technology itself as well as of the ethical issues it poses. I believe a technology ethics committee, building upon the successful model of the clinical ethics committee, can address these concerns.
Alan Cossitt is a board-certified hospital chaplain who spent 25 years developing various technologies, including one of the first commercial neural networks.
The post Why Health Care Organizations Need Technology Ethics Committees appeared first on The Hastings Center.