
11/07/2017

Is Medical Artificial Intelligence Ethically Neutral?

Will Knight has written several articles this past year in MIT Technology Review discussing growing concerns in the field of Artificial Intelligence (AI) that should interest bioethicists. The first concern is bias. In an article entitled “Forget Killer Robots – Bias is the Real AI Danger”, Knight provides real-world examples of hidden bias affecting people negatively. One example is an AI system called COMPAS, which judges use to estimate the likelihood that an inmate up for parole will reoffend; an independent review claims the algorithm may be biased against minorities. In a separate article, Knight identified additional AI algorithms that introduced bias against women or minorities into software used to rank teachers, approve bank loans, and process natural language. None of these examples argued that the bias was introduced intentionally or maliciously (though that certainly could happen).

This is where Knight’s second concern becomes apparent. The algorithms may be too complex for even their programmers to examine retroactively for bias. To understand the complexity issue, one must have an introductory idea of how current AI programs work. Previously, computer programs had their algorithms “hard-wired”, so to speak: they were essentially complex “if this, then do that” sequences, and a programmer could look at the code and generally understand how the program would react to a given input. Beginning in the 1980s, programmers started experimenting with code written to behave the way a brain neuron behaves. The goal was to model a human neuron, including its ability to change its output behavior in real time. A neurobiologist would recognize the programming pattern as modeling the many layers of neurons in the human brain. A biofeedback expert would recognize it as including feedback that adjusts the input sensitivities based upon certain output goals – “teaching” the program to recognize a face or image in a larger picture is one such example. A minimal sketch of such a neuron appears below; if you want to dive deep here, begin with this link.
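To make the “neuron as code” idea concrete, here is a minimal sketch in Python (my illustration, not code from Knight’s articles): a single artificial neuron whose weighted inputs are squashed by an activation function, with a feedback rule that nudges the weights toward a target output. The toy task of learning a logical AND is an arbitrary choice for demonstration.

```python
# A minimal sketch of one artificial "neuron": weighted inputs, an
# activation function, and a feedback (gradient) rule that nudges the
# weights toward a target output. The toy task of learning a logical
# AND is illustrative only.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.5

# Training data: inputs and the desired output (logical AND).
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for epoch in range(10000):
    for inputs, target in examples:
        # Forward pass: weighted sum of inputs, squashed by the activation.
        output = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        # Feedback: move each weight in proportion to its input and the
        # error gradient, so future outputs drift toward the target.
        delta = (target - output) * output * (1 - output)
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * delta * x
        bias += learning_rate * delta

for inputs, target in examples:
    output = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(inputs, "->", round(output, 2), "(target", target, ")")
```

After training, the neuron’s outputs sit near 0 for the first three inputs and near 1 for the last. Notice that nothing in the final weights reads like an “if this, then do that” rule; the behavior lives entirely in the learned numbers.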

This type of programming had limited use in the 1980s because computers of the era could model only simple neurons, and only a limited number at a time. Fast-forward to the 21st century, after 30 years of Moore’s Law and exponential growth in computing power, and these neural networks now model multiple layers containing millions of neurons. The programs are starting to prove useful for analyzing complex big data and finding patterns (a needle in a haystack, so to speak), which is becoming valuable in many fields, including medical diagnosis and patient management. The problem is that even the programmers cannot simply look at these programs and explain how they came to their conclusions; the sketch below suggests why.
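Here is a toy illustration of that opacity (again my own sketch, not any real diagnostic system; the layer sizes, random weights, and “patient features” are arbitrary assumptions). Even a network this small reduces its decision to more than a thousand numeric parameters with no readable rule behind the output.

```python
# A toy illustration of why trained networks are hard to audit: the
# "decision rule" is nothing but layers of numeric weights. Layer sizes
# and random values here are arbitrary assumptions for demonstration.
import random

random.seed(0)

def make_layer(n_inputs, n_outputs):
    # Each output neuron holds one weight per input, plus a bias.
    return [[random.uniform(-1, 1) for _ in range(n_inputs + 1)]
            for _ in range(n_outputs)]

def forward(layer, inputs):
    outputs = []
    for weights in layer:
        total = weights[-1] + sum(w * x for w, x in zip(weights, inputs))
        outputs.append(max(0.0, total))  # ReLU activation
    return outputs

# A small stand-in for a network: 10 inputs -> 32 -> 32 -> 1 output.
layers = [make_layer(10, 32), make_layer(32, 32), make_layer(32, 1)]

patient_features = [random.uniform(0, 1) for _ in range(10)]
activation = patient_features
for layer in layers:
    activation = forward(layer, activation)

n_weights = sum(len(w) for layer in layers for w in layer)
print("score:", round(activation[0], 3))
print("parameters behind that single number:", n_weights)
# Even this tiny network's 1,441 parameters admit no line-by-line
# reading; real medical systems can have millions.
```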

Why is this important from a bioethics standpoint? Historically, arguments in bioethics could generally be categorized as consequentialist, deontological, virtue-based, hedonistic, divine command, and so on. One’s stated position was open to debate and analysis, and the ethical worldview behind it was apparent. A proprietary, cloud-based, black-box, big-data neural network making a medical decision obscures, perhaps unintentionally, the ethics behind the decision. The “WHY” of a medical decision is as important as the “HOW”. What goes into a medical decision often includes ethical weighting that ought to be as transparent as possible, and such weightings are presently not easily examined in AI decisions. The bioethics community therefore needs to be vigilant as more medical decisions begin to rely on AI. We should welcome AI as another tool for providing good healthcare. Given the above concerns regarding AI bias and complexity, however, we should not simply accept AI decisions as ethically neutral.
