
Posted on February 3, 2020 at 10:03 AM

What is
Artificial Intelligence? This central question has captivated the minds of
specialists – mathematicians, computer scientists, cognitive scientists, and
the like – and passive observers since the days of Alan Turing and John von
Neumann. In this discussion I will distinguish between three types of
Artificial Intelligence – human level, superhuman, and domain specific. Through
this exercise I hope to shed light on the difficulties in conceptually defining
the term Artificial Intelligence, as well as dispel misconceptions about the
state of the art in Artificial Intelligence. To what end? I hope that this blog
will spark a discussion about the ethics of today’s Artificial Intelligence,
considered in light of tomorrow’s Artificial Intelligence.

We will start at the beginning, with Alan Turing's definition of human level artificial intelligence. Turing's famous test, popularized by the 2014 movie The Imitation Game, is a test of a machine's ability to exhibit intelligent behavior comparable to, or indistinguishable from, human level intellect. In doing so, the test pits human against machine. The test involves three players, two human and one machine, each of whom is separated from the other players. Once separated, the players are tasked with holding a conversation with their counterparts. One human player, the evaluator, is tasked with determining which of the other players is a human and which is a machine. The evaluator knows that just one of his conversational partners is human. With that knowledge, if the evaluator cannot reliably distinguish between the machine and the human, then the machine passes the Turing test. Such a machine would be said to possess human level intellect.
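
To make the setup concrete, here is a minimal sketch of the imitation game as a procedure. It is an illustration only: the judge, the canned list of questions, and the "no better than chance" pass criterion are simplifying assumptions layered on Turing's informal description, not a standard implementation.

```python
import random

# Minimal sketch of the imitation game. The two "reply" arguments stand in for
# a human and a machine conversing over a text channel; the pass criterion
# (the judge does no better than chance) is one common reading of the test.

def run_trial(judge, human_reply, machine_reply, questions):
    """One round: the judge questions two hidden players and guesses which is the human."""
    players = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # randomize which label hides the human
        players = {"A": machine_reply, "B": human_reply}
    transcripts = {label: [(q, reply(q)) for q in questions]
                   for label, reply in players.items()}
    guess = judge(transcripts)  # the judge returns the label it believes is human
    truth = "A" if players["A"] is human_reply else "B"
    return guess == truth

def passes_turing_test(judge, human_reply, machine_reply, questions, n_trials=100):
    """The machine 'passes' if the judge cannot reliably tell it apart from the human."""
    correct = sum(run_trial(judge, human_reply, machine_reply, questions)
                  for _ in range(n_trials))
    return correct / n_trials <= 0.55  # no better than chance, within a small margin
```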

For the sake of argument, let's say that a machine exists with human level intellect. In such a case the machine would necessarily have been created by humans. Since human level intellect engineered human level intellect, it is tautological that human level intellect is capable of engineering human level intellect. From this it follows that, once a machine possesses human level intellect, it too should be able to engineer human level intellect. Furthermore, in our hypothetical, the creation of human level intellect would have been an iterative process composed of repeated attempts, failures, and modifications to realize progressively greater intelligence. From this it follows that a machine with human level intellect could also engineer intelligent machines through an iterative process of repeated attempts, failures, and modifications to realize progressively greater intelligence. The difference is that machines could run this process many orders of magnitude faster than humans, thereby enabling them to quickly advance human level intelligence into superhuman intelligence – an entirely new class of intelligence that humans could neither match nor understand.
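
The argument above essentially describes a loop: attempt, evaluate, modify, repeat. Purely as a schematic illustration of that loop – not a description of any real system – one might write it as follows, where propose_modification and measure_intelligence are hypothetical placeholders:

```python
# Schematic of the iterative engineering loop described above: attempt,
# evaluate, keep what helps, repeat. Both callables are hypothetical
# placeholders; nothing here corresponds to an existing system.

def iterate_designs(initial_design, propose_modification, measure_intelligence,
                    generations=1_000):
    best = initial_design
    best_score = measure_intelligence(best)
    for _ in range(generations):
        candidate = propose_modification(best)    # an attempt
        score = measure_intelligence(candidate)   # did it fail or improve?
        if score > best_score:                    # keep only modifications that help
            best, best_score = candidate, score
    return best
```

The post's claim about superhuman intelligence rests only on tempo: a machine could run vastly more iterations of such a loop per unit time than human engineers can.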

Now let's bring our discussion back to reality. As it stands, humans have developed relatively sophisticated Artificial Intelligence, especially in healthcare. Humans have developed AI that is capable of outperforming physicians in predicting the onset of psychosis in patients with prodromal syndrome,[1] and in finding a higher percentage of clinically actionable therapeutic options for cancer patients.[2] Most recently, on January 1, 2020, researchers at Google's AI lab, DeepMind, published a journal article describing a new AI-based healthcare tool capable of surpassing human physicians in the diagnosis of breast cancer.[3] The paper, published in Nature, claims the system reduces false negatives by up to 9.4% and false positives by up to 5.7%.[4] Further, when pitted against six human radiologists, Google's diagnostic AI outperformed all of them.
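
To give a rough sense of scale for those percentages, here is a back-of-the-envelope calculation. The screening population and baseline error rates below are made-up illustrative numbers, not figures from the Nature paper, and the quoted reductions are treated as relative reductions for simplicity:

```python
# Back-of-the-envelope illustration of what the reported error reductions could
# mean at screening scale. All inputs are assumptions for illustration only.

screened = 100_000          # hypothetical screening population
baseline_fn_rate = 0.02     # assumed baseline false negative rate (2%)
baseline_fp_rate = 0.10     # assumed baseline false positive rate (10%)

fn_avoided = screened * baseline_fn_rate * 0.094   # "up to 9.4%" fewer false negatives
fp_avoided = screened * baseline_fp_rate * 0.057   # "up to 5.7%" fewer false positives

print(f"Missed cancers avoided: ~{fn_avoided:.0f}")   # ~188
print(f"False alarms avoided:   ~{fp_avoided:.0f}")   # ~570
```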

Systems like these, which can outperform humans at a single task or within a single domain, are known as domain specific AI. Hierarchically, domain specific AI is the least sophisticated of the AI types discussed here. Even so, domain specific AI is currently the state of the art, and it can be an extremely powerful tool within its specific domain – or, as a suite of tools, across several domains. Accordingly, we will begin our ethical discussion here. There are a number of ethical conflicts surrounding domain specific AI, each with sufficient depth to merit its own blog post. In the interest of brevity, we will examine only one: the tension between justice, the equitable distribution of benefits and burdens in society, and non-maleficence, a physician's duty to do no harm.

In all of the foregoing examples of domain specific AI – breast cancer diagnosis, psychosis prediction, and identification of therapeutic options – the AI outperforms physicians. In domains where a physician diagnostician is less effective or accurate than an AI diagnostician, a physician who diagnoses a given patient does relative harm to that patient. Over a sufficiently large sample size, the physician will make mistakes that the AI would not, mistakes that impact lives. From the perspective of non-maleficence, the physician arguably has an affirmative duty to cede responsibility to the AI.
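
The "sufficiently large sample size" point is, at bottom, simple arithmetic. With hypothetical error rates – neither number comes from any of the studies cited above – the gap compounds like this:

```python
# Illustration of the non-maleficence argument: a small accuracy gap compounds
# over many patients. Both error rates are assumed for illustration only.

patients = 10_000
physician_error_rate = 0.07   # hypothetical
ai_error_rate = 0.05          # hypothetical

extra_mistakes = patients * (physician_error_rate - ai_error_rate)
print(f"Expected additional misdiagnoses without the AI: {extra_mistakes:.0f}")  # 200
```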

On the other hand, a major contemporary problem in the development of AI is the incidence of biased decision-making. In October 2019, the journal Science published the results of a UC Berkeley team's analysis of over 50,000 medical records.[5] The results showed racial bias in a predictive algorithm used by many United States health providers to determine which patients are most in need of extra medical care.[6] Though the researchers did not name the algorithm, the Washington Post reported it to be Optum by UnitedHealth, a tool that impacts more than 70 million lives.[7] To compound the issue, the Berkeley team identified the same bias-inducing flaw in ten other widely used healthcare algorithms.[8] From the perspective of justice, physicians should not use AI that perpetuates and systematizes discriminatory biases.
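
One way to make the justice concern operational is to audit a model's outputs for group-level disparities at comparable levels of need. The sketch below is a generic check on a hypothetical records table; it is not the Berkeley team's method, nor the audited algorithm, and the column names and threshold are illustrative assumptions.

```python
import pandas as pd

# Generic fairness audit sketch: at comparable levels of health need, are
# patients from different groups flagged for extra care at comparable rates?

def flag_rate_by_group(df, score_col="risk_score", group_col="group",
                       need_col="n_chronic_conditions", threshold=0.8):
    """Share of each group flagged for extra care, broken out by level of need."""
    df = df.copy()
    df["flagged"] = df[score_col] >= df[score_col].quantile(threshold)
    return df.groupby([need_col, group_col])["flagged"].mean().unstack(group_col)

# Usage with a made-up DataFrame of patient records:
# records = pd.DataFrame({"risk_score": [...], "group": [...], "n_chronic_conditions": [...]})
# print(flag_rate_by_group(records))
# Large gaps between groups at the same level of need would indicate the kind
# of disparity the Berkeley team reported.
```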

Numerous questions arise from the ethical conflict between justice and non-maleficence. Do we use AI because not using it may harm the patient? Do we refrain from using AI unless we can be sure it is unbiased? Do we meet somewhere in the middle, or does one side win out? In my opinion, these questions, like many of our greatest questions in the 21st century, require interdisciplinary collaboration: computer scientists to advise on ways to resolve biases and the likelihood those fixes will succeed, statisticians to calculate the actual impact of the benefits and drawbacks of these technologies, and lawyers to balance the equities. Of course, it is incumbent upon bioethicists to devise the ultimate answers to these questions, if that is possible. Nonetheless, they would be wise to draw on expert counsel from other disciplines in doing so.

I will leave you all with a hypothetical. If we accept the premise that AI with human level intellect would quickly develop AI with superhuman intellect, does that change the ethical calculus for their predecessors, the domain specific AI of today? Should justice be prioritized so as not to encode biases into the intellectually transcendent AI of the future? Should autonomy become less of a priority because AI will do much of the decision-making anyway? With more questions than answers, the ethics of AI – in healthcare and beyond – is a field ripe for discourse, and I urge you all to take part in it.


