
09/26/2020

When AI in healthcare goes wrong, who is responsible?

External Link - Source: Quartz

“Data encompasses problems that can occur when machine learning is trained on bias…AI applications in healthcare often involve robots working with humans.” AI can be trained on data that further propagates discrimination on the basis of race. Where does the blame fall when these applications do more harm than good? Can AI be trained in a way that does not introduce further disparities in healthcare?

This entry was posted in Health Care, Health Disparities, Science, Technology.