On Explainable AI, Trade Secrets, and Bread


by Daniel W. Tigard, Ph.D.

I frequent a bakery in my neighborhood every two or three days. I buy my bread—usually the same sort, but sometimes I switch things up. I consume it and enjoy it. End of story.

I don’t know exactly how bread is made. I believe it involves yeast, which I understand is alive in some sense. If I really wanted to know more, I suppose I could ask the baker. But he would most likely give me an agitated look and move on to sell his goods to the next customer. He might even outright refuse. After all, this bread is among the best in town, and revealing how he makes it—its ingredients, the precise oven temperature, and so on—may well jeopardize the continuing success of the bakery. Importantly, as a society, we protect the baker’s “trade secrets.” To some extent, anyway, we allow him to refuse to explain himself to every nosy customer—as long as basic health and safety codes are upheld, federal regulations concerning allergen information are observed, and so on.

When it comes to technological trade secrets, however, particularly the development of artificial intelligence (AI), it seems we’re not yet sure whether we should allow the same sorts of liberties. We might want to protect companies, say, by granting trade secret status to AI technologies on the basis of intellectual property. Yet, more and more, we are demanding transparency from technology, whether the demands are made in the form of hard law or internal corporate self-regulation. Although there’s reason to think even the creators cannot fully understand how AI works—and that machine learning will remain a “black box”—we’re increasingly calling for the systems that use our data to be “explainable.” Indeed, the EU’s General Data Protection Regulation appears to support the right to receive explanations. Why is this? What exactly are the relevant differences between the ingredients and processes behind the bread I consume, and the ingredients and processes behind the technologies I use?

Art by Craig Klugman

One immediate intuition is that AI technologies stand to cause me a great deal of harm. Algorithms at work in the world today might cause me to be denied a bank loan or turned down for a job; they might separate me from my children, send me to prison, influence my shopping behavior with targeted ads, or nudge my voting preferences. These are just some of the examples circulating in recent discussions of technology ethics. Indeed, such cases should grab our attention and force us to think critically about the fact that harmful human biases are showing up in our technological creations. But, again, what exactly is unique about these harms? It seems that, if basic health codes are not upheld, my local baker is likewise capable of causing me a significant degree of harm, perhaps even more directly and immediately. So, why might it be completely reasonable to demand explainability from the creators of AI technologies, while it remains nothing more than agitating to demand explainability from the creator of my bread?

There are likely numerous differences to be drawn—in levels of complexity, for example, or potential for intelligence—as well as numerous conceptions of “explainability” lurking in our discussions of AI regulation. A key difference I want to draw upon briefly here is the idea that we, as a society, are likely far from settled on what we want from the AI technologies emerging today. Our ambivalence may be a result of not knowing—or worse, not caring—about the extent of AI’s increasing capacity to disrupt our lives. Assuming for the moment that we do know and do care about the potential impact of AI, the question can be posed, in short, as: Which social goods do we want AI to promote? Do we, for example, want to increasingly deploy AI and robotic systems in dirty, dull, and dangerous workplaces? Do we want to keep AI out of particularly sensitive domains, like warfare, healthcare, or sexual relationships? No doubt, we will disagree, often sharply, on the most appropriate domains of application for AI technologies. And these simple but crucial ideas of personal ambivalence and collective disagreement, I suggest, can help us understand why the demand for explainability in AI should be an opportunity to work toward clarity, and perhaps collective action, concerning what we want from emerging technologies.

When it comes to far simpler social goods, such as publicly available products and services, we have relatively clear ideas of what we want as individual consumers. We probably even agree, largely if not unanimously, on what we want from the producers of simple consumer goods. From local bakers, we want good bread. Perhaps we also wish for it to be sold at a reasonable price, for the production to be environmentally sustainable, and so on. But you get the idea. We usually know what we want individually and, to a large extent, we agree on what we want collectively, as consumers. In turn, these elementary features help to create shared goals that can drive the production and regulation of simple goods and services, as well as more complex ones. Bakeries aside, here we can consider domains like healthcare and education. We know what we want; we largely agree; and the goals of those sorts of institutions are quite clear. For these reasons, regulating them can be relatively straightforward, legally and ethically. In medicine, for example, we have a longstanding tradition of applying four basic principles: respect for autonomy, beneficence, non-maleficence, and justice. Sure, from time to time, we might disagree on their interpretation; some principles will occasionally need to take precedence over others; and so on. But generally, we can safely rely upon basic principles supported by shared goals.

In a recent perspective article in Nature Machine Intelligence, Brent Mittelstadt, research fellow at the Oxford Internet Institute, argued that one of the key difficulties in regulating AI is that the development of AI as a whole lacks common goals. It’s not like medicine, where hopefully all practitioners aim at promoting the health and well-being of patients. It appears even further removed from simpler goods and services, like bakers aiming to create good bread. The lack of shared goals among AI developers, however, merely underscores the idea that we too are ambivalent. Collectively, as consumers of AI technologies, we likely disagree—at least where we know and care enough—about how AI systems are and should be impacting our lives. And here, it seems, is where we find a pivotal difference in the ingredients and processes behind AI technologies, one that reasonably calls for some kind of explainability.

It may be that the demand for explainable AI is a threat to tech developers’ trade secrets. For that matter, when asking what exactly is behind the decisions resulting from our use of today’s technologies, our inquiry is likely to be met with a degree of agitation, or even outright refusal. If nothing else, perhaps in time, we’ll come to better understand what we want from AI technologies. But first, awareness of the opportunities and risks must be raised widely. As a global community, we must formulate and reflect on our values, and openly express our concerns. International platforms for promoting AI education and collaborative policy forums can serve as springboards. Once we begin to understand and care enough about the impact of AI, it seems we’ll be much better positioned to agree upon some basic health and safety codes and to ensure that they are upheld.

This posting was inspired by discussion in a technology ethics reading group at RWTH Aachen University – I’m grateful for all participants’ contributions. For comments, I also thank Katharina Hammler and the editors at Bioethics.net.
