Subject: Why AI

Date: 5.11.21

Services: Healthcare and Medtech, AI Regulations

What’s all the fuss about trust and reliable AI?

Laws and ethics labels do not solve the issue of trust. However, by requiring trustworthy AI, they promote accountability and the proper functioning of society.

As you have probably heard or read, in April the European Commission presented a proposal to regulate artificial intelligence (AI), with the aim of enabling the EU to reap the benefits of this technology. In total, the words “trust” and “reliability” appear more than 100 times in the proposal. This raises the question: do we really understand the role of trust in relation to AI?

Whether it is the Swiss Digital Trust Label, the Trust Valley or the Center for Digital Trust, the topic of trust in the digital world is in vogue, including in Switzerland. These initiatives suggest that trust is a catalyst for successful AI deployments. And yet Professor Joanna Bryson believes that "no one should trust artificial intelligence". It makes you wonder what all the fuss about trust in AI and the digital world is really about. So will blockchain, the crypto industry and labels solve all our trust problems? (Spoiler alert: no, no, sort of.)

First question: Can we trust AI? Short answer: yes. Long answer: it's complicated.

The question of whether we are really capable of trusting machines is hotly debated between those who have faith ("yes, it's possible!") and those who doubt ("AI should not be trusted!"). We are on the side of the believers. We assume that trust between humans can be transposed to machines and that, although different, the trust humans say they place in machines is in many ways similar to interpersonal trust.

In human relationships, trust can be understood as a coping strategy in the face of risk and uncertainty. This strategy takes the form of an evaluation: we examine, for example, the competence of the person we are about to trust. At the same time, whoever places their trust is in a vulnerable position. As in any relationship, there is a risk of being hurt. In other words, there is no trust without risk.

This vulnerability is just as critical when it comes to trust between humans and machines. To trust a technology is to expect a certain result, a certain behaviour, when using it. A system's reliability, its poor performance, its opaque processes and other factors can all alter the trust placed in it. Trust and reliability are thus distinct concepts, unfortunately often confused.

Three things are therefore important to understand: trust is (1) an attitude towards a human or machine third party (2) that is supposed to help achieve a specific goal (3) in a situation of uncertainty. I can trust Amazon to deliver my package on time, but not to respect my privacy.

To return to the question posed, we can therefore answer that we are indeed capable of trusting AI in a concrete context. But is that a reason to do so? That is another question…

Second question: Should we trust AI? Short answer: no. Long answer: it's complicated.

From a practical and normative point of view, the question of whether we should trust AI is much more interesting, because it shifts the discussion to the topic of trustworthiness. While trust is a human attitude and, in psychometric terms, a complex latent variable, trustworthiness is a much more technical matter related to the properties of the technology itself. When Joanna Bryson says that no one should trust AI, her message is very clear: do not use AI systems (or many other systems) blindly.

A frequently cited example of blind trust gone wrong is the case of a Tesla driver who lost his life in a crash because he was playing a game and not looking at the road at all, trusting the system completely. Whether the fatal accident was the result of overconfidence, misleading marketing promises by the manufacturer, a lack of judgement on the part of the driver, or a combination of all three, we will probably never know. In any case, educating people to adopt zero trust in machines is most likely the safest way to avoid injuries.

But not trusting, and thereby depriving oneself of a system that could bring better results, is no panacea either. The ideal would be to promote "calibrated trust", in which users adapt their level of trust (whether and how they will trust) to the performance of the system in question. According to the performance, or in spite of it: many companies are known to exaggerate or conceal the real capabilities of their products, so marketing claims should be put to the test.

Calibrating our trust can thus save lives, but when uncertainty and risk in the human-machine relationship are high, it is better to adopt zero trust (better safe than sorry).

Third question: Should we stop talking about trust? Short answer: yes. Long answer: it’s complicated.

In our opinion, when people say that AI should not be trusted, the most important message is: think before you act. But thinking is exhausting. Wouldn't it be great if I could blindly trust a company to respect my privacy and deliver my products on time? Sorry, but blockchain is no help here, and don't even try to dangle a crypto solution in front of us. A label can be a good start for everything that is not yet regulated. But aren't we making things even more complicated by adding another player to a trust equation we already don't fully understand? Should we, in the future, investigate trust in labels as an indicator of trust in machines?

Ultimately, trust as an attitude is an interesting topic for psychologists. But when we talk about machines or features, we should use the right terms and focus on reliability, because that is what we can control best.

Follow-up question: What about law and trust?

Labels are useful for ensuring trustworthiness, but aren't laws a better option? Shouldn't we focus all our efforts on laws and regulations? Are they our only real indicator of trustworthiness? Firstly, yes: we should put a lot of effort into laws and regulations to ensure that designers are held accountable. Secondly, no: the equation "more law equals more trust" does not hold. The purpose of laws should not be to increase trust, but rather to promote accountability and the proper functioning of society. The fundamental purpose of a law is to establish norms, maintain order, resolve disputes and protect freedoms and rights, not to enhance trust in people or in AI.

Conclusion: Don’t worry about the details

Laws and ethics labels do not solve the issue of trust. In fact, it may even be that the formula “the more you trust, the more you use a technology” does not hold true. People rely on the least trustworthy products for the most irrational reasons – rational homo oeconomicus is dead. Social to the core, today’s humans value convenience and sociality. We like humans, we like to bond, and, having no other behavioural knowledge to mobilise, we humanise even machines.  

This anthropomorphism is not so bad, provided that agents are not purposely designed to manipulate people. Sure, the phrase “trustworthy AI” is anthropomorphic language, but it has the merit of communicating a message that is instantly understood by almost everyone who has any idea of that fuzzy feeling of trust. If you talked about explainable or accountable AI, only a very small fraction of people would understand.

So while the terms ‘trust’ and ‘reliability’ are subject to legitimate criticism in the context of AI, they can also be welcomed. They allow everyone to understand the main reasons for building and using these complex technological objects and their impact on society. Perhaps we would all do better to take things in a more relaxed way and see trustworthy AI as a vision rather than a technically precise statement.

Authors

Marisa Tschopp, Psychologist, Researcher at scip AG

Prisca Quadroni, Lawyer, Co-Founder of Legal & Strategy Consulting AG

Marc Ruef, Cybersecurity Expert, Head of Research at scip AG
