Digitalisation & Technology, 18 January 2023

Artificial intelligence: Can we trust it?

Talk with Jaana Müller-Brehm and Verena Till, ZVKI

Jaana Müller-Brehm and Verena Till, ZVKI

Just over a year ago, the Zentrum für vertrauenswürdige Künstliche Intelligenz (ZVKI, "Center for Trustworthy Artificial Intelligence") was founded in Berlin. The focus of its work is the protection of consumers: How can it be ensured that AI solutions are trustworthy? How can fair, reliable and transparent AI systems be identified and promoted? ZVKI experts Jaana Müller-Brehm (left) and Verena Till provide answers.

A few years ago, artificial intelligence (AI) was a term strongly reminiscent of science fiction. Today, almost all of us are surrounded by AI solutions in our private and professional lives.

One AI-based application that many people no longer want to do without, and generally consider trustworthy, is the navigation device. Few question the suggestions of their "navigator". Yet trusting the navigation device can also have fatal consequences: more than once, cars have ended up in rivers because the device guided them there.

As AI takes up more and more space in our lives, the question arises: under what conditions can we trust AI software? And what does this mean for a society in which AI methods are gaining importance? The ZVKI (Center for Trustworthy Artificial Intelligence) has taken up these and other questions and is working on various project strands around the topic of "trustworthy AI". The center was founded in December 2021 as a project of the independent think tank iRights.Lab in collaboration with the Fraunhofer Institutes AISEC and IAIS and Freie Universität Berlin. In an interview, project coordinator Verena Till and her colleague Jaana Müller-Brehm explain what exactly AI and trust have to do with each other - and what challenges this will present us with in the future.

Focus on consumer protection

Verena Till explains the center's activities: "The ZVKI is a project funded by the German Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV). The aim is to form an interface between science, industry, politics and civil society in order to shed more light on the ethical foundations of AI." The focus is on the question of the conditions under which AI applications can be considered trustworthy. From a consumer-protection perspective, the aim is to determine how well informed the population feels about AI, what information is lacking, and what framework conditions ensure trustworthy AI. Consumers as well as business and politics are to be made aware of this important topic, the two ZVKI employees explain.

Whether roadshows, publications or certification for AI products: the spectrum of the ZVKI's activities and approaches is broad. And that is understandable and important, as Verena Till explains: "AI is assuming ever greater relevance in our lives, so the discussion about its social impact must also pick up speed. A number of questions need to be answered in order to steer technological developments in a direction that is desirable for our society. There is no shortage of topics and tasks."

AI is assuming ever greater relevance in our lives, so the discussion about its social impact must also pick up speed.

Verena Till, ZVKI

When dealing with the topic of AI, the first question that arises is that of definition. "This already presents us with the first challenge," says Jaana Müller-Brehm. "The term 'artificial intelligence' dates back to the 1950s. At that time, it was associated with the intention of reproducing human intelligence with the help of computers and software. That project can be considered a failure." In the meantime, the two experts continue, the term AI is used much more broadly. AI solutions include self-learning systems that use machine learning techniques to develop algorithms for solving very specific problems. Such programs are used to analyze huge amounts of data and identify patterns and similarities. For example, AI applications on video platforms analyze patterns in users' click behavior and derive recommendations for content from them. Often, however, quite simple systems whose code has been fully specified by developers are also referred to as AI. Such very limited solutions can be found, for example, in customer service in the form of simple chatbots.
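To make the click-behavior example a little more concrete, the following is a minimal sketch in Python of how a platform might derive recommendations from users' click histories. The user names, video IDs and the overlap-based similarity measure are all invented for illustration; this is not ZVKI code or the method of any specific platform.

```python
# A toy sketch of click-pattern-based recommendations (invented data,
# not ZVKI code): users who clicked similar videos are treated as
# similar, and their other clicks become recommendations.
from collections import Counter

# Hypothetical click histories: user -> videos they clicked
click_log = {
    "user_a": ["cooking_101", "knife_skills", "pasta_basics"],
    "user_b": ["cooking_101", "pasta_basics", "sourdough_intro"],
    "user_c": ["workout_hiit", "stretching_10min"],
}

def recommend(target_user, k=2):
    """Suggest videos that similar users clicked but the target has not."""
    seen = set(click_log[target_user])
    scores = Counter()
    for other_user, clicks in click_log.items():
        if other_user == target_user:
            continue
        overlap = len(seen & set(clicks))  # shared clicks as a crude similarity
        if overlap == 0:
            continue
        for video in clicks:
            if video not in seen:
                scores[video] += overlap  # weight unseen videos by similarity
    return [video for video, _ in scores.most_common(k)]

print(recommend("user_a"))  # ['sourdough_intro']
```

Real recommendation systems work with vastly more data and far more elaborate models, but the basic idea is the same: patterns in past behavior are used to predict what a user will want to see next.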

"We need to find a bracket for this multitude of methods and approaches that go under the term 'AI'. Often, artificial intelligence is about different methods of machine learning. We therefore distinguish between learning and non-learning algorithmic systems," the experts explain.

For Verena Till, however, the decisive question in the context of consumer information is not whether a system or method can really be called "AI". In her view, it is much more important that algorithms and AI systems are playing an increasingly important role in people's everyday lives, and that the question of the trustworthiness of these rapidly spreading technologies is therefore becoming ever more central. Jaana Müller-Brehm takes this idea a step further: "Technological developments such as digitalization are always double-edged. It's important to promote the positive opportunities and minimize the negative effects - in other words, to make use of the room we have to shape these developments."

We want mature consumers who have the necessary knowledge to better assess the opportunities and risks of AI solutions

Verena Till, ZVKI

Through various studies, opinion polls and events, the ZVKI is trying to get a picture of public opinion on artificial intelligence. A somewhat contradictory impression emerges, the experts explain. "In general surveys, consumers tend to express skepticism about AI," Jaana Müller-Brehm tells us. "For example, few believe that AI ensures fair results, and the fear of misuse is very high." People are also very distrustful of AI-supported methods in certain areas of application, for example in care, in the financial sector or in the administration of justice, she adds.

In concrete use, on the other hand, people are often quite uncritical of AI, as if there were blind trust in the technology - for example, in the readings of a fitness app or the car navigation system mentioned at the beginning.

For the experts at the ZVKI, this ambivalence highlights the lack of knowledge surrounding AI. "When we are out at events or roadshows, we always hear the desire for more education. This is a key issue from our perspective." Information needs to be better tailored to its target groups and presented more comprehensibly, they say. "We want mature consumers who have the knowledge they need to better assess the opportunities and risks of AI solutions," is the wish of the ZVKI staff.

Are manufacturers and AI developers also interested in educating their customers in this way? In Till's and Müller-Brehm's experience, the industry certainly supports the ZVKI's activities: "Our experience has been very good. Of course, not all companies cooperate, but a number of important partners are already involved. They definitely see the advantages of building trust in new technologies."

For example, certification of AI solutions is being considered. This would be an important contribution to being able to better assess the trustworthiness of AI software, according to the two ZVKI colleagues. Such certificates can act as a seal of approval and give consumers more security. Whether this actually succeeds depends on how certification processes are designed. This includes, for example, whether the testing of corresponding applications is carried out by external experts. At the same time, standards, certificates and seals are only one component of many in designing AI systems in such a way that they do not harm individuals or society.

What does "trustworthy AI" mean?

In order to flesh out the idea of certification and other measures, it is first necessary to clarify what "trustworthy AI" actually means. "Clearly defining the term trust cannot be a task for us as the ZVKI. A wide variety of disciplines have been discussing it for centuries," explains Verena Till. "We need a practical approach. Our question is: what does trustworthiness mean in connection with AI technology?" To find an answer, the experts explain, the concept of trust is broken down into individual aspects. Among other things, the ZVKI has identified "fairness", "reliability" and "transparency" as essential building blocks of trustworthy AI.

To assess an AI solution in terms of its trustworthiness, it is necessary to check how the respective software was programmed, explains Verena Till. In other words, you look at whether a solution has been set up in such a way that it delivers fair, reliable and transparent results for the particular use case.

According to the experts, the example of reliability makes it clear that AI methods cannot be assessed independently of their application contexts. An AI-supported production system, for example, does not show fatigue and can therefore in theory operate at a fairly constant output throughout. "This, together with the evaluation of production processes, can reduce scrap and material waste," says Jaana Müller-Brehm. On the other hand, she adds, an AI solution based on faulty assumptions can automate and thus multiply unreliability. Moreover, societal stereotypes and biases can be carried into AI applications at various points of their development and deployment, and are then reproduced again and again every time such an application is used. "Developing and making visible methods to uncover such mechanisms is one goal of our work," explain Verena Till and Jaana Müller-Brehm.
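As an illustration of what such methods can look like in the simplest case, here is a minimal sketch in Python that compares an AI system's approval rates across two groups (a basic "demographic parity" check). The decision records and group labels are invented for illustration; the ZVKI's actual evaluation methods are not described in this article.

```python
# A minimal bias check (invented data, not a ZVKI tool): compare an AI
# system's positive-decision rates across two groups ("demographic parity").
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records, group):
    """Share of positive decisions for one group."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_a = approval_rate(decisions, "A")
rate_b = approval_rate(decisions, "B")
print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}")  # group A: 67%, group B: 33%
# A large gap is a signal to examine the training data and the model more
# closely; on its own it is not proof of discrimination.
```

Checks like this only surface a symptom; making the underlying causes visible and fixable is the harder part of the work the ZVKI describes.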

Since the beginning of the project, the ZVKI has created structures and implemented measures to discuss the topic of trustworthy AI in an interdisciplinary manner. A network has been established that brings together stakeholders from science, business, politics, culture and civil society, as well as their approaches and ideas.

"We hope that in the coming years we will succeed in establishing a sustainable and possibly also internationally networked platform that will ensure the dialog of business, science and politics in the future," say the two ZVKI employees.

The project is therefore currently also focusing on the establishment of a non-profit association that will continue to pursue the project goals beyond the funding phase.

For the two ZVKI employees, the issue of consumer information and education, as well as the question of certificates, cannot be solved conclusively in one or two years: "Technical developments are rapid and dynamic. We have to react to this again and again. Technological progress should be coupled with a discussion about ethics and values so that we can ensure that technological development takes a socially positive direction."


Your opinion
If you would like to share your opinion on this topic with us, please send us a message to: next@ergo.de

