ERGO GPT - an internal, secure alternative to ChatGPT


Interview with ERGO AI expert Antonia Schiller

Digitalisation & Technology, 19.06.2024

A safe-to-use AI language model for the entire workforce? Now a reality at ERGO! More than 35,000 colleagues, including independent sales partners, already have access to ‘ERGO GPT’ in Germany. And: The rollout to other markets is in progress. ERGO AI expert Antonia Schiller reveals more about the background and typical application scenarios of this ‘Large Language Model’ (LLM).

AI supports work

Hello Antonia, congratulations on the successful rollout of ERGO GPT! With this LLM, you have provided ERGO colleagues in Germany with a secure, internal alternative to ChatGPT from OpenAI ...

Thank you very much, I am happy to pass this on, because the introduction of ERGO GPT was of course teamwork! When the hype about ChatGPT from OpenAI broke out at the end of 2022, we at ERGO reacted quickly and set up a project to investigate and utilise the opportunities of LLMs for our – extremely data-driven – industry. We called this project ‘GOLLM’, pronounced ‘Golem’ – and it was great to see how many colleagues from all kinds of departments wanted to get involved! Now, almost a year and a half and a very intensive test phase later, the time has come: our very own LLM, ERGO GPT, is running stably and is already being actively used ...

Exciting, could you give us an example? What are the typical uses of an in-house LLM in the day-to-day work of an insurer? In other words – and with a view to our brand essence – what makes ERGO GPT ‘easier’ for our employees?

Well, first and foremost, ERGO GPT is suited to text work and other office tasks. For example, many colleagues are already using it to draft their intranet articles, social media posts, job advertisements, emails or Excel formulas. Optimisations or general changes to existing documents are also very popular – for example, when a text needs to be made even more precise or when we want to change the form of address in letters from ‘Sie’ to ‘Du’. Another practical feature: long texts can be summarised concisely at the touch of a button, or questions about their content can be answered. And if someone needs an intellectual counterpart, ERGO GPT can also slip into the role of a critic or supervisor and discuss and scrutinise the ideas pitched.

We have also identified typical simplification options for the individual job profiles at ERGO: Let's think, for example, of a clerk who receives a letter from a customer and wants to write a reply that is ‘friendly but firm’. Or let's think of a developer who has parts of her code checked and, if necessary, repaired by ERGO GPT. Or a communicator who wants to have their article optimised for their target group and from a keyword and SEO perspective. As you can see, basically every colleague at ERGO can benefit from the new service. As with all technologies, the more we try out, test and learn and make full use of the programme, the more we can tease out of ERGO GPT – and the greater the potential for use and simplification in our day-to-day work!

“The more we try out, test and learn and make full use of the programme, the more we can tease out of ERGO GPT – and the greater the potential for use and simplification in our day-to-day work!”

Antonia Schiller, AI expert at ERGO
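
As a purely illustrative aside – these are not official ERGO GPT templates, just assumptions about what such instructions could look like – prompts for two of the uses just described might read like this:

    # Illustrative sample prompts for two of the uses described above: a reply
    # that is 'friendly but firm', and switching a letter from 'Sie' to 'Du'.
    # Not taken from ERGO GPT; just examples of the kind of instruction meant here.
    FRIENDLY_BUT_FIRM = (
        "You are a claims handler. Write a reply to the customer letter below that is "
        "friendly but firm: acknowledge the concern, but clearly restate our decision.\n\n{letter}"
    )

    SIE_TO_DU = (
        "Rewrite the following letter so that it addresses the reader informally "
        "('Du' instead of 'Sie') without changing its content.\n\n{letter}"
    )

    print(FRIENDLY_BUT_FIRM.format(letter="<paste customer letter here>"))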

Speaking of learning: What training courses do you offer or are you planning to offer to familiarise users with this fascinating solution? To utilise the best possible prompts, etc.?

In order to get as many colleagues as possible excited about the new offering, we advertised the upcoming rollout widely via all internal communication channels long before launch. And – when the time came – we presented the solution to each and every one of them in a welcome email. This included a link to an introductory video, which we made available on our learning platforms with the help of avatars. In addition, we have set up an ERGO GPT channel in Microsoft Teams to promote the exchange of experiences, answer questions and stay up to date. We also offer internal webinars in our ‘ERGO GPT Lounge’, where people can find out more and ask questions. Several departments have already actively approached us and asked us to organise presentations, workshops or hackathons for them. We are very pleased to see this active pull from colleagues across all parts of ERGO in Germany and are always happy to pass on our knowledge!

Great, that sounds comprehensive! Could you also give us a few tips on how to use it? Our //next readers will certainly be interested in this when dealing with generally accessible LLMs ...

The following rules have worked well for us and of course also work with other LLMs:

  1. Clear instructions: Use clear, unambiguous instructions to get appropriate results.
  2. Empty chats: The LLM remembers previous entries for as long as the browser window is open. Delete the previous chat before starting a new, unrelated enquiry.
  3. Assign roles: As already mentioned, the LLM can take on different roles and additionally tailor its responses to specific audiences.
  4. Concrete examples: They help the LLM deliver the desired solutions or specific formats.
  5. Prompts: By combining roles, instructions and examples, powerful prompts – i.e. instructions to the LLM – can be created (see the sketch after this list). These are best stored separately as templates for future entries!
  6. Order: The LLM treats input at the very beginning and at the very end as particularly important. With long prompts, it therefore makes sense to place important instructions both at the beginning and at the end.
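
To make rules 3 to 6 more concrete, here is a minimal Python sketch of how a role, instructions and an example can be combined into a reusable prompt template. It is purely illustrative – not an official ERGO GPT template – and the role, wording and scenario are assumptions chosen for the sake of the example:

    # Minimal sketch: combine a role, clear instructions and an example into a
    # reusable prompt (rules 3-5), and repeat the key instruction at the end (rule 6).
    # Purely illustrative; not an official ERGO GPT template.

    ROLE = "You are an experienced HR copywriter at an insurance company."

    INSTRUCTIONS = (
        "Write a job advertisement for the position described below: friendly tone, "
        "no more than 200 words, and address applicants as 'Du'."
    )

    EXAMPLE = (
        "Example of the desired tone: 'Du willst Kundinnen und Kunden in schwierigen "
        "Momenten wirklich weiterhelfen? Dann bist Du bei uns genau richtig.'"
    )

    def build_prompt(position_details: str) -> str:
        """Role and instructions first, example and details in the middle,
        key instruction repeated at the end."""
        return "\n\n".join(
            [ROLE, INSTRUCTIONS, EXAMPLE, f"Position details:\n{position_details}", INSTRUCTIONS]
        )

    print(build_prompt("Claims handler, property insurance, Düsseldorf, full-time."))

Stored as a template, only the position details need to change from one enquiry to the next.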

Speaking of ‘other LLMs’: What technological framework is ERGO GPT based on? And what distinguishes it from the publicly accessible or subscription-based (paid) offerings?

Well, we have indeed used OpenAI's ‘GPT-3.5 Turbo’ model as the technological basis. A switch to even more advanced model versions is being planned, and in parallel we are working on the rollout in other ERGO core markets. At the same time, we have ensured that our colleagues' entries in ERGO GPT are not saved by the provider or processed for training purposes. This enables us to process even internal and confidential data. As you can see, with ERGO GPT we have created an ERGO-internal solution that makes it possible to use large language models (LLMs) securely in the insurance context!
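
The interview does not describe how ERGO GPT is accessed technically. As a rough, hypothetical sketch only: if a GPT-3.5-Turbo-class model is exposed behind an internal, OpenAI-compatible gateway, a chat request could look roughly like this (the endpoint, environment variables and system prompt are assumptions, not ERGO's actual setup):

    # Hypothetical sketch of a chat completion against a GPT-3.5-Turbo-class model
    # behind an internal, OpenAI-compatible gateway. Endpoint and variable names
    # are assumptions; the interview does not disclose ERGO GPT's real interface.
    import os

    from openai import OpenAI  # requires the 'openai' Python package (v1+)

    client = OpenAI(
        base_url=os.environ["INTERNAL_LLM_GATEWAY_URL"],  # hypothetical internal endpoint
        api_key=os.environ["INTERNAL_LLM_API_KEY"],       # issued internally, not an OpenAI key
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the model family named in the interview
        messages=[
            {"role": "system", "content": "You are a helpful assistant for insurance office work."},
            {"role": "user", "content": "Summarise the following customer letter in three sentences: ..."},
        ],
    )

    print(response.choices[0].message.content)

The decisive point made in the interview is not the call itself but the guarantee that such entries are neither retained by the provider nor used for training.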

Interview: Ingo Schenk

Your opinion
If you would like to share your opinion on this topic with us, please send us a message to next@ergo.de.

Related articles

Digitalisation & Technology 27.09.2023

How ERGO uses "Large Language Models"

ChatGPT has been the talk of the town for just under a year, but the underlying "Large Language Models" (LLMs) have been around much longer. ERGO has been working on this topic since 2019 and has built up a huge amount of expertise since then. Sebastian Kaiser, Head of Machine Learning at ERGO, Nicolas Konnerth, Head of Conversational AI, and Arijit Das, Machine Learning Consultant, provide an overview in this video interview.

Digitalisation & Technology 08.03.2023

What is ChatGPT - and if so, how many?

Everywhere you look, you read about ChatGPT. //next columnist Markus Sekulla has not experienced hype like this for many months. Be it in the media, on Twitter or LinkedIn, a lot is currently being written about the prototype of OpenAI's new AI. So it’s time for a closer look.

Digitalisation & Technology 17.03.2023

ChatGPT in the insurance industry

Our experts Jennifer Betz, Jens Sievert and Nicolas Konnerth explain the impact of ChatGPT on the insurance industry and possible use cases for insurance companies.