Simple because it matters.
Digitalisation & Technology, 27 February 2023
Generative AI tools such as ChatGPT or Google's Bard are currently outshining everything else and have also tempted us at //next to try things out for ourselves. Unlike many other hypes, however, this one is more than justified, because the probability is very high that we are witnessing the beginning of a new stage of digital development.
Technology trends can basically be divided into three categories. There are those that are traded as trends for a long time and then take forever before we can actually do anything with them. The Internet of Things (IoT) is a good example: for a long time, we were promised intelligent refrigerators that would restock themselves. Today, we can at least switch on the lights and the heating while we're out, so that the house is bright and warm when we get home.
Then there are trends that we can use immediately when we have access to them, but which often disappear just as quickly as they came. We certainly remember Clubhouse.
But there is a third category of trends. This is where developments suddenly reach a new level of quality, or at least are perceived to, and we wonder why it didn't happen sooner and how we ever managed without it. A prominent example is the iPhone, with which Apple finally made the leap from mobile phone to smartphone.
AI tools also fall into this trend category.
With ChatGPT, applications from the field of generative AI are currently experiencing their own iPhone moment. Just as there were first smartphones before the iPhone, these tools are not entirely new. In fact, AI has been doing a lot of work for a number of years without us always being aware of it. For example, we would hardly recognise Facebook or LinkedIn without AI algorithms, and modern smartphones would not recognise us without AI-based facial recognition. In many cars, too, intelligent assistance systems are taking over more and more of our work and protecting us from danger. All these AI applications have one thing in common: if you give them a task that is outside their programming and training, they fail even at a low level of difficulty.
Originally, science defined artificial intelligence as the ability of a machine to imitate human abilities such as logical thinking, learning, planning and creativity. Today we distinguish between "strong AI", where an intelligent computer system is indistinguishable from the human mind, and "weak AI", where algorithms can take over very specific tasks after training. All AI applications known so far are weak AI.
The difference that ChatGPT in particular makes now is active participation. We can easily try out this AI tool ourselves and actively experience its amazing capabilities. ChatGPT has thus reached a new milestone in market penetration: within just five days, it passed the one-million-user mark. The previous record was held by Instagram with 75 days. After only two months, ChatGPT already had 100 million monthly active users.
In March 2023, OpenAI launched the enhanced version GPT-4: https://openai.com/product/gpt-4
Basically, tools of Generative AI first need as large a quantity of data as possible from which they can later recognise and reproduce patterns. The larger the amount of data, the greater the probability of finding suitable answers to our questions in the end. However, in order for an AI to establish meaningful connections between its database and our requirements (prompts), intensive training is necessary.
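The principle of recognising patterns in a large body of text and reproducing them in response to a prompt can be illustrated with a deliberately tiny sketch: a bigram model that only records which word follows which in its training data and then continues a prompt from those counts. This is a toy illustration of the underlying idea, not how GPT models actually work; the corpus and function names are invented for this example.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Record which word follows which -- the 'patterns' in the training data."""
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model: dict, prompt: str, length: int = 10, seed: int = 0) -> str:
    """Continue the prompt by repeatedly sampling a word that followed
    the current word in the training data."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # the model has never seen this word -> it can say nothing
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram_model(corpus)
text = generate(model, "the", length=5)
print(text)
```

The sketch also shows, in miniature, why more data helps: with only one sentence of training text, the model can only ever recombine those few words, and anything outside its training data leaves it silent.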
ChatGPT, currently the most prominent AI tool, relies on the Large Language Model GPT-3 as its database. GPT stands for "Generative Pretrained Transformer" and already aptly describes the function: it is about generating texts from a pre-trained database.
OpenAI, the provider of ChatGPT, describes the training in simplified form as a three-step process.
Data-based training is the most complex part, as human trainers steer the learning process. This step has proven to be very important, even though AI systems are now certainly capable of learning on their own. Autonomous learning, however, means a certain loss of control, as public AI experiments have shown in the past.
In 2016, for example, Microsoft had to shut down an AI chatbot called "Tay" after just a few hours. It was supposed to learn how young people communicate as a female avatar on Twitter. But that backfired badly, because the self-learning bot quickly became a "racist monster".
With ChatGPT, OpenAI therefore relies on "supervised learning" in the first two training steps, which offers better control. In addition, the AI was specifically trained to formulate its answers as part of a conversation. The bot remembers previous questions and can thus imitate a more realistic conversational flow.
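The core idea of supervised learning mentioned above — a human-provided label steers every update — can be sketched with a minimal perceptron. The toy task (label an input pair 1 only when both values are high) and all names are invented for this illustration; real systems like ChatGPT use vastly larger models, but the supervision signal works on the same principle.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) with label in {0, 1}.
    The labels are the supervision: they play the role of the human trainers."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred  # supervision signal: target minus prediction
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(params, x):
    w1, w2, b = params
    return 1 if w1 * x[0] + w2 * x[1] + b > 0 else 0

# Toy task: the label is 1 only when both inputs are high (an AND-like rule).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
params = train_perceptron(data)
```

Every weight update is driven by the gap between the labelled target and the model's own guess; without the labels, the model would have no way of knowing whether its output was acceptable — which is exactly the control that unsupervised experiments like "Tay" lacked.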
Since ChatGPT can be tested free of charge, the testers basically fall into two camps. Some already see its capabilities as an outstanding leap forward and want to use the tool for all text work in future. The other camp is much more sceptical, pointing to the limited database (which only extends to 2021), recurring errors, and mediocre text quality with repetitive structures.
As so often, a reasonable assessment lies between the extremes. With a well-crafted prompt, ChatGPT can certainly achieve good results. As when commissioning human authors, the quality of the briefing is crucial to the quality of the result. So we still have to think very carefully about what text we actually want.
We should always keep one important aspect in mind: AI tools of the current generation produce amazing content, but they do not "understand" what they deliver to us. They recognise and reproduce patterns, no more and no less. This means, for example, that misinformation can become a pattern if it occurs often enough in the database.
Beyond that, there are still some legal questions to be clarified. The question of authorship alone must be considered from different perspectives. For example, an AI tool is not recognised as an author, since our copyright law only recognises human authors. In addition, there is the question of how data mining is handled legally.
The first lawsuit is already pending in the USA. The image agency Getty Images is suing Stability AI because its AI image generator is said to have infringed copyrights millions of times over by using the agency's images for training. This could be the beginning of a wave of lawsuits, as AI-generated images from various tools keep circulating on social networks, some even bearing the watermark of a picture agency.
With ChatGPT, OpenAI has massively accelerated a development that was previously rather hidden. New announcements show that even tech giants like Google have not been sleeping on it. The next months and years will be marked by further developments, because the current state is only the beginning.
And here we come full circle to the iPhone analogy: picking up the first iPhone today, we can hardly comprehend the magic it held back then. Yet this device, so limited compared to today's models, contributed decisively to Apple's rise to become the most valuable brand in the world. Today's AI applications are where the first iPhone was then: we cannot even begin to imagine how they will develop over the next 15 years.
As long as the legal issues are not clarified, content from generative AI should only be used with caution, at least for commercial purposes. However, the future of these tools could also lie in a different direction than the generation of complete content: If image or text generators only take on certain tasks that they are particularly good at and that are rather time-consuming for humans, they will become useful AI colleagues.
Text: Falk Hedemann
Your opinion
If you would like to share your opinion on this topic with us, please send us a message to: next@ergo.de