The AI dilemma
Artificial Intelligence (AI) is driving a profound transformation of our society, changing fundamental structures and opening up new possibilities such as curing diseases, slowing climate change, and deepening our knowledge in many fields. Despite these positive aspects, concern is widespread: 50% of surveyed AI researchers estimate a 10% likelihood of human extinction because we lack the knowledge to control artificial intelligence.
In 2017, a significant breakthrough in AI went largely unnoticed by the public. Areas of AI that had previously been technically separate domains, such as computer vision, natural language processing and image generation, were integrated into a single model type. These Generative Large Language Multimodal Models ("Gollum") can interpret, translate and reproduce almost any kind of information. Within seconds, AI can mimic human language almost flawlessly; in the future it may become impossible to tell whether we are interacting digitally with a real person or with an AI. Tools such as OpenAI's "Whisper" convert speech into text in near real time, allowing video and voice-based content such as YouTube videos and podcasts to be turned into text and thereby vastly increasing the amount of available training data. The problem is serious enough that Sam Altman, one of the co-founders of OpenAI, is developing "The Orb", a technology meant to confirm that we are communicating digitally with a real human being, a response, of sorts, to the ghosts he himself has conjured up.
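To illustrate the transcription step mentioned above, here is a minimal sketch using the open-source openai-whisper Python package; the model size and file name are illustrative assumptions, not details from the article:

    # Transcribe a podcast episode to text with OpenAI's open-source Whisper model.
    # Assumes "pip install openai-whisper" and ffmpeg are installed; the file name
    # "podcast_episode.mp3" is a placeholder.
    import whisper

    model = whisper.load_model("base")                 # small multilingual model
    result = model.transcribe("podcast_episode.mp3")   # returns a dict containing the transcript
    print(result["text"])                              # full transcript as plain text

A few lines like these are enough to turn hours of spoken audio into machine-readable text, which is why such tools expand the pool of potential training data so dramatically.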
The "Gollum AIs" are progressing exponentially, making it difficult for anyone to make accurate predictions about their development. Currently, we lack the technology to fully understand how complex AI systems learn and what capabilities they already possess. Tristan Harris and Aza Raskin, founders of the prestigious "Center of Human Technology", present a rational and appropriately critical perspective on these aspects of AI. Aza's father led the Macintosh project at Apple with the vision that technology should aid humanity – does AI fulfill this goal?
Their key criticisms:
- New technology needs to be controlled. In the case of AI, this has hardly happened so far: the existing controls are often put in place only years after release, and by then they can barely keep up with the complexity of the systems.
- Our first contact with AI has had negative outcomes. Social media platforms and their behavioural and personalization algorithms cause more societal problems than they bring benefits.
- The media, politicians and society at large are not in a position to assess the opportunities and risks. As a result, development continues unchecked, while regulation is written for an "outdated software level" and remains largely ineffective.
Author: Jule Witt