Artificial intelligence can make our lives easier in many ways. But the technology also harbors many dangers. Legal scholar Florent Thouvenin is working with academic partners from across the globe to develop ideas about how AI could be optimally regulated.
When the American company OpenAI launched the chatbot ChatGPT at the end of last year, the world was amazed at what is possible with the help of artificial intelligence. The chatbot can generate more or less elaborate texts, summarize scientific papers concisely, write programs, and translate code from one programming language to another. However, the initial euphoria about the potential to make work easier was soon joined by fears and concerns. Although the chatbot can simulate intelligent behavior, it sometimes simply produces nonsense.
In view of the rapid development of artificial intelligence and the social risks posed by this powerful technology, an open letter from the American Future of Life Institute is calling for a six-month pause in the development of AI systems more powerful than GPT-4. The pause is intended to give time to make such systems more transparent and trustworthy. The signatories of the public statement include prominent figures such as the Israeli historian and author Yuval Harari and the US entrepreneur Elon Musk.
Florent Thouvenin, on the other hand, has not signed. The UZH legal scholar has long studied the impact of algorithmic systems and artificial intelligence on society and the challenges they pose for the legal system. Thouvenin is Professor of Information and Communications Law and heads the Center for Information Technology, Society, and Law (ITSL) at UZH. He is skeptical about the requested interruption. "AI is not a miracle tool," says the legal scholar. "Chatbots like ChatGPT can perform a great many calculations very quickly, but they can neither understand nor think, and they have no will of their own."
Above all, Thouvenin sees numerous opportunities in the new technology. The key, in his view, is to regulate applications of artificial intelligence in a way that allows the opportunities to be exploited and the risks to be minimized. Together with his colleagues, he set out his thinking on this in a 2021 position paper published by UZH's Digital Society Initiative (DSI) (see box). With partners in Japan, Brazil, Australia and Israel, he is now analyzing in the "AI Policy Project" how different legal systems are responding to the major advances in AI. The project examines countries that, like Switzerland, have to think carefully about how they want to position themselves vis-à-vis the regulatory superpowers of the EU and the USA in order to promote the development of these technologies while protecting their own citizens from disadvantages.
The political discussion on this important topic is still in its infancy in many places, Switzerland included. Regulation is most advanced in the EU: in June, EU parliamentarians adopted a draft of the world's first AI law, and representatives of the EU Parliament and the member states have since agreed on the main features of the "AI Act". The law focuses on the risks posed by artificial intelligence and divides them into four categories, ranging from unacceptable risks (including AI systems that law enforcement agencies could use for real-time remote biometric identification of people in public spaces) to low-risk applications. Chatbots such as ChatGPT would remain permitted under this regulation but would have to become more transparent, for example by labeling deepfakes as such.
"There is a danger that AI legislation will hold back the technology without resolving the problems."
- Florent Thouvenin, legal scholar
Florent Thouvenin is critical of the European Union's proposal. "In its AI law, the EU is trying to regulate the technology as such," says the legal expert, "which means that artificial intelligence must first be defined." In his view, this makes little sense, because the technology is developing so rapidly that the definitions and many of the standards contained in the law will be outdated just as quickly. The problem already became apparent while the drafts of the AI Act were being written, with different definitions of artificial intelligence in use. No sooner had a definition been agreed than ChatGPT arrived and it had to be fundamentally revised again. Thouvenin: "There is a risk that the AI Act will inhibit the development and use of the technology and cause a lot of bureaucratic effort, without solving the specific problems."
Take discrimination, for example in the job market. Large companies already use AI systems to screen applications. These systems can discriminate against people if they have been trained on data that contains a bias. A well-known example: women are disadvantaged when applying for IT jobs because the training data reflects the fact that more men than women have been hired in the field so far. "That's problematic," says Thouvenin. "We have to find solutions to these and similar concrete problems."
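The mechanism described here is easy to reproduce. The following sketch (in Python, with entirely hypothetical toy data) shows how a naive screening rule that merely learns historical hire rates per group inherits whatever bias its training data contains:

```python
# Hypothetical toy data: (gender, hired) pairs from past hiring decisions.
# The history itself is biased: men were hired far more often than women.
history = [
    ("m", True), ("m", True), ("m", True), ("m", True),
    ("f", True), ("f", False), ("f", False), ("f", False),
]

def train(records):
    """Compute the historical hire rate for each group in the data."""
    counts = {}
    for group, hired in records:
        total, hires = counts.get(group, (0, 0))
        counts[group] = (total + 1, hires + (1 if hired else 0))
    return {g: hires / total for g, (total, hires) in counts.items()}

def screen(rates, group, threshold=0.5):
    """Naive screener: pass an applicant if their group's
    historical hire rate clears the threshold."""
    return rates[group] >= threshold

model = train(history)
print(model["m"], model["f"])  # 1.0 0.25 -> the bias is learned verbatim
print(screen(model, "m"))      # True: male applicant passes
print(screen(model, "f"))      # False: equally qualified female applicant fails
```

The point of the sketch is only to illustrate why biased training data produces biased decisions; in practice the remedy lies in the data and the legal framework around such systems, not in this code.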
One option would be to supplement data protection law with a new principle according to which no one may be discriminated against in a legally relevant manner on the basis of their personal data. The legal scholar is convinced that the legal system in Switzerland does not need to be rethought because of AI; rather, the task is to ensure that existing standards also work in this context. Some standards and laws will have to be adapted to the new possibilities opened up by AI. For others, it is enough for the courts to apply the existing standards sensibly to the new phenomena.
Switzerland has not yet begun to comprehensively analyze the challenge of AI and develop suitable legal solutions. Many other nations are in the same situation. "Countries in other parts of the world often have a completely different view of the AI problem than we do here in Europe," says Thouvenin. It is therefore helpful for the upcoming political discussions to see how different legal systems and cultures deal with AI. In order to shed light on this diversity, he and his Zurich colleague Peter Picht have launched the AI Policy Project together with researchers from Kyoto University in Japan. In the meantime, a small network has emerged, which also includes scientists from Australia, Israel and Brazil.
"In Japan, for example, the phenomenon of AI is perceived very differently than here," says Thouvenin. "People there have high hopes for the technology." And in contrast to Europe, the discussion surrounding AI revolves much less around the individual and much more around the collective. This became clear to the legal scholar when he spoke to his Japanese colleagues about the danger of manipulation by artificial intelligence. "For us, this is primarily about the individual and their autonomy of thought and action," says Thouvenin. "If this is restricted, we find it highly problematic." In Japan, however, the autonomy of the individual is less central, and manipulating citizens is seen as more acceptable if it benefits society as a whole.
For example, when people are digitally "nudged" by receiving targeted information that steers their behavior in the desired direction. Thouvenin is convinced that such an unfamiliar view of AI could also enrich the discussion in Switzerland: "A global perspective can make it easier for us to assess our scope for action, and it can help us to develop new and interesting ideas for dealing with AI."
The AI Policy Project researchers are currently developing a website that compiles the various approaches and proposed solutions under discussion in the participating countries. In future, the site will be supplemented with the positions of other countries. The aim of the platform is to stimulate the international regulatory discourse and to support decision-makers in politics, business and associations in dealing with the topic in a differentiated and informed manner. Not least in Switzerland, where the Federal Administration is to draw up a political overview by the end of next year and identify the need for action and possible measures.
In the 2021 position paper "A Legal Framework for Artificial Intelligence", produced as part of the University of Zurich's Digital Society Initiative, scientists from various disciplines addressed the legal issues raised by AI. They assume that the use of AI will not create entirely new legal challenges, but that the challenges it does raise can largely be resolved through sensible application of, and selective additions to, existing laws. They also argue that Switzerland should develop solutions independently of the EU.
In particular, the researchers identify five challenges that need to be addressed legally: the use of AI should be recognizable to users and its functioning comprehensible; it should neither discriminate against people nor manipulate them in problematic ways; and the safety and liability issues it raises need to be regulated. For example, the question arises as to who is responsible and liable for accidents caused by autonomous vehicles or drones.
The position paper "A Legal Framework for Artificial Intelligence" can be found on the website of the Digital Society Initiative (DSI) and the Center for Information Technology, Society, and Law (ITSL).