
Law on the Protection of Whistleblowers in the Context of Generative Artificial Intelligence: An Experimental Investigation into Potential Abuses and Their Dogmatic Implications

This article addresses the issue of abusive whistleblower reports that can be generated using artificial intelligence. The matter is examined experimentally on the basis of the Law on the Protection of Whistleblowers and the Law on Due Diligence in the Supply Chain (LkSG).

Whistleblowing represents a crucial element in uncovering and reporting irregularities and crimes within a company. Laws such as the LkSG and the Law on the Protection of Whistleblowers protect, on the one hand, the identity of whistleblowers and, on the other hand, incentivize companies to conduct their business in compliance with the law. However, both the LkSG and the Law on the Protection of Whistleblowers can be misused, and the risk increases significantly when artificial intelligence systems are employed for malicious purposes. Such systems can be used to generate large numbers of false reports against companies, effectively acting as false whistleblowers. This entails a significant expenditure of resources for companies and could cause lasting damage to their reputation.

In the article, the author assumes the role of a potential creator of false reports and explores how various fabricated and anonymous reports can be produced. Through a realistic experiment, the author seeks to obtain the necessary information and procedures by posing a series of questions to the artificial intelligence system, varying the reports so that no predictable patterns emerge. The experiment focuses on communication with the artificial intelligence, formulated as different questions to simulate various scenarios. For example, the author aims to discover how a report lacking any evidence can be made to appear so credible that the company takes it seriously. The artificial intelligence provides an optimal response to the question, which a would-be perpetrator could use immediately to file a false report.

The article highlights that even individuals without advanced computer skills can generate reports on a massive scale. Current countermeasures have significant drawbacks. One possible solution is the verification of reports in order to detect potential abuses. However, as the number of reports grows, the accuracy of verification decreases, with the risk that legitimate reports may be overlooked or examined only superficially.

The article was written by Dr. iur. Dr. rer. pol. Fabian Teichmann, LL.M., and published in August 2023 in NZWiSt. Fabian Teichmann is a lawyer practicing in Switzerland and the Managing Partner of Teichmann International (Schweiz) AG. He is also a notary public in St. Gallen and a lecturer at various national and international universities.