Generative AI: the challenges of regulation
UNESCO's “Internet for Trust” conference, held in Paris, examined a set of proposed global guidelines for regulating online platforms in ways that protect access to information and freedom of expression. One focus of the panel discussions was the challenges posed by generative AI: technology that builds models capable of generating new data, such as images, text, and music, and that has advanced significantly in recent years. ChatGPT, the prime example of generative AI at the moment, was at the center of the debate. Its reach has been enormous, with more than 100 million users within the first two months of its launch and more than 13 million daily visitors as of 2023. Such developments raise serious ethical concerns, especially with a user population of this size. Technologies are currently developed far faster than national and European laws can catch up with them. To fill this regulatory gap, ethical frameworks for artificial intelligence are being developed by different actors, including UNESCO and the OECD, providing input to European and national legislators as they address these issues and propose new laws.
One of the primary concerns discussed at UNESCO is the use of generative AI to create misinformation. Generative AI can produce realistic fake content, such as deepfakes, fake news, and fake reviews, which can spread misinformation and manipulate public opinion, with the potential to cause significant harm to individuals and to society as a whole. As Grady Booch, Chief Scientist for Software Engineering at IBM Research, explained, generative AI systems like ChatGPT work with thousands of dimensions when generating text, which leads users to assume the system can be trusted. In general, users will notice that ChatGPT is incorrect only if they are already very familiar with the topic, or experts in it. Generative AI is, in short, an unreliable narrator, and when its unfiltered output is published on the World Wide Web, users may well believe it.
Generative AI also creates new risks to human rights. One concern is the potential invasion of privacy: it can be used to generate synthetic data that re-identifies individuals or de-anonymizes sensitive data. Moreover, generative AI models can perpetuate biases that pre-exist in the training data, leading to unfair and discriminatory outcomes and strengthening stereotypes about specific groups. It is crucial to ensure that these models are trained and tested in a way that is fair to all groups. Unfortunately, private companies are not bound by any international human rights instrument to do so, which is why it is so important that states adopt a regulatory framework.
Intellectual property is another area of concern. Generative AI can create realistic replicas of copyrighted material, such as music, art, and film, infringing on the intellectual property rights of creators. Yet the ability of AI to generate art is not entirely negative: a new generation of artists is making creative use of AI.
UNESCO guidelines for regulating digital platforms
After developing its ‘Recommendation on the Ethics of Artificial Intelligence’, UNESCO is moving towards guidelines for regulating digital platforms that treat information as a public good. The guidelines take a multi-stakeholder approach involving governments, private companies, international organizations and civil society in regulating digital platforms. The principles are intended to help regulators, governments, legislatures, and businesses deal with content that may harm human rights and democracy, while preserving freedom of expression and access to accurate and trustworthy information.
One focus of the guidelines is content moderation on digital platforms, aimed at curbing fake news and disinformation, but also hate speech. Éric Garandeau, Director of Policies and Governmental Relations at TikTok, stated that the platform employs at least 10,000 moderators for billions of users who, with the help of AI, identify images and videos that could be offensive or violent. According to him, the combination of human and AI is key to content moderation, and a useful development is user notification, through which users actively participate in the moderation process. Content moderation is not as easy as this makes it sound, however: there can be up to a million videos a day to analyze, in many different languages, and the capacity of moderators to handle that volume of content is limited. In addition, moderators, whether human or AI, face the difficulty of cultural relativism: what is considered offensive and inappropriate varies deeply from culture to culture.
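The human-plus-AI workflow described above can be sketched in a few lines. The function name, thresholds, and routing rules below are purely illustrative assumptions on our part, not TikTok's actual system; they only show how an automated risk score might route content between automatic removal, human review, and publication, with user reports feeding the review queue.

```python
# Hypothetical sketch of a human-plus-AI moderation pipeline.
# Thresholds and routing rules are illustrative assumptions,
# not any platform's actual policy.

REMOVE_THRESHOLD = 0.95   # very likely violating: remove automatically
REVIEW_THRESHOLD = 0.60   # uncertain: queue for a human moderator

def route_content(ai_risk_score: float, user_reported: bool = False) -> str:
    """Decide what happens to a piece of content.

    ai_risk_score: probability (0..1) from an automated classifier
    user_reported: True if a user flagged the item (active participation)
    """
    if ai_risk_score >= REMOVE_THRESHOLD:
        return "auto_remove"
    # A user report lowers the bar for escalating to a human moderator,
    # letting users take an active part in the moderation process.
    if user_reported or ai_risk_score >= REVIEW_THRESHOLD:
        return "human_review"
    return "publish"
```

In a design like this, the AI handles the unambiguous bulk of the volume, while ambiguous or user-flagged items reach the limited pool of human moderators.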
What generative AI and content moderation have in common
Moderating content on social networks and choosing which data should be used to train a generative AI have much in common. Both tasks are extremely complex, as they require balancing different rights, such as freedom of expression and individuals' right to privacy. In practice, an organization's content moderation is driven by specific values and cannot be entirely neutral with respect to the organization's culture, or even its ideology.
Since 2012, Namsor has been building algorithms to identify and measure gender balance in media and social media, as well as other diversity analytics such as racial, ethnic, and socio-cultural diversity. We believe that measuring cultural, racial/ethnic, or socio-cultural biases in historical media and social media datasets can help correct biases in machine learning algorithms, maximizing the diversity of points of view, fairness, and the balance of soft power. This is not a sufficient condition for stopping hatred, but it is a necessary condition for people to feel fairly represented along many dimensions (gender, country of origin, language, race/ethnicity).
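The kind of representation measurement described above can be illustrated with a minimal sketch. The group labels and the normalized-entropy balance metric are our own illustrative choices, not Namsor's actual methodology: given a list of items already classified by group (for example, inferred gender of quoted sources), it computes each group's share and a balance score where 1.0 means perfectly even representation.

```python
import math
from collections import Counter

def representation_shares(labels):
    """Share of each group among classified items (e.g. inferred gender)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def balance_score(labels):
    """Normalized entropy of the group distribution: 1.0 = perfectly balanced."""
    shares = list(representation_shares(labels).values())
    k = len(shares)
    if k <= 1:
        return 0.0  # a single group: no diversity to measure
    entropy = -sum(p * math.log(p) for p in shares)
    return entropy / math.log(k)
```

Tracked over time, a score like this lets a newsroom or platform see whether representation along a given dimension is improving or deteriorating.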
The other area where Namsor can be useful is ensuring strong diversity in all teams involved in machine learning, from development to content moderation, as well as in legal and governance teams. As was pointed out during the panel discussions, whites make up the largest ethnic group among software engineers, and male engineers make up over 70% of this group. Such a lack of diversity inevitably leads to the reproduction of very exclusive patterns. Ensuring diversity in the development process is necessary so that machine learning algorithms do not learn from biased datasets and, consequently, do not reproduce those biases.
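One standard mitigation for the biased-dataset problem described above is to reweight training examples so that each group contributes equal total weight to the loss, rather than letting the majority group dominate. A minimal sketch, with illustrative group labels:

```python
from collections import Counter

def group_balanced_weights(groups):
    """Per-sample weights inversely proportional to group frequency.

    With weight w_g = n / (k * count_g), every group contributes a total
    weight of n / k, so no group dominates training just by being larger.
    """
    counts = Counter(groups)
    k = len(counts)   # number of distinct groups
    n = len(groups)   # number of samples
    return [n / (k * counts[g]) for g in groups]
```

Weights like these can typically be passed to a learning algorithm's per-sample weight parameter, so the fix requires no change to the model itself, only to how the data is counted.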
In conclusion, the UNESCO “Internet for Trust” conference provided a great opportunity to better understand the challenges policy-makers face in regulating AI, as well as the many opportunities these technologies provide. Namsor can be a useful tool to help policy-makers, companies, and other organizations conduct such diversity assessments, for example to make sure expert groups are representative of UNESCO members along various dimensions (gender, linguistic diversity, country of origin).