
Corporate Responsibility for AI companies: finding a path through all the regulations

AI Generated stock image of a robot hand holding different symbols (AI, a balance of justice, fingerprints)

AI companies usually believe they are developing a product that will help society in some way. Over time, however, many allegations of human rights violations have arisen, concerning issues such as the right to privacy, discrimination and a lack of transparency.

Consequently, many organizations have developed standards for states and, most importantly, companies to consult in order to build products and services that will not negatively impact the rights of those using them. Nevertheless, the sheer number of guidelines, regulations and principles has become overwhelming, and it is entirely understandable that companies trying to navigate the landscape of corporate responsibility and human rights struggle to find their way through the chaos.

This is why we at NamSor decided to have a chat with Dr. Lottie Lane, an assistant professor at the University of Groningen who specializes in AI, human rights and corporate responsibility. The interview is meant to guide us through some of the most important business and human rights standards, highlighting current issues and future possibilities, as we have just started designing Version 3 of our software.

Interviewer: “There are many documents discussing corporate responsibility for AI companies, but unfortunately they are not all coherent. What challenges does this create?”

Dr. Lottie Lane: “First, there is an issue of vagueness. Many of these guidelines are deliberately vague so that they can be applied in multiple contexts, sometimes because they set procedural standards rather than substantive ones. This can leave companies not knowing how to translate the standards into their development process; the practical question of how to transpose them into a final product or service is extremely complicated. A second issue is accountability. Binding human rights obligations under international law are not directly applicable to companies, so accountability is very difficult to achieve. In general, states are expected to develop the laws necessary to protect individuals, but this can be very difficult in practice too. Even if you manage to hold a business accountable at the national level, it may only result in a fine, and many multinational companies have the budget for that. You could also try to name and shame companies that do not follow human rights standards in order to generate bad publicity, but it is uncertain how effective that would be in this context.”

Interviewer: “In relation to the first issue of vagueness, do you then think that there is a need for human rights specialists in AI companies to help them navigate these standards?”

Dr. Lottie Lane: “Yes, I think companies would benefit from having people on staff who are experts in human rights. But it is important to make sure that responsibility for respecting human rights does not fall entirely on those individuals. Rather, everyone, from engineers to developers to lawyers, needs to be trained to understand how business and human rights standards apply and how to actually put them into practice.”

Interviewer: “So, among all the standards and approaches developed, which do you think are worth taking into consideration?”

Dr. Lottie Lane: “It is true that there are many approaches to choose from and not all are equally precise and concrete. However, we can identify a few that are particularly important for European companies to consider: the UN Guiding Principles on Business and Human Rights (UNGPs), the OECD Business and Finance Outlook, the proposed EU AI Act, and the draft EU Corporate Sustainability Due Diligence Directive (CSDDD).”

Interviewer: “How can companies choose which human rights to prioritize, and how can they incorporate a human rights approach into their development process?”

Dr. Lottie Lane: “It is not a matter of deciding that one right, say privacy, is more important than another, such as non-discrimination; it is a matter of risk assessment and prioritization. The UNGPs, for example, contain guidance on how to prioritize risk: if the risk to privacy is much more severe than the risk of discrimination, you might address privacy first and then move on to discrimination. Both the UNGPs and the CSDDD propose a severity-based approach, with the CSDDD also treating the likelihood of a risk as relevant to prioritizing responses. The proposed AI Act, on the other hand, provides for risk management standards that only require identification of the risks ‘most likely to occur’. These approaches can go some way, but the problem is that developers often cannot identify the actual risks of the product they are developing, or whether their responses to those risks are going to work.”
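To make the contrast between the two prioritization logics concrete, here is a minimal sketch in Python. The risk entries and scores are entirely hypothetical and for illustration only; “severity” here stands in for the UNGPs’ notions of scale, scope and irremediability.

```python
# A minimal, hypothetical sketch of the two prioritization logics above.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    severity: float    # 0-1, standing in for scale, scope, irremediability
    likelihood: float  # 0-1, estimated probability that the harm occurs

risks = [
    Risk("privacy breach", severity=0.9, likelihood=0.3),
    Risk("discriminatory output", severity=0.7, likelihood=0.8),
    Risk("lack of transparency", severity=0.5, likelihood=0.9),
]

# UNGP-style ordering: the most severe potential impacts come first.
by_severity = sorted(risks, key=lambda r: r.severity, reverse=True)

# CSDDD-style ordering: severity weighted by likelihood.
by_severity_and_likelihood = sorted(
    risks, key=lambda r: r.severity * r.likelihood, reverse=True
)

for r in by_severity_and_likelihood:
    print(f"{r.name}: weighted score = {r.severity * r.likelihood:.2f}")
```

Note that the two orderings can disagree: a severe but unlikely harm tops the UNGP-style list, while a moderately severe but very likely harm can rise to the top under the likelihood-weighted view, which is exactly the difference Dr. Lane describes.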

The interview with Dr. Lane provided NamSor with an overview of the approaches to regulating AI in order to mitigate its implications for human rights. In relation to the approach taken in the AI Act, the discussion is still ongoing, but “the proposed AIA also includes a legal basis for the processing of sensitive data to detect, monitor and correct bias that could lead to discrimination.” Art. 10(5) provides that:

“To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued.”

This particular provision would allow a company like NamSor to develop tools to independently assess the biases of black-box algorithms with respect to gender, ethnicity or country of origin. Such tools can be used in the risk assessment process to evaluate the human rights risks of a particular AI product.
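As an illustration, here is a minimal sketch of what such an external audit could compute. Both helpers are hypothetical placeholders, not real APIs: `model_predict` stands for the black-box decision system under audit, and `infer_group` for a name-based classifier of the kind NamSor provides.

```python
# A minimal sketch of an external bias audit of a black-box system.
# `model_predict` and `infer_group` are hypothetical placeholders.
from collections import defaultdict

def disparate_impact(records, model_predict, infer_group):
    """Return each group's positive-outcome rate relative to the best-off group."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for record in records:
        group = infer_group(record["name"])  # e.g. inferred gender or origin
        totals[group] += 1
        if model_predict(record):            # the black-box decision under audit
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    reference = max(rates.values())
    return {g: rate / reference for g, rate in rates.items()}
```

One common benchmark for reading such ratios is the “four-fifths rule”, under which a group whose ratio falls below 0.8 is flagged for closer review; whether that threshold is appropriate for a given product is itself a question for the risk assessment.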

Linda Ardenghi

About NamSor

NamSor™ Applied Onomastics is a European vendor of sociolinguistics software (NamSor sorts names). NamSor's mission is to help understand international flows of money, ideas and people. We proudly support Gender Gap Grader.
