An allegory of Justice with the European flag (generated by DALL-E Generative AI)

Legal protection against biases in AI: the US vs Europe

Cases of bias in AI are becoming increasingly prevalent, and their dangers can be significant. Biased AI systems can lead to unfair and unjust outcomes, such as denying individuals access to jobs, housing, and other opportunities based on their race, gender, or other protected characteristics. Biased AI can also perpetuate and amplify existing societal inequalities and discrimination.
The most notorious cases occurred in the US: Amazon's recruitment algorithm, which discriminated against female applicants, and the COMPAS risk assessment algorithm, which disproportionately labelled Black defendants as likely recidivists. In Europe, the most notorious case is the Dutch SyRI case, in which the authorities used the System Risk Indication (SyRI) algorithm to identify potential social welfare fraud; the algorithm wrongly flagged people with an immigrant background as fraudsters.

What legal protection, then, do the US and Europe offer against biased algorithms?

In the United States, there is currently no specific federal legal framework addressing bias in AI, but a number of existing laws and regulations can be applied to it, notably those prohibiting discrimination in credit, employment, and housing. For example, the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act (FHA) prohibit credit and housing discrimination, respectively, based on certain protected characteristics, including race, color, religion, and national origin. Nevertheless, these laws fail to address the peculiar challenges that biased AI poses, as they were designed to tackle more ‘traditional’ forms of discrimination. To fill these gaps, some states and municipalities have implemented their own laws and regulations aimed at addressing bias in AI. For example, New York City has passed a law requiring certain city agencies to disclose information about their use of automated decision systems, and California has passed a law requiring companies to disclose certain information about their use of AI.

However, while these laws and regulations provide some protection against discrimination and bias, their application to AI is still evolving.

As in the US, discrimination caused by AI in Europe can be addressed through anti-discrimination law. For the member states of the European Union, the Gender Equality Directive (Directive 2006/54/EC) is one of the main instruments: it requires member states to ensure equal treatment of men and women in matters of employment and occupation. Within the Council of Europe, the European Convention on Human Rights also provides a legal means of protecting individuals from biased AI.

However, unlike its overseas partner, Europe has focused on the specific human rights challenges posed by AI and has developed international standards that are now considered a benchmark worldwide. In particular, the European Union adopted the General Data Protection Regulation (GDPR), which requires that data used in AI systems be collected and processed in a way that is fair, transparent, and respectful of individuals’ rights. The GDPR also requires that individuals be informed about the processing of their data, including its use in AI systems, and grants them the right to access, rectify, and delete that data.

Additionally, the EU Artificial Intelligence Regulation (the ‘AI Act’), currently under discussion, aims to establish a comprehensive framework for the regulation of AI in the EU, with a specific focus on addressing biases in AI systems. The regulation would require organizations to conduct impact assessments to identify and address potential biases in AI systems, and to implement measures to prevent such biases from harming individuals.

In conclusion, while the US has no specific federal law addressing bias in AI, existing laws cover particular types of discrimination, such as in credit and housing, though they may not offer sufficient protection against the challenges AI poses. In Europe, the legal framework is more developed and advanced: the most important instrument is the GDPR, to be joined, if approved, by the upcoming EU Artificial Intelligence Regulation.

About NamSor

NamSor™ Applied Onomastics is a European vendor of sociolinguistics software (NamSor sorts names). We proudly support Gender Gap Grader.