We’ve used the DALL-E text-to-image AI to generate portraits based on names shared by a large number of people. This is the second blog post in our series tagged #thisnamedpersondoesnotexist, exploring how text-to-image AIs interpret personal names. We believe this project can illustrate the complexity of personal name interpretation, at the crossroads of ethnography, sociology, sociolinguistics, geography, history and, more recently, machine learning.
Here are two portraits generated by DALL-E based on the prompt “A portrait of FATIMATA SAWADOGO”.
This Adama SAWADOGO does not exist
We’ve also used the prompt “A portrait of Adama SAWADOGO” to generate a male face; interestingly, the AI chose a different style altogether.
Machine Learning Fairness and AI Ethics implications
This is quite impressive, isn’t it? But if a “black box” AI can interpret a name so well, it can also lead to disastrous effects when machine learning is applied to decision making, with potentially discriminatory outcomes. At NamSor, we recognize the incredible complexity of biases in machine learning (gender bias, racial bias, biases based on an individual’s country of origin, and others). We believe the effect of ML on real people’s lives should be evaluated, to ensure fairness and equal opportunity for all. Statistical fairness evaluation tools, such as the University of Chicago’s open-source tool Aequitas, or diversity indexes applied to bio-cultural diversity (Shannon index, Simpson index), could be used to define AI fairness quality labels.
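To make the two diversity indexes concrete, here is a minimal sketch of how they could be computed over a list of categorical labels (for example, predicted gender or origin classes for a sample of names). This is an illustrative example, not part of NamSor’s or Aequitas’s actual tooling; the function name and sample data are hypothetical.

```python
from collections import Counter
from math import log

def diversity_indexes(labels):
    """Compute the Shannon index H = -sum(p_i * ln p_i) and the
    Simpson index D = sum(p_i^2) for a list of categorical labels.
    Higher H means more diverse; higher D means more concentrated."""
    counts = Counter(labels)
    n = len(labels)
    proportions = [c / n for c in counts.values()]
    shannon = -sum(p * log(p) for p in proportions)
    simpson = sum(p * p for p in proportions)
    return shannon, simpson

# Hypothetical samples: a balanced population vs. a highly skewed one.
balanced = ["A", "B", "C", "D"] * 25          # four equally frequent groups
skewed = ["A"] * 97 + ["B", "C", "D"]         # one group dominates

print(diversity_indexes(balanced))  # Shannon = ln(4) ≈ 1.386, Simpson = 0.25
print(diversity_indexes(skewed))    # lower Shannon, Simpson close to 1
```

A fairness quality label could, for instance, compare such indexes between a model’s training data and the population it serves.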
This article was also freely adapted into Spanish.
NamSor™ Applied Onomastics is a European vendor of sociolinguistics software (NamSor sorts names). NamSor’s mission is to help understand international flows of money, ideas and people. We proudly support Gender Gap Grader.