New research used the NamSor Gender API to assign a likely gender to authors in The Lancet Global Health scientific journal. The findings show significant gender differences in authorship, along with geographic disparities.
Gendered geography: an analysis of authors in The Lancet Global Health
Academic career advancement is largely driven by peer-reviewed research, with the number of publications and author rank representing important measures of distinction and productivity. There is, however, a persistent gender gap in academic publishing; although authorship by women has risen substantially since the 1960s and the raw publication count is becoming increasingly equal between women and men, men still dominate the coveted first and last author positions, along with single-authored papers, and women are still in the minority as authors. An analysis of author gender in The Lancet journals, for example, found that only a third of all authors were women.
In the field of global health, authors from low-income and middle-income countries (LMICs) are known to be underrepresented, but the role of gender and its interaction with geography among publications within the field remains poorly understood. We did an automated bibliometric analysis by extracting the full name, author rank, and country affiliation for the authors of articles published in The Lancet Global Health (excluding corrections and editorials) from its launch (June 1, 2013) to Dec 1, 2018. Full names were used to approximate the author genders using NamSor, an automated gender-matching software program. Country affiliations were extracted from the author affiliations and matched to the 2018 World Bank income classification system for countries. If authors reported institutional affiliations in more than one country, for country association calculations we counted authors in each of their reported countries. Author rank was determined on the basis of the order in which the authors were listed in the manuscript. […]
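The tallying step described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the study's actual code: the sample records and the toy name-to-gender lookup are invented stand-ins (the study used the NamSor API for gender inference).

```python
from collections import Counter

# Hypothetical sample records: each article is an ordered author list of
# (full_name, country_income_group) tuples, mirroring the extracted fields.
articles = [
    [("Alice Smith", "High income"), ("John Doe", "Low income"),
     ("Maria Rossi", "High income")],
    [("Wei Zhang", "Upper middle income"), ("Jane Brown", "High income")],
]

def infer_gender(full_name):
    """Toy stand-in for an automated gender-matching service."""
    lookup = {"Alice": "female", "Maria": "female", "Jane": "female",
              "John": "male", "Wei": "male"}
    return lookup.get(full_name.split()[0], "unknown")

# Count inferred genders in the first and last author positions,
# since these are the positions the analysis focuses on.
position_counts = Counter()
for authors in articles:
    for rank, (name, _country) in enumerate(authors):
        gender = infer_gender(name)
        if rank == 0:
            position_counts[("first", gender)] += 1
        if rank == len(authors) - 1:
            position_counts[("last", gender)] += 1
```

The same loop extends naturally to cross-tabulating author position by the World Bank income group of the affiliation country.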
The full text is available here.
This innovative research was well received by the Global Health community.
Gender imbalance in medical research may be one of the historical causes of a lack of interest in women-specific health issues. Medical trials, too, can be biased, as revealed in Gabrielle Jackson’s well-documented article in The Guardian, The female problem: how male bias in medical trials ruined women’s health.
And since artificial intelligence (and supervised machine learning in particular) tends to reproduce such biases, any A.I. trained on historical data will tend to ignore women-specific health issues as well.
We are continuously improving NamSor Gender API to provide research-grade name gender inference. Our latest release (v2.0.7) includes an improvement in how we report the Score: we now normalize the score to a calibrated probability. The updated documentation pre-print can be found on ResearchGate (DOI: 10.13140/RG.2.2.11516.90247).
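To make the idea of a calibrated probability concrete, here is a minimal sketch using logistic (Platt-style) scaling, one common way to map a raw classifier score onto a probability. The parameters and the method itself are illustrative assumptions; NamSor's actual calibration procedure may differ.

```python
import math

def calibrate(raw_score, a=1.5, b=0.0):
    """Map an unbounded raw score to a probability in (0, 1) via a
    logistic transform. The coefficients a and b are illustrative;
    in practice they would be fitted on labelled validation data so
    that, e.g., scores mapped to 0.9 are correct about 90% of the time."""
    return 1.0 / (1.0 + math.exp(-(a * raw_score + b)))

p = calibrate(2.0)  # a strongly positive raw score maps close to 1.0
```

A calibrated probability is directly interpretable by downstream users, unlike a raw score whose scale is model-specific.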
But statistical reporting is not enough when it comes to addressing A.I. gender biases in matters as critical as Global Health. What we would like to design is a new online service that will formally quantify human diversity in population samples. It will help externally test any machine learning (ML) algorithm for gender, racial, or ethnic biases. This new API will be designed for the algorithm developers themselves and will be relevant to any funnel-like process (for example a medical trial, but also other business processes such as grant allocation, start-up financing, credit checking, recruitment, …)
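One simple way such a funnel test could work is to compare per-group selection rates between a funnel's input and its output. The sketch below is our own illustration of that idea using the "four-fifths rule" heuristic from employment-discrimination analysis; the function names, numbers, and thresholds are assumptions, not the planned API's actual design.

```python
def selection_rates(applicants, selected):
    """Per-group selection rate: selected[g] / applicants[g]."""
    return {g: selected.get(g, 0) / n for g, n in applicants.items() if n}

def disparate_impact_ratio(rates, reference_group):
    """Ratio of each group's selection rate to the reference group's.
    Under the 'four-fifths rule' heuristic, ratios below 0.8 are
    commonly flagged as potential adverse impact."""
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical funnel: who entered vs. who passed a selection stage.
applicants = {"women": 200, "men": 300}
selected = {"women": 30, "men": 90}

rates = selection_rates(applicants, selected)
ratios = disparate_impact_ratio(rates, "men")
flagged = [g for g, r in ratios.items() if r < 0.8]
```

The same comparison applies stage by stage to any funnel-like process, whether a medical trial enrollment pipeline or a recruitment workflow.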
We’ve drafted the design document: NamSor_DiversityMetrics_AI_Biases_Estimation_v002 (PDF)
Our current target is to have a BETA version available early in January 2020 and to launch this new service just before the OECD Forum in Paris.
NamSor™ Applied Onomastics is a European vendor of sociolinguistics software (NamSor sorts names). NamSor’s mission is to help understand international flows of money, ideas and people. We proudly support Gender Gap Grader.
Reach us at: email@example.com