The world of artificial intelligence (AI) is evolving at a breakneck pace, transforming industries from healthcare to finance. But as AI systems become more integrated into our daily lives, questions about fairness, representation, and bias are more important than ever. Who is shaping the future of AI research? Are diverse voices and perspectives being included in the conversation?
A recent study published in Acta Radiologica Open—titled “Mapping gender and geographic diversity in artificial intelligence research: Editor representation in leading computer science journals”—sheds light on an often-overlooked aspect of AI development: the composition of editorial boards in top computer science and AI journals.
Why Editorial Boards Matter
Editorial boards play a crucial role in shaping the direction of scientific research. They decide which studies get published, influence research priorities, and set the tone for academic discourse. If these boards lack diversity, there’s a risk that certain perspectives, questions, or even entire fields of inquiry could be overlooked. This isn’t just about fairness—it’s about ensuring that AI technologies are developed with a broad range of insights, reducing the risk of bias and increasing their relevance to global populations.
The Study’s Findings
The study analyzed the gender and geographic distribution of over 4,900 editorial board members across 75 leading AI and computer science journals. Here are some of the most intriguing takeaways:
- Gender Representation: Women made up only 17% of editorial board members, with even fewer in leadership roles like editor-in-chief (14%). This underrepresentation was consistent across most journals, though a few stood out for having more balanced boards.
- Geographic Diversity: Editors were predominantly affiliated with institutions in the U.S., U.K., and China, which together accounted for 50% of all editors. When adjusted for population size, smaller countries like Australia, Finland, and Estonia had some of the highest representations of women editors per million women.
- Impact on Research: The study found a weak but positive correlation between the proportion of women on editorial boards and the journals’ SCImago Journal Rank (SJR) indicator, suggesting that diversity might be linked to journal prestige and influence.
The Role of Data in Understanding Diversity
One of the most interesting aspects of this study is its methodology. To determine the gender of editorial board members, the researchers used the NamSor API, a name-analysis tool that infers likely gender from a person's name and geographic context. This approach allowed them to analyze a large dataset efficiently, providing a snapshot of gender diversity across the AI research landscape.
NamSor's API is particularly useful for large-scale studies like this one. It can process thousands of names quickly, making it easier to identify patterns and trends in gender representation. While no name-based tool is perfect—especially when it comes to non-binary identities or culturally ambiguous names—NamSor provides a practical way to start conversations about diversity and inclusion in fields where data is often scarce.
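As an illustrative sketch (not the study's actual pipeline), a single gender lookup against NamSor's public v2 REST API might look like the following. The endpoint path and `X-API-KEY` header follow NamSor's documented API; the helper function name and the demo key are hypothetical:

```python
# Illustrative sketch: building a request to the NamSor v2 REST API
# to infer likely gender from a name. The endpoint and header follow
# NamSor's public documentation; the helper name and key are placeholders.
import urllib.parse

NAMSOR_BASE = "https://v2.namsor.com/NamSorAPIv2/api2/json"

def build_gender_request(first_name: str, last_name: str, api_key: str):
    """Return the URL and headers for a single name-to-gender query."""
    url = "{}/gender/{}/{}".format(
        NAMSOR_BASE,
        urllib.parse.quote(first_name),
        urllib.parse.quote(last_name),
    )
    headers = {"X-API-KEY": api_key, "Accept": "application/json"}
    return url, headers

# The actual network call (requires the `requests` package and a real key):
# import requests
# url, headers = build_gender_request("Ada", "Lovelace", "YOUR_KEY")
# resp = requests.get(url, headers=headers, timeout=10)
# print(resp.json().get("likelyGender"))  # a gender label plus a probability

url, headers = build_gender_request("Ada", "Lovelace", "demo-key")
print(url)
```

NamSor also exposes country-aware endpoints that take a geographic hint alongside the name, which is closer to how the study describes combining names with geographic context.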
Geographic Insights: Beyond the Usual Suspects
The study also highlighted geographic disparities. Most editors were based in the Global North, with very few from the Global South. This imbalance matters because AI technologies developed in one part of the world may not always translate well to others. For example, an AI model trained on medical images from the U.S. might not perform as well in a country with different healthcare conditions or demographics. By mapping the geographic distribution of editors, the study underscores the need for more inclusive representation to ensure AI solutions are globally relevant.
Why This Matters for AI’s Future
Diversity in AI research isn’t just a box to check—it’s a necessity. AI systems are only as good as the data they’re trained on and the perspectives that guide their development. If editorial boards—and by extension, the research they publish—are dominated by a narrow group of people, the resulting technologies may inadvertently reflect biases or overlook the needs of underrepresented communities.
A Call for Curiosity and Action
This study doesn’t claim to have all the answers, but it does raise important questions:
- How can journals encourage more diverse representation on their editorial boards?
- What steps can researchers take to ensure their work is inclusive and globally relevant?
- How might tools like the NamSor API help organizations track and improve diversity over time?
As AI continues to shape our world, studies like this remind us that the people behind the technology matter just as much as the algorithms themselves. Whether you’re a researcher, a policymaker, or simply someone interested in the future of AI, these findings invite us all to think critically about who is at the table—and who might be missing.
What do you think? Should journals be doing more to promote diversity in their editorial boards? How can we ensure that AI research reflects the needs of a global population? Share your thoughts in the comments!
Credits: summarization and illustration by LeChat MistralAI (Pro Version)
About NamSor
NamSor™ Applied Onomastics is a European vendor of sociolinguistics software (NamSor sorts names). NamSor's mission is to help understand international flows of money, ideas, and people. We proudly support Gender Gap Grader.