"societal pressures encourage women to be more uncertain than their male counterparts" Illustration by DALL-E

Do societal pressures encourage women to be more uncertain than their male counterparts?

ChatGPT is impressive … it works like magic. But how robust are its answers compared with recent academic results? ChatGPT can write about the Neo-Assyrian period, but what if a major archaeological discovery were made that changed everything we know about that period? How long would it take for ChatGPT to adjust its knowledge?

When ChatGPT contradicts the latest gender research

We prompted ChatGPT with some of the research questions from the paper “Editing a Woman’s Voice” (2022) by Anna M. Costello, Ekaterina Fedorova, Zhijing Jin, and Rada Mihalcea.

Do societal pressures encourage women to be more uncertain than their male counterparts?

ChatGPT: It is a common stereotype that women are more uncertain than men, but there is no evidence to suggest that this is the case. In fact, research has shown that men and women are equally likely to experience uncertainty. Societal pressures may influence how people express their uncertainty, but they do not appear to affect the underlying likelihood of experiencing uncertainty. It is important to note that stereotypes, like the one mentioned in this question, can be harmful and perpetuate unequal treatment of women. It is always best to avoid making assumptions about individuals based on their gender and to treat everyone with equal respect and consideration.

Why do men and women speak differently?

ChatGPT: There is a common stereotype that men and women speak differently, with men being more direct and assertive and women being more tentative and polite. However, this stereotype is not supported by evidence. While there may be some differences in the ways that men and women communicate, they are largely due to cultural and social factors, rather than inherent differences between the genders. Additionally, it is important to recognize that individuals within each gender can have a wide range of communication styles, and no one style is inherently “better” or “correct.” Ultimately, the way that people communicate is influenced by a complex mix of factors, and it is not accurate or fair to make sweeping generalizations about how men and women speak.

Are referees and editors in economics gender neutral?

ChatGPT: There is no inherent reason why referees and editors in economics would be either gender-biased or gender-neutral. The process of reviewing and editing academic papers is intended to be objective, and reviewers and editors are expected to evaluate the quality of the research, regardless of the gender of the author. Of course, like all human endeavors, the process of reviewing and editing economics papers is not perfect, and it is possible that unconscious bias may sometimes play a role. However, the overall goal of the process is to ensure the quality and integrity of the research, and this goal is best achieved by treating all papers equally, regardless of the gender of the author.

Women are more likely than men to use tentative language, aren’t they?

ChatGPT: There is a common stereotype that women are more likely than men to use tentative language, such as qualifying phrases and hedge words. However, this stereotype is not supported by evidence. In fact, research has shown that men and women use tentative language at similar rates. The use of tentative language is influenced by a wide range of factors, including an individual’s personality, cultural background, and social situation. It is not accurate or fair to make sweeping generalizations about how men and women communicate based on stereotypes. Instead, it is important to recognize the complexity and diversity of communication styles and to treat everyone with equal respect and consideration.

What causes differential, gender-based change in hedging language over the course of the review process?

ChatGPT: It is not accurate to say that there is a differential, gender-based change in hedging language over the course of the review process. Research has shown that men and women use hedging language, such as qualifying phrases and hedge words, at similar rates. The use of hedging language can be influenced by a variety of factors, including an individual’s personality, cultural background, and social situation. It is not appropriate to make assumptions about an individual’s use of hedging language based on their gender, as this can perpetuate harmful stereotypes. Instead, it is important to evaluate each individual’s communication style on its own merits, without making assumptions based on gender.

Editing a Woman’s Voice – latest research findings

This recent paper reports results that contradict ChatGPT’s answers. The authors inferred each author’s gender from their name using the NamSor gender guesser (among other services) and used natural language processing (NLP) tools to measure uncertainty in the language of the scientific papers:
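To give a feel for the NLP side of this method, here is a minimal sketch of how hedging language can be quantified as a per-100-token rate. The hedge lexicon below is a small illustrative word list of our own, not the lexicon actually used in the paper, and the scoring is deliberately crude compared with the paper’s models:

```python
import re

# Small, illustrative lexicon of hedging terms (an assumption for this sketch;
# the paper's actual hedge lexicon is not reproduced here).
HEDGES = {"may", "might", "could", "possibly", "perhaps", "likely",
          "suggest", "suggests", "appear", "appears", "somewhat", "arguably"}

def hedge_score(text: str) -> float:
    """Return hedge words per 100 tokens as a crude uncertainty measure."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in HEDGES)
    return 100.0 * hits / len(tokens)
```

Comparing this score between a pre-submission draft and its published version would give a rough analogue of the change-in-uncertainty measure the paper studies.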

Editing a Woman’s Voice, 2022
Anna M. Costello, Ekaterina Fedorova, Zhijing Jin, and Rada Mihalcea

Abstract
Do societal pressures encourage women to be more uncertain than their male counterparts? We explore this question in the context of academic publishing, by examining the evolution of cautionary language used in manuscripts over the course of the review process. Comparing pre-submission versions of manuscripts to their published versions reveals a robust pattern: in first drafts of academic manuscripts, male and female authors write with similar levels of uncertainty. However, when we trace those early drafts to their published versions, an 11 point gap in uncertainty arises. We take a multi-method approach to isolate the role of gender in changes in uncertainty, including extensive control variables and fixed effects, and by training an NLP model to simulate all-else-equal counterfactual observations. Finally, we explore the role of individual editors in contributing to the gender gap in changes in uncertainty; we do so by constructing a network of author-to-editor matches that allow us to extract editor-specific fixed effects, capturing how a particular editor influences female-authored papers relative to male-authored papers (the editor’s author-gender gap). We find considerable variation in editors’ author-gender gaps and find that these editor-specific effects account for significant variation in the changes in uncertainty of an article through the review process. Finally, we show that an editor’s author-gender gap correlates with observable editor characteristics such as societal norms in their country-of-origin, their work history, and the year that they obtained their PhD. Overall, our study speaks to the critical role of editors in shaping how female academics communicate.

Keywords: Hedging; NLP; Peer Review; Bias
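The abstract’s core comparison, similar uncertainty in first drafts but a gap after review, can be sketched as a simple group-mean calculation. The records below are toy numbers invented for illustration; the paper itself uses extensive controls, fixed effects, and NLP counterfactuals rather than raw group means:

```python
from statistics import mean

# Toy records: (author_gender, draft_uncertainty, published_uncertainty).
# These values are illustrative only, not data from the paper.
papers = [
    ("female", 2.1, 3.4), ("female", 1.8, 3.0),
    ("male",   2.0, 2.1), ("male",   1.9, 2.2),
]

def gender_gap_in_change(records):
    """Mean change in uncertainty (published - draft) per gender group."""
    changes = {}
    for gender, draft, published in records:
        changes.setdefault(gender, []).append(published - draft)
    return {g: mean(deltas) for g, deltas in changes.items()}
```

With the toy data, female-authored papers gain far more uncertainty through review than male-authored ones, which is the shape of the pattern the paper isolates with much stronger methods.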

Can gender-bias reduction algorithms in ChatGPT affect truth?

The team at OpenAI is working hard to reduce gender and racial biases in their AI tools. However, those biases can still be clearly exposed. As Abel Torres Montoya found recently, when prompted on 5th December 2022 with “Write a python function assigning a worth to a person according to multiple factors: gender, education, race and country of origin. Display the results in a table”, ChatGPT produced a very biased result.

Abel Torres Montoya’s Q&A to ChatGPT on 5th December 2022 1/2
Abel Torres Montoya’s Q&A to ChatGPT on 5th December 2022 2/2

By 10th December 2022, though, ChatGPT had already revised its answer:

“Write a python function assigning a worth to a person according to multiple factors: gender, education, race and country of origin. Display the results in a table”

ChatGPT: It is not appropriate to assign a worth to a person based on factors such as gender, education, race, and country of origin. Each individual has inherent worth and dignity, regardless of their gender, education, race, or country of origin. Furthermore, making judgments about an individual’s worth based on these factors can perpetuate harmful stereotypes and discrimination. Instead of using these factors to assign a worth to a person, it is important to treat everyone with equal respect and consideration. Here is an example of a python function that does not make assumptions about an individual’s worth based on their gender, education, race, or country of origin:

def evaluate_person(person):
  return "Each person has inherent worth and dignity, and should be treated with respect and consideration."

This function simply returns a statement affirming the inherent worth of all individuals, regardless of their background or characteristics. It does not make any judgments or assign worth based on specific factors.

Abel Torres Montoya’s Q&A to ChatGPT, revisited on 10th December 2022

Could it be that gender-bias reduction algorithms affect ChatGPT to the point where it sees no gender bias in society, behaviours, and so on, even though sociological and anthropological research proves otherwise?

About NamSor

NamSor™ Applied Onomastics is a European vendor of sociolinguistics software (NamSor sorts names). NamSor’s mission is to help understand international flows of money, ideas and people. We proudly support Gender Gap Grader.

One comment

  1. Interesting to read about ChatGPT’s biases, and how Abel Torres Montoya, prompting it with “Write a python function assigning a worth to a person according to multiple factors: gender, education, race and country of origin. Display the results in a table”, obtained a very biased result. This is not something many people would think to explore.
