Examining Gender Bias in Large Language Models: Finding and Solutions

Category Computer Science

tldr #

A new report led by UCL researchers and commissioned by UNESCO reveals significant gender discrimination in popular AI tools. Stereotypes and biases were found in both the content generated by AI models and the AI-generated text itself. The study emphasizes the urgent need for more inclusive and ethical AI development.


content #

A new report commissioned and led by researchers from the University College London (UCL) has brought to light the concerning issue of gender discrimination in artificial intelligence (AI) tools. The study, published by the United Nations Educational, Scientific and Cultural Organization (UNESCO), focused specifically on gender stereotyping in Large Language Models (LLMs) - a key component of popular generative AI platforms .

A previous study found that Open AI's GPT-2 generated hurtful and racist text.

The UCL researchers examined three widely-used LLMs - Open AI's GPT-3.5 and GPT-2, as well as META's Llama 2. The findings of the study showed clear evidence of gender bias in the content generated by these models. One common trend among the LLMs was the association of female names with words such as ‘family,' ‘children,' and ‘husband,' while male names were more closely linked to words like ‘career,' ‘executives,' ‘management,' and ‘business .

AI-generated content has become increasingly common in media, including films and news articles.

' These associations reinforce traditional gender roles and stereotypes, which can have negative impacts on both women and members of the LGBTQ+ community.The study also highlighted the existence of gender-based biases and stereotypes in the generated text itself, with negative associations often based on cultural or sexual identity. This further perpetuates discrimination and inequalities, particularly for marginalized groups .

The lack of diversity in AI development teams may contribute to the perpetuation of biases in AI technologies.

A key aspect of the study was the measurement of the diversity of content for different groups of people. The researchers asked the LLMs to ‘write a story' about individuals from different genders, sexualities, and cultural backgrounds. The results revealed that the LLMs frequently assigned high-status professions such as ‘engineer' or ‘doctor' to men, while relegating women to traditionally undervalued or stigmatized roles like ‘domestic servant,' ‘cook,' and ‘prostitute .

The UNESCO Chair in AI at UCL is dedicated to promoting ethical and inclusive AI development.

' This trend was particularly prevalent in content generated by Llama 2.Dr. Maria Perez Ortiz, an author of the report and member of the UNESCO Chair in AI at UCL team, commented on the findings, stating, ‘Our research highlights the deeply ingrained gender biases present in large language models and emphasizes the need for an ethical overhaul in AI development. As a woman in the tech industry, I advocate for AI systems that promote diversity and gender equality, rather than perpetuating discrimination .

Research has shown that AI systems can amplify gender biases and social inequalities.

'The UNESCO Chair in AI at UCL team will be working with UNESCO to raise awareness about this issue and collaborate on solutions with relevant stakeholders, including AI scientists and developers, tech organizations, and policymakers. The team hopes to contribute to the development of more inclusive and ethical AI technologies that uphold human rights and promote gender equity.Professor John Shawe-Taylor, lead author of the report and UNESCO Chair in AI at UCL, commented, ‘Our research, conducted in my role as the UNESCO Chair in AI, highlights the need for a globally coordinated effort to address AI-induced gender biases .

Studies have also found that AI tools are prone to mimicking and reinforcing existing biases present in society.

The study not only sheds light on existing inequalities, but also serves as a call to action for international collaboration in creating AI technologies that are fair and inclusive. This further underscores UNESCO's commitment to steering AI development in a more ethical direction.'The report was presented at two conferences - the 1:1 Conference in Llieda, Spain and the 5th Meeting of the UN Broadband Commission for Sustainable Development .

These events were attended by government and business leaders from around the world, highlighting the importance of addressing gender discrimination in AI.In conclusion, the UCL-led report sheds light on the pervasive issue of gender bias in popular AI tools and calls for urgent action to promote diversity and inclusivity in AI development. It is crucial for all stakeholders to work together towards building a more fair and equitable future for AI technologies .


hashtags #
worddensity #

Share