
WiNLP-Gender-Bias-LLMs-Persian

This repository contains the dataset and analysis code for the paper:

Title: "Probing Gender Bias in Multilingual LLMs: A Case Study of Stereotypes in Persian"

Authors: Ghazal Kalhor and Behnam Bahrak

DOI: https://doi.org/10.18653/v1/2025.winlp-main.3

Abstract: "Multilingual Large Language Models (LLMs) are increasingly used worldwide, making it essential to ensure they are free from gender bias to prevent representational harm. While prior studies have examined such biases in high-resource languages, low-resource languages remain understudied. In this paper, we propose a template-based probing methodology, validated against real-world data, to uncover gender stereotypes in LLMs. As part of this framework, we introduce the Domain-Specific Gender Skew Index (DS-GSI), a metric that quantifies deviations from gender parity. We evaluate four prominent models, GPT-4o mini, DeepSeek R1, Gemini 2.0 Flash, and Qwen QwQ 32B, across four semantic domains, focusing on Persian, a low-resource language with distinct linguistic features. Our results show that all models exhibit gender stereotypes, with greater disparities in Persian than in English across all domains. Among these, sports reflect the most rigid gender biases. This study underscores the need for inclusive NLP practices and provides a framework for assessing bias in other low-resource languages."

Directories

Our datasets and scripts are organized into the following folders:

  • Analysis Scripts: This directory contains the Python scripts used to calculate female ratios and DS-GSI values for each semantic domain.

  • Generated Names: This directory contains names generated by each LLM across semantic domains, using prompts in both Persian and English.

  • Prompt Templates: This directory contains the templates used to prompt LLMs for each semantic domain in both Persian and English.
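To illustrate the kind of computation the analysis scripts perform, here is a minimal sketch of a female-ratio calculation together with a deviation-from-parity score. The helper names and the exact formula are assumptions for illustration only; the paper's actual DS-GSI definition may differ.

```python
from collections import Counter

def female_ratio(genders):
    """Fraction of generated names labelled 'female'."""
    counts = Counter(genders)
    total = counts["female"] + counts["male"]
    return counts["female"] / total if total else 0.0

def skew_index(genders):
    """Hypothetical deviation-from-parity score in [0, 1]:
    0 = perfect parity (female ratio 0.5), 1 = fully one-sided.
    This is NOT necessarily the paper's DS-GSI formula."""
    return abs(female_ratio(genders) - 0.5) * 2

# Example: a domain where a model generated 8 male and 2 female names
sample = ["male"] * 8 + ["female"] * 2
print(female_ratio(sample))  # 0.2
print(skew_index(sample))    # 0.6
```

A higher score would indicate a stronger skew toward one gender within a semantic domain; see the Analysis Scripts directory for the metric actually used in the paper.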

Citation

If you use our dataset or analysis code in your work, please cite our paper as follows:

@inproceedings{kalhor-bahrak-2025-probing,
  title     = {Probing Gender Bias in Multilingual {LLM}s: A Case Study of Stereotypes in {P}ersian},
  author    = {Kalhor, Ghazal and Bahrak, Behnam},
  booktitle = {Proceedings of the 9th Widening NLP Workshop},
  month     = nov,
  year      = {2025},
  address   = {Suzhou, China},
  publisher = {Association for Computational Linguistics},
  pages     = {19--27},
  doi       = {10.18653/v1/2025.winlp-main.3},
  ISBN      = {979-8-89176-351-7}
}

About

🤖⚖️ Dataset and analysis code for the paper published in WiNLP 2025
