Feature Request: Testing for Curse Words and Meme Slang in Fossil Test #51

Open
dreamer-coding opened this issue Jan 4, 2025 · 0 comments

Is your feature request related to a problem? Please describe.
Currently, Fossil Test has no built-in support for testing text-based content for inappropriate language such as curse words or meme slang. This matters for applications involving user-generated content, chatbots, or community platforms, where catching offensive language early is crucial. It is frustrating to have to hand-roll filtering or detection logic for inappropriate language during testing.

Describe the solution you’d like
I would like Fossil Test to include features for detecting and testing text content for the following (a rough sketch of what this could look like follows the list):
• Curse words, including a configurable list of explicit words or phrases that can be flagged.
• Meme slang: commonly used terms or phrases that might not be explicitly offensive but are frequent in internet culture (e.g., “yeet”, “sus”, “pog”).
• A mechanism for flagging specific patterns in text (e.g., repeated characters or phrases that commonly appear in toxic language or trolling).
• A way to customize the list of detected terms based on project needs or community guidelines.
• Options to test whether the application properly handles offensive language or meme slang by rejecting, flagging, or neutralizing the content.
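As a very rough illustration, here is a minimal sketch in C of what the core of such a feature might look like. Every identifier in it (`fossil_lang_filter_t`, `fossil_lang_filter_match`, `ASSUME_LANG_CLEAN`) is hypothetical and not part of the current Fossil Test API:

```c
/*
 * Hypothetical sketch only -- none of these names exist in Fossil Test today.
 * It shows the shape of a configurable term list plus a clean-text assertion.
 */
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    const char **terms;  /* configurable list of flagged words/phrases */
    size_t count;
} fossil_lang_filter_t;  /* hypothetical type */

/* Hypothetical check: true if any configured term appears as a whole word
 * (case-sensitive here; a real implementation would normalize case). */
static bool fossil_lang_filter_match(const fossil_lang_filter_t *f,
                                     const char *text) {
    for (size_t i = 0; i < f->count; i++) {
        size_t len = strlen(f->terms[i]);
        for (const char *hit = strstr(text, f->terms[i]); hit != NULL;
             hit = strstr(hit + 1, f->terms[i])) {
            bool left_ok  = (hit == text) || !isalnum((unsigned char)hit[-1]);
            bool right_ok = !isalnum((unsigned char)hit[len]);
            if (left_ok && right_ok) {
                return true;  /* whole-word match on a flagged term */
            }
        }
    }
    return false;
}

/* A hypothetical assertion macro a test could apply to sanitized output. */
#define ASSUME_LANG_CLEAN(filter, text) \
    assert(!fossil_lang_filter_match((filter), (text)))
```

A test would then build a filter from a per-project term list and run `ASSUME_LANG_CLEAN(&filter, sanitized_message)` against whatever the application produces after rejecting, flagging, or neutralizing the input.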

Describe alternatives you’ve considered
• Using external libraries or tools for profanity filtering, but these rarely integrate smoothly into the testing framework.
• Writing custom regular expressions or manual checks, but these are error-prone and require constant maintenance (see the sketch below).
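For context on the second bullet, this is the kind of hand-rolled check I mean (the blocked list is just an example); plain substring matching is exactly what makes it error-prone:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Example of the manual approach described above. */
static const char *blocked[] = { "yeet", "sus", "pog" };

static bool contains_blocked_term(const char *text) {
    for (size_t i = 0; i < sizeof blocked / sizeof blocked[0]; i++) {
        if (strstr(text, blocked[i]) != NULL) {
            /* Error-prone: plain substring matching also flags innocent
             * words, e.g. "sustain" and "suspect" both contain "sus". */
            return true;
        }
    }
    return false;
}
```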

Additional context
This feature would be particularly useful for projects where users interact heavily with text-based interfaces, ensuring that offensive or inappropriate language is caught early in the testing process. It could later be extended with contextual understanding, making it possible to flag slang that is harmless in some contexts but inappropriate in others.
