Is your feature request related to a problem? Please describe.
Currently, Fossil Test does not offer built-in support for testing text-based content for inappropriate language, such as curse words or meme slang. This is particularly important for applications involving user-generated content, chatbots, or community platforms, where preventing offensive language is crucial. I’m frustrated when I have to manually implement solutions to filter or identify inappropriate language during testing.
Describe the solution you’d like
I would like Fossil Test to include features for detecting and testing text content for the following (a rough sketch of what such a check might look like appears after the list):
• Curse words, including a configurable list of explicit words or phrases that can be flagged.
• Meme slang, such as terms or phrases that might not be explicitly offensive but are common in internet culture (e.g., “yeet”, “sus”, “pog”).
• A mechanism for flagging specific patterns in text (e.g., repeated characters or phrases that commonly appear in toxic language or trolling).
• A way to customize the list of detected terms based on project needs or community guidelines.
• Options to test whether the application properly handles offensive language or meme slang by rejecting, flagging, or neutralizing it.
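As a rough illustration only, here is a minimal sketch of the kind of check these features imply. Everything in it is hypothetical: `flagged_terms` and `text_contains_flagged_term` are illustrative names, not part of any existing Fossil Test API, and a real integration would presumably expose this through the framework's own assertion mechanism with a project-configurable term list.

```c
#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical, project-configurable term list -- illustrative
 * placeholders, not an existing Fossil Test feature. */
static const char *flagged_terms[] = { "yeet", "sus", "pog" };

/* Returns true if any flagged term appears in the text,
 * case-insensitively. */
static bool text_contains_flagged_term(const char *text) {
    char lowered[1024];
    size_t i = 0;
    while (text[i] != '\0' && i < sizeof(lowered) - 1) {
        lowered[i] = (char)tolower((unsigned char)text[i]);
        i++;
    }
    lowered[i] = '\0';
    for (size_t t = 0; t < sizeof(flagged_terms) / sizeof(flagged_terms[0]); t++) {
        if (strstr(lowered, flagged_terms[t]) != NULL) {
            return true;
        }
    }
    return false;
}

int main(void) {
    /* In a test, the assertion would be that the application rejects,
     * flags, or neutralizes input containing a flagged term. */
    const char *user_input = "that play was sus";
    printf("flagged: %s\n", text_contains_flagged_term(user_input) ? "yes" : "no");
    return 0;
}
```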
Describe alternatives you’ve considered
• Using external libraries or tools for profanity filtering, but these solutions often do not integrate smoothly into the testing framework.
• Writing custom regular expressions or manual checks, but these approaches can be error-prone and require constant maintenance (see the sketch below).
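For comparison, the manual regex workaround might look like the following, assuming a POSIX system for `regex.h`. The pattern is hand-maintained, which is exactly the burden in question: every new term means editing, recompiling, and re-verifying the regex.

```c
#include <regex.h>
#include <stdio.h>

int main(void) {
    regex_t re;
    /* Hand-maintained pattern covering a few slang terms; keeping it
     * current as new slang appears is the ongoing maintenance cost. */
    const char *pattern = "yeet|sus|pog";
    if (regcomp(&re, pattern, REG_EXTENDED | REG_ICASE | REG_NOSUB) != 0) {
        fprintf(stderr, "failed to compile pattern\n");
        return 1;
    }
    const char *input = "that play was sus";
    if (regexec(&re, input, 0, NULL, 0) == 0) {
        printf("flagged: \"%s\"\n", input);
    }
    regfree(&re);
    return 0;
}
```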
Additional context
This feature would be particularly useful for projects where user interaction with text-based interfaces is prevalent, ensuring that offensive or inappropriate language is caught early in the testing process. It could be extended to include contextual understanding, making it possible to flag slang that is harmless in some contexts but inappropriate in others.