Mislabeling of legitimate fixes and contributions as AI-generated spam, with no actual validation of the utility of a fix / improvement #809
Replies: 2 comments 9 replies
While I can somewhat agree that Fossify's management isn't good, I think you missed the point of why your contributions were dismissed.
Here you've just proved that your original solution was a quick-and-dirty fix. You shouldn't be surprised that code contributions like this get rejected. Fixes like this might fly in a startup project while you're crunching because the release is in an hour, but not in a slow-paced, stable project.
You created a PR for an issue that has an assignee, and you're surprised it was closed? I know there are hundreds of issues assigned to Naveen (which is, in my opinion, the primary example of bad management here), but as a contributor you should respect the assignment. If this were an issue assigned to me, I'd be, to put it mildly, irritated. You could have just written it up in the original bug report instead. Also, my personal opinion (not quoting any contribution guidelines): if something isn't my project or a project where I'm on the team, I shouldn't open a draft PR, because it blocks development and nobody on the core team knows whether it will ever be finished. PRs should only be submitted for finished work.
The contribution guidelines state it clearly, and every developer knows that unit tests themselves serve as documentation: the test name plus the arrange/act/assert structure should tell you everything about what's being tested. It really smells like GPT/Claude/Raptor/whatever. The other documentation comments in your PR also feel extraneous, like something an LLM generates. As a side note: I personally don't see anything bad in using AI for help while coding, but we should always review what it generates. Additionally, if we're working on someone else's project, we should respect the contribution guidelines. If they say "no AI", at least do your best to make it not look like AI.
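For anyone unfamiliar with the convention I'm referring to, here is a minimal, generic illustration (not taken from the Fossify codebase; the class and helper are made up) of a test whose name and arrange/act/assert layout document the behaviour on their own, with no extra prose needed:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

class TimeFormatterTest {
    @Test
    fun `formatSeconds pads single-digit minutes with a leading zero`() {
        // Arrange
        val seconds = 540 // 9 minutes

        // Act
        val formatted = formatSeconds(seconds)

        // Assert
        assertEquals("09:00", formatted)
    }

    // Hypothetical helper under test, included only so the example compiles.
    private fun formatSeconds(seconds: Int): String =
        "%02d:%02d".format(seconds / 60, seconds % 60)
}
```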
I didn't want to dignify your behavior even more with a response, but for the sake of clarity, here are your dismissed contributions:
So, they weren't "dismissed" just because they seem AI-generated, or because they're draft PRs, or because the issues have assignees (see this and this); they are being rejected for all of those reasons plus the ones mentioned above. I have nothing against AI itself, only against how you are using it. Personally, I have my fingers crossed for AGI by 2030 and O'Neill cylinders by 2080. Also:
I believe that @naveensingh has repeatedly mislabeled fixes / contributions that I have created as AI-generated spam. I am not saying AI was not used to create content (filling in issue forms, which they themselves have admitted they are too lazy to fill in: FossifyOrg/Clock#338) or for code generation in some cases; however, this is a standard part of a modern development workflow, and everything was checked with due diligence before the pull requests were created.
I drafted this pull request, FossifyOrg/Clock#364, where they stated that the unit test I created "is not testing the correct part of the code" that causes the bug reported in FossifyOrg/Clock#293.
The test function in the pull request, testBugScenario_StaleAlarmFromDBGetsRescheduledIncorrectly, aims to reproduce a bug that I have identified as partly caused by this function, because isToday is not recalculated: https://github.com/FossifyOrg/Clock/blob/main/app/src/main/kotlin/org/fossify/clock/helpers/AlarmController.kt#L37
The pull request introduces unit tests simulating the case where an alarm restored from the DB has isToday() returning false even though it should return true. This provides a consistent way to reproduce the issue.
I believe this unit test correctly identifies part of the issue, yet it was blindly dismissed as AI-generated spam when it adds significant value: it gives a consistent way to reproduce the issue and helps validate a fix. I opened the pull request as a DRAFT and would have been open to feedback to improve it or add further unit tests, but it was closed and rudely dismissed as incorrect without due diligence. Nowhere in the pull request did I claim that modifying rescheduleEnabledAlarms is where the fix should go. The fix should be a check somewhere that updates the isToday value on affected alarms before this function is called, but unit testing this area of the code and finding ways to consistently reproduce issues like this still adds significant value, and additional tests could validate the isToday update behaviour as well. When it comes to unit testing, the more meaningful coverage the better, and coverage is currently extremely low.
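For illustration only, here is a simplified, self-contained sketch of the scenario the test targets. This is not the code from the PR; the Alarm data class and isAlarmToday() helper below are stand-ins I'm inventing for this example, not the real project types:

```kotlin
import org.junit.Assert.assertFalse
import org.junit.Assert.assertTrue
import org.junit.Test
import java.util.Calendar

// Simplified stand-in for the project's alarm model (not the real class).
data class Alarm(
    val timeInMinutes: Int,  // minutes after midnight
    val isToday: Boolean     // persisted flag that can go stale in the DB
)

// Recomputes whether the alarm should still fire today from the current clock,
// instead of trusting the persisted isToday flag.
fun isAlarmToday(alarm: Alarm, now: Calendar): Boolean {
    val nowMinutes = now.get(Calendar.HOUR_OF_DAY) * 60 + now.get(Calendar.MINUTE)
    return alarm.timeInMinutes > nowMinutes
}

class StaleAlarmRescheduleTest {
    @Test
    fun `alarm restored from DB with stale isToday is recalculated before rescheduling`() {
        // Arrange: the row was persisted yesterday evening, so isToday = false,
        // but the alarm time (23:00) is still ahead of the current time (08:00).
        val restoredFromDb = Alarm(timeInMinutes = 23 * 60, isToday = false)
        val now = Calendar.getInstance().apply {
            set(Calendar.HOUR_OF_DAY, 8)
            set(Calendar.MINUTE, 0)
        }

        // Act: recompute the flag instead of trusting the stored value.
        val recalculated = isAlarmToday(restoredFromDb, now)

        // Assert: relying on the stale flag would skip today's alarm,
        // while recalculating it schedules the alarm correctly.
        assertFalse(restoredFromDb.isToday)
        assertTrue(recalculated)
    }
}
```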
For however long this project has existed, and even given how crucial a working, non-buggy alarm is for users, no one has decided that adding unit tests to validate this behaviour should be a priority. These bugs could literally result in people missing important events in their lives; this should be a priority. Similar logic was applied to another suggestion, #805, about adding a way to exclude the app from the Android battery-management functions that kill background apps (which would also result in missed alarms). @naveensingh again dismissed this suggestion with the justification that "it differs from vendor to vendor and gets messy over time", even though there is a consistent Android API for this that does not differ from vendor to vendor, described here: https://developer.android.com/training/monitoring-device-state/doze-standby#support_for_other_use_cases. The problem with this project is that no one person knows everything, and there should be a fair discussion before a conclusion is reached. Instead, if the owner disagrees or holds a different opinion from the original poster, the issue is closed instantly, even to the detriment of users.
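To be concrete about the vendor-independent path I mean, here is a minimal sketch using the standard battery-optimization exemption API from the page linked above. This is illustration only, not a proposed patch for Fossify Clock, and it assumes API 23+ and the REQUEST_IGNORE_BATTERY_OPTIMIZATIONS permission in the manifest:

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.PowerManager
import android.provider.Settings

// Checks whether the app is already exempt from battery optimizations and,
// if not, opens the system dialog asking the user to exempt it.
// Expects an Activity context; this request is also subject to Play Store policy.
fun requestIgnoreBatteryOptimizations(context: Context) {
    val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    if (!powerManager.isIgnoringBatteryOptimizations(context.packageName)) {
        val intent = Intent(Settings.ACTION_REQUEST_IGNORE_BATTERY_OPTIMIZATIONS).apply {
            data = Uri.parse("package:${context.packageName}")
        }
        context.startActivity(intent)
    }
}
```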
It also discourages what could prove to be valuable contributions over time, contributions that would add capacity for improvements across the Fossify product suite, which is a huge workload and difficult for one person to manage. I do give big props to @naveensingh for all the work done so far.
For the record, I have spent 20+ hours over the last few days getting to grips with the Fossify Clock codebase, setting up local build pipelines, and understanding, fixing, and testing various bugs, all of which has been met by someone blindly closing pull requests in the belief that I am just generating garbage with AI. I did this to ensure I have a working alarm that isn't going to fail for some weird reason, and to help improve what seems like a decent open-source project.
Similarly, the pull request FossifyOrg/Clock#363 was dismissed as "not being the correct fix", even though testing showed it to be sufficient, and no suggestion was provided as to what the "correct fix" might be. I have since taken a different approach in FossifyOrg/Clock#365 to fix the same issue, which provides an alternative solution (absolutely no AI was used in that case).
I am requesting a fair assessment of contributions, without this bias, and at least a meaningful justification when the owner believes a contribution is not worthwhile.