3 failing test examples that previously worked with AntiSamy 1.7.3. #388
Conversation
…rent for some tainted input strings than for the previous version of AntiSamy 1.7.3.
@spassarop - Are we going to do anything about this? Or did you already, like add notes to the README or something?
I did: #389 (comment). The parser keeps changing and we are dependent on their updates :(
@kwwall - I had a look at your test cases and made another fix for neko. The tag name parsing is now much closer to the spec/real browsers, and your test cases are now test cases for neko as well. See HtmlUnit/htmlunit-neko@55053e4. Additionally, a new neko 3.11.0-SNAPSHOT is available if you would like to test.
@spassarop - I tested the new 3.11.0-SNAPSHOT and AntiSamy fails 3 tests like so: [ERROR] Errors: ... Can you determine whether our test cases need to be changed to pass somehow, or is a fix needed in the NekoHtml updates? @rbri - Maybe you want to look into this too.
OK, I can reproduce this but I have to think about it...
It looks like, from our side, it's OK. I mean, our test outputs never produced that kind of exception before. Anyway, Ronald seems to have at least identified it.
The SNAPSHOT is updated again. Current status:
@kwwall - Your tests are still failing, but I used your test cases to improve the tag name scanning/parsing. For each of the three tests, the parsed DOM tree is now like in real browsers:
- testAntiSamyRegressionCDATAWithJavascriptURL
- testOnfocusAfterStyleClosing
- testScriptTagAfterStyleClosing
At least from my side, this is also the plan for the future: there is still a lot to do to get more and more in sync with the browser parsers.
@rbri - I looked at this and I'm fine with it as it is. I've already noted in our ESAPI release notes that sanitized results may be different than they previously were. As long as it sanitizes safely, it's all good, and it does that. I probably will rewrite these 3 JUnit tests to just confirm that the dangerous portions have been properly cleansed. I explained my motivation for noting this in issue #389 in #389 (comment).
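For what it's worth, a minimal sketch of what such a rewritten test might look like, assuming AntiSamy's standard scan API and JUnit 4. The test class name, payload, and policy file path below are illustrative only and are not taken from the actual ESAPI or AntiSamy test suites:

```java
import static org.junit.Assert.assertFalse;

import org.junit.Test;
import org.owasp.validator.html.AntiSamy;
import org.owasp.validator.html.Policy;

public class DangerousPayloadCleansedTest {

    // Hypothetical policy path; substitute whatever policy your tests actually load.
    private static final String POLICY_FILE = "antisamy-esapi.xml";

    @Test
    public void scriptAfterStyleClosingIsCleansed() throws Exception {
        // Hypothetical payload in the spirit of the failing tests; not copied from them.
        String tainted = "<style/>junk</style><script>alert(document.cookie)</script>";

        Policy policy = Policy.getInstance(POLICY_FILE);
        String clean = new AntiSamy().scan(tainted, policy, AntiSamy.DOM).getCleanHTML();

        // Assert that the dangerous portion is gone, rather than asserting the exact
        // output, so the test survives formatting changes between parser versions.
        assertFalse(clean.toLowerCase().contains("<script"));
        assertFalse(clean.contains("alert(document.cookie)"));
    }
}
```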
@kwwall @spassarop - What is the status of this PR? Close it, work still to be done, ...?
@davewichers - Let me retest with the latest ESAPI release and get back to you. But I think it had been fixed and just didn't get closed. Will let you know by noon tomorrow.
@davewichers and @spassarop - Okay, I looked at this, and from ESAPI's perspective, we just adjusted our JUnit tests in ESAPI to agree with whatever 1.7.5 was producing. So in that specific sense, I think you can close this PR with a suitable comment. (Although I guess I thought you might want to incorporate these test cases into your own AntiSamy JUnit regression tests. Maybe @spassarop has already done that; I haven't bothered to check.)

However, I also think this points to a larger problem, and that is: how do we test sanitized output? The way I generally try to do so is to run the tainted input through AntiSamy's DOM parser and look at the output. Something like System.out.println( new AntiSamy().scan(tainted, esapi_policy, AntiSamy.DOM).getCleanHTML() ); and then use the output to write a JUnit test. When possible, I try to write the JUnit test to check whether the tainted payload is still present, but one can't always do that (e.g., looking for the absence of ...). I really don't know what to do for such cases, though. And I expect that this affects a lot more than just ESAPI and its ...

But given that the cleansed AntiSamy output for any specific input is not constant (and I'm not debating that there are sensible reasons for that), it seems that this problem will keep popping up now and again. In fact, over the history of AntiSamy, it has probably popped up a lot more often than we've observed, because even between the AntiSamy and ESAPI teams we are likely only testing some of the obvious edge cases.

I'm not sure what to do about this larger problem, though, and this GitHub PR and the associated issue don't really do it justice. I guess you could add a blurb to your README that this should simply be expected now and again on an ongoing basis, and ask people to report specific edge cases they encounter when output changes between AntiSamy releases, so you can note them in your release notes and, more importantly, use them to improve your regression test cases. But beyond that, I have no suggestions. Maybe you want to consider that (noting it in the README) before closing this GitHub PR. I don't know; it's your call. I only reported this because I figured you'd want to know, but maybe you don't care and only want things reported when they could possibly result in a vulnerability.
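As a concrete illustration of the scan-and-inspect step described above, here is a small self-contained sketch. The policy path and tainted string are placeholders rather than values from the original comment:

```java
import org.owasp.validator.html.AntiSamy;
import org.owasp.validator.html.CleanResults;
import org.owasp.validator.html.Policy;

public class InspectCleanOutput {

    public static void main(String[] args) throws Exception {
        // Placeholder policy file and input; use the policy and payload you are testing.
        Policy esapiPolicy = Policy.getInstance("antisamy-esapi.xml");
        String tainted = "<div onclick=\"alert(1)\">hello</div>";

        // Run the tainted input through AntiSamy's DOM scanner and print the result,
        // then use what you see to decide what the JUnit assertion should check.
        CleanResults results = new AntiSamy().scan(tainted, esapiPolicy, AntiSamy.DOM);
        System.out.println(results.getCleanHTML());
    }
}
```

The point of inspecting the output first is to write assertions against what must be absent (the dangerous payload) rather than against the exact cleaned string, which can change between AntiSamy or parser releases.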
@spassarop - Do you want to propose an update to the README? Then we can merge that and close this issue.
Run 'mvn test' and observe the 3 failing tests. The question is not how to fix this, but whether there is an explanation of why this is the case, and whether you can perhaps conjecture as to what sort of tainted input would lead to this.
See GitHub issue #389 for details regarding expectations.