Benchmark against LanguageTool #6
Comments
Interesting! I guess my task is now to prove that LT can be faster :-) What settings did you use, i.e. how is the Java LT process started?
Also, the JVM takes quite some time to become fast, so maybe the first 20-100 calls per language should be ignored for the performance test (unless you want to test start-up speed, but we know Java is slow in that case).
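A minimal sketch of how such a warm-up phase could be added to a benchmark loop (this is not what `bench/__init__.py` currently does; the `check` callable and the warm-up count are stand-ins):

```python
# Hedged sketch of a warm-up phase: the first N sentences are run but not
# timed, so JVM JIT warm-up does not distort the measured time.
import time

def timed_run(check, sentences, warmup=100):
    """Run `check` on all sentences, timing only the ones after the warm-up."""
    for sentence in sentences[:warmup]:
        check(sentence)  # warm up, discard timing
    start = time.perf_counter()
    for sentence in sentences[warmup:]:
        check(sentence)
    return time.perf_counter() - start
```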
Great. The benchmark code is here: https://github.com/bminixhofer/nlprule/blob/master/bench/__init__.py I use the language-tool-python package and manually disabled spellchecking for English and German. I'd like to keep using Python bindings for LT to easily compare the suggestion content. I think it's reasonably fair; there shouldn't be much overhead from the Python side.
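For reference, a stripped-down sketch of what such a comparison looks like with the two Python packages. This is not a copy of `bench/__init__.py`: the `rules.suggest` call is an assumption about the nlprule Python API, the example sentence is illustrative, and the step that disables spellchecking in LanguageTool is omitted here.

```python
# Rough sketch of the comparison: time how long each library needs to produce
# suggestions for the same sentences. Method names on the nlprule side are
# assumptions, not copied from the actual benchmark.
import time

import language_tool_python
import nlprule

tool = language_tool_python.LanguageTool("en-US")
tokenizer = nlprule.Tokenizer("en_tokenizer.bin")
rules = nlprule.Rules("en_rules.bin", tokenizer)

sentences = ["She was not been here since Monday."]  # e.g. sentences from Tatoeba

start = time.perf_counter()
lt_suggestions = [tool.check(s) for s in sentences]
lt_time = time.perf_counter() - start

start = time.perf_counter()
nlprule_suggestions = [rules.suggest(s) for s in sentences]  # assumed method name
nlprule_time = time.perf_counter() - start

print(f"LanguageTool time: {lt_time:.3f}s")
print(f"NLPRule time: {nlprule_time:.3f}s")
```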
Right, I'll check if that makes an impact.
I wanted to give it a try, but I get:
Any idea?
Do you have an internet connection in your environment? Alternatively, you can try manually downloading the binaries and replacing the tokenizer / rules setup in the benchmark with:

```python
self.tokenizer = nlprule.Tokenizer(f"{lang_code}_tokenizer.bin")
self.rules = nlprule.Rules(f"{lang_code}_rules.bin", self.tokenizer)
```

(assuming you downloaded the binaries to the same directory you are running the script from)
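A sketch of the manual-download fallback mentioned above, assuming the prebuilt binaries are fetched from wherever the release assets live; the URL and asset names below are placeholders, not taken from the thread.

```python
# Hedged sketch: fetch the prebuilt .bin files once and keep them next to the
# script, so the loading code above can find them. URLs are hypothetical.
import shutil
import urllib.request

def fetch_binary(url: str, out_path: str) -> None:
    """Download one prebuilt .bin file into the current directory."""
    with urllib.request.urlopen(url) as response, open(out_path, "wb") as f:
        shutil.copyfileobj(response, f)

# Hypothetical usage:
# fetch_binary("https://example.org/path/to/en_tokenizer.bin", "en_tokenizer.bin")
# fetch_binary("https://example.org/path/to/en_rules.bin", "en_rules.bin")
```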
Indeed, downloading from GitHub was very slow and your workaround helped. This is what I get with some changes to the setup and LT 5.2:

- EN
- DE
- EN + DE Java Rules
Exciting! Thanks for the adjustments. I cannot quite reproduce these numbers for German:

```
(base) bminixhofer@pop-os:~/Documents/Projects/nlprule$ python bench/__init__.py --lang de
100%|██████████████████████████████████████████████████████████████████████████████████████████████| 10000/10000 [00:47<00:00, 210.58it/s]
LanguageTool time: 26.881s
NLPRule time: 19.982s
n LanguageTool suggestions: 347
n NLPRule suggestions: 314
n same suggestions: 307
```

But I can for English:

```
100%|██████████████████████████████████████████████████████████████████████████████████████████████| 10000/10000 [01:18<00:00, 127.95it/s]
LanguageTool time: 26.802s
NLPRule time: 50.398s
n LanguageTool suggestions: 273
n NLPRule suggestions: 247
n same suggestions: 237
```

I used this config and this command to start the server (LT 5.2): …

So it seems there's still a lot of potential for improvement for NLPRule; I'll see if I can add some optimizations, especially related to caching :) Are you using Python 3.8? I noticed that …
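The exact server command isn't preserved above. As an illustration only (jar path, port, and config file name are assumptions about a typical LT 5.x setup, not the commenter's actual command), launching a local LanguageTool HTTP server from Python before running the benchmark could look like this:

```python
# Hedged sketch: start a local LanguageTool HTTP server as a subprocess.
# Paths, port, and property file are assumptions, not taken from the thread.
import subprocess

server = subprocess.Popen([
    "java", "-cp", "languagetool-server.jar",
    "org.languagetool.server.HTTPServer",  # LT's bundled HTTP server class
    "--port", "8081",
    "--config", "server.properties",       # e.g. cache-related settings
])
# ... run the benchmark against http://localhost:8081 ...
server.terminate()
```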
I'm using Python 3.8.3. Here's a slightly more complete config file; maybe the other cache settings are more important than I thought (the file I actually used has even more settings, but those require more setup and I don't think they improve performance):
Hmm, those additional settings don't visibly change anything. But the English speed is enough to go off of, so it's not that important; I'll try it on my Mac a bit later to get a third number.
Alright, it turns out there really was a lot of potential for improvement; a lot of easy fixes and some better precomputation lead to these results on my PC:

and on my Mac:

With a still pessimistic (because there is overhead from disambiguation / chunking too) but more realistic correction factor of …
These changes are not yet released but I'll release v0.3.0 with them today. I also updated the benchmark code with your adjustments from above. I hope this is a fair comparison now.
Do you think some of these optimizations can be ported back to LT, or are they specific to your implementation? I didn't look at your code yet, so: did you basically port the existing Java algorithm or did you start from scratch?
I started from scratch with a bottom-up approach of incrementally adding more features and always checking whether the rules using those features pass the tests. That was possible because the LT rules are really well tested and there are a lot of them, so "overfitting" to the tests was not an issue. I barely looked at your code in LT. So it is possible that there are some optimizations that could be ported. One cool thing I've done is treating the POS tags as a closed set and precomputing all of the matches at build time (so I never have to evaluate a POS regex at runtime), and something similar for the word regexes. Besides that, there are not many specific optimizations that stand out. After looking a bit more into the differences between C++ / Rust and Java / C#, I wouldn't want to be quoted saying "Rust >> Java" in terms of speed, but I think in this case not having GC and Rust generally being more low-level could play a role.
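To illustrate the closed-set idea (a toy Python sketch, not the actual Rust implementation in NLPRule): because the POS tagset is finite, every POS regex can be evaluated once against every tag at build time, and the runtime check becomes a set-membership test.

```python
# Illustrative sketch only: precompute which tags each POS regex matches, so
# no regex is evaluated at runtime. The tagset and patterns here are toy data.
import re

TAGSET = ["NN", "NNS", "NNP", "VB", "VBD", "VBZ", "JJ", "RB"]  # toy tagset

def precompute_pos_matches(pos_regexes):
    """Map each POS regex pattern to the set of tags it matches."""
    compiled = {pattern: re.compile(pattern) for pattern in pos_regexes}
    return {
        pattern: {tag for tag in TAGSET if regex.fullmatch(tag)}
        for pattern, regex in compiled.items()
    }

# Build time: evaluate the regexes once.
matches = precompute_pos_matches([r"NN.*", r"VB[DZ]?"])

# Runtime: no regex evaluation, just a set lookup.
assert "NNS" in matches[r"NN.*"]
assert "VBD" in matches[r"VB[DZ]?"]
```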
I'll see if I or my colleagues find time to confirm your performance evaluation. BTW, did you run the unit tests for all languages yet?
Which unit tests? The tests I am running are the tags in the XML files. Others would be difficult to run because they are written in Java, right? Also, v0.3.0 is now released, so feel free to try rerunning the benchmark.
Yes, I meant the …
Oh, I'm only checking them for German and English. It would be good to check for other languages, but so far I haven't gotten around to it. It's not that easy because there is some language-specific code in LT, like compound splitting for German, which I have to take into account.
You should also compare CPU and RAM usage while the task runs. I'd like to ask @danielnaber: what server specs do you have on LanguageTool.org? And how do you scale it? Do your customers share resources on a dedicated machine or do you deploy a new machine for each customer? In any case, I'd expect NLPRule to be more efficient on that front, but we need to do some profiling to prove that.
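A minimal sketch of how such a profiling pass could look from the Python side, measuring only the benchmark process itself (note that the Java LT server runs as a separate process, so its memory and CPU would have to be measured separately, e.g. with ps or psutil; `ru_maxrss` is kilobytes on Linux and bytes on macOS):

```python
# Hedged sketch: report wall time, CPU time, and peak RSS of the current
# (benchmark) process using only the standard library (Unix-only `resource`).
import resource
import time

def profile(fn, *args, **kwargs):
    """Run fn and print wall time, CPU time, and peak RSS of this process."""
    usage_before = resource.getrusage(resource.RUSAGE_SELF)
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    wall = time.perf_counter() - start
    usage_after = resource.getrusage(resource.RUSAGE_SELF)
    cpu = (usage_after.ru_utime + usage_after.ru_stime) - (
        usage_before.ru_utime + usage_before.ru_stime
    )
    # ru_maxrss: kilobytes on Linux, bytes on macOS.
    print(f"wall={wall:.2f}s cpu={cpu:.2f}s peak_rss={usage_after.ru_maxrss}")
    return result
```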
There is now a benchmark in `bench/__init__.py`. It computes suggestions from LanguageTool via language-tool-python and NLPRule on 10k sentences from Tatoeba and compares the times. Here's the output for German:
and for English:
I disabled spellchecking in LanguageTool.
LT gives more suggestions because NLPRule does not support all rules.
Not all NLPRule suggestions are the same as LT's, likely because of differences in priority, but I'll look a bit closer into that.
Correcting for the Java rules in LT and for the fact that NLPRule only supports 85-90% of LT rules, by dividing the NLPRule time by `0.8` and normalizing, gives the following table:

These numbers are of course not 100% accurate but should at least give a ballpark estimate of performance.
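To make the adjustment concrete (with made-up times, not the numbers from this issue): the NLPRule time is divided by 0.8 to penalize it for the unsupported rules and LT's Java-only rules, then both times are divided by the LT time so LT is normalized to 1.

```python
# Worked example of the correction described above, using hypothetical times.
lt_time = 30.0       # seconds LanguageTool took (made up)
nlprule_time = 20.0  # seconds NLPRule took (made up)

corrected = nlprule_time / 0.8           # penalize NLPRule for unsupported rules
normalized_lt = lt_time / lt_time        # 1.0 by construction
normalized_nlprule = corrected / lt_time

print(f"LanguageTool: {normalized_lt:.2f}, NLPRule: {normalized_nlprule:.2f}")
# -> LanguageTool: 1.00, NLPRule: 0.83 for these made-up numbers
```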
I'll keep this issue open for discussion / improving the benchmark.