I got this error when running `nlpsandboxclient.client.annotate_note` on the i2b2 test dataset (514 notes) using the notebook.
However, there were no errors when running the same code on the example data (5 notes only).
Do we have an upper limit set on how many notes annotators can process?
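For reference, the failing call looked roughly like this (a minimal sketch; the host URL, tool type, and note payload shape are placeholder assumptions standing in for the actual values in the notebook, and the keyword names may differ from the client's exact signature):

```python
import nlpsandboxclient.client

# Assumed placeholder values; the real ones come from the notebook.
HOST = "http://localhost:8080/api/v1"
TOOL_TYPE = "nlpsandbox:phi-annotator"

# Stand-in for the 514 notes loaded from the i2b2 test dataset.
i2b2_notes = [{"text": "Patient seen on 10/26/2020 by Dr. Smith."}]

for note in i2b2_notes:
    annotations = nlpsandboxclient.client.annotate_note(
        host=HOST, note=note, tool_type=TOOL_TYPE
    )
```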
> Do we have an upper limit set on how many notes annotators can process?
I'm assuming you are using one of the Spark tools, correct? Which tool exactly?
I don't know how the developers built their model, but I think such a limit is unlikely.
A 500 error indicates that the API service (here, the tool) experienced an unexpected issue. One possible reason is that the input sent to the tool triggers an error. Does the tool systematically fail on a specific note?
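One way to check that is to annotate the notes one at a time and record which ones raise the error. Here is a rough sketch using the same assumed placeholders as above; the exception type the client raises for a 500 response is also an assumption, so this catches broadly:

```python
import nlpsandboxclient.client

HOST = "http://localhost:8080/api/v1"   # assumed placeholder
TOOL_TYPE = "nlpsandbox:phi-annotator"  # assumed placeholder

# Stand-in for the notes loaded as in the original notebook.
i2b2_notes = [{"text": "Example note 1."}, {"text": "Example note 2."}]

failing = []
for i, note in enumerate(i2b2_notes):
    try:
        nlpsandboxclient.client.annotate_note(
            host=HOST, note=note, tool_type=TOOL_TYPE
        )
    except Exception as err:  # a 500 from the tool would surface here as a client error
        print(f"Note {i} failed: {err}")
        failing.append(i)

print(f"{len(failing)}/{len(i2b2_notes)} notes failed: {failing}")
```

If the failures cluster on the same notes every run, the problem is likely in those inputs rather than in the total number of notes.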
The Spark tools are implemented in Python, so you could go inside the container that runs the tool (not nginx) and add debug prints. Alternatively, I recommend using a tool that is fully open source, like NeuroNER.
I ran the example PHI-annotator using the notebook and didn't encounter this error; it processed all 514 notes in the test dataset. The error only shows up when running the sparknlp model, so there might be a limit on the number of notes the sparknlp model can process.