We currently benchmark PHI annotators on 5 different PHI annotation tasks using two datasets: the 2014 i2b2 dataset and the MCW dataset. The submitted tools sometimes achieve similar performance on both datasets, while at other times the gap in performance is quite large.
The goal of this task is to compare the performance of the tools on these two datasets. One hypothesis is that the difference in performance is due to a difference in the number of annotations of a given type. This difference may stem from the type of clinical notes used or from a difference in the annotation protocol.
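As a starting point, here is a minimal sketch of the count comparison in R, assuming each dataset's annotations have been exported to a CSV with one row per annotation and a `type` column (the file names are placeholders, not actual repository files):

```r
# Compare per-type annotation counts across the two datasets.
# Assumes hypothetical CSV exports with one row per annotation
# and a `type` column identifying the PHI type.
library(dplyr)
library(tidyr)

i2b2 <- read.csv("i2b2_2014_annotations.csv")  # placeholder file name
mcw  <- read.csv("mcw_annotations.csv")        # placeholder file name

counts <- bind_rows(
  i2b2 %>% mutate(dataset = "i2b2-2014"),
  mcw  %>% mutate(dataset = "MCW")
) %>%
  count(dataset, type) %>%
  pivot_wider(names_from = dataset, values_from = n, values_fill = 0)

print(counts)
```

A table like this would make it easy to spot PHI types that are heavily represented in one dataset but rare in the other, which is exactly where the hypothesis predicts performance gaps.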
Workflow
1. Write a notebook in this GH repository that captures the required information (@yy6linda)
2. Run the notebook to get data from the Sage data node (@yy6linda) and from the MCW data node (@gkowalski); see the sketch below
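For step 2, a rough sketch of pulling annotations from a data node in R, assuming the node is reachable through a local tunnel and exposes a JSON endpoint; the base URL and the `/annotations` path are placeholders, not the node's confirmed API:

```r
# Fetch annotations from a (hypothetically tunnelled) data node.
# The endpoint path and port are assumptions for illustration only.
library(httr)
library(jsonlite)

fetch_annotations <- function(base_url) {
  resp <- GET(paste0(base_url, "/annotations"))  # placeholder path
  stop_for_status(resp)
  fromJSON(content(resp, as = "text", encoding = "UTF-8"))
}

# e.g. the Sage data node tunnelled to localhost (port is an assumption)
sage_annotations <- fetch_annotations("http://localhost:8080/api/v1")
```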
@yy6linda let me know when you have a notebook ready that measures performance. In the meantime, this exercise confirms that my RStudio environment and tunnels to the data node are set up properly.
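For reference, a minimal sketch of a per-type performance computation in R, assuming gold and predicted annotations are data frames with `note_id`, `start`, `end`, and `type` columns, scored on exact span matches (all column names are assumptions, not the benchmark's actual schema):

```r
# Per-PHI-type precision, recall, and F1 under exact span matching.
# Types that appear only in predictions are not scored here.
library(dplyr)

score_by_type <- function(gold, pred) {
  key <- c("note_id", "start", "end", "type")
  # True positives: predicted spans that exactly match a gold span.
  tp <- inner_join(distinct(gold), distinct(pred), by = key) %>%
    count(type, name = "tp")
  gold %>%
    count(type, name = "n_gold") %>%
    left_join(pred %>% count(type, name = "n_pred"), by = "type") %>%
    left_join(tp, by = "type") %>%
    mutate(
      across(c(n_pred, tp), ~ coalesce(.x, 0L)),
      precision = tp / n_pred,
      recall    = tp / n_gold,
      f1        = 2 * precision * recall / (precision + recall)
    )
}
```

Running this separately on the i2b2 and MCW evaluations, then joining on `type`, would put the per-type performance gap next to the per-type annotation counts from the comparison above.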