The good news: deep_disfluency is on PyPI!

The bad news: it's a 59MB package, and the limit on PyPI is 60MB. Most of the data in the package is currently the `experiments` folder. Can we include only part of it? As I understand it, using the package and replicating the study are two different things, so we might be able to provide the full functionality with only one experiment's data files.

Alternatively, there's a way to apply for a larger package size limit on PyPI. We can do that, or even just wait until we hit the limit and then think of a solution. It's up to you :-)
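For the "ship only part of `experiments`" option, a minimal `setup.py` sketch is below. The paths (e.g. `experiments/021`) and the version number are illustrative placeholders, not the package's actual layout:

```python
# Sketch: ship only one experiment's data instead of the whole experiments
# folder, by whitelisting files via package_data.
from setuptools import setup, find_packages

setup(
    name="deep_disfluency",
    version="0.1.0",  # placeholder
    packages=find_packages(),
    include_package_data=False,  # don't sweep in everything from MANIFEST.in
    package_data={
        "deep_disfluency": [
            # keep only the config + model files the tagger needs at runtime
            "experiments/experiment_configs.csv",  # hypothetical path
            "experiments/021/*",                   # hypothetical path
        ],
    },
)
```

The same whitelist would also need to be reflected in `MANIFEST.in` (or `include_package_data` usage) so the source distribution matches the wheel.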
Just one thing to note: some files in `experiments` are necessary for using the tagger. Also, the user currently selects the experiment and configuration file when instantiating the tagger. So we must keep at least part of this data, or, even better, move a "default" configuration somewhere else and have it loaded automatically when no configuration is passed to the tagger's `__init__` method.
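A rough sketch of that fallback pattern, assuming a hypothetical class and file layout (not the package's actual API):

```python
# Sketch: fall back to a configuration bundled with the package when the
# caller passes none to __init__.
import os

# hypothetical path to a default config shipped inside the package
_DEFAULT_CONFIG = os.path.join(
    os.path.dirname(__file__), "experiments", "experiment_configs.csv"
)

class DeepDisfluencyTagger(object):
    def __init__(self, config_filename=None, config_number=None):
        # use the packaged default when the user doesn't pick a config
        if config_filename is None:
            config_filename = _DEFAULT_CONFIG
        self.config_filename = config_filename
        self.config_number = config_number
        # ... load the model for the chosen configuration here
```

That way `DeepDisfluencyTagger()` with no arguments would still work, while power users could point at their own experiment files.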