JIT compilation? #70
Labels:
- improvement — Something which would improve current status, but not add anything new
- investigation — Something which might require a careful study
- medium priority — Not urgent but should be dealt with sooner rather than later
PyTorch has the ability to Just In Time (JIT) compile code to make it run quicker and be more memory efficient. I tried to do this a while ago with the @weak_script and @weak_module decorators, however they didn't seem to do much and I had trouble automatically generating the docs. I then found that PyTorch recommended that users not use these decorators. Since then, PyTorch has apparently introduced the @torch.jit.script decorator, which is intended for user use and supposedly provides noticeable improvements in speed and memory usage. An example could be compiling activation functions:
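A minimal sketch of what a scripted activation might look like, assuming the standard @torch.jit.script decorator (the function name here is illustrative, not LUMIN's actual code):

```python
import torch

# Hypothetical JIT-scripted Swish activation.
# torch.jit.script compiles the function into TorchScript,
# which can fuse the multiply and sigmoid into fewer kernels.
@torch.jit.script
def swish_jit(x: torch.Tensor) -> torch.Tensor:
    # Swish: x * sigmoid(x)
    return x * torch.sigmoid(x)
```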
Whereas LUMIN's current implementation of Swish is simply x*torch.sigmoid(x). Other possibilities could be in LUMIN's loss functions (e.g. WeightedMSE). I'm not sure how far one can take this; should everything related to PyTorch be JIT compiled, or perhaps only operations on tensors? A starting point would be to test the JIT-compiled Swish against the current version, and then to try to find out more about what should be JITed and what shouldn't.
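A rough sketch of such a test, comparing a scripted Swish against the eager version (function names and the benchmark shape are assumptions for illustration; a proper comparison would also warm up CUDA and test backward passes):

```python
import time
import torch

# Hypothetical scripted vs. eager Swish, for benchmarking.
@torch.jit.script
def swish_jit(x: torch.Tensor) -> torch.Tensor:
    return x * torch.sigmoid(x)

def swish_eager(x: torch.Tensor) -> torch.Tensor:
    return x * torch.sigmoid(x)

def bench(fn, x, n=100):
    # Warm up first so one-off JIT compilation cost isn't counted
    for _ in range(5):
        fn(x)
    t0 = time.perf_counter()
    for _ in range(n):
        fn(x)
    return time.perf_counter() - t0

x = torch.randn(1024, 1024)
print(f"eager: {bench(swish_eager, x):.4f}s  scripted: {bench(swish_jit, x):.4f}s")
```

The two versions should agree numerically, so any timing difference is purely down to compilation.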