Default precision #1397
Replies: 3 comments 3 replies
-
@nhartman94 I don't remember the explicit motivation for this, but I think it was because we wanted to have the user get exactly what they requested in terms of defaults, as both TensorFlow and PyTorch default to `float32`:

```python
>>> import tensorflow as tf
>>> tf_ones = tf.ones(10)
>>> tf_ones
<tf.Tensor: shape=(10,), dtype=float32, numpy=array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], dtype=float32)>
>>> tf_ones.dtype
tf.float32
```

```python
>>> import torch
>>> torch_ones = torch.ones(10)
>>> torch_ones
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
>>> torch_ones.dtype
torch.float32
```

Though so does JAX:

```python
>>> from jax import numpy as jnp
>>> jax_ones = jnp.ones(10)
>>> jax_ones
DeviceArray([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], dtype=float32)
>>> jax_ones.dtype
dtype('float32')
```

but there we default to `64b`:

```python
>>> import pyhf
>>> pyhf.set_backend("jax")
>>> pyhf.tensorlib.precision
'64b'
```

so that's not super consistent. I've opened up Issue #1398 for this.
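In the meantime, a minimal sketch of the workaround (assuming a pyhf version in which `pyhf.set_backend` accepts a `precision` argument): the PyTorch backend can be put in 64-bit mode explicitly.

```python
# Sketch: explicitly request 64b precision for the PyTorch backend.
# Assumes a pyhf release where set_backend takes a precision keyword.
import pyhf

pyhf.set_backend("pytorch", precision="64b")
print(pyhf.tensorlib.precision)              # '64b'
print(pyhf.tensorlib.astensor([1.0]).dtype)  # torch.float64
```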
-
More an interesting idea than a practical suggestion, but is there some way to overload the backend so that it keeps track of all the operations that are run between tensors and you can see how often you get …
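One way to read that idea, as a minimal sketch (the `TracingBackend` proxy and its bookkeeping are hypothetical, not a pyhf API): wrap the active tensorlib in a proxy that records the input dtypes of every call, so you can count how often operations see mixed or unexpected precisions.

```python
# Hypothetical sketch of instrumenting a backend: a proxy around
# pyhf.tensorlib that counts the input dtypes seen by each operation.
from collections import Counter

import pyhf


class TracingBackend:
    """Forward every call to a real tensorlib, recording input dtypes."""

    def __init__(self, tensorlib):
        self._tensorlib = tensorlib
        self.dtype_counts = Counter()

    def __getattr__(self, name):
        attr = getattr(self._tensorlib, name)
        if not callable(attr):
            return attr

        def traced(*args, **kwargs):
            # Record the dtypes of any tensor-like positional arguments.
            dtypes = tuple(str(a.dtype) for a in args if hasattr(a, "dtype"))
            if dtypes:
                self.dtype_counts[(name, dtypes)] += 1
            return attr(*args, **kwargs)

        return traced


pyhf.set_backend("numpy")
tb = TracingBackend(pyhf.tensorlib)
x = tb.astensor([1.0, 2.0])
y = tb.astensor([3.0, 4.0])
tb.sum(tb.divide(x, y))
print(tb.dtype_counts)
# Counter({('divide', ('float64', 'float64')): 1, ('sum', ('float64',)): 1})
```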
-
With PR #1400 in, @nhartman94's original question will be resolved when …
-
Hi, I was wondering if there was a motivation to use 32b vs. 64b for the PyTorch backend?
I ask because I was just comparing the backends, and I needed to change the precision to 64b for the PyTorch backend to get agreement with the NumPy backend.
We plan to go ahead with 64b precision for our analysis, but I was just generically curious what the motivation for the 32b precision for the PyTorch backend was.
Below is just a little screenshot of the differences in the test statistics I was seeing with the same histograms:

[screenshot: test statistic comparison between backends]
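For context, a minimal sketch of that kind of comparison (the model, observed counts, and the use of `pyhf.simplemodels.uncorrelated_background` and `pyhf.infer.test_statistics.qmu_tilde` are illustrative assumptions, not taken from the thread):

```python
# Hedged sketch: evaluate the same test statistic with the NumPy backend
# (64b) and the PyTorch backend at 32b and 64b precision.
# The model and observed counts are made up for illustration.
import pyhf

model = pyhf.simplemodels.uncorrelated_background(
    signal=[12.0, 11.0], bkg=[50.0, 52.0], bkg_uncertainty=[3.0, 7.0]
)
data = [51, 48] + model.config.auxdata

for backend, precision in [("numpy", "64b"), ("pytorch", "32b"), ("pytorch", "64b")]:
    pyhf.set_backend(backend, precision=precision)
    q_mu = pyhf.infer.test_statistics.qmu_tilde(
        1.0,
        data,
        model,
        model.config.suggested_init(),
        model.config.suggested_bounds(),
        model.config.suggested_fixed(),
    )
    print(f"{backend} ({precision}): qmu_tilde = {float(q_mu):.6f}")
```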