
Feature Request: Add float64 support for CPU backend in TensorFlow.js (Node.js) #8634

@borodadada


πŸ“ Description

✅ System information
TensorFlow.js version: 4.x.x (latest, using @tensorflow/tfjs-node)
Are you willing to contribute? Yes — I can help test, provide examples, or review PRs.

🧩 Describe the feature and current behavior
Currently, TensorFlow.js (including tfjs-node) uses float32 exclusively for all tensor operations — even on CPU. This leads to a hard precision limit:

  • Machine epsilon for float32 ≈ 1.19 × 10⁻⁷
  • Loss values below ~5 × 10⁻⁷ become unreliable: further decreases reflect rounding artifacts, not real improvement.
  • For scientific, financial, or high-precision regression tasks (e.g., predicting values like 100.01 with 2+ guaranteed decimal digits), this is a hard barrier.

While JavaScript/Node.js natively supports float64, TF.js forces conversion to float32 on tensor creation — making it impossible to achieve true double-precision computation even when the hardware and runtime support it.
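For example (a minimal sketch; the printed values are the standard nearest-float32 round-trips):

const tf = require('@tensorflow/tfjs-node');

// Native JS numbers are IEEE 754 float64
const x = 0.1 + 0.2;
console.log(x); // → 0.30000000000000004 (full double precision)

// Tensor creation silently downcasts to float32
const t = tf.tensor([x]);
console.log(t.dtype);         // → 'float32'
console.log(t.dataSync()[0]); // → 0.30000001192092896 (float32 rounding)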

πŸ” Will this change the current API? How?
Yes β€” but minimally and backward-compatible:

  • Introduce a new dtype: tf.float64 (like tf.float32, tf.int32)
  • Allow explicit dtype in tensor creation:
      tf.tensor([1.23], undefined, 'float64');
      // or
      tf.tensor2d(data, [m, n], 'float64');
  • Extend model compilation to accept dtype: 'float64' (optional, default remains 'float32')
  • Backend-specific: the CPU backend would use Float64Array; the WebGL backend would still fall back to float32 (with a warning).

No breaking changes — existing code continues to work unchanged.
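To make the request concrete, here is a sketch of how the proposed opt-in might look (a hypothetical API: the 'float64' dtype and the compile-time dtype option are what this issue requests; current releases do not accept them):

// Proposed: explicit float64 tensors on the CPU backend
const w = tf.tensor2d(data, [m, n], 'float64'); // backed by Float64Array

// Proposed: opt-in double-precision training; default stays 'float32'
model.compile({optimizer: 'sgd', loss: 'meanSquaredError', dtype: 'float64'});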

👥 Who will benefit?

  • Researchers & engineers needing >7-digit precision (e.g., physics simulations, high-frequency finance, numerical solvers).
  • Users of TF.js for non-ML tasks (interpolation, curve fitting) where float32 noise corrupts results.
  • Anyone hitting the loss < 1e-7 wall who is otherwise forced to rescale data with ad-hoc hacks just to avoid numerical collapse.

💡 Additional context
We're not asking for GPU float64 (the WebGL limitation is understood).
We only request CPU-backend (tfjs-node) support for float64, which is technically feasible: Float64Array is a standard typed array, and other JS numeric libraries already offer double or better precision (math.js and numeric.js compute in native float64; decimal.js goes further with arbitrary precision).
This would make TF.js genuinely competitive for scientific computing in Node.js — not just browser ML prototyping.
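The underlying primitives already exist in plain Node.js, no TF.js involved:

// Float32Array vs Float64Array round-trip of the same value
const f32 = new Float32Array([1e-8]);
const f64 = new Float64Array([1e-8]);
console.log(f32[0]); // → 1.0000000116860974e-8 (nearest float32)
console.log(f64[0]); // → 1e-8 (exact float64 round-trip)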

Minimal Reproduction Code

const tf = require('@tensorflow/tfjs-node');

// Original value with high precision (native JS uses float64)
const originalValue = 1e-8; // 0.00000001
console.log('JS number (float64):', originalValue); // → 1e-8

// Convert to tensor (automatically downcast to float32)
const tensor = tf.scalar(originalValue);
const recovered = tensor.dataSync()[0];

console.log('After TF.js tensor (float32):', recovered); // → 1.0000000116860974e-8 (nearest float32, not exactly 1e-8)

// Try to compute a small difference
const a = tf.scalar(1e-8);
const b = tf.scalar(2e-8);
const diff = tf.sub(b, a); // Should be exactly 1e-8
console.log('1e-8 difference in TF.js:', diff.dataSync()[0]); // → 1.0000000116860974e-8, carrying float32 rounding error

// Compare with pure JavaScript (float64)
const jsDiff = 2e-8 - 1e-8;
console.log('Pure JS (float64) difference:', jsDiff); // → 1e-8 (exact)
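An even sharper illustration of the wall (a hypothetical extension of the repro above): any increment smaller than the float32 machine epsilon relative to the value it is added to vanishes entirely.

// 1e-8 is below float32 machine epsilon (~1.19e-7), so the update is lost
const sum = tf.add(tf.scalar(1), tf.scalar(1e-8));
console.log('TF.js (float32): 1 + 1e-8 =', sum.dataSync()[0]); // → 1 (increment vanished)
console.log('Pure JS (float64): 1 + 1e-8 =', 1 + 1e-8);        // → 1.00000001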
