NaN for [N, C, 1, W] interpolate inputs with bilinear and bicubic modes for burn-wgpu and burn-ndarray backends #2080

Closed
antimora opened this issue Jul 30, 2024 · 2 comments · Fixed by #2224
Labels: bug (Something isn't working), help wanted (Extra attention is needed), wgpu (Related to WGPU backend)

Comments

antimora (Collaborator) commented Jul 30, 2024

Describe the bug
The output contains NaN values when the input tensor has shape [N, C, 1, W].
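
One plausible (unconfirmed) source of such NaNs is an "align corners" style scale of the form (in_size - 1) / (out_size - 1), which degenerates to 0/0 when the resized axis has size 1. A minimal sketch of that arithmetic, purely for illustration and not confirmed as the actual root cause addressed in #2224:

    fn main() {
        // Hypothetical "align corners" scale: (in_size - 1) / (out_size - 1).
        // For a [N, C, 1, W] input resized to height 1, both sizes are 1,
        // so the height scale is 0.0 / 0.0 = NaN and poisons every output value.
        let (in_h, out_h) = (1.0_f32, 1.0_f32);
        let scale_h = (in_h - 1.0) / (out_h - 1.0);
        assert!(scale_h.is_nan());
    }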

To Reproduce

  1. Enable the test_1d_bilinear test in crates/burn-tensor/src/tests/module/bilinear_interpolate.rs and the test_1d_bicubic test in crates/burn-tensor/src/tests/module/bilinear_interpolate.rs; both are currently ignored. (Copies of the tests are in the comments below.)
  2. cd burn-wgpu && cargo test
  3. cd burn-ndarray && cargo test
  4. Enable the resize_with_scales_1d_linear test in crates/burn-import/onnx-tests/tests/onnx_tests.rs, which is also ignored.

NOTE:

These tests pass under burn-tch and PyTorch.
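
A minimal standalone reproduction sketch, assuming the burn::tensor::module::interpolate API and the burn-ndarray backend (the exact crate and item paths here are assumptions and may differ):

    use burn::tensor::module::interpolate;
    use burn::tensor::ops::{InterpolateMode, InterpolateOptions};
    use burn::tensor::Tensor;
    use burn_ndarray::NdArray;

    fn main() {
        type B = NdArray<f32>;
        let device = Default::default();

        // [N, C, 1, W] input: the height of 1 is the problematic case
        let input = Tensor::<B, 4>::from_floats(
            [[[[1.5410, -0.2934, -2.1788, 0.5684, -1.0845, -1.3986]]]],
            &device,
        );

        // Upsample the width only; the height stays 1
        let output = interpolate(
            input,
            [1, 9],
            InterpolateOptions::new(InterpolateMode::Bilinear),
        );

        // Prints NaNs on burn-ndarray and burn-wgpu; burn-tch returns finite values
        println!("{:?}", output.into_data());
    }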

Related Issues:

  1. Interpolate tensor operation (Inference Only)  #1246
  2. Add 1d and 2d modules for interpolate with scaling (also fix ONNX Resize op) #2081

antimora (Collaborator, Author) commented:

Ignored tests just in case the code gets deleted:

    #[test]
    #[ignore = "https://github.com/tracel-ai/burn/issues/2080"]
    fn test_1d_bicubic() {
        // Use the default device for the test backend
        let device = Default::default();

        // Create a [1, 1, 6] input tensor
        let input = TestTensor::<3>::from_floats(
            [[[1.5410, -0.2934, -2.1788, 0.5684, -1.0845, -1.3986]]],
            &device,
        );

        // Unsqueeze to [1, 1, 1, 6] so the height dimension is 1
        let input = input.unsqueeze_dim(2);

        let output = interpolate(
            input,
            [1, 9],
            InterpolateOptions::new(InterpolateMode::Bicubic),
        );

        assert_eq!(output.dims(), [1, 1, 1, 9]);

        // assert output data does not contain NaN
        assert!(
            !output
                .clone()
                .to_data()
                .as_slice::<f32>()
                .unwrap()
                .iter()
                .any(|&x| x.is_nan()),
            "interpolate output contains NaN"
        );

        TestTensor::<4>::from([[[[
            1.541, 0.5747652, -1.010614, -2.197787, -0.8269969, 0.59609234, -0.5803058, -1.3792794,
            -1.3986,
        ]]]])
        .to_data()
        .assert_approx_eq(&output.into_data(), 3);
    }


    #[test]
    #[ignore = "https://github.com/tracel-ai/burn/issues/2080"]
    fn test_1d_bilinear() {
        // Use the default device for the test backend
        let device = Default::default();

        // Create a [1, 1, 6] input tensor
        let input = TestTensor::<3>::from_floats(
            [[[1.5410, -0.2934, -2.1788, 0.5684, -1.0845, -1.3986]]],
            &device,
        );

        // Unsqueeze to [1, 1, 1, 6] so the height dimension is 1
        let input = input.unsqueeze_dim(2);

        let output = interpolate(
            input,
            [1, 9],
            InterpolateOptions::new(InterpolateMode::Bilinear),
        );

        assert_eq!(output.dims(), [1, 1, 1, 9]);

        // assert output data does not contain NaN
        assert!(
            !output
                .clone()
                .to_data()
                .as_slice::<f32>()
                .unwrap()
                .iter()
                .any(|&x| x.is_nan()),
            "interpolate output contains NaN"
        );

        TestTensor::<4>::from([[[[
            1.541f32,
            0.39450002,
            -0.76475,
            -1.943125,
            -0.80520004,
            0.36178753,
            -0.671275,
            -1.2022874,
            -1.3986,
        ]]]])
        .to_data()
        .assert_approx_eq(&output.into_data(), 3);
    }

antimora (Collaborator, Author) commented:

CC @laggui @louisfd
