[DO NOT MERGE] Restore non-inlined SDXL #668

Draft
wants to merge 1 commit into base: main

Conversation

JingyaHuang
Collaborator

What does this PR do?

This is a PR to restore the weights/NEFF non-inlined SDXL compilation (and perhaps switch the default back to non-inlined if the performance also improves) once aws-neuron/aws-neuron-sdk#859 is resolved.

(It is not yet patched in Neuron SDK 2.19.1.)
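
For reference, a minimal reproduction sketch of the export path this PR targets. The actual test_non_inline.py script is not included in the PR, so the model ID, input shapes, and the inline_weights_to_neff flag below are illustrative assumptions based on the optimum-neuron export API; the auto-cast settings and output directory match the compilation log later in this thread:

from optimum.neuron import NeuronStableDiffusionXLPipeline

# Illustrative static shapes; Neuron compilation requires fixed input shapes.
input_shapes = {"batch_size": 1, "height": 1024, "width": 1024}

# Assumption: the export entry point accepts inline_weights_to_neff to keep the
# weights out of the NEFF (the behavior this PR restores), plus the auto-cast
# options seen as "--auto-cast matmul --auto-cast-type bf16" in the log.
stable_diffusion = NeuronStableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    export=True,
    inline_weights_to_neff=False,
    auto_cast="matmul",
    auto_cast_type="bf16",
    **input_shapes,
)
stable_diffusion.save_pretrained("sdxl_non_inline_neff/")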

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@JingyaHuang
Collaborator Author

***** Compiling text_encoder *****
Using Neuron: --auto-cast matmul
Using Neuron: --auto-cast-type bf16
..
Compiler status PASS
[Compilation Time] 32.61 seconds.
***** Compiling text_encoder_2 *****
Using Neuron: --auto-cast matmul
Using Neuron: --auto-cast-type bf16
....
Compiler status PASS
[Compilation Time] 81.96 seconds.
***** Compiling unet *****
Using Neuron: --auto-cast matmul
Using Neuron: --auto-cast-type bf16
................................................
Compiler status PASS
[Compilation Time] 1026.33 seconds.
***** Compiling vae_encoder *****
Using Neuron: --auto-cast matmul
Using Neuron: --auto-cast-type bf16
..................
Compiler status PASS
[Compilation Time] 365.83 seconds.
***** Compiling vae_decoder *****
Using Neuron: --auto-cast matmul
Using Neuron: --auto-cast-type bf16
...................................
Compiler status PASS
[Compilation Time] 692.94 seconds.
[Total compilation Time] 2199.66 seconds.
Loading only U-Net into both Neuron Cores...
2024-Jul-30 14:22:30.244701 65191:65191 ERROR  TDRV:dmem_alloc_internal                     Failed to alloc DEVICE memory: 10485760
2024-Jul-30 14:22:30.261865 65191:65191 ERROR  TDRV:dml_dump                                Wrote nrt memory alloc debug info to /tmp/nrt_mem_log_device_0_66a8f726.csv
2024-Jul-30 14:22:30.267120 65191:65191 ERROR  TDRV:log_dev_mem                             Failed to allocate 10.000MB (usage: tensors) on ND 0:NC 0, current utilization:
        * total: 15.973GB
        * model code: 462.750MB
        * model constants: 384.609KB
        * tensors: 12.060GB
        * shared scratchpad: 2.000GB
        * runtime: 1.094KB
        * dma rings: 1.460GB

2024-Jul-30 14:22:30.281625 65191:65191 ERROR  TDRV:tensor_allocate                         Failed to allocate 10485760 bytes on DEVICE for tensor UNKNOWN.
Traceback (most recent call last):
  File "test_non_inline.py", line 13, in <module>
    stable_diffusion = NeuronStableDiffusionXLPipeline.from_pretrained(
  File "/home/ubuntu/pyvenv/aws_neuron_venv_2.19.1/lib/python3.8/site-packages/optimum/modeling_base.py", line 402, in from_pretrained
    return from_pretrained_method(
  File "/home/ubuntu/optimum-neuron/optimum/neuron/utils/require_utils.py", line 51, in wrapper
    return func(*args, **kwargs)
  File "/home/ubuntu/optimum-neuron/optimum/neuron/modeling_diffusion.py", line 714, in _from_transformers
    return cls._export(*args, **kwargs)
  File "/home/ubuntu/optimum-neuron/optimum/neuron/utils/require_utils.py", line 51, in wrapper
    return func(*args, **kwargs)
  File "/home/ubuntu/optimum-neuron/optimum/neuron/modeling_diffusion.py", line 954, in _export
    return cls._from_pretrained(
  File "/home/ubuntu/optimum-neuron/optimum/neuron/utils/require_utils.py", line 51, in wrapper
    return func(*args, **kwargs)
  File "/home/ubuntu/optimum-neuron/optimum/neuron/modeling_diffusion.py", line 672, in _from_pretrained
    pipe = cls.load_model(
  File "/home/ubuntu/optimum-neuron/optimum/neuron/utils/require_utils.py", line 51, in wrapper
    return func(*args, **kwargs)
  File "/home/ubuntu/optimum-neuron/optimum/neuron/modeling_diffusion.py", line 398, in load_model
    submodels["unet"] = dp_cls(
  File "/home/ubuntu/pyvenv/aws_neuron_venv_2.19.1/lib/python3.8/site-packages/torch_neuronx/xla_impl/data_parallel.py", line 216, in __init__
    self.loaded_modules = self._load_modules(self.module)
  File "/home/ubuntu/optimum-neuron/optimum/neuron/modeling_diffusion.py", line 1342, in _load_modules
    torch_neuronx.move_trace_to_device(loaded_modules[i], nc_index)
  File "/home/ubuntu/pyvenv/aws_neuron_venv_2.19.1/lib/python3.8/site-packages/torch_neuronx/xla_impl/trace.py", line 765, in move_trace_to_device
    trace.weights._parameters[name] = param.to(f"privateuseone:{device_id}")
RuntimeError: nrt_tensor_allocate status=4
2024-Jul-30 14:22:31.081454 65191:69012 ERROR  TDRV:dmem_alloc_internal                     Failed to alloc DEVICE memory: 1549824
2024-Jul-30 14:22:31.098508 65191:69012 ERROR  TDRV:dml_dump                                Wrote nrt memory alloc debug info to /tmp/nrt_mem_log_device_0_66a8f727.csv
2024-Jul-30 14:22:31.103843 65191:69012 ERROR  TDRV:log_dev_mem                             Failed to allocate 1.478MB (usage: dma rings) on ND 0:NC 0, current utilization:
        * total: 15.979GB
        * model code: 462.750MB
        * model constants: 481.109KB
        * tensors: 12.060GB
        * shared scratchpad: 2.000GB
        * runtime: 1.094KB
        * dma rings: 1.466GB

2024-Jul-30 14:22:31.130734 65191:69012 ERROR  TDRV:dma_ring_alloc                          Failed to allocate TX ring
2024-Jul-30 14:22:31.137140 65191:69012 ERROR  TDRV:dma_ring_create_static_rings_for_queue_bundle_instanceFailed to allocate static tx ring for queue qActSpillReload0_3
2024-Jul-30 14:22:31.145142 65191:69012 ERROR  TDRV:drs_create_data_refill_rings            Failed to creaate static rings for queue bundle qActSpillReload0
2024-Jul-30 14:22:31.152797 65191:69012 ERROR  TDRV:kbl_model_add                           create_data_refill_rings() error
2024-Jul-30 14:22:31.162603 65191:69012 ERROR  NMGR:dlr_kelf_stage                          Failed to load subgraph
2024-Jul-30 14:22:31.168884 65191:69012 ERROR  NMGR:kmgr_load_nn_internal_v2                Failed to stage graph: kelf-0.json to NeuronCore
2024-Jul-30 14:22:31.229177 65191:69012 ERROR  NMGR:kmgr_load_nn_post_metrics               Failed to load NN: sdxl_non_inline_neff/graph.neff, err: 4
2024-Jul-30 14:22:31.236692 65191:69012 ERROR   NRT:nrt_infodump                            Neuron runtime information - please include in any support request:
2024-Jul-30 14:22:31.244396 65191:69012 ERROR   NRT:nrt_infodump                            ------------->8------------[ cut here ]------------>8-------------
2024-Jul-30 14:22:31.252148 65191:69012 ERROR   NRT:nrt_infodump                            NRT version: 2.21.41.0 (fb1705f5f26a084084cc75d6f4201472a1aa8ff1)
2024-Jul-30 14:22:31.262696 65191:69012 ERROR   NRT:nrt_infodump                            CCOM version: 2.0.0.0- (compat 41)
2024-Jul-30 14:22:31.269348 65191:69012 ERROR   NRT:nrt_infodump                            Instance ID: i-0975663b229a10fd1
2024-Jul-30 14:22:31.275927 65191:69012 ERROR   NRT:nrt_infodump                            Cluster ID: N/A
2024-Jul-30 14:22:31.281969 65191:69012 ERROR   NRT:nrt_infodump                            Kernel: Linux 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:58:04 UTC 2024
2024-Jul-30 14:22:31.291438 65191:69012 ERROR   NRT:nrt_infodump                            Nodename: ip-172-31-33-90
2024-Jul-30 14:22:31.297781 65191:69012 ERROR   NRT:nrt_infodump                            Driver version: 2.17.17.0

2024-Jul-30 14:22:31.305961 65191:69012 ERROR   NRT:nrt_infodump                            Failure: NRT_RESOURCE in nrt_load()
2024-Jul-30 14:22:31.312726 65191:69012 ERROR   NRT:nrt_infodump                            Visible cores: 0, 1
2024-Jul-30 14:22:31.318885 65191:69012 ERROR   NRT:nrt_infodump                            Environment:
2024-Jul-30 14:22:31.324830 65191:69012 ERROR   NRT:nrt_infodump                                NEURON_LIBRARY_PATH=/home/ubuntu/pyvenv/aws_neuron_venv_2.19.1/lib/python3.8/site-packages/libneuronxla/libneuronpjrt.so
2024-Jul-30 14:22:31.335636 65191:69012 ERROR   NRT:nrt_infodump                                NEURON_RT_ROOT_COMM_ID=localhost:62182
2024-Jul-30 14:22:31.342603 65191:69012 ERROR   NRT:nrt_infodump                                NEURON_FUSE_SOFTMAX=1
2024-Jul-30 14:22:31.348980 65191:69012 ERROR   NRT:nrt_infodump                                NEURON_INTERNAL_PJRT_C_API_VERSION=0.23
2024-Jul-30 14:22:31.355883 65191:69012 ERROR   NRT:nrt_infodump                            -------------8<-----------[ cut to here ]-----------8<------------
terminate called after throwing an instance of 'c10::Error'
  what():  Could not load the model status=4 message=Allocation Failure
Exception raised from NeuronModel at /opt/workspace/KaenaPyTorchRuntime/neuron_op/model.cpp:165 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fc395429617 in /home/ubuntu/pyvenv/aws_neuron_venv_2.19.1/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7fc3953e498d in /home/ubuntu/pyvenv/aws_neuron_venv_2.19.1/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #2: neuron::NeuronModel::NeuronModel(std::string const&, std::basic_string_view<char, std::char_traits<char> > const&, int, int, unsigned int, unsigned int) + 0x1b68 (0x7fc2ff7b96a8 in /home/ubuntu/pyvenv/aws_neuron_venv_2.19.1/lib/python3.8/site-packages/torch_neuronx/lib/libtorchneuron.so)
frame #3: neuron::Model::blocking_load() + 0x152 (0x7fc2ff8ac182 in /home/ubuntu/pyvenv/aws_neuron_venv_2.19.1/lib/python3.8/site-packages/torch_neuronx/lib/libtorchneuron.so)
frame #4: std::thread::_State_impl<std::thread::_Invoker<std::tuple<std::shared_ptr<neuron::NeuronModel> (neuron::Model::*)(), neuron::Model*> > >::_M_run() + 0x31 (0x7fc2ff8af341 in /home/ubuntu/pyvenv/aws_neuron_venv_2.19.1/lib/python3.8/site-packages/torch_neuronx/lib/libtorchneuron.so)
frame #5: <unknown function> + 0xd6df4 (0x7fc3d8fbadf4 in /lib/x86_64-linux-gnu/libstdc++.so.6)
frame #6: <unknown function> + 0x8609 (0x7fc46621e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7fc466358353 in /lib/x86_64-linux-gnu/libc.so.6)

For SDXL, the compilation itself passes, but an out-of-memory error occurs when loading the compiled model onto the Neuron devices, even though the same checkpoint used to fit on the Neuron devices before...
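
One hedged way to narrow this down (not part of this PR) would be to reload the already-compiled artifacts without duplicating the U-Net across both cores, assuming the pipeline exposes a data_parallel_mode option as in recent optimum-neuron releases; if loading still fails with a single copy, the regression would lie in the non-inlined artifacts themselves rather than in the data-parallel duplication:

from optimum.neuron import NeuronStableDiffusionXLPipeline

# Assumption: data_parallel_mode="none" loads each submodel onto a single NeuronCore
# instead of copying the U-Net to both cores ("Loading only U-Net into both Neuron Cores...").
stable_diffusion = NeuronStableDiffusionXLPipeline.from_pretrained(
    "sdxl_non_inline_neff/",  # directory holding the compiled artifacts from the log above
    data_parallel_mode="none",
)
image = stable_diffusion(prompt="a photo of an astronaut riding a horse").images[0]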

@HuggingFaceDocBuilderDev

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Thank you!

1 similar comment


This PR is stale because it has been open 15 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions bot added the Stale label on Oct 14, 2024