GPU: L40S with 48 GB VRAM, 64 GB system RAM. Running CogVideoX-Fun-V1.5-5B-InP image-to-video in ComfyUI fails with the error below.
!!! Exception during processing !!! Allocation on device
Traceback (most recent call last):
File "/app/ComfyUI/execution.py", line 324, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/app/ComfyUI/execution.py", line 199, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/app/ComfyUI/execution.py", line 170, in _map_node_over_list
process_inputs(input_dict, i)
File "/app/ComfyUI/execution.py", line 159, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "/app/ComfyUI/custom_nodes/CogVideoX-Fun/comfyui/comfyui_nodes.py", line 334, in process
sample = pipeline(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/app/ComfyUI/custom_nodes/CogVideoX-Fun/cogvideox/pipeline/pipeline_cogvideox_inpaint.py", line 1139, in call
video = self.decode_latents(latents)
File "/app/ComfyUI/custom_nodes/CogVideoX-Fun/cogvideox/pipeline/pipeline_cogvideox_inpaint.py", line 595, in decode_latents
frames = self.vae.decode(latents).sample
File "/usr/local/lib/python3.10/dist-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
File "/app/ComfyUI/custom_nodes/CogVideoX-Fun/cogvideox/models/autoencoder_magvit.py", line 1410, in decode
decoded = self._decode(z).sample
File "/app/ComfyUI/custom_nodes/CogVideoX-Fun/cogvideox/models/autoencoder_magvit.py", line 1379, in _decode
z_intermediate, conv_cache = self.decoder(z_intermediate, conv_cache=conv_cache)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/app/ComfyUI/custom_nodes/CogVideoX-Fun/cogvideox/models/autoencoder_magvit.py", line 1053, in forward
hidden_states, new_conv_cache[conv_cache_key] = up_block(
File "/app/ComfyUI/custom_nodes/CogVideoX-Fun/cogvideox/models/autoencoder_magvit.py", line 730, in forward
hidden_states, new_conv_cache[conv_cache_key] = resnet(
File "/app/ComfyUI/custom_nodes/CogVideoX-Fun/cogvideox/models/autoencoder_magvit.py", line 374, in forward
hidden_states, new_conv_cache["norm1"] = self.norm1(hidden_states, zq, conv_cache=conv_cache.get("norm1"))
File "/app/ComfyUI/custom_nodes/CogVideoX-Fun/cogvideox/models/autoencoder_magvit.py", line 195, in forward
new_f = norm_f * conv_y + conv_b
torch.OutOfMemoryError: Allocation on device
Got an OOM, unloading all loaded models.
Prompt executed in 196.79 seconds
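Note that the traceback points at the VAE decode step (self.vae.decode(latents) inside decode_latents), not at sampling, so the peak allocation comes from decoding the whole latent video in one pass. Below is a minimal workaround sketch, not the project's official fix: it assumes the pipeline object built by the ComfyUI node exposes a diffusers-style VAE with enable_slicing()/enable_tiling(), and the helper name reduce_vae_decode_memory is made up for illustration.

```python
# Hedged sketch: assumes `pipeline.vae` implements the diffusers-style
# enable_slicing()/enable_tiling() methods (as in AutoencoderKLCogVideoX).
import torch

def reduce_vae_decode_memory(pipeline):
    """Lower the peak VRAM used by vae.decode() at the cost of decode speed."""
    vae = pipeline.vae
    if hasattr(vae, "enable_slicing"):
        vae.enable_slicing()   # decode the latent batch one sample at a time
    if hasattr(vae, "enable_tiling"):
        vae.enable_tiling()    # decode each frame in overlapping spatial tiles
    torch.cuda.empty_cache()   # release cached allocator blocks before decoding
    return pipeline
```

If the node does not expose the pipeline object, reducing the output resolution or the number of frames lowers the decode-time allocation in the same way.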