RuntimeError: could not create a primitive #5270

Open
Kashouryo opened this issue Oct 17, 2024 · 4 comments

@Kashouryo

ComfyUI Error Report

Error Details

  • Node Type: VAEEncode
  • Exception Type: RuntimeError
  • Exception Message: could not create a primitive

Stack Trace

  File "/home/shouryo/Software/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/home/shouryo/Software/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/nodes.py", line 310, in encode
    t = vae.encode(pixels[:,:,:,:3])
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/comfy/sd.py", line 355, in encode
    samples[x:x+batch_number] = self.first_stage_model.encode(pixels_in).to(self.output_device).float()
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/comfy/ldm/models/autoencoder.py", line 179, in encode
    z = self.encoder(x)
        ^^^^^^^^^^^^^^^

  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 531, in forward
    h = self.mid.attn_1(h)
        ^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 287, in forward
    h_ = self.optimized_attention(q, k, v)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 235, in pytorch_attention
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

System Information

  • ComfyUI Version: v0.2.3-13-g7390ff3
  • Arguments: main.py --lowvram
  • OS: posix
  • Python Version: 3.11.9 | Intel Corporation | (main, Aug 12 2024, 23:58:22) [GCC 14.1.0]
  • Embedded Python: false
  • PyTorch Version: 2.3.1+cxx11.abi

Devices

  • Name: xpu
    • Type: xpu
    • VRAM Total: 16225243136
    • VRAM Free: 15916800512
    • Torch VRAM Total: 2602565632
    • Torch VRAM Free: 2294123008

Logs

2024-10-17 15:35:53,321 - root - INFO - Total VRAM 15474 MB, total RAM 128731 MB
2024-10-17 15:35:53,321 - root - INFO - pytorch version: 2.3.1+cxx11.abi
2024-10-17 15:35:53,321 - root - INFO - Set vram state to: LOW_VRAM
2024-10-17 15:35:53,331 - root - INFO - Device: xpu
2024-10-17 15:35:53,336 - root - INFO - Using pytorch cross attention
2024-10-17 15:35:53,656 - root - INFO - [Prompt Server] web root: /home/shouryo/Software/ComfyUI/web
2024-10-17 15:35:53,865 - root - INFO - Total VRAM 15474 MB, total RAM 128731 MB
2024-10-17 15:35:53,865 - root - INFO - pytorch version: 2.3.1+cxx11.abi
2024-10-17 15:35:53,866 - root - INFO - Set vram state to: LOW_VRAM
2024-10-17 15:35:53,866 - root - INFO - Device: xpu
2024-10-17 15:35:54,811 - root - INFO - 
Import times for custom nodes:
2024-10-17 15:35:54,811 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/websocket_image_save.py
2024-10-17 15:35:54,811 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-Inpaint-CropAndStitch
2024-10-17 15:35:54,811 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-OpenPose-Editor
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-WD14-Tagger
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI_essentials
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-KJNodes
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI_tinyterraNodes
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-Manager
2024-10-17 15:35:54,812 - root - INFO -    0.1 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI_FizzNodes
2024-10-17 15:35:54,812 - root - INFO -    0.4 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-Impact-Pack
2024-10-17 15:35:54,812 - root - INFO -    0.4 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI_Custom_Nodes_AlekPet
2024-10-17 15:35:54,812 - root - INFO - 
2024-10-17 15:35:54,818 - root - INFO - Starting server

2024-10-17 15:35:54,818 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-10-17 15:40:27,249 - root - INFO - got prompt
2024-10-17 15:40:28,811 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-10-17 15:40:28,812 - root - INFO - model_type EPS
2024-10-17 15:40:34,409 - root - INFO - Using pytorch attention in VAE
2024-10-17 15:40:34,410 - root - INFO - Using pytorch attention in VAE
2024-10-17 15:40:35,846 - root - INFO - Requested to load SDXLClipModel
2024-10-17 15:40:35,846 - root - INFO - Loading 1 new model
2024-10-17 15:40:35,853 - root - INFO - loaded completely 0.0 1560.802734375 True
2024-10-17 15:40:38,966 - root - INFO - Requested to load AutoencoderKL
2024-10-17 15:40:38,967 - root - INFO - Loading 1 new model
2024-10-17 15:40:39,034 - root - INFO - loaded completely 0.0 159.55708122253418 True
2024-10-17 15:40:40,408 - root - ERROR - !!! Exception during processing !!! could not create a primitive
2024-10-17 15:40:40,409 - root - ERROR - Traceback (most recent call last):
  File "/home/shouryo/Software/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/home/shouryo/Software/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/nodes.py", line 310, in encode
    t = vae.encode(pixels[:,:,:,:3])
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/sd.py", line 355, in encode
    samples[x:x+batch_number] = self.first_stage_model.encode(pixels_in).to(self.output_device).float()
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/ldm/models/autoencoder.py", line 179, in encode
    z = self.encoder(x)
        ^^^^^^^^^^^^^^^
  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 531, in forward
    h = self.mid.attn_1(h)
        ^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 287, in forward
    h_ = self.optimized_attention(q, k, v)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 235, in pytorch_attention
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: could not create a primitive

2024-10-17 15:40:40,410 - root - INFO - Prompt executed in 13.16 seconds
2024-10-17 15:40:43,815 - root - INFO - got prompt
2024-10-17 15:40:45,013 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-10-17 15:40:45,013 - root - INFO - model_type EPS
2024-10-17 15:40:48,509 - root - INFO - Using pytorch attention in VAE
2024-10-17 15:40:48,509 - root - INFO - Using pytorch attention in VAE
2024-10-17 15:40:49,006 - root - INFO - Requested to load SD1ClipModel
2024-10-17 15:40:49,006 - root - INFO - Loading 1 new model
2024-10-17 15:40:49,008 - root - INFO - loaded completely 0.0 235.84423828125 True
2024-10-17 15:40:49,657 - root - INFO - Requested to load AutoencoderKL
2024-10-17 15:40:49,657 - root - INFO - Loading 1 new model
2024-10-17 15:40:49,727 - root - INFO - loaded completely 0.0 159.55708122253418 True
2024-10-17 15:40:49,802 - root - ERROR - !!! Exception during processing !!! could not create a primitive
2024-10-17 15:40:49,802 - root - ERROR - Traceback (most recent call last):
  File "/home/shouryo/Software/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/home/shouryo/Software/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/nodes.py", line 310, in encode
    t = vae.encode(pixels[:,:,:,:3])
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/sd.py", line 355, in encode
    samples[x:x+batch_number] = self.first_stage_model.encode(pixels_in).to(self.output_device).float()
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/ldm/models/autoencoder.py", line 179, in encode
    z = self.encoder(x)
        ^^^^^^^^^^^^^^^
  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 531, in forward
    h = self.mid.attn_1(h)
        ^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 287, in forward
    h_ = self.optimized_attention(q, k, v)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 235, in pytorch_attention
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: could not create a primitive

2024-10-17 15:40:49,803 - root - INFO - Prompt executed in 5.99 seconds
2024-10-17 15:40:51,502 - root - INFO - got prompt
2024-10-17 15:40:51,526 - root - ERROR - !!! Exception during processing !!! could not create a primitive
2024-10-17 15:40:51,526 - root - ERROR - Traceback (most recent call last):
  File "/home/shouryo/Software/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/home/shouryo/Software/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/nodes.py", line 310, in encode
    t = vae.encode(pixels[:,:,:,:3])
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/sd.py", line 355, in encode
    samples[x:x+batch_number] = self.first_stage_model.encode(pixels_in).to(self.output_device).float()
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/ldm/models/autoencoder.py", line 179, in encode
    z = self.encoder(x)
        ^^^^^^^^^^^^^^^
  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 531, in forward
    h = self.mid.attn_1(h)
        ^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 287, in forward
    h_ = self.optimized_attention(q, k, v)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 235, in pytorch_attention
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: could not create a primitive

2024-10-17 15:40:51,527 - root - INFO - Prompt executed in 0.02 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":20,"last_link_id":27,"nodes":[{"id":8,"type":"VAEDecode","pos":{"0":1209,"1":188},"size":{"0":210,"1":46},"flags":{},"order":8,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":7},{"name":"vae","type":"VAE","link":17}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[9],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":12,"type":"VAEEncode","pos":{"0":614.97998046875,"1":707.6800537109375},"size":{"0":210,"1":46},"flags":{},"order":4,"mode":0,"inputs":[{"name":"pixels","type":"IMAGE","link":27},{"name":"vae","type":"VAE","link":16}],"outputs":[{"name":"LATENT","type":"LATENT","links":[11],"slot_index":0}],"properties":{"Node name for S&R":"VAEEncode"},"widgets_values":[]},{"id":7,"type":"CLIPTextEncode","pos":{"0":413,"1":389},"size":{"0":425.27801513671875,"1":180.6060791015625},"flags":{},"order":6,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":22}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["watermark, text, logo",true]},{"id":20,"type":"ImageResizeKJ","pos":{"0":161,"1":689},"size":{"0":315,"1":266},"flags":{},"order":2,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":25},{"name":"get_image_size","type":"IMAGE","link":null,"shape":7},{"name":"width_input","type":"INT","link":null,"widget":{"name":"width_input"}},{"name":"height_input","type":"INT","link":null,"widget":{"name":"height_input"}}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[27],"slot_index":0,"shape":3},{"name":"width","type":"INT","links":null,"shape":3},{"name":"height","type":"INT","links":null,"shape":3}],"properties":{"Node name for S&R":"ImageResizeKJ"},"widgets_values":[1000,1560,"bilinear",false,2,0,0,"disabled"]},{"id":19,"type":"LoraLoader","pos":{"0":81,"1":219},"size":{"0":315,"1":126},"flags":{},"order":3,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":23},{"name":"clip","type":"CLIP","link":20}],"outputs":[{"name":"MODEL","type":"MODEL","links":[24],"slot_index":0,"shape":3},{"name":"CLIP","type":"CLIP","links":[21,22],"slot_index":1,"shape":3}],"properties":{"Node name for S&R":"LoraLoader"},"widgets_values":["佐倉おりこSDXL_LoHA.safetensors",1,1]},{"id":3,"type":"KSampler","pos":{"0":853,"1":182},"size":{"0":315,"1":474},"flags":{},"order":7,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":24},{"name":"positive","type":"CONDITIONING","link":4},{"name":"negative","type":"CONDITIONING","link":6},{"name":"latent_image","type":"LATENT","link":11}],"outputs":[{"name":"LATENT","type":"LATENT","links":[7],"slot_index":0}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[506667112414263,"randomize",20,8,"dpmpp_2m","karras",0.6]},{"id":9,"type":"SaveImage","pos":{"0":524,"1":928},"size":{"0":442.30426025390625,"1":528.2882080078125},"flags":{},"order":9,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9}],"outputs":[],"properties":{},"widgets_values":["ComfyUI"]},{"id":18,"type":"LoadImage","pos":{"0":-181,"1":912},"size":{"0":512.4854125976562,"1":524.7637329101562},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[25],"slot_index":0,"shape":3},{"name":"MASK","type":"MASK","links":null,"shape":3}],"properties":{"Node name for 
S&R":"LoadImage"},"widgets_values":["image-5-1024x683.png","image"]},{"id":6,"type":"CLIPTextEncode","pos":{"0":415,"1":186},"size":{"0":422.84503173828125,"1":164.31304931640625},"flags":{},"order":5,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":21}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[4],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["white text on red background",true]},{"id":14,"type":"CheckpointLoaderSimple","pos":{"0":-259,"1":327},"size":{"0":315,"1":98},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[23],"slot_index":0,"shape":3},{"name":"CLIP","type":"CLIP","links":[20],"slot_index":1,"shape":3},{"name":"VAE","type":"VAE","links":[16,17],"slot_index":2,"shape":3}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["sd-v1-5-inpainting.safetensors"]}],"links":[[4,6,0,3,1,"CONDITIONING"],[6,7,0,3,2,"CONDITIONING"],[7,3,0,8,0,"LATENT"],[9,8,0,9,0,"IMAGE"],[11,12,0,3,3,"LATENT"],[16,14,2,12,1,"VAE"],[17,14,2,8,1,"VAE"],[20,14,1,19,1,"CLIP"],[21,19,1,6,0,"CLIP"],[22,19,1,7,0,"CLIP"],[23,14,0,19,0,"MODEL"],[24,19,0,3,0,"MODEL"],[25,18,0,20,0,"IMAGE"],[27,20,0,12,0,"IMAGE"]],"groups":[{"title":"Loading images","bounding":[150,630,726,171],"color":"#3f789e","font_size":24,"flags":{}}],"config":{},"extra":{"ds":{"scale":0.9090909090909091,"offset":[423.38601410120583,-117.7985724350568]}},"version":0.4}

Additional Context

Ubuntu (Linux Mint 22.0)
Intel Arc A770 16GB ASRock Challenger
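
For isolation, a minimal standalone script along these lines (the tensor shapes and the intel_extension_for_pytorch import are illustrative assumptions, not taken from the report) exercises the same scaled_dot_product_attention call from the stack trace directly on the xpu device, outside ComfyUI:

# Hypothetical standalone repro of the failing call; the shapes are rough guesses
# for the VAE mid-block attention on a ~1000x1560 input (the encoder downscales 8x,
# and pytorch_attention works on tensors shaped (batch, 1, height*width, channels)).
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the xpu backend on torch 2.3.x)

device = torch.device("xpu")
q = torch.randn(1, 1, 125 * 195, 512, device=device, dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = torch.nn.functional.scaled_dot_product_attention(
    q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False
)
torch.xpu.synchronize()
print("SDPA ok:", out.shape, out.dtype)

If this script raises the same "could not create a primitive" error, the problem sits in the PyTorch/IPEX/oneDNN stack rather than in ComfyUI or any custom node.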

@LukeG89

LukeG89 commented Oct 17, 2024

I just tested your workflow and it works for me (but I have a completely different setup).

Have you tried upgrading ComfyUI and all your custom nodes (especially KJNodes)?
Maybe your KJNodes is outdated and the VAEEncode error is caused by the Resize Image node. You can try bypassing it and see if the error goes away.

I also see that you used an SDXL LoHA with an SD1.5 model; they won't work together.

@Kashouryo
Author

It also happens with a regular KSampler. I will disable all my custom nodes and see how it goes.

@Kashouryo
Author

Yup, after disabling all my custom nodes, it still happens. Note that I am using an Intel Arc A770 with IPEX under Linux Mint 22.
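
A small version dump along these lines (module and attribute names assumed from the public intel_extension_for_pytorch API, not checked against this install) would pin down the exact torch/IPEX/device combination for anyone trying to reproduce:

# Hypothetical environment report for the IPEX/Arc setup described above.
import torch
import intel_extension_for_pytorch as ipex

print("torch:", torch.__version__)
print("ipex:", ipex.__version__)
print("xpu available:", torch.xpu.is_available())
print("device:", torch.xpu.get_device_name(0))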

@LovelyA72

LovelyA72 commented Oct 18, 2024

I also got this issue on my dual-Arc computer. Further information: it originates from oneDNN error code -6, which means out of memory.

There's a previous discussion here: oneapi-src/oneDNN#914

It happened after an apt upgrade, so I believe it might be an Intel-side issue.
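
If the oneDNN -6 code really is an out-of-memory condition, a quick headroom check along these lines (the torch.xpu memory helpers are assumed from IPEX, not confirmed on this setup) would show how much free XPU memory the VAE encode actually has, and whether clearing the allocator cache helps:

# Hypothetical memory check around the failing VAE encode.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401

props = torch.xpu.get_device_properties(0)
print("total    :", props.total_memory // 2**20, "MiB")
print("allocated:", torch.xpu.memory_allocated(0) // 2**20, "MiB")
print("reserved :", torch.xpu.memory_reserved(0) // 2**20, "MiB")

# Releasing cached allocator blocks can give oneDNN more contiguous memory.
torch.xpu.empty_cache()

Under the same assumption, swapping VAEEncode for ComfyUI's tiled VAE encode node should lower peak memory by encoding the image in chunks, which may sidestep the error even if the root cause is on the Intel side.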
