
ValueError: This model does not support image input. #6375

Closed
1 task done
anantgupta129 opened this issue Dec 18, 2024 · 3 comments
Labels
solved This problem has been already solved

Comments

@anantgupta129

Reminder

  • I have read the README and searched the existing issues.

System Info

  • llamafactory version: 0.9.2.dev0
  • Platform: Linux-6.1.85+-x86_64-with-glibc2.35
  • Python version: 3.10.12
  • PyTorch version: 2.3.1+cu121 (GPU)
  • Transformers version: 4.46.1
  • Datasets version: 3.1.0
  • Accelerate version: 1.0.1
  • PEFT version: 0.12.0
  • TRL version: 0.9.6
  • GPU type: Tesla T4
  • Bitsandbytes version: 0.45.0

Reproduction

Models: unsloth/Llama-3.2-11B-Vision-Instruct, meta-llama/Llama-3.2-11B-Vision-Instruct

The Colab example provided is also failing.

ValueError: This model does not support image input.
Traceback (most recent call last):
  File "/usr/local/bin/llamafactory-cli", line 8, in <module>
    sys.exit(main())
  File "/content/LLaMA-Factory/src/llamafactory/cli.py", line 112, in main
    run_exp()
  File "/content/LLaMA-Factory/src/llamafactory/train/tuner.py", line 50, in run_exp
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/content/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 101, in run_sft
    train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2122, in train
    return inner_training_loop(
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2426, in _inner_training_loop
    batch_samples, num_items_in_batch = self.get_batch_samples(epoch_iterator, num_batches)
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 5038, in get_batch_samples
    batch_samples += [next(epoch_iterator)]
  File "/usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py", line 550, in __iter__
    current_batch = next(dataloader_iter)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 631, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 675, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
    return self.collate_fn(data)
  File "/content/LLaMA-Factory/src/llamafactory/data/collator.py", line 163, in __call__
    features = super().__call__(features)
  File "/content/LLaMA-Factory/src/llamafactory/data/collator.py", line 107, in __call__
    fake_messages = self.template.mm_plugin.process_messages(fake_messages, fake_images, [], self.processor)
  File "/content/LLaMA-Factory/src/llamafactory/data/mm_plugin.py", line 210, in process_messages
    self._validate_input(images, videos)
  File "/content/LLaMA-Factory/src/llamafactory/data/mm_plugin.py", line 76, in _validate_input
    raise ValueError("This model does not support image input.")
ValueError: This model does not support image input.

My custom dataset does not have images. I also tried the dataset from the Colab example (dataset="identity,alpaca_en_demo") and hit the same issue.

The mllama chat template throws the same error.
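For readers hitting this, here is a simplified sketch of the failing code path reconstructed from the traceback above: the SFT collator builds fake images even for text-only examples and hands them to the template's multimodal plugin, whose input validation rejects any image when the template declares no image token. The class and method names follow the traceback; the bodies are an illustrative assumption, not the actual LLaMA-Factory source.

```python
# Simplified sketch of the path shown in the traceback (collator.py -> mm_plugin.py).
# Names follow the traceback; the bodies are an illustrative reconstruction, not the
# real LLaMA-Factory implementation.

class BasePlugin:
    def __init__(self, image_token=None, video_token=None):
        self.image_token = image_token
        self.video_token = video_token

    def _validate_input(self, images, videos):
        # A template without an image token cannot accept image input.
        if len(images) != 0 and self.image_token is None:
            raise ValueError("This model does not support image input.")
        if len(videos) != 0 and self.video_token is None:
            raise ValueError("This model does not support video input.")

    def process_messages(self, messages, images, videos, processor):
        self._validate_input(images, videos)
        return messages


# The collator passes fake_images into process_messages (collator.py line 107 in the
# traceback), so the check fires even though the dataset itself is text-only:
plugin = BasePlugin(image_token=None)  # e.g. a text-only template such as "llama3"
fake_images = [object()]               # stand-in for the collator's fake_images
try:
    plugin.process_messages([], fake_images, [], processor=None)
except ValueError as err:
    print(err)  # This model does not support image input.
```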

Expected behavior

No response

Others

No response

@github-actions bot added the pending label (This problem is yet to be addressed) Dec 18, 2024
@hiyouga (Owner) commented Dec 18, 2024

Please use the correct template, see #4614.
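For reference, the Colab notebook configures training by building a Python dict of arguments and writing it to JSON before invoking llamafactory-cli. A minimal sketch of the suggested fix, selecting the multimodal mllama template instead of the plain llama3 one, could look like the following; the argument names mirror the Colab example and the values are illustrative, not a verified recipe.

```python
import json

# Hypothetical Colab-style training config: argument names mirror the LLaMA-Factory
# Colab example; values are illustrative only.
args = dict(
    stage="sft",
    do_train=True,
    model_name_or_path="meta-llama/Llama-3.2-11B-Vision-Instruct",
    dataset="identity,alpaca_en_demo",
    template="mllama",                  # multimodal template instead of plain "llama3"
    finetuning_type="lora",
    output_dir="llama3.2_vision_lora",
    per_device_train_batch_size=1,
    num_train_epochs=1.0,
    fp16=True,
)

with open("train_mllama.json", "w", encoding="utf-8") as f:
    json.dump(args, f, indent=2)

# Then run: llamafactory-cli train train_mllama.json
```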

@hiyouga closed this as completed Dec 18, 2024
@hiyouga added the solved label (This problem has been already solved) and removed the pending label Dec 18, 2024
@anantgupta129 (Author) commented:

> Please use the correct template, see #4614.

@hiyouga this issue occurs with both templates, llama3 and mllama, for the models above.

@anantgupta129 (Author) commented:

@hiyouga v0.9.1 works with the existing code.
