
[BUG] AttributeError: module 'torch.library' has no attribute 'register_fake' #734

Open
azmeer36 opened this issue Aug 20, 2024 · 6 comments
Labels
bug, stale

Comments

@azmeer36

Prerequisites

  • I have read the documentation.
  • I have checked other issues for similar problems.

Backend

Colab

Interface Used

CLI

CLI Command

No response

UI Screenshots & Parameters

No response

Error Logs

INFO | 2024-08-20 22:32:17 | autotrain.cli.autotrain:main:60 - Using AutoTrain configuration: conf.yaml
INFO | 2024-08-20 22:32:17 | autotrain.parser:post_init:147 - Running task: lm_training
INFO | 2024-08-20 22:32:17 | autotrain.parser:post_init:148 - Using backend: local
INFO | 2024-08-20 22:32:17 | autotrain.parser:run:211 - {'model': 'abhishek/llama-2-7b-hf-small-shards', 'project_name': 'my-autotrain-llm', 'data_path': '/content', 'train_split': 'train', 'valid_split': None, 'add_eos_token': True, 'block_size': 1024, 'model_max_length': 2048, 'padding': 'right', 'trainer': 'sft', 'use_flash_attention_2': False, 'log': 'tensorboard', 'disable_gradient_checkpointing': False, 'logging_steps': -1, 'eval_strategy': 'epoch', 'save_total_limit': 1, 'auto_find_batch_size': False, 'mixed_precision': 'fp16', 'lr': 0.0002, 'epochs': 1, 'batch_size': 1, 'warmup_ratio': 0.1, 'gradient_accumulation': 4, 'optimizer': 'adamw_torch', 'scheduler': 'linear', 'weight_decay': 0.01, 'max_grad_norm': 1.0, 'seed': 42, 'chat_template': None, 'quantization': 'int4', 'target_modules': 'all-linear', 'merge_adapter': False, 'peft': True, 'lora_r': 16, 'lora_alpha': 32, 'lora_dropout': 0.05, 'model_ref': None, 'dpo_beta': 0.1, 'max_prompt_length': 128, 'max_completion_length': None, 'prompt_text_column': None, 'text_column': 'text', 'rejected_text_column': None, 'push_to_hub': True, 'username': 'AzmeerFaisal', 'token': '', 'unsloth': False}
Saving the dataset (1/1 shards): 100% 9846/9846 [00:00<00:00, 228659.88 examples/s]
Saving the dataset (1/1 shards): 100% 9846/9846 [00:00<00:00, 259192.35 examples/s]
INFO | 2024-08-20 22:32:18 | autotrain.backends.local:create:8 - Starting local training...
WARNING | 2024-08-20 22:32:18 | autotrain.commands:launch_command:47 - No GPU found. Forcing training on CPU. This will be super slow!
INFO | 2024-08-20 22:32:18 | autotrain.commands:launch_command:489 - ['accelerate', 'launch', '--cpu', '-m', 'autotrain.trainers.clm', '--training_config', 'my-autotrain-llm/training_params.json']
INFO | 2024-08-20 22:32:18 | autotrain.commands:launch_command:490 - {'model': 'abhishek/llama-2-7b-hf-small-shards', 'project_name': 'my-autotrain-llm', 'data_path': 'my-autotrain-llm/autotrain-data', 'train_split': 'train', 'valid_split': None, 'add_eos_token': True, 'block_size': 1024, 'model_max_length': 2048, 'padding': 'right', 'trainer': 'sft', 'use_flash_attention_2': False, 'log': 'tensorboard', 'disable_gradient_checkpointing': False, 'logging_steps': -1, 'eval_strategy': 'epoch', 'save_total_limit': 1, 'auto_find_batch_size': False, 'mixed_precision': 'fp16', 'lr': 0.0002, 'epochs': 1, 'batch_size': 1, 'warmup_ratio': 0.1, 'gradient_accumulation': 4, 'optimizer': 'adamw_torch', 'scheduler': 'linear', 'weight_decay': 0.01, 'max_grad_norm': 1.0, 'seed': 42, 'chat_template': None, 'quantization': 'int4', 'target_modules': 'all-linear', 'merge_adapter': False, 'peft': True, 'lora_r': 16, 'lora_alpha': 32, 'lora_dropout': 0.05, 'model_ref': None, 'dpo_beta': 0.1, 'max_prompt_length': 128, 'max_completion_length': None, 'prompt_text_column': 'autotrain_prompt', 'text_column': 'autotrain_text', 'rejected_text_column': 'autotrain_rejected_text', 'push_to_hub': True, 'username': 'AzmeerFaisal', 'token': '', 'unsloth': False}
```
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 5, in <module>
    from accelerate.commands.accelerate_cli import main
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 19, in <module>
    from accelerate.commands.estimate import estimate_command_parser
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/estimate.py", line 34, in <module>
    import timm
  File "/usr/local/lib/python3.10/dist-packages/timm/__init__.py", line 2, in <module>
    from .layers import is_scriptable, is_exportable, set_scriptable, set_exportable
  File "/usr/local/lib/python3.10/dist-packages/timm/layers/__init__.py", line 8, in <module>
    from .classifier import ClassifierHead, create_classifier, NormMlpClassifierHead
  File "/usr/local/lib/python3.10/dist-packages/timm/layers/classifier.py", line 15, in <module>
    from .create_norm import get_norm_layer
  File "/usr/local/lib/python3.10/dist-packages/timm/layers/create_norm.py", line 14, in <module>
    from torchvision.ops.misc import FrozenBatchNorm2d
  File "/usr/local/lib/python3.10/dist-packages/torchvision/__init__.py", line 10, in <module>
    from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils  # usort:skip
  File "/usr/local/lib/python3.10/dist-packages/torchvision/_meta_registrations.py", line 163, in <module>
    @torch.library.register_fake("torchvision::nms")
AttributeError: module 'torch.library' has no attribute 'register_fake'
```
INFO | 2024-08-20 22:32:22 | autotrain.parser:run:216 - Job ID: 20950
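For context, the AttributeError in the traceback is a torch/torchvision version mismatch: the installed torchvision was built against a newer torch that provides `torch.library.register_fake`. A minimal sketch of a pre-flight check, assuming `register_fake` first appeared in torch 2.4 (the helper below is hypothetical, not part of AutoTrain):

```python
def supports_register_fake(torch_version: str) -> bool:
    # Assumption: torch.library.register_fake exists from torch 2.4 onward.
    # Strip any local build tag such as "+cu121" before comparing.
    major, minor = (int(p) for p in torch_version.split("+")[0].split(".")[:2])
    return (major, minor) >= (2, 4)

print(supports_register_fake("2.2.0+cu121"))  # the Colab version from this report
print(supports_register_fake("2.4.0"))
```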

Additional Information

No response

azmeer36 added the bug label Aug 20, 2024
@abhishekkrthakur (Member)

what's your torch version?

@azmeer36 (Author)

> whats your torch version?

Torch version: 2.2.0+cu121

This is my first cell code:

```python
import os
!pip install -U autotrain-advanced > install_logs.txt 2>&1
!pip install -U torch torchvision
!autotrain setup --update-torch
!autotrain setup --colab > setup_logs.txt
from autotrain import __version__
print(f'AutoTrain version: {__version__}')
from torch import __version__
print(f'Torch version: {__version__}')
```

@abhishekkrthakur (Member)

please use pytorch==2.3.0

@azmeer36 (Author) commented Aug 21, 2024

I have tried running this:

```python
import os
!pip install -U autotrain-advanced > install_logs.txt 2>&1
!pip install torch==2.3.0 torchvision
!autotrain setup --colab > setup_logs.txt
from autotrain import __version__
print(f'AutoTrain version: {__version__}')
from torch import __version__
print(f'Torch version: {__version__}')
```

It still says:
Torch version: 2.2.0+cu121
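A plausible reason the reported version didn't change is that the Colab kernel had already imported torch (or `autotrain setup --colab` reinstalled a pinned build); either way, the running interpreter keeps serving the cached module until the runtime restarts. A small illustration of that caching behaviour, using a hypothetical module name rather than torch itself:

```python
# Illustration: once a module object is in sys.modules, a later `pip install`
# changes files on disk but not the version the running interpreter already
# imported -- in Colab that means restarting the runtime after pinning torch.
import sys
import types

stale = types.ModuleType("demo_torch")
stale.__version__ = "2.2.0+cu121"    # what the kernel imported earlier
sys.modules["demo_torch"] = stale    # simulates the cached import

import demo_torch                    # re-import hits the cache, not the disk
print(demo_torch.__version__)        # still the old version
```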

@ngwgsang

@azmeer36
try installing torch==2.4.0, it worked in my case.

```python
import os
!pip install -U autotrain-advanced > install_logs.txt 2>&1
!autotrain setup --colab > setup_logs.txt
!pip install torch==2.4.0
from autotrain import __version__
import torch
```
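After a reinstall like the ones above (and a runtime restart), a quick check with the standard library can confirm which versions the environment actually resolves; the helper name here is ours, not AutoTrain's:

```python
import importlib.metadata as md

def installed_version(pkg: str) -> str:
    """Report the installed distribution version, or a marker if absent."""
    try:
        return md.version(pkg)
    except md.PackageNotFoundError:
        return "not installed"

# Hypothetical sanity check for the packages involved in this issue.
for pkg in ("torch", "torchvision", "autotrain-advanced"):
    print(pkg, "->", installed_version(pkg))
```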

@github-actions (bot)
This issue is stale because it has been open for 30 days with no activity.

github-actions bot added the stale label Oct 13, 2024