This repository has been archived by the owner on May 1, 2023. It is now read-only.

Commit 472a027
Updated messages
shirayu committed Jan 6, 2023
1 parent 1eadec5 commit 472a027
Showing 1 changed file with 2 additions and 2 deletions.
whispering/transcriber.py (2 additions, 2 deletions)
```diff
@@ -30,9 +30,9 @@ def _set_dtype(self, fp16: bool):
         self.dtype = torch.float16 if fp16 else torch.float32
         if self.model.device == torch.device("cpu"):
             if torch.cuda.is_available():
-                logger.warning("Performing inference on CPU when CUDA is available")
+                logger.info("Performing inference on CPU though CUDA is available")
             if self.dtype == torch.float16:
-                logger.warning("FP16 is not supported on CPU; using FP32 instead")
+                logger.info("Using FP32 because FP16 is not supported on CPU")
                 self.dtype = torch.float32

         if self.dtype == torch.float32:
```
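The changed method picks an inference dtype and falls back from FP16 to FP32 on CPU, logging at `info` level rather than `warning` since the fallback is expected behavior. A minimal torch-free sketch of that selection logic (the `select_dtype` function and string dtypes are hypothetical stand-ins for the repository's `_set_dtype` method and `torch.dtype` values):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def select_dtype(fp16: bool, device: str, cuda_available: bool) -> str:
    """Choose an inference dtype, falling back to FP32 on CPU.

    Hypothetical sketch of whispering's _set_dtype logic; dtypes are
    plain strings here instead of torch.dtype objects.
    """
    dtype = "float16" if fp16 else "float32"
    if device == "cpu":
        if cuda_available:
            # Informational only: running on CPU is a valid choice.
            logger.info("Performing inference on CPU though CUDA is available")
        if dtype == "float16":
            # FP16 inference is not supported on CPU, so downgrade.
            logger.info("Using FP32 because FP16 is not supported on CPU")
            dtype = "float32"
    return dtype


print(select_dtype(fp16=True, device="cpu", cuda_available=False))   # float32
print(select_dtype(fp16=True, device="cuda", cuda_available=True))   # float16
```

The fallback is silent about failure modes by design: requesting FP16 on CPU is simply corrected rather than raised as an error.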
