VM fails to start on M1 CPU when CPU cores set to more than the host has #1621 #2309
base: master
Conversation
how do i run the linter?
Signed-off-by: notpranavunsw <[email protected]>

ran linter
Signed-off-by: notpranavunsw <[email protected]>
@@ -108,6 +108,10 @@ func Validate(y LimaYAML, warn bool) error {
 		return errors.New("field `cpus` must be set")
 	}
 
+	if *y.CPUs > runtime.NumCPU() {
+		return fmt.Errorf("field `cpus` is set to %d, which is greater than the number of CPUs available (%d)", *y.CPUs, runtime.NumCPU())
+	}
Wondering if the VM driver should automatically decrease the CPUs (with a warning)?
We could (I have no strong opinion either way), but why do it in the driver and not in FillDefault? We set the default memory there based on the actual memory available, so why not implement the cap on the number of CPUs there as well?
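The FillDefault idea above could be sketched roughly like this (a minimal standalone sketch, not Lima's actual code; clampCPUs is a hypothetical helper):

package main

import (
	"fmt"
	"runtime"
)

// clampCPUs caps a requested CPU count at the host's CPU count,
// returning the adjusted value and whether a cap was applied.
// (Hypothetical helper illustrating the FillDefault approach.)
func clampCPUs(requested int) (int, bool) {
	host := runtime.NumCPU()
	if requested > host {
		return host, true
	}
	return requested, false
}

func main() {
	// Request more CPUs than the host has to force the cap.
	requested := runtime.NumCPU() + 4
	cpus, capped := clampCPUs(requested)
	if capped {
		fmt.Printf("field `cpus` reduced from %d to %d (host limit)\n", requested, cpus)
	}
	fmt.Println("cpus equals host count:", cpus == runtime.NumCPU())
}

This mirrors how a default can be derived from host resources, the same way the default memory is picked from the memory available.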
That would be fine as well
I just remembered: we may be executing FillDefault several times (we read the configurations of all instances during startup to look for things like shared network daemons or shared additional disks). So any warnings generated while processing lima.yaml will be displayed multiple times, and we may be displaying warnings for other instances too.
So doing it in the driver might be preferable after all.
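Done in the driver, the warning fires only for the instance actually being started, so the repeated FillDefault passes over other instances stay silent. A minimal sketch (capCPUsForStart and the plain-printf warning are assumptions for illustration, not the actual vz/qemu driver code):

package main

import (
	"fmt"
	"runtime"
)

// capCPUsForStart sketches the driver-side approach: cap the CPU count
// (with a warning) only when this particular instance is started.
func capCPUsForStart(name string, requested int) int {
	if host := runtime.NumCPU(); requested > host {
		fmt.Printf("WARN instance %q: capping cpus from %d to %d (host limit)\n",
			name, requested, host)
		return host
	}
	return requested
}

func main() {
	got := capCPUsForStart("default", runtime.NumCPU()+2)
	fmt.Println("capped to host count:", got == runtime.NumCPU())
}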
Sorry, I have a question. Am I correct in thinking this should be implemented in vz_driver_darwin.go and qemu_driver.go?
Thanks
I'm able to generate a warning, but I'm not sure how to edit the YAML config to permanently set the number of CPUs. Setting y.CPUs doesn't edit the YAML when viewed using limactl edit.
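One way to see why: the parsed struct is an in-memory copy of the file, and limactl edit re-reads what is on disk, so persisting a change means explicitly writing the file back. A stdlib-only sketch of that distinction (the hand-rolled "cpus: N" line stands in for real YAML marshaling; limaConfig is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// limaConfig is an illustrative stand-in for the parsed lima.yaml struct.
type limaConfig struct {
	CPUs int
}

func main() {
	// Write a minimal stand-in for lima.yaml.
	dir, _ := os.MkdirTemp("", "lima")
	defer os.RemoveAll(dir)
	path := filepath.Join(dir, "lima.yaml")
	os.WriteFile(path, []byte("cpus: 8\n"), 0o644)

	// Mutating the parsed struct changes only the in-memory copy...
	cfg := limaConfig{CPUs: 8}
	cfg.CPUs = 4
	onDisk, _ := os.ReadFile(path)
	fmt.Println("file still says 8:", strings.Contains(string(onDisk), "cpus: 8"))

	// ...persisting requires marshaling and writing the file back.
	os.WriteFile(path, []byte(fmt.Sprintf("cpus: %d\n", cfg.CPUs)), 0o644)
	onDisk, _ = os.ReadFile(path)
	fmt.Println("file now says 4:", strings.Contains(string(onDisk), "cpus: 4"))
}

For a warning-plus-cap at start time, though, no write-back is needed; the capped value only has to reach the driver.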