
How to assign CPU cores and memory limits to a job when running via Docker as backend #102

Open · LittlePawer opened this issue Nov 29, 2022 · 2 comments


@LittlePawer

Dear experts,

I was wondering whether there is a way to manually assign CPU cores and memory limits to a job running via Docker as the backend, so that I can configure the resources for my jobs better.

Many thanks!

@lukasheinrich (Contributor)

Hi @LittlePawer

I think adding something like this to your steps.yml should also work with the recast backend, right?

  stages:
    - name: reana_demo_helloworld_memory_limit
      dependencies: [init]
      scheduler:
        scheduler_type: 'singlestep-stage'
        parameters:
          helloworld: {step: init, output: helloworld}
        step:
          process:
            process_type: 'string-interpolated-cmd'
            cmd: 'python "{helloworld}"'
          environment:
            environment_type: 'docker-encapsulated'
            image: 'python'
            imagetag: '2.7-slim'
            resources:
              - compute_backend: kubernetes
              - kubernetes_memory_limit: '8Gi'
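
For what it's worth, the same limit can also be expressed directly in a reana.yaml serial workflow. The sketch below assumes a REANA version that supports a per-step kubernetes_memory_limit; the image tag and command are placeholders:

  workflow:
    type: serial
    specification:
      steps:
        # docker-encapsulated step with an explicit memory cap;
        # REANA passes this on to the Kubernetes job it creates
        - environment: 'python:2.7-slim'
          kubernetes_memory_limit: '8Gi'
          commands:
            - python helloworld.py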

@LittlePawer (Author)

Hi @lukasheinrich,

Thanks for the reply, but does that help when running via REANA, or will it also help when running locally with the Docker backend?
And how would I configure the CPU resources in that case?
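
As an aside on the local case: the kubernetes_* entries above are hints for the REANA/Kubernetes backend, whereas plain Docker enforces such limits itself when the container is created. Purely as an illustration of the Docker-level knobs (packtivity does not read this file; the service name and image are placeholders), a docker-compose sketch of the same two limits would look like:

  services:
    helloworld:
      image: 'python:2.7-slim'
      mem_limit: 8g    # hard cap on the container's memory
      cpus: 2.0        # at most two CPU cores' worth of CPU time

These correspond to the --memory and --cpus flags of docker run.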
