Project.toml: 2 changes (1 addition, 1 deletion)
@@ -1,6 +1,6 @@
 name = "ClusterManagers"
 uuid = "34f1f09b-3a8b-5176-ab39-66d58a4d544e"
-version = "1.1.0"
+version = "2.0.0"
 
 [deps]
 Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b"
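The bump from 1.1.0 to 2.0.0 is a semver-major release, reflecting the breaking removal of `ElasticManager` below. A minimal sketch, assuming the 2.x series is registered under the same package name, of how a downstream environment could opt into it from the Pkg API:

```julia
# Sketch only: the availability of a registered 2.x release is an assumption
# based on the version bump shown in this diff.
using Pkg

# Request the 2.x series explicitly so the resolver does not hold the
# environment at 1.1.0 because of an old [compat] bound.
Pkg.add(name="ClusterManagers", version="2")
```

Downstream projects that pin `ClusterManagers = "1"` in their `[compat]` section would need to widen that entry before the upgrade is picked up.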
README.md: 29 changes (1 addition, 28 deletions)
@@ -72,34 +72,7 @@ spread across CPU sockets. Default is `BALANCED`.
 
 ### Using `ElasticManager` (dynamically adding workers to a cluster)
 
-The `ElasticManager` is useful in scenarios where we want to dynamically add workers to a cluster.
-It achieves this by listening on a known port on the master. The launched workers connect to this
-port and publish their own host/port information for other workers to connect to.
-
-On the master, you need to instantiate an instance of `ElasticManager`. The constructors defined are:
-
-```julia
-ElasticManager(;addr=IPv4("127.0.0.1"), port=9009, cookie=nothing, topology=:all_to_all, printing_kwargs=())
-ElasticManager(port) = ElasticManager(;port=port)
-ElasticManager(addr, port) = ElasticManager(;addr=addr, port=port)
-ElasticManager(addr, port, cookie) = ElasticManager(;addr=addr, port=port, cookie=cookie)
-```
-
-You can set `addr=:auto` to automatically use the host's private IP address on the local network, which will allow other workers on this network to connect. You can also use `port=0` to let the OS choose a random free port for you (some systems may not support this). Once created, printing the `ElasticManager` object prints the command which you can run on workers to connect them to the master, e.g.:
-
-```julia
-julia> em = ElasticManager(addr=:auto, port=0)
-ElasticManager:
-Active workers : []
-Number of workers to be added : 0
-Terminated workers : []
-Worker connect command :
-/home/user/bin/julia --project=/home/user/myproject/Project.toml -e 'using ClusterManagers; ClusterManagers.elastic_worker("4cOSyaYpgSl6BC0C","127.0.1.1",36275)'
-```
-
-By default, the printed command uses the absolute path to the current Julia executable and activates the same project as the current session. You can change either of these defaults by passing `printing_kwargs=(absolute_exename=false, same_project=false))` to the first form of the `ElasticManager` constructor.
-
-Once workers are connected, you can print the `em` object again to see them added to the list of active workers.
+For `ElasticManager`, please see the [ElasticClusterManager.jl](https://github.com/JuliaParallel/ElasticClusterManager.jl) package.
 
 ### Sun Grid Engine (SGE)
 
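For readers migrating scripts that relied on the removed documentation above, here is a minimal sketch of the equivalent workflow through ElasticClusterManager.jl; it assumes that package exposes the same `ElasticManager` constructor and keyword arguments shown in the deleted text:

```julia
# Sketch only: assumes ElasticClusterManager.jl carries over the ElasticManager
# API (addr/port keywords and the printed worker connect command) that this PR
# removes from ClusterManagers.jl.
using ElasticClusterManager

# On the master: bind to the host's private IP and let the OS pick a free port.
em = ElasticManager(addr=:auto, port=0)

# Printing the manager shows the command to run on each worker so that it can
# connect back to this master, as in the deleted README example.
println(em)
```

Workers would then run the printed connect command, presumably referencing `ElasticClusterManager.elastic_worker` rather than `ClusterManagers.elastic_worker`.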
src/ClusterManagers.jl: 1 change (0 additions, 1 deletion)
@@ -18,6 +18,5 @@ include("scyld.jl")
 include("condor.jl")
 include("slurm.jl")
 include("affinity.jl")
-include("elastic.jl")
 
 end
src/elastic.jl: 156 changes (0 additions, 156 deletions)

This file was deleted.

test/elastic.jl: 25 changes (0 additions, 25 deletions)

This file was deleted.

test/runtests.jl: 4 changes (0 additions, 4 deletions)
@@ -9,8 +9,6 @@ using Distributed: workers, nworkers
 using Distributed: procs, nprocs
 using Distributed: remotecall_fetch, @spawnat
 using Test: @testset, @test, @test_skip
-# ElasticManager:
-using ClusterManagers: ElasticManager
 # Slurm:
 using ClusterManagers: addprocs_slurm, SlurmManager
 # SGE:
@@ -24,8 +22,6 @@ slurm_is_installed() = !isnothing(Sys.which("sbatch"))
 qsub_is_installed() = !isnothing(Sys.which("qsub"))
 
 @testset "ClusterManagers.jl" begin
-include("elastic.jl")
-
 if slurm_is_installed()
 @info "Running the Slurm tests..." Sys.which("sbatch")
 include("slurm.jl")