How to properly use a queue #76
Comments
I saw streams existed as well, but this doesn't do what I expect:

struct Job
{
    int workData;
}

void main(string[] args) @safe
{
    // number of spawned OS threads
    size_t numWorkers = 16;

    import concurrency.thread : stdTaskPool;
    auto taskPool = stdTaskPool(numWorkers);

    Job[] jobs = new Job[8000];
    foreach (i, ref job; jobs)
        job.workData = cast(int)i;

    import concurrency.operations : then, via, on;
    import concurrency.stream : arrayStream;
    import concurrency.syncwait : syncWait;

    jobs
        .arrayStream
        .collect((Job job) shared @safe => performJob(0, job))
        .on(taskPool.getScheduler())
        .syncWait;
}

void performJob(int runnerId, ref Job job) @trusted
{
    import core.thread;
    Thread.sleep(1.msecs);
}

(It doesn't run them in parallel, since this takes …)
There is no need to do manual queuing:

#!/usr/bin/env dub
/+ dub.sdl:
    name "jobs"
    dependency "concurrency" version="*"
    dflags "-dip1000"
+/
import concurrency;
import std.range : iota;
import std.algorithm : map;

struct Job
{
    int workData;
}

void main(string[] args) @safe
{
    import concurrency.thread : stdTaskPool;
    import concurrency.operations : then, on, whenAll;
    import std.array : array;

    // number of spawned OS threads
    size_t numWorkers = 16;
    auto taskPool = stdTaskPool(numWorkers);
    scope scheduler = taskPool.getScheduler();

    iota(10000)
        .map!(i => just(Job(i)).then(&performJob).on(scheduler))
        .array
        .whenAll
        .syncWait;
}

void performJob(ref Job job) @trusted
{
    import core.thread;
    Thread.sleep(1.msecs);
}
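If the jobs are already materialized in a Job[] (as in the stream snippet further up) rather than generated from iota, the same pattern applies. The following is only a sketch that recombines the two examples above; it uses no calls beyond those already shown, and the dub package name "jobs-array" is arbitrary.

#!/usr/bin/env dub
/+ dub.sdl:
    name "jobs-array"
    dependency "concurrency" version="*"
    dflags "-dip1000"
+/
import concurrency;
import std.algorithm : map;

struct Job
{
    int workData;
}

void main(string[] args) @safe
{
    import concurrency.thread : stdTaskPool;
    import concurrency.operations : then, on, whenAll;
    import std.array : array;

    // number of spawned OS threads
    size_t numWorkers = 16;
    auto taskPool = stdTaskPool(numWorkers);
    scope scheduler = taskPool.getScheduler();

    // jobs known ahead of time, as in the original example
    Job[] jobs = new Job[8000];
    foreach (i, ref job; jobs)
        job.workData = cast(int)i;

    // one sender per existing Job, run on the task pool, joined with whenAll
    jobs
        .map!(job => just(job).then(&performJob).on(scheduler))
        .array
        .whenAll
        .syncWait;
}

void performJob(ref Job job) @trusted
{
    import core.thread;
    Thread.sleep(1.msecs);
}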
Yes. Also, this is a multi-producer-single-consumer queue, and you were using it the opposite way around (single-producer-multi-consumer). In fact, the …
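To make the two roles concrete: with a multi-producer-single-consumer queue, any number of threads may push concurrently, but only one designated thread may ever pop. Below is a rough, hypothetical sketch of that discipline using nothing but a mutex-protected array; it is deliberately not the library's MPSCQueue (whose module path and signatures aren't shown in this thread), just an illustration of which side is allowed to be concurrent.

import core.sync.mutex : Mutex;
import core.thread : Thread;

// Hypothetical locked queue, for illustration only; NOT the library's MPSCQueue.
__gshared Mutex mtx;
__gshared int[] items;

enum numProducers = 4;
enum perProducer  = 1000;

void producerThread()
{
    // The "multi-producer" side: any number of threads may push concurrently.
    foreach (i; 0 .. perProducer)
    {
        mtx.lock();
        scope (exit) mtx.unlock();
        items ~= i;
    }
}

void consumerThread()
{
    // The "single-consumer" side: exactly one thread is allowed to pop.
    int popped;
    while (popped < numProducers * perProducer)
    {
        mtx.lock();
        scope (exit) mtx.unlock();
        if (items.length)
        {
            items = items[1 .. $];
            ++popped;
        }
    }
}

void main()
{
    mtx = new Mutex();

    auto consumer = new Thread(&consumerThread).start();

    Thread[numProducers] producers;
    foreach (ref t; producers)
        t = new Thread(&producerThread).start();

    foreach (t; producers)
        t.join();
    consumer.join();
}

Using such a queue the other way around (one pusher, many poppers) breaks exactly the assumption the single-consumer side relies on, which matches the lock-ups described in the question.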
This is because you are …
Ah, that explains it. I was wondering what else MPSCQueue could stand for, and would otherwise have just cast away the shared, knowing that it would probably break because of this. (A ddoc comment would be nice here.)
I have also very roughly benchmarked the memory and time overhead of concurrency here for your example: nice and constant per-job overhead (so RAM and time grow linearly with the number of jobs when queuing them all at once). It would now also be quite interesting to see whether it's possible to queue the tasks dynamically (e.g. calling …)
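One simple (if blunt) way to keep memory bounded under the same pattern, short of truly dynamic queuing, is to submit the work in fixed-size batches. This is only a sketch assuming the same API as the example above; batchSize and the package name "jobs-batched" are arbitrary illustrative choices.

#!/usr/bin/env dub
/+ dub.sdl:
    name "jobs-batched"
    dependency "concurrency" version="*"
    dflags "-dip1000"
+/
import concurrency;
import std.range : iota, chunks;
import std.algorithm : map;

struct Job
{
    int workData;
}

void main(string[] args) @safe
{
    import concurrency.thread : stdTaskPool;
    import concurrency.operations : then, on, whenAll;
    import std.array : array;

    // number of spawned OS threads
    size_t numWorkers = 16;
    auto taskPool = stdTaskPool(numWorkers);
    scope scheduler = taskPool.getScheduler();

    // Submit the senders one batch at a time so the number of in-flight
    // senders (and hence RAM) stays bounded by batchSize.
    enum batchSize = 1000; // arbitrary illustrative value
    foreach (batch; iota(10_000).chunks(batchSize))
    {
        batch
            .map!(i => just(Job(i)).then(&performJob).on(scheduler))
            .array
            .whenAll
            .syncWait;
    }
}

void performJob(ref Job job) @trusted
{
    import core.thread;
    Thread.sleep(1.msecs);
}

The trade-off is a join point between batches, so the pool can briefly sit idle while the last senders of a batch finish.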
You can switch to using a …
I'm trying to understand the API and am writing an example that processes a list of jobs known ahead of time, but I've got a few questions:

MPSCQueue.pop does not seem to be thread-safe? It isn't marked shared, but the API neither forces me to make the queue shared nor forbids this kind of usage. In this code, when you uncomment the version at the top, you can see that if you increase the number of jobs enough / let it run long enough, it eventually deadlocks / races into a lock. (Using writeln adds some weird internal locks, causing a non-uniform load distribution, which doesn't seem to deadlock.)

Q: What type should I rather use for a queue that I can call pop on in parallel? (Use case: one thread generates jobs, and lots of worker threads split up the work, one item at a time.)

Q2: Calling whenAll with the same task in multiple arguments, and having it run the task multiple times, feels quite foreign to me, but I saw this usage in the unittests, so I adapted it here. Is this really intended behavior?

Q3: How do I properly pass the queue into my job processor? (I would expect something like making the queue argument shared ref in the callback, assuming the queue is a struct that can't be copied, but at least with the class version this doesn't compile.)

Overarching Q: should I even be writing my code like this in the first place? Is there a better way?