std.parallelism.task_pool
- multiple declarations
- Function taskPool
- Class TaskPool
Function taskPool
Returns a lazily initialized global instantiation of TaskPool.
This function can safely be called concurrently from multiple non-worker
threads. The worker threads in this pool are daemon threads, meaning that it
is not necessary to call TaskPool.stop or TaskPool.finish before
terminating the main thread.
Prototype
TaskPool taskPool() @property @trusted;
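As a minimal sketch of typical usage (the array contents and the work unit size of 100 are illustrative choices, not requirements), the global pool can drive a parallel foreach:

```d
import std.math : sqrt;
import std.parallelism;

void main()
{
    auto nums = new double[10_000];
    foreach (i, ref x; nums)
        x = i;

    // taskPool lazily creates the global pool on first use; parallel
    // splits the array into work units of 100 consecutive elements,
    // each processed by a worker thread (or by the submitting thread).
    foreach (ref x; taskPool.parallel(nums, 100))
    {
        x = sqrt(x);
    }
}
```

Because the global pool's threads are daemon threads, the program can simply return from main without shutting the pool down.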
Class TaskPool
This class encapsulates a task queue and a set of worker threads. Its purpose
is to efficiently map a large number of Tasks onto a smaller number of
threads. A task queue is a FIFO queue of Task objects that have been
submitted to the TaskPool and are awaiting execution. A worker thread is a
thread that executes the Task at the front of the queue when one is
available and sleeps when the queue is empty.
This class should usually be used via the global instantiation
available via the std.parallelism.taskPool property.
Occasionally it is useful to explicitly instantiate a TaskPool:
1. When you want TaskPool instances with multiple priorities, for example
a low priority pool and a high priority pool (a sketch of this follows the list).
2. When the threads in the global task pool are waiting on a synchronization
primitive (for example a mutex), and you want to parallelize the code that
needs to run before these threads can be resumed.
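As a rough sketch of case 1, two pools can be created explicitly and given different priorities; the worker counts of 2 and the use of Thread.PRIORITY_MIN and Thread.PRIORITY_MAX are illustrative assumptions:

```d
import core.thread : Thread;
import std.parallelism;

void main()
{
    // A low-priority pool for background work and a high-priority
    // pool for latency-sensitive work.
    auto lowPool = new TaskPool(2);
    lowPool.priority = Thread.PRIORITY_MIN;

    auto highPool = new TaskPool(2);
    highPool.priority = Thread.PRIORITY_MAX;

    scope (exit)
    {
        // Explicitly created pools are not daemon pools by default,
        // so shut them down before main returns.
        lowPool.finish(true);
        highPool.finish(true);
    }

    // ... submit work to whichever pool is appropriate ...
}
```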
Inherits from
- Object (base class)
Constructors
Name | Description |
---|---|
this | Default constructor that initializes a TaskPool with totalCPUs - 1 worker threads. The minus 1 is included because the main thread will also be available to do work. |
this | Allows for a custom number of worker threads. |
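A small sketch of both constructors (the worker count of 2 is an arbitrary example):

```d
import std.parallelism;

void main()
{
    // Default constructor: totalCPUs - 1 workers, because the thread
    // that submits work also participates in executing it.
    auto pool = new TaskPool();

    // Custom worker count.
    auto smallPool = new TaskPool(2);

    // Explicitly created pools must be shut down (or marked daemon)
    // before the program exits.
    pool.finish(true);
    smallPool.finish(true);
}
```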
Properties
Name | Type | Description |
---|---|---|
isDaemon [get, set] | bool | These properties control whether the worker threads are daemon threads. A daemon thread is automatically terminated when all non-daemon threads have terminated. A non-daemon thread will prevent a program from terminating as long as it has not terminated. |
priority [get, set] | int | These functions allow getting and setting the OS scheduling priority of the worker threads in this TaskPool. They forward to core.thread.Thread.priority, so a given priority value here means the same thing as an identical priority value in core.thread. |
size [get] | ulong | Returns the number of worker threads in the pool. |
workerIndex [get] | ulong | Gets the index of the current thread relative to this TaskPool. Any thread not in this pool will receive an index of 0. The worker threads in this pool receive unique indices of 1 through this.size (see the sketch after this table). |
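workerIndex is handy for maintaining one slot per thread without synchronization, as in this sketch (the data, the work unit size of 1_000, and the summation are illustrative):

```d
import std.parallelism;

void main()
{
    auto data = new int[100_000];
    foreach (i, ref x; data)
        x = cast(int) i;

    // One slot per worker thread (indices 1 .. size) plus one slot at
    // index 0, shared by threads outside the pool, including the main
    // thread when it helps execute the loop.
    auto partialSums = new long[taskPool.size + 1];

    foreach (x; taskPool.parallel(data, 1_000))
    {
        // No locking needed: each thread only touches its own slot.
        partialSums[taskPool.workerIndex] += x;
    }

    long total = 0;
    foreach (s; partialSums)
        total += s;
}
```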
Methods
Name | Description |
---|---|
asyncBuf | Given a source range that is expensive to iterate over, returns an input range that asynchronously buffers the contents of source into a buffer of bufSize elements in a worker thread, while making previously buffered elements from a second buffer, also of size bufSize, available via the range interface of the returned object. The returned range has a length iff hasLength!S. asyncBuf is useful, for example, when performing expensive operations on the elements of ranges that represent data on a disk or network (a usage sketch follows this table). |
asyncBuf | Given a callable object next that writes to a user-provided buffer and a second callable object empty that determines whether more data is available to write via next, returns an input range that asynchronously calls next with a set of nBuffers buffers and makes the results available in the order they were obtained via the input range interface of the returned object. Similarly to the input range overload of asyncBuf, the first half of the buffers are made available via the range interface while the second half are filled and vice-versa. |
finish | Signals worker threads to terminate when the queue becomes empty. |
parallel | Implements a parallel foreach loop over a range. This works by implicitly creating and submitting one Task to the TaskPool for each worker thread. A work unit is a set of consecutive elements of the range to be processed by a worker thread between communication with any other thread. The number of elements processed per work unit is controlled by the workUnitSize parameter. Smaller work units provide better load balancing, but larger work units avoid the overhead of communicating with other threads frequently to fetch the next work unit. Large work units also avoid false sharing in cases where the range is being modified. The less time a single iteration of the loop takes, the larger workUnitSize should be. For very expensive loop bodies, workUnitSize should be 1. An overload that chooses a default work unit size is also available. |
put | Puts a Task object on the back of the task queue. The Task object may be passed by pointer or reference (a usage sketch follows this table). |
stop | Signals all worker threads to terminate as soon as they are finished with their current Task, or immediately if they are not executing a Task. Tasks that were in queue will not be executed unless a call to Task.workForce, Task.yieldForce or Task.spinForce causes them to be executed. |
workerLocalStorage | Creates an instance of worker-local storage, initialized with a given value. The value is lazy so that you can, for example, easily create one instance of a class for each worker. For a usage example, see the WorkerLocalStorage struct. |
factory | Create an instance of the class specified by the fully qualified name classname. The class must either have no constructors or have a default constructor. |
opCmp | Compare with another Object obj. |
opEquals | Returns !=0 if this object has the same contents as obj. |
toHash | Compute hash function for Object. |
toString | Convert Object to a human readable string. |
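To illustrate put, the sketch below creates a Task with std.parallelism.task and submits it to the global pool; computeSomething and its argument are made up for the example:

```d
import std.parallelism;

int computeSomething(int x)
{
    return x * x;
}

void main()
{
    // task!fn allocates the Task on the GC heap and returns a pointer,
    // which put accepts.
    auto t = task!computeSomething(21);
    taskPool.put(t);

    // Do other work here, then block until the task has completed and
    // fetch its return value.
    int result = t.yieldForce;
    assert(result == 441);
}
```

And a sketch of the range overload of asyncBuf, reading a hypothetical file input.txt; each line is duplicated because byLine recycles its internal buffer while asyncBuf reads ahead:

```d
import std.algorithm : map;
import std.parallelism;
import std.stdio;

void main()
{
    auto lines = File("input.txt")
        .byLine(KeepTerminator.yes)
        .map!(a => a.idup);

    // A worker thread fills one buffer of 100 lines while this loop
    // consumes the previously filled buffer.
    foreach (line; taskPool.asyncBuf(lines, 100))
    {
        write(line);
    }
}
```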
Inner structs
Name | Description |
---|---|
WorkerLocalStorage | Struct for creating worker-local storage. Worker-local storage is thread-local storage that exists only for worker threads in a given TaskPool plus a single thread outside the pool. It is allocated on the garbage collected heap in a way that avoids false sharing, and doesn't necessarily have global scope within any thread. It can be accessed from any worker thread in the TaskPool that created it, and one thread outside this TaskPool. All threads outside the pool that created a given instance of worker-local storage share a single slot (a usage sketch follows this table). |
WorkerLocalStorageRange | Range primitives for worker-local storage. The purpose of this is to access results produced by each worker thread from a single thread once you are no longer using the worker-local storage from multiple threads. Do not use this struct in the parallel portion of your algorithm. |
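A sketch of worker-local storage in use, combining workerLocalStorage, get and toRange (the summation is just an illustrative workload):

```d
import std.parallelism;
import std.range : iota;

void main()
{
    // One accumulator per worker thread, plus one slot shared by
    // threads outside the pool, each initialized to 0.0.
    auto sums = taskPool.workerLocalStorage(0.0);

    foreach (i; taskPool.parallel(iota(1_000_000)))
    {
        // get accesses the calling thread's own slot, so no locking
        // is needed here.
        sums.get += 1.0 / (i + 1);
    }

    // Once the parallel part is done, collect the per-thread results
    // from a single thread via a WorkerLocalStorageRange.
    double total = 0;
    foreach (partial; sums.toRange)
        total += partial;
}
```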
Templates
Name | Description |
---|---|
amap | Eager parallel map: applies the given function to each element of a random access range in parallel and returns the results as a new array (see the sketch below). |
map | A semi-lazy parallel map that can be used for pipelining: results are computed in parallel into buffers and made available via an input range interface. |
reduce | Parallel reduce over a random access range: each worker reduces a portion of the range and the partial results are then combined. |
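Finally, a brief sketch of amap and reduce on the global pool; the input data is an arbitrary example:

```d
import std.algorithm : map;
import std.array : array;
import std.math : sqrt;
import std.parallelism;
import std.range : iota;

void main()
{
    auto nums = iota(1, 1_000_000).map!(i => cast(double) i).array;

    // amap: eager parallel map; returns a new array of results.
    double[] roots = taskPool.amap!sqrt(nums);

    // reduce: each worker reduces a chunk of the range and the
    // partial results are then combined.
    double total = taskPool.reduce!"a + b"(nums);
}
```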