In the example configuration, the main scheduler SM1 in execution stream ES1 has one associated private pool, PM11, and SM2 in ES2 has two private pools, PM21 and PM22. PS is shared between ES1 and ES2, so both SM1 and SM2 can access it to push or pop work units. PE denotes an event pool, meant for lightweight notification; it is periodically checked by a scheduler to handle the arrival of events (e.g., messages from the network). S1 and S2 in PM11 are stacked schedulers that will be executed by the main scheduler SM1.
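A configuration along these lines can be assembled with the Argobots C API. The sketch below is illustrative only: it uses a simplified layout (one private pool per secondary ES plus one shared pool), not the exact PM11/PM21/PM22/PS arrangement described above.

    #include <abt.h>

    int main(int argc, char **argv) {
        ABT_init(argc, argv);

        /* A shared pool (like PS): multi-producer/multi-consumer access. */
        ABT_pool shared_pool;
        ABT_pool_create_basic(ABT_POOL_FIFO, ABT_POOL_ACCESS_MPMC, ABT_TRUE,
                              &shared_pool);

        /* Two secondary ESs, each with a private pool plus the shared pool.
         * The basic scheduler of each ES pops work from both pools. */
        ABT_xstream xstreams[2];
        ABT_pool    priv_pools[2];
        for (int i = 0; i < 2; i++) {
            ABT_pool_create_basic(ABT_POOL_FIFO, ABT_POOL_ACCESS_MPMC, ABT_TRUE,
                                  &priv_pools[i]);
            ABT_pool pools[2] = { priv_pools[i], shared_pool };
            ABT_xstream_create_basic(ABT_SCHED_DEFAULT, 2, pools,
                                     ABT_SCHED_CONFIG_NULL, &xstreams[i]);
        }

        /* ... create work units here, then shut down ... */
        for (int i = 0; i < 2; i++) {
            ABT_xstream_join(xstreams[i]);
            ABT_xstream_free(&xstreams[i]);
        }
        ABT_finalize();
        return 0;
    }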
An ES maps to one OS thread, is explicitly created by the user, and executes independently of other ESs. A ULT has its own stack region, whereas a tasklet borrows the stack of its host ES's scheduler. When work units are created, they are inserted into a specific pool in a ready state. Thus, they will be scheduled by the scheduler associated with the target pool and executed in the ES associated with that scheduler. If the pool is shared by more than one scheduler and the schedulers run in different ESs, the work units may be scheduled in any of those ESs.
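As a small illustration of pushing work units into a pool, the following sketch creates one ULT and one tasklet targeting the same pool; the function and variable names are invented for the example, and the pool is assumed to have been created as shown earlier.

    /* Assumes <abt.h> is included and a pool (e.g., shared_pool) exists. */
    static void hello_ult(void *arg)  { /* runs on its own stack; may block or yield */ }
    static void hello_task(void *arg) { /* runs to completion on the scheduler's stack */ }

    void create_work(ABT_pool pool) {
        ABT_thread ult;
        /* ULT: gets its own stack and is inserted into `pool` in a ready state. */
        ABT_thread_create(pool, hello_ult, NULL, ABT_THREAD_ATTR_NULL, &ult);

        /* Tasklet: no private stack; NULL means we keep no handle (unnamed tasklet). */
        ABT_task_create(pool, hello_task, NULL, NULL);

        ABT_thread_join(ult);
        ABT_thread_free(&ult);
    }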
The yield_to operation yields control to a specific ULT instead of to the scheduler. Yield_to is cheaper than yield because it bypasses the scheduler and eliminates the overhead of one context switch. Yield_to can be used only among ULTs associated with the same ES.
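In the Argobots C API these correspond to ABT_thread_yield() and ABT_thread_yield_to(). The sketch below assumes two ULTs on the same ES and a hand-picked handle name (partner) passed in as the argument.

    /* Inside a ULT running on the same ES as `partner`. */
    void ping(void *arg) {
        ABT_thread partner = *(ABT_thread *)arg;

        /* Go back through the scheduler: it decides what runs next. */
        ABT_thread_yield();

        /* Hand control directly to `partner`, skipping the scheduler
         * and saving one context switch. Both ULTs must share the same ES. */
        ABT_thread_yield_to(partner);
    }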
Mutex, condition variable, future, and barrier operations are also supported, but only for ULTs, since tasklets must run to completion without blocking.
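As a hedged sketch (names and structure are illustrative), a ULT-level mutex and an eventual (future-like object) look as follows; these primitives suspend only the calling ULT, not the underlying OS thread.

    /* Mutex protecting a shared counter between ULTs.
     * Created elsewhere with ABT_mutex_create(&mtx). */
    ABT_mutex mtx;
    int counter = 0;

    void worker(void *arg) {
        ABT_mutex_lock(mtx);     /* suspends this ULT only; the ES keeps scheduling others */
        counter++;
        ABT_mutex_unlock(mtx);
    }

    /* Eventual: one ULT waits for a value produced by another.
     * Created elsewhere with ABT_eventual_create(sizeof(int), &ev). */
    void consumer(void *arg) {
        ABT_eventual ev = *(ABT_eventual *)arg;
        int *result;
        ABT_eventual_wait(ev, (void **)&result);   /* suspend until the value is set */
    }

    void producer(void *arg) {
        ABT_eventual ev = *(ABT_eventual *)arg;
        int value = 42;
        ABT_eventual_set(ev, &value, sizeof(value));
    }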
Each ES is mapped to a Pthread and can be bound to a hardware processing element (e.g., a CPU core or hardware thread). Context switching between ULTs can be achieved through various methods, such as ucontext, setjmp/longjmp with sigaltstack [21], or the Boost library's fcontext [22]. A user context includes CPU registers, a stack pointer, and an instruction pointer. Each ULT carries a ULT context that contains a user context, a stack, the information for the function that the ULT will execute, and its argument. A tasklet, by contrast, holds only a function pointer, an argument, and some bookkeeping information, such as an associated pool or ES; tasklets are executed on the scheduler's stack space.
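A minimal sketch of such a ULT context using POSIX ucontext (one of the methods listed above) is shown below; the struct layout and names are illustrative, not Argobots' internal representation.

    #include <stdlib.h>
    #include <ucontext.h>

    #define ULT_STACK_SIZE (64 * 1024)

    /* Illustrative ULT context: user context + stack + function + argument. */
    typedef struct {
        ucontext_t uctx;          /* CPU registers, stack pointer, instruction pointer */
        void      *stack;         /* private stack region (a tasklet would omit this) */
        void     (*fn)(void *);   /* function the ULT will execute */
        void      *arg;           /* its argument */
    } ult_ctx;

    static ucontext_t sched_ctx;  /* the scheduler's own context */
    static ult_ctx   *g_current;  /* ULT being switched to (single-scheduler sketch) */

    static void ult_trampoline(void) {
        g_current->fn(g_current->arg);
        /* Falling off the end returns control to sched_ctx via uc_link. */
    }

    static ult_ctx *ult_create(void (*fn)(void *), void *arg) {
        ult_ctx *u = malloc(sizeof(*u));
        u->stack = malloc(ULT_STACK_SIZE);
        u->fn    = fn;
        u->arg   = arg;
        getcontext(&u->uctx);
        u->uctx.uc_stack.ss_sp   = u->stack;
        u->uctx.uc_stack.ss_size = ULT_STACK_SIZE;
        u->uctx.uc_link          = &sched_ctx;   /* where to go when the ULT finishes */
        makecontext(&u->uctx, ult_trampoline, 0);
        return u;
    }

    /* Context switch from the scheduler into the ULT. */
    static void ult_run(ult_ctx *u) {
        g_current = u;
        swapcontext(&sched_ctx, &u->uctx);   /* save scheduler context, jump into ULT */
    }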
Examples include dynamically provisioned, compute-node-funded services [41], in situ analysis and coupling services [42], and distributed access to on-node storage devices. The key concerns for such services are programmability (i.e., ensuring that the service itself is easy to debug and maintain), performance for concurrent workloads, and minimal interference with colocated applications.
One approach is to delegate blocking system calls to a separate Argobots pool, as shown in Figure 12. The calling ULT is suspended while the I/O operation is in progress, thereby allowing other service threads to make progress until the I/O operation completes. If the I/O resource provides a native asynchronous API (such as the Mercury RPC library [44]), then one need not delegate operations to a dedicated pool; the resource can use its normal completion notification mechanism to signal eventuals.
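A hedged sketch of the delegation pattern: the service ULT pushes the blocking call into a dedicated I/O pool as a new ULT and suspends on an eventual until that ULT completes it. The pool argument, descriptor struct, and function names are invented for the example.

    #include <abt.h>
    #include <unistd.h>

    /* Work descriptor handed to the I/O pool (illustrative). */
    typedef struct {
        int          fd;
        void        *buf;
        size_t       len;
        ABT_eventual done;       /* signaled when the blocking call finishes */
    } io_req;

    /* Runs as a ULT scheduled on the dedicated I/O ES/pool. */
    static void io_ult(void *arg) {
        io_req *req = arg;
        ssize_t n = read(req->fd, req->buf, req->len);   /* blocking system call */
        ABT_eventual_set(req->done, &n, sizeof(n));      /* wake the waiting service ULT */
    }

    /* Called from a service ULT; only this ULT is suspended, not its ES. */
    static ssize_t delegated_read(ABT_pool io_pool, int fd, void *buf, size_t len) {
        io_req req = { fd, buf, len, ABT_EVENTUAL_NULL };
        ABT_eventual_create(sizeof(ssize_t), &req.done);

        ABT_thread_create(io_pool, io_ult, &req, ABT_THREAD_ATTR_NULL, NULL);

        ssize_t *result;
        ABT_eventual_wait(req.done, (void **)&result);   /* suspend until io_ult sets it */
        ssize_t n = *result;
        ABT_eventual_free(&req.done);
        return n;
    }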
For example, the ES that drives the dedicated pool can use the epoll() system call to block while it is idle, and the pool can write() to an eventfd() file descriptor to notify it when new work units are added.
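The mechanism can be sketched in plain Linux calls, independent of any Argobots specifics: the pool-side producer writes to the eventfd, and the idle scheduler blocks in epoll_wait() until that write arrives.

    #include <stdint.h>
    #include <sys/epoll.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    /* Producer side (pool): called whenever a new work unit is pushed. */
    static void notify_work_added(int efd) {
        uint64_t one = 1;
        write(efd, &one, sizeof(one));           /* increments the eventfd counter */
    }

    /* Consumer side (idle scheduler): block until a notification arrives. */
    static void wait_for_work(int epfd, int efd) {
        struct epoll_event ev;
        epoll_wait(epfd, &ev, 1, -1);            /* sleeps instead of busy-polling */
        uint64_t count;
        read(efd, &count, sizeof(count));        /* drain the counter before re-arming */
    }

    /* Setup: one eventfd registered with one epoll instance. */
    static int setup(int *efd_out) {
        int efd  = eventfd(0, 0);
        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = efd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev);
        *efd_out = efd;
        return epfd;
    }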
In practice, the libev [45] event loop and its asynchronous event watchers can be used to abstract this functionality for greater portability.
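A hedged sketch of the same notification pattern using libev's ev_async watcher; the callback and function names are illustrative, not taken from any particular service implementation.

    #include <ev.h>

    static ev_async work_notifier;

    /* Invoked in the event loop's thread after ev_async_send() is called. */
    static void on_work_added(struct ev_loop *loop, ev_async *w, int revents) {
        /* Pop and dispatch the newly added work units here. */
    }

    /* Scheduler side: run the loop; it sleeps until a watcher fires. */
    static void run_event_loop(struct ev_loop *loop) {
        ev_async_init(&work_notifier, on_work_added);
        ev_async_start(loop, &work_notifier);
        ev_run(loop, 0);
    }

    /* Pool side: wake the loop from any thread when work is pushed. */
    static void notify(struct ev_loop *loop) {
        ev_async_send(loop, &work_notifier);
    }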