Practical Algorithms for Selection on Coarse-Grained Parallel Computers

Abstract. In this paper, we consider the problem of selection on coarse-grained distributed memory parallel computers. We discuss several deterministic and randomized algorithms for parallel selection. We also consider several algorithms for load balancing needed to keep a balanced distribution of data across processors during the execution of the selection algorithms. We have carried out detailed implementations of all the algorithms discussed on the CM-5 and report on the experimental results. The results clearly demonstrate the role of randomization in reducing communication overhead.

... better in practice than its deterministic counterpart due to the low constant associated with the algorithm.
Parallel selection algorithms are useful in such practical applications as dynamic distribution
of multidimensional data sets, parallel graph partitioning and parallel construction of multidimensional
binary search trees. Many parallel algorithms for selection have been designed for the PRAM
model [2, 3, 4, 9, 14] and for various network models including trees, meshes, hypercubes and re-configurable
architectures [6, 7, 13, 16, 22]. More recently, Bader et al. [5] implemented a parallel
deterministic selection algorithm on several distributed memory machines, including the CM-5, IBM
SP-2 and Intel Paragon. In this paper, we consider and evaluate parallel selection algorithms
for coarse-grained distributed memory parallel computers. A coarse-grained parallel computer consists
of several relatively powerful processors connected by an interconnection network. Most of
the commercially available parallel computers belong to this category. Examples of such machines
include the CM-5, IBM SP-1 and SP-2, nCUBE 2, Intel Paragon and Cray T3D.
The rest of the paper is organized as follows: In Section 2, we describe our model of parallel
computation and outline some primitives used by the algorithms. In Section 3, we present two deterministic
and two randomized algorithms for parallel selection. Selection algorithms are iterative
and work by reducing the number of elements to consider from iteration to iteration. Since we
cannot guarantee that the same number of elements is removed on every processor, this leads to load
imbalance. In Section 4, we present several algorithms to perform such load balancing. Each of
the load balancing algorithms can be used by any selection algorithm that requires load balancing.
In Section 5, we report and analyze the results we have obtained on the CM-5 by detailed implementation
of the selection and load balancing algorithms presented. In Section 6, we analyze the
selection algorithms for meshes and hypercubes. Section 7 discusses parallel weighted selection.
We conclude the paper in Section 8.
2 Preliminaries
2.1 Model of Parallel Computation
We model a coarse-grained parallel machine as follows: A coarse-grained machine consists of several
relatively powerful processors connected by an interconnection network. Rather than making specific
assumptions about the underlying network, we assume a two-level model of computation. The
two-level model assumes a fixed cost for an off-processor access, independent of the distance between
the communicating processors. Communication between processors has a start-up overhead of τ,
while the data transfer rate is 1/μ. For our complexity analysis we assume that τ and μ are constant
and independent of the link congestion and distance between two processors. With new techniques,
such as wormhole routing and randomized routing, the distance between communicating processors
seems to be less of a determining factor in the amount of time needed to complete the communication.
Furthermore, the effect of link contention is eased by the presence of virtual channels and
the fact that link bandwidth is much higher than the bandwidth of the node interface. This permits us
to use the two-level model and view the underlying interconnection network as a virtual crossbar
network connecting the processors. These assumptions closely model the behavior of the CM-5 on
which our experimental results are presented. A discussion on other architectures is presented in
Section 6.
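As a concrete illustration of this cost model, the following sketch computes the time for a single message and for a tree broadcast. The numeric constants are made-up placeholders for exposition, not measured CM-5 values.

```python
import math

# Illustrative cost functions for the two-level model described above.
TAU = 50.0   # start-up overhead per message (assumed units)
MU = 0.1     # per-element transfer cost (1/MU is the transfer rate)

def message_cost(m: int) -> float:
    """Cost of one point-to-point message of m elements: tau + mu*m."""
    return TAU + MU * m

def broadcast_cost(m: int, p: int) -> float:
    """A tree broadcast of m elements over p processors: O((tau + mu*m) log p)."""
    return message_cost(m) * math.ceil(math.log2(p))

print(message_cost(1000))    # tau + mu*m
print(broadcast_cost(1, 8))  # (tau + mu) * log2(8)
```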
2.2 Parallel Primitives
In the following, we describe some important parallel primitives that are repeatedly used in our
algorithms and implementations. We state the running time required for each of these primitives
under our model of parallel computation. The analysis of the run times for the primitives described
is fairly simple and is omitted in the interest of brevity. The interested reader is referred to [15].
In what follows, p refers to the number of processors.
1. Broadcast
In a Broadcast operation, one processor has an element of data to be broadcast to all other
processors. This operation can be performed in O((τ + μ) log p) time.
2. Combine
Given an element of data on each processor and a binary associative and commutative operation,
the Combine operation computes the result of combining the elements stored on all
the processors using the operation and stores the result on every processor. This operation
can also be performed in O((τ + μ) log p) time.
3. Parallel Prefix
Suppose that x_0, x_1, ..., x_{p−1} are p data elements, with processor P_i containing x_i.
Let ⊗ be a binary associative operation. The Parallel Prefix operation stores the value of
x_0 ⊗ x_1 ⊗ ... ⊗ x_i on processor P_i. This operation can be performed in O((τ + μ) log p) time.
4. Gather
Given an element of data on each processor, the Gather operation collects all the data and
stores it in one of the processors. This can be accomplished in O(τ log p + μp) time.
5. Global Concatenate
This is the same as Gather except that the collected data should be stored on all the processors.
This operation can also be performed in O(τ log p + μp) time.
6. Transportation Primitive
The transportation primitive performs many-to-many personalized communication with possibly
high variance in message size. If the total length of the messages being sent out or
received at any processor is bounded by t, the time taken for the communication is 2μt (+
lower order terms) when t ≥ O(p²). If the outgoing and incoming traffic bounds are
r and c instead, the communication takes time proportional to μ(r + c) (+ lower order terms) when either bound is sufficiently large.
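The semantics of these primitives can be illustrated by a small sequential simulation, with one list entry per processor. The names and structure below are ours, purely for exposition; on a real machine each is a collective communication with the stated logarithmic cost.

```python
import operator
from functools import reduce
from itertools import accumulate

def combine(values, op):
    """Combine: every processor ends up with the reduction of all values."""
    result = reduce(op, values)
    return [result] * len(values)

def parallel_prefix(values, op):
    """Parallel Prefix: processor P_i receives x_0 op x_1 op ... op x_i."""
    return list(accumulate(values, op))

def gather(values):
    """Gather: all the data is collected on one processor (P_0 here)."""
    return [list(values)] + [None] * (len(values) - 1)

print(combine([3, 1, 4, 1], operator.add))          # [9, 9, 9, 9]
print(parallel_prefix([3, 1, 4, 1], operator.add))  # [3, 4, 8, 9]
print(gather([3, 1, 4, 1])[0])                      # [3, 1, 4, 1]
```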
3 Parallel Algorithms for Selection
Parallel algorithms for selection are also iterative and work by reducing the number of elements
to be considered from iteration to iteration. The elements are distributed across processors and
each iteration is performed in parallel by all the processors. Let n be the number of elements
and p be the number of processors. To begin with, each processor is given ⌈n/p⌉ elements; we assume this is the case initially.
Otherwise, this can be easily achieved by using one of the load balancing techniques to be described
in Section 4. Let n_i^(j) be the number of elements in processor P_i at the beginning of iteration j.
Algorithm 1 Median of Medians selection algorithm
n — total number of elements
p — total number of processors, labeled 0 to p − 1
L_i — list of elements on processor P_i, where |L_i| = n_i
rank — desired rank among the total elements
On each processor P_i:
while n > C (a constant)
Step 1. Use sequential selection to find the median m_i of list L_i[l, r].
Step 2. Gather the local medians into M on P_0.
Step 3. On P_0: find the median of M, say MoM, and broadcast it to all processors.
Step 4. Partition L_i into ≤ MoM and > MoM to give index_i, the split index.
Step 5. count = Combine(index_i, add) calculates the number of elements ≤ MoM.
Step 6. If (rank ≤ count), keep the part ≤ MoM; else keep the part > MoM and set rank = rank − count.
Step 7. LoadBalance(L_i, n, p).
endwhile
Step 8. Gather the remaining elements into L on P_0.
Step 9. On P_0: perform sequential selection to find the element q of rank in L; broadcast q.
Figure
1: Median of Medians selection algorithm
Let n^(j) = Σ_i n_i^(j) be the total number of elements remaining at the beginning of iteration j, and let k^(j) be the rank of the element we need to identify among these n^(j)
elements. We use this notation to describe all the selection algorithms presented in this paper.
3.1 Median of Medians Algorithm
The median of medians algorithm is a straightforward parallelization of the deterministic sequential
algorithm [8] and has recently been suggested and implemented by Bader et al. [5]. This algorithm
(Figure 1) requires load balancing at the beginning of each iteration.
At the beginning of iteration j, each processor finds the median of its n_i^(j)
elements using the sequential deterministic algorithm. All such medians are gathered on one processor,
which then finds the median of these medians. The median of medians is then estimated
to be the median of all the n^(j) elements. The estimated median is broadcast to all the processors.
Each processor scans through its set of points and splits them into two subsets containing elements
less than or equal to and greater than the estimated median, respectively. A Combine operation
and a comparison with k^(j) determine which of these two subsets is to be discarded and the value of
k^(j+1) needed for the next iteration.
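The iteration just described can be sketched sequentially in Python, modeling the per-processor data as a list of lists. The names are ours, not the paper's, and `median_low` stands in for the linear-time sequential selection.

```python
from statistics import median_low

def mom_iteration(lists, rank):
    """One median-of-medians iteration; returns reduced lists and new rank."""
    local_medians = [median_low(l) for l in lists if l]    # Step 1
    mom = median_low(local_medians)                        # Steps 2-3
    low = [[x for x in l if x <= mom] for l in lists]      # Step 4: split
    count = sum(len(l) for l in low)                       # Step 5: Combine
    if rank <= count:                                      # Step 6
        return low, rank
    return [[x for x in l if x > mom] for l in lists], rank - count

lists, rank = [[9, 2, 7], [4, 8, 1], [6, 3, 5]], 5   # 5th smallest = median
while sum(len(l) for l in lists) > 1:
    lists, rank = mom_iteration(lists, rank)
result = [x for l in lists for x in l][rank - 1]
print(result)   # -> 5
```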
Selecting the median of medians as the estimated median ensures that the estimated median
will have at least a guaranteed fraction of the number of elements below it and at least a guaranteed
fraction of the elements above it, just as in the sequential algorithm. This ensures that the worst
case number of iterations required by the algorithm is O(log n). Let n_max^(j) = max_i n_i^(j). Thus,
finding the local median and splitting the set of points into two subsets based on the estimated
median each requires O(n_max^(j)) time in the j-th iteration. The remaining work is one Gather, one
Broadcast and one Combine operation. Therefore, the worst-case running time of this algorithm is
O(Σ_j (n_max^(j) + τ log p + μp)). Since n_max^(j) can be as large as O(n/p), the running time is
O((n/p) log n + τ log p log n + μp log n).
This algorithm requires the use of load balancing between iterations. With load balancing,
n_max^(j) = O(n^(j)/p). Assuming load balancing and ignoring the cost of load balancing itself, the running
time of the algorithm reduces to O(Σ_j (n^(j)/p + τ log p + μp)) = O(n/p + τ log p log n + μp log n).
3.2 Bucket-Based Algorithm
The bucket-based algorithm [17] attempts to reduce the worst-case running time of the above algorithm
without requiring load balance. This algorithm is shown in Figure 2. First, in order to keep
the algorithm deterministic without a balanced number of elements on each processor, the median
of medians is replaced by the weighted median of medians. As before, local medians are computed
on each processor. However, the estimated median is taken to be the weighted median of the local
medians, with each median weighted by the number of elements on the corresponding processor.
This will again guarantee that a fixed fraction of the elements is dropped from consideration every
iteration. The number of iterations of the algorithm remains O(log n).
The dominant computational work in the median of medians algorithm is the computation of
the local median and scanning through the local elements to split them into two sets based on the
estimated median. In order to reduce this work which is repeated every iteration, the bucket-based
approach preprocesses the local data into O(log p) buckets such that for any 0 ≤ i < j,
every element in bucket i is smaller than any element in bucket j. This can be accomplished by
finding the median of the local elements, splitting them into two buckets based on this median,
and recursively splitting each of these buckets into log p buckets using the same procedure. Thus,
preprocessing the local data into O(log p) buckets requires O((n/p) log log p) time.
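The recursive bucketing step can be sketched as follows. This is a simplification assuming distinct elements and a power-of-two bucket count; the names are ours, and the paper would use linear-time selection where we call `median_low`.

```python
from statistics import median_low

def make_buckets(elems, nbuckets):
    """Split elems into nbuckets buckets so bucket i <= bucket j for i < j."""
    if nbuckets == 1:
        return [list(elems)]
    m = median_low(elems)   # the paper uses linear-time selection here
    lo = [x for x in elems if x <= m]
    hi = [x for x in elems if x > m]
    return make_buckets(lo, nbuckets // 2) + make_buckets(hi, nbuckets // 2)

buckets = make_buckets([7, 2, 9, 4, 1, 8, 3, 6], 4)
print([sorted(b) for b in buckets])   # [[1, 2], [3, 4], [6, 7], [8, 9]]
```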
Bucketing the data simplifies the task of finding the local median and the task of splitting the
local data into two sets based on the estimated median. To find the local median, identify the
bucket containing the median and find the rank of the median in the bucket containing the median
Algorithm 2 Bucket-based selection algorithm
n — total number of elements
p — total number of processors, labeled 0 to p − 1
L_i — list of elements on processor P_i, where |L_i| = n_i
rank — desired rank among the total elements
On each processor P_i:
Step 0. Partition L_i on P_i into log p buckets of equal size such that if r ∈ bucket_j and s ∈ bucket_k with j < k, then r < s.
while n > C (a constant)
Step 1. Find the bucket bkt_k containing the median element using a binary search on the remaining
buckets. This is followed by finding the appropriate rank in bkt_k to find the median m_i. Let N_i be the
number of remaining keys on P_i.
Step 2. Gather the weighted local medians (m_i, N_i) into M on P_0.
Step 3. On P_0: find the weighted median of M, say WM, and broadcast it.
Step 4. Partition L_i into ≤ WM and > WM using the buckets to give index_i, the split index.
Step 5. count = Combine(index_i, add) calculates the number of elements less than WM.
Step 6. If (rank ≤ count), keep the part ≤ WM; else keep the part > WM and set rank = rank − count.
endwhile
Step 7. Gather the remaining elements into L on P_0.
Step 8. On P_0: perform sequential selection to find the element q of rank in L; broadcast q.
Figure
2: Bucket-based selection algorithm
in O(log log p) time using binary search. The local median can then be located in the bucket by the
sequential selection algorithm in O(n/(p log p)) time. The cost of finding the local median thus reduces from
O(n/p) to O(log log p + n/(p log p)). To split the local data into two sets based on the estimated median,
first identify the bucket that should contain the estimated median. Only the elements in this bucket
need to be split. Thus, this operation also requires only O(log log p + n/(p log p)) time.
After preprocessing, the worst-case run time for the selection itself is O(log log p log n +
(n/(p log p)) log n + τ log p log n + μp log n). The preprocessing takes O((n/p) log log p) time. Therefore, the
worst-case run time of the bucket-based approach is O((n/p) log log p + (n/(p log p)) log n +
τ log p log n + μp log n), without any load balancing.
Algorithm 3 Randomized selection algorithm
n — total number of elements
p — total number of processors, labeled 0 to p − 1
L_i — list of elements on processor P_i, where |L_i| = n_i
rank — desired rank among the total elements
On each processor P_i:
while n > C (a constant)
Step 1. Compute a Parallel Prefix of the n_i's.
Step 2. Generate a random number nr (same on all processors) between 0 and n − 1.
Step 3. On P_k containing the element with global position nr: broadcast that element as mguess.
Step 4. Partition L_i into ≤ mguess and > mguess to give index_i, the split index.
Step 5. count = Combine(index_i, add) calculates the number of elements less than mguess.
Step 6. If (rank ≤ count), keep the part ≤ mguess; else keep the part > mguess and set rank = rank − count.
endwhile
Step 7. Gather the remaining elements into L on P_0.
Step 8. On P_0: perform sequential selection to find the element q of rank in L; broadcast q.
Figure
3: Randomized selection algorithm
3.3 Randomized Selection Algorithm
The randomized median finding algorithm (Figure 3) is a straightforward parallelization of the
randomized sequential algorithm described in [12]. All processors use the same random number
generator with the same seed so that they can produce identical random numbers. Consider the
behavior of the algorithm in iteration j. First, a parallel prefix operation is performed on the
n_i^(j)'s. All processors generate a random number between 1 and n^(j) to pick an element at random,
which is taken to be the estimate median. From the parallel prefix operation, each processor can
determine if it has the estimated median and if so broadcasts it. Each processor scans through
its set of points and splits them into two subsets containing elements less than or equal to and
greater than the estimated median, respectively. A Combine operation and a comparison with k^(j)
determine which of these two subsets is to be discarded and the value of k^(j+1) needed for the next
iteration.
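A sequential sketch of this randomized scheme follows. A shared seed plays the role of "all processors generate the same random number," the data is kept as one flat list for brevity, and a three-way split around the pivot is added so the sketch terminates even when the pivot is an extreme element. All names are ours.

```python
import random

def randomized_select(elems, rank, seed=0):
    rng = random.Random(seed)              # identical on every "processor"
    elems = list(elems)
    while True:
        pivot = elems[rng.randrange(len(elems))]   # estimated median
        low = [x for x in elems if x < pivot]
        n_eq = sum(1 for x in elems if x == pivot)
        if rank <= len(low):                       # answer is below the pivot
            elems = low
        elif rank <= len(low) + n_eq:              # the pivot is the answer
            return pivot
        else:                                      # answer is above the pivot
            elems = [x for x in elems if x > pivot]
            rank -= len(low) + n_eq

print(randomized_select([5, 3, 9, 1, 7, 2, 8], 4))   # -> 5 (4th smallest)
```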
Since in each iteration approximately half the remaining points are discarded, the expected
number of iterations is O(log n) [12]. Let n_max^(j) = max_i n_i^(j). Thus, splitting the set of points into
two subsets based on the estimated median requires O(n_max^(j)) time in the j-th iteration. The remaining work
is one Parallel Prefix, one Broadcast and one Combine operation. Therefore, the total expected
running time of the algorithm is O(Σ_j (n_max^(j) + (τ + μ) log p)). Since n_max^(j) can be as large as
O(n/p), the expected running time is O((n/p) log n + (τ + μ) log p log n). In practice,
one can expect that n (j)
max reduces from iteration to iteration, perhaps by half. This is especially
true if the data is randomly distributed to the processors, eliminating any order present in the
input. In fact, by a load balancing operation at the end of every iteration, we can ensure that for
every iteration j, n_max^(j) = O(n^(j)/p). With load balancing, and ignoring the cost of it, the running
time of the algorithm reduces to O(Σ_j (n^(j)/p + (τ + μ) log p)) = O(n/p + (τ + μ) log p log n). Even
without this load balancing, assuming that the initial data is randomly distributed, the running
time is expected to be O(n/p + (τ + μ) log p log n).
3.4 Fast Randomized Selection Algorithm
The expected number of iterations required for the randomized median finding algorithm is O(log n).
In this section we discuss an approach due to Rajasekaran et al. [17] that requires only
O(log log n) iterations for convergence with high probability (Figure 4).
Suppose we want to find the k-th smallest element among a given set of n elements. Sample a set
S of o(n) keys at random and sort S. The element with rank ⌈k|S|/n⌉ in S will have an expected
rank of k in the set of all points. Identify two keys l_1 and l_2 in S with ranks ⌈k|S|/n − δ⌉ and ⌈k|S|/n + δ⌉, where
δ is a small integer such that with high probability the rank of l_1 is < k and the rank of l_2 is > k in
the given set of points. With this, all the elements that are either < l_1 or > l_2 can be eliminated.
Recursively find the element with the appropriate rank in the remaining elements. If the number
of elements is sufficiently small, they can be directly sorted to find the required element.
If the ranks of l_1 and l_2 are both < k or both > k, the iteration is repeated with a different
sample set. We make the following modification that may help improve the running time of the
algorithm in practice. Suppose that the ranks of l 1 and l 2 are both ! k. Instead of repeating the
iteration to find element of rank k among the n elements, we discard all the elements less than l 2
and find the element of rank in the remaining elements. If the ranks of
l 1 and l 2 are both ? k, elements greater than l 1 can be discarded.
Rajasekaran et al. show that the expected number of iterations of this median finding
algorithm is O(log log n) and that the expected number of points decreases geometrically after each
iteration. If n^(j) is the number of points at the start of the j-th iteration, only a sample of o(n^(j))
keys is sorted. Thus, the cost of sorting, o(n^(j) log n^(j)), is dominated by the O(n^(j)) work involved
in scanning the points.
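The sampling idea can be sketched sequentially as follows. The sample size n^0.6 mirrors the ε = 0.6 reported later; the choice of δ and the sorting base case are our own simplifications, not the paper's exact scheme, and all names are ours.

```python
import random

def sample_select(elems, rank, seed=1):
    rng = random.Random(seed)
    elems = list(elems)
    while len(elems) > 32:                       # small base case: just sort
        s = sorted(rng.sample(elems, max(4, int(len(elems) ** 0.6))))
        pos = rank * len(s) // len(elems)        # expected position in sample
        delta = max(1, int(len(s) ** 0.5))
        l1 = s[max(0, pos - delta)]
        l2 = s[min(len(s) - 1, pos + delta)]
        below = sum(1 for x in elems if x < l1)
        kept = [x for x in elems if l1 <= x <= l2]
        if below < rank <= below + len(kept):    # the bracket succeeded
            elems, rank = kept, rank - below
        # else: unsuccessful iteration; retry with a fresh sample
    return sorted(elems)[rank - 1]

print(sample_select(range(1, 10001), 5000))   # -> 5000
```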
Algorithm 4 Fast randomized selection algorithm
n — total number of elements
p — total number of processors, labeled 0 to p − 1
L_i — list of elements on processor P_i, where |L_i| = n_i
rank — desired rank among the total elements
On each processor P_i:
while n > C (a constant)
Step 1. Collect a sample S_i from L_i[l, r] by picking n_i n^ε / n elements at random on P_i between l and r.
Step 2. Sort the sample S = ∪_i S_i in parallel.
Step 3. On P_0: pick k_1, k_2 from S with ranks ⌈rank |S|/n − δ⌉ and ⌈rank |S|/n + δ⌉ for a small δ.
Step 4. Broadcast k_1 and k_2. The element of the desired rank will lie in [k_1, k_2] with high probability.
Step 5. Partition L_i between l and r into < k_1, [k_1, k_2] and > k_2 to give counts less, middle
and high, and splitters s_1 and s_2.
Step 6. cmid = Combine(middle, add)
Step 7. cless = Combine(less, add)
Step 8. If (rank ∈ (cless, cless + cmid]), the desired element lies in the middle part; set rank = rank − cless.
Else discard parts and adjust l, r and rank accordingly.
endwhile
Step 9. Gather the remaining elements into L on P_0.
Step 10. On P_0: perform sequential selection to find the element q of rank in L; broadcast q.
Figure
4: Fast Randomized selection Algorithm
In iteration j, processor P_i randomly selects n_i^(j) (n^(j))^ε / n^(j) of its n_i^(j) elements. The selected elements
are sorted using a parallel sorting algorithm. Once sorted, the processors containing the elements
l_1^(j) and l_2^(j) broadcast them. Each processor finds the number of elements less than l_1^(j) and greater
than l_2^(j) contained by it. Using Combine operations, the ranks of l_1^(j) and l_2^(j) are computed and
the appropriate action of discarding elements is undertaken by each processor. A large value of
ε increases the overhead due to sorting. A small value of ε increases the probability that both
the selected elements (l_1^(j) and l_2^(j)) lie on one side of the element with rank k^(j), thus causing an
unsuccessful iteration. By experimentation, we found a value of ε = 0.6 to be appropriate.
As in the randomized median finding algorithm, one iteration of the median finding algorithm
takes O(n_max^(j) + (τ + μ) log p) time. Since only O(log log n) iterations are required, median
finding requires O((n/p) log log n + (τ + μ) log p log log n) time.
As before, we can do load balancing to ensure that n_max^(j) reduces by half in every iteration.
Assuming this and ignoring the cost of load balancing, the running time of median finding reduces
to O(Σ_j (n^(j)/p + (τ + μ) log p)) = O(n/p + (τ + μ) log p log log n). Even without this load balancing,
the running time is expected to be O(n/p + (τ + μ) log p log log n).
4 Algorithms for load balancing
In order to ensure that the computational load on each processor is approximately the same during
every iteration of a selection algorithm, we need to dynamically redistribute the data such that
every processor has a nearly equal number of elements. We present three algorithms for performing
such load balancing. The algorithms can also be used in other problems that require dynamic
redistribution of data and where there is no restriction on the assignment of data to processors.
We use the following notation to describe the algorithms for load balancing: Initially, processor
P_i has n_i elements; n is the total number of elements on all the processors, i.e. n = Σ_i n_i. Let n_avg = n/p and n_max = max_i n_i.
4.1 Order Maintaining Load Balance
Suppose that each processor has its set of elements stored in an array. We can view the n elements
as if they were globally sorted based on processor and array indices. For any i ! j, any element
in processor P i appears earlier in this sorted order than any element in processor P j . The order
maintaining load balance algorithm is a parallel prefix based algorithm that preserves this global
order of data after load balancing.
The algorithm first performs a Parallel Prefix operation to find the position of the elements it
contains in the global order. The objective is to redistribute data such that processor P i contains
Algorithm 5 Modified order maintaining load balance
n — total number of elements
p — total number of processors, labeled 0 to p − 1
L_i — list of elements on processor P_i, of size n_i
On each processor P_i:
Step 0. n_avg = ⌈n/p⌉
Step 1. Gather the counts n_j of all processors.
Step 2. diff[j] = n_j − n_avg for all j.
Step 3. If diff[j] is positive, P_j is labeled a source. If diff[j] is negative, P_j is labeled a sink.
Step 4. If P_i is a source, calculate the prefix sums of the positive diff[ ] in an array p_src; else calculate
the prefix sums for sinks using the negative diff[ ] in p_snk.
If P_i is a source:
Step 5. Locate its excess elements in the global ranking of excess elements using p_src.
Step 6. Calculate the range of destination processors [P_l, P_r] using a binary search on p_snk.
Step 7. while (l ≤ r): send the appropriate number of elements to P_l and increment l.
If P_i is a sink:
Step 5. Locate its deficit in the global ranking of deficits using p_snk.
Step 6. Calculate the range of source processors [P_l, P_r] using a binary search on p_src.
Step 7. while (l ≤ r): receive elements from P_l and increment l.
Figure
5: Modified order maintaining load balance
the elements with positions i·n_avg through (i + 1)·n_avg − 1 in the global order. Using the parallel prefix
operation, each processor can figure out the processors to which it should send data and the amount
of data to send to each processor. Similarly, each processor can figure out the amount of data it
should receive, if any, from each processor. Communication is generated according to this and the
data is redistributed.
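The redistribution plan implied by the prefix sums can be sketched as follows. This is illustrative code with our own names; it assumes p divides n, and loops over global positions for clarity where real code would use interval arithmetic on the prefix sums.

```python
from itertools import accumulate

def send_counts(counts):
    """plan[i][j] = number of elements processor P_i sends to P_j."""
    p, n = len(counts), sum(counts)
    n_avg = n // p                       # assumed exact in this sketch
    prefix = [0] + list(accumulate(counts))
    plan = [[0] * p for _ in range(p)]
    for i in range(p):
        for g in range(prefix[i], prefix[i + 1]):   # global positions on P_i
            plan[i][g // n_avg] += 1     # P_j owns positions [j*n_avg, (j+1)*n_avg)
    return plan

print(send_counts([5, 1, 2, 4]))
# -> [[3, 2, 0, 0], [0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 1, 3]]
```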
In our model of computation, the running time of this algorithm only depends on the maximum
communication generated/received by a processor. The maximum number of messages sent out by
a processor is ⌈n_max/n_avg⌉ + 1, and the maximum number of elements sent is n_max. The maximum number
of elements received by a processor is n_avg. Therefore, the running time is O(τ n_max/n_avg + μ n_max).
The order maintaining load balance algorithm may generate much more communication than
necessary. For example, consider the case where all processors have n_avg elements, except that P_0
has one element less and P_{p−1} has one element more than n_avg. The optimal strategy is to transfer
the one extra element from P_{p−1} to P_0. However, this algorithm transfers one element from P_i to
P_{i−1} for every 1 ≤ i ≤ p − 1, resulting in p − 1 messages.
Since preserving the order of data is not important for the selection algorithm, the following
modification is done to the algorithm: every processor retains min{n_i, n_avg} of its original elements.
If the processor has (n_i − n_avg) elements in excess, it is labeled a source. Otherwise,
the processor needs (n_avg − n_i) elements and is labeled a sink. The excess elements in the source
processors and the number of elements needed by the sink processors are ranked separately using
two Parallel Prefix operations. The data is transferred from sources to sinks using a strategy similar
to the order maintaining load balance algorithm. This algorithm (Figure 5), which we call the modified
order maintaining load balance algorithm (modified OMLB), is implemented in [5].
The maximum number of messages sent out by a processor in modified OMLB is O(p), and the
maximum number of elements sent is (n_max − n_avg). The maximum number of elements received
by a processor is n_avg. The worst-case running time is O(τp + μ n_max).
4.2 Dimension Exchange Method
The dimension exchange method (Figure 6) is a load balancing technique originally proposed for
hypercubes [11, 21]. In each iteration of this method, processors are paired to balance the load
locally among themselves, which eventually leads to global load balance. The algorithm runs in
log p iterations. In iteration i (0 ≤ i ≤ log p − 1), processors that differ in the i-th least significant
bit position of their ids exchange counts and balance their loads. After iteration i, every group of 2^{i+1}
processors whose ids differ only in the i + 1 least significant bits has the same number of elements.
In each iteration, p/2 pairs of processors communicate in parallel. No processor communicates
more than n_max/2 elements in an iteration. Therefore, the running time is O((τ + μ n_max) log p).
However, since only 2^j processors can hold the maximum number of elements in iteration j, it is likely that
either n_max is small or far fewer elements than n_max/2 are communicated. Therefore, the running
time in practice is expected to be much better than what is predicted by the worst case.
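A count-only simulation of the dimension exchange method on p = 2^d simulated processors can be sketched as follows; only element counts are tracked, and the actual data movement is analogous. The rounding choice for odd totals is ours.

```python
def dimension_exchange(counts):
    """Balance counts by pairwise exchanges along each bit of the ids."""
    counts = list(counts)
    p = len(counts)                      # assumed a power of two
    b = 0
    while (1 << b) < p:
        for i in range(p):
            j = i ^ (1 << b)             # partner differing in bit b
            if i < j:
                total = counts[i] + counts[j]
                counts[i], counts[j] = (total + 1) // 2, total // 2
        b += 1
    return counts

print(dimension_exchange([16, 0, 0, 0, 0, 0, 0, 0]))   # -> [2, 2, 2, 2, 2, 2, 2, 2]
```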
4.3 Global Exchange
This algorithm is similar to the modified order maintaining load balance algorithm except that
processors with large amounts of data are directly paired with processors with small amounts of
data to minimize the number of messages (Figure 7).
Algorithm 6 Dimension exchange method
n — total number of elements
p — total number of processors, labeled 0 to p − 1
L_i — list of elements on processor P_i, of size n_i
On each processor P_i, in iteration j (0 ≤ j ≤ log p − 1):
Step 1. P_l = P_{i XOR 2^j} (the partner differing in bit j).
Step 2. Exchange the count of elements between P_i and P_l.
Step 3. n_avg = (n_i + n_l) / 2.
If n_i > n_avg:
Step 4. Send the elements from L_i[n_avg] onwards to processor P_l.
Step 5. n_i = n_avg.
else
Step 4. Receive n_l − n_avg elements from processor P_l at the end of L_i.
Step 5. Increment n_i by n_l − n_avg.
Figure
6: Dimension exchange method for load balancing

As in the modified order maintaining load balance algorithm, every processor retains min{n_i, n_avg}
of its original elements. If the processor has (n_i − n_avg) elements in excess, it is labeled a source. Otherwise, the processor needs (n_avg − n_i) elements and is labeled a sink. All
the source processors are sorted in non-increasing order of the number of excess elements each
processor holds. Similarly, all the sink processors are sorted in non-increasing order of the number
of elements each processor needs. The information on the number of excess elements in each
source processor is collected using a Global Concatenate operation. Each processor locally ranks
the excess elements using a prefix operation according to the order of the processors obtained by
the sorting. Another Global Concatenate operation collects the number of elements needed by each
sink processor. These are then ranked locally by each processor using a prefix operation
performed using the ordering of the sink processors obtained by sorting.
Using the results of the prefix operations, each source processor can find the sink processors to
which its excess elements should be sent and the number of elements that should be sent to each
such processor. The sink processors can similarly compute information on the number of elements
to be received from each source processor. The data is then transferred from sources to sinks. Since the
sources containing a large number of excess elements send data to sinks requiring a large number of
elements, this may reduce the total number of messages sent.
In the worst case, there may be only one processor containing all the excess elements, and thus
the total number of messages sent out by the algorithm is O(p). No processor will send more than
(n_max − n_avg) elements, and the maximum number of elements received by any processor is
n_avg. The worst-case run time is O(τp + μ n_max).
Algorithm 7 Global Exchange load balance
n — total number of elements
p — total number of processors, labeled 0 to p − 1
L_i — list of elements on processor P_i, of size n_i
On each processor P_i:
Step 0. n_avg = ⌈n/p⌉
Step 1. Gather the counts n_j of all processors, for j = 0 to p − 1.
Step 2. diff[j] = n_j − n_avg for all j.
Step 3. If diff[j] is positive, P_j is labeled a source. If diff[j] is negative, P_j is labeled a sink.
Step 4. Sort diff[k] for sources in descending order, maintaining the appropriate
processor indices. Also sort diff[k] for sinks in ascending order.
Step 5. If P_i is a source, calculate the prefix sums of the positive diff[ ] in an array p_src; else calculate
the prefix sums for sinks using the negative diff[ ] in p_snk.
If P_i is a source:
Step 6. Locate its excess elements in the global ranking of excess elements using p_src.
Step 7. Calculate the range of destination processors [P_l, P_r] using a binary search on p_snk.
Step 8. while (l ≤ r): send the appropriate number of elements to P_l and increment l.
If P_i is a sink:
Step 6. Locate its deficit in the global ranking of deficits using p_snk.
Step 7. Calculate the range of source processors [P_l, P_r] using a binary search on p_src.
Step 8. while (l ≤ r): receive elements from P_l and increment l.
Figure
7: Global exchange method for load balancing
Selection Algorithm    Run-time
Median of Medians      O(n/p + τ log p log n + μp log n)
Randomized             O(n/p + (τ + μ) log p log n)
Fast randomized        O(n/p + (τ + μ) log p log log n)
Table
1: The running times of various selection algorithms assuming load balancing, but not including its cost
Selection Algorithm    Run-time
Median of Medians      O((n/p) log n + τ log p log n + μp log n)
Bucket-based           O((n/p) log log p + (n/(p log p)) log n + τ log p log n + μp log n)
Randomized             O((n/p) log n + (τ + μ) log p log n)
Fast randomized        O((n/p) log log n + (τ + μ) log p log log n)
Table
2: The worst-case running times of various selection algorithms
5 Implementation Results
The estimated running times of various selection algorithms are summarized in Table 1 and Table 2.
Table 1 shows the estimated running times assuming that each processor contains approximately
the same number of elements at the end of each iteration of the selection algorithm. This can be
expected to hold for random data even without performing any load balancing and we also observe
this experimentally. Table 2 shows the worst-case running times in the absence of load balancing.
We have implemented all the selection algorithms and the load balancing techniques on the
CM-5. To experimentally evaluate the algorithms, we have chosen the problem of finding the median of a
given set of numbers. We ran each selection algorithm without any load balancing and with each of
the load balancing algorithms described (except for the bucket-based approach which does not use
load balancing). We have run all the resulting algorithms on 32k, 64k, 128k, 256k, 512k, 1024k and
2048k numbers using 2, 4, 8, 16, 32, 64 and 128 processors. The algorithms are run until the total
number of elements falls below p 2 , at which point the elements are gathered on one processor and
the problem is solved by sequential selection. We found this to be appropriate by experimentation,
to avoid the overhead of communication when each processor contains only a small number of
elements. For each value of the total number of elements, we have run each of the algorithms on
two types of inputs: random and sorted. In the random case, n/p elements are randomly generated on
each processor. To eliminate peculiar cases while using the random data, we ran each experiment on
five different random sets of data and used the average running time. Random data sets constitute
close to the best-case input for the selection algorithms. In the sorted case, the n numbers are
globally sorted across the processors, with processor P_i holding the i-th block of n/p consecutive numbers in the sorted order.
The sorted input is close to the worst-case input for the selection algorithms. For example, after
the first iteration of a selection algorithm using this input, approximately half of the processors lose
all their data while the other half retains all of their data. Without load balancing, the number of
active processors is cut down by about half every iteration. The same is true even if modified order
maintaining load balance and global exchange load balancing algorithms are used. After every
iteration, about half the processors contain zero elements leading to severe load imbalance for the
load balancing algorithm to rectify. Only some of the data we have collected is illustrated in order
to save space.
The execution times of the four different selection algorithms without using load balancing for
random data (except for the median of medians algorithm, which requires load balancing and for which
global exchange is used) with 128k, 512k and 2048k numbers are shown in Figure 8. The graphs clearly
demonstrate that all four selection algorithms scale well with the number of processors. An immediate
observation is that the randomized algorithms are superior to the deterministic algorithms by
an order of magnitude. For example, the median of medians algorithm ran
at least 16 times slower and the bucket-based selection algorithm at least 9 times slower than
either of the randomized algorithms. This order of magnitude difference is uniformly observed
with any of the load balancing techniques and also in the case of sorted data. This is not
surprising, since the constants involved in the deterministic algorithms are higher due to recursively
finding the estimated median. Among the deterministic algorithms, the bucket-based approach
consistently performed better than the median of medians approach, by about a factor of two for
random data. For sorted data, the bucket-based approach, which does not use any load balancing,
ran only about 25% slower than the median of medians approach with load balancing.
In each iteration of the parallel selection algorithm, each processor also performs a local selection.
Thus the algorithm can be split into a parallel part, in which the processors combine the
results of their local selections, and a sequential part, in which each processor executes sequential
selection locally. To convince ourselves that the randomized algorithms are superior
in both parts, we ran the following hybrid experiment: we ran both deterministic parallel
selection algorithms with the sequential selection parts replaced by randomized sequential selection. The
running time of the hybrid algorithms fell between those of the deterministic and randomized parallel
selection algorithms. We made the following observation: the improvement of randomized
parallel selection over deterministic parallel selection comes from both the
sequential and parallel parts. For large n, much of the improvement is due to the sequential part;
for large p, the improvement is due to the parallel part. We conclude that randomized algorithms
are faster in practice and drop the deterministic algorithms from further consideration.
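To make the sequential kernel concrete, the following is a minimal sketch (ours, not the paper's code) of quickselect-style randomized selection, the routine each processor would run on its local data; it discards an expected constant fraction of the candidates per iteration.

```python
import random

def randomized_select(data, k):
    """Return the k-th smallest element (1-indexed) of data.

    Expected O(n) time: a random pivot discards an expected
    constant fraction of the candidates per iteration, mirroring
    the behaviour the parallel randomized algorithms rely on.
    """
    assert 1 <= k <= len(data)
    items = list(data)
    while True:
        pivot = random.choice(items)
        lower = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        if k <= len(lower):
            items = lower                      # answer is below the pivot
        elif k <= len(lower) + len(equal):
            return pivot                       # pivot has rank k
        else:
            k -= len(lower) + len(equal)       # answer is above the pivot
            items = [x for x in items if x > pivot]

# Median of 0..100 (rank 51) is 50, regardless of input order.
vals = list(range(101))
random.shuffle(vals)
assert randomized_select(vals, 51) == 50
```

In the parallel algorithms this routine is applied to each processor's local fragment, and the per-iteration pivot (estimated median) is agreed on globally via the combine primitive.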
[Figure 8: Performance of the different selection algorithms without load balancing (except for the
median of medians selection algorithm, for which global exchange is used) on random data sets. Each
panel plots time (in seconds) versus number of processors for the Median of Medians, Bucket Based,
Randomized and Fast Randomized algorithms, with separate panels comparing only the two randomized
algorithms.]
[Figure 9: Performance of the randomized selection algorithm with different load balancing strategies
(No balancing, Mod. order maintaining load balance, Dimension exchange, Global exchange) on random
and sorted data sets; panels plot time (in seconds) versus number of processors for random data
(n=512k, n=2M) and sorted data (n=512k, n=2M).]
To facilitate an easier comparison of the two randomized algorithms, we show their performance
separately in Figure 8. Fast randomized selection is asymptotically superior to randomized selection
for worst-case data. For random data, the expected running times of the randomized and fast
randomized algorithms are O(n/p + (τ + μ) log p log n) and O(n/p + (τ + μ) log p log log n), respectively.
Consider the effect of increasing n for a fixed p. Initially, the difference between log n and log log n is not
significant enough to offset the overhead due to sorting in fast randomized selection, and randomized
selection performs better. As n is increased, fast randomized selection begins to outperform
randomized selection. For large n, both algorithms converge to the same execution time, since
the O(n/p) term dominates. Reversing this point of view, we find that for any fixed n, as
we increase p, randomized selection will eventually perform better, and this can be readily observed
in the graphs.
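The log n versus log log n trade-off can be made concrete with a toy iteration-count calculation (an illustration of the asymptotics only; in practice the constants and communication terms decide the crossover):

```python
import math

def iters_randomized(n):
    # Expected number of iterations when roughly half the
    # candidates are discarded per round: about log2(n).
    return math.ceil(math.log2(n))

def iters_fast_randomized(n):
    # Sampling-based selection narrows the candidate range much
    # faster, needing only about log2(log2(n)) rounds.
    return math.ceil(math.log2(math.log2(n)))

# For n = 2^21 (the 2048k experiments): 21 rounds vs. 5 rounds.
assert iters_randomized(2**21) == 21
assert iters_fast_randomized(2**21) == 5
```

Each saved round saves one global combine/broadcast, which is why the advantage grows with the number of processors even though both algorithms do comparable total local work.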
The effect of the various load balancing techniques on the randomized algorithms for random
data is shown in Figure 9 and Figure 10. The execution times are consistently better without
using any load balancing than using any of the three load balancing techniques. Load balancing
[Figure 10: Performance of the fast randomized selection algorithm with different load balancing
strategies (No balancing, Mod. order maintaining load balance, Dimension exchange, Global exchange)
on random and sorted data sets; panels plot time (in seconds) versus number of processors for random
data (n=512k, n=2M) and sorted data (n=512k, n=2M).]
for random data almost always had a negative effect on the total execution time and this effect
is more pronounced in randomized selection than in fast randomized selection. This is explained
by the fact that fast randomized selection has fewer iterations (O(log log n) vs. O(log n)) and less
data in each iteration.
The observation that load balancing has a negative effect on the running time for random
data can be easily explained. In load balancing, a processor with more elements sends some of
its elements to another processor. The time taken to send the data is justified only if the time
taken to process this data in future iterations exceeds the time for sending it. Suppose that
a processor sends m elements to another processor. The processing of this data involves scanning
it in each iteration based on an estimated median and discarding part of the data. For random
data, half the data is expected to be discarded in every iteration. Thus, the estimated total
time to process this data is O(m). The time for sending the data is Θ(μm), which is also O(m).
By observation, the constants involved are such that load balancing takes more time than the
reduction in running time it produces.
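The O(m) processing estimate is just a geometric series, which can be checked numerically (illustrative sketch):

```python
def total_scan_work(m, iterations=60):
    """Total elements scanned when half the data survives each
    iteration: m + m/2 + m/4 + ... <= 2m, i.e. O(m) overall."""
    total, remaining = 0.0, float(m)
    for _ in range(iterations):
        total += remaining
        remaining /= 2.0   # expected survivors after one round
    return total

# Scanning work on transferred data never exceeds twice its size,
# so an O(m) transfer cost is of the same order as the work saved.
assert total_scan_work(10**6) <= 2 * 10**6
```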
[Figure 11: Performance of the two randomized selection algorithms on sorted data sets (n=512k and
n=2M) using the best load balancing strategy for each algorithm: no load balancing for randomized
selection and modified order maintaining load balancing for fast randomized selection. Panels plot
time (in seconds) versus number of processors.]
Consider the effect of the various load balancing techniques on the randomized algorithms for
sorted data (see Figure 9 and Figure 10). Even in this case, the cost of load balancing more than
offset its benefit for randomized selection. However, load balancing significantly improved the
performance of fast randomized selection.
In Figure 11, we see a comparison of the two randomized algorithms for sorted data with the
best load balancing strategy for each algorithm: no load balancing for randomized selection and
modified order maintaining load balancing for the fast randomized algorithm (which performed slightly
better than the other strategies). We see that, for large n, fast randomized selection is superior. We
also observe (see Figure 11 and Figure 8) that fast randomized selection has a greater comparative
advantage over randomized selection for sorted data.
Finally, we consider the time spent in load balancing itself for the randomized algorithms on
both random and sorted data (see Figure 12 and Figure 13). For both types of data inputs, fast
randomized selection spends much less time than randomized selection in balancing the load. This
is reflective of the number of times the load balancing algorithms are utilized (O(log log n) vs.
O(log n)). Clearly, the cost of load balancing increases with the amount of imbalance and the
number of processors. For random data, the overhead due to load balancing is quite tolerable
for the range of n and p used in our experiments. For sorted data, a significant fraction of the
execution time of randomized selection is spent in load balancing. Load balancing never improved
the running time of randomized selection. Fast randomized selection benefited from load balancing
for sorted data. The choice of the load balancing algorithm did not make a significant difference in
the running time.
Consider the variance in the running times between random and sorted data for both the
[Figure 12: Performance of the randomized selection algorithm with different load balancing
strategies on random and sorted data: No balancing (N), Order maintaining load balancing (O),
Dimension exchange method (D) and Global exchange (G); plots show load balancing time (in seconds)
versus number of processors.]
[Figure 13: Performance of the fast randomized selection algorithm with the same four load balancing
strategies, No balancing (N), Order maintaining load balancing (O), Dimension exchange method (D)
and Global exchange (G), on random and sorted data.]
Primitive            Two-level model      Hypercube            Mesh
Broadcast            O((τ + μ) log p)     O((τ + μ) log p)     O(τ log p + μ√p)
Combine              O((τ + μ) log p)     O((τ + μ) log p)     O(τ log p + μ√p)
Parallel Prefix      O((τ + μ) log p)     O((τ + μ) log p)     O(τ log p + μ√p)
Gather               O(τ log p + μp)      O(τ log p + μp)      O(τ log p + μp)
Global Concatenate   O(τ log p + μp)      O(τ log p + μp)      O(τ log p + μp)
Transportation       O(τp + μt)           O(τp + μt)           O(τp + μt)

Table 3: Running time for basic communication primitives on meshes and hypercubes using cut-through
routing; τ denotes the message startup time and μ the per-element transfer time. For the
transportation primitive, t refers to the maximum of the total size of
messages sent out or received by any processor.
randomized algorithms. The randomized selection algorithm ran 2 to 2.5 times faster for random
data than for sorted data (see Figure 12). With any of the load balancing strategies, there is very
little variance in the running time of fast randomized selection (Figure 13); the algorithm performs
equally well on both best- and worst-case data. For the case of 128 processors, the stopping criterion
results in the execution of only one iteration in most runs, so load balancing has a detrimental
effect on the overall cost. We chose the same stopping criterion for all algorithms to provide a fair
comparison. However, appropriately fine-tuning this stopping criterion, with a corresponding increase
in the number of iterations, should provide time improvements with load balancing for the 2M data
size on 128 processors.
6 Selection on Meshes and Hypercubes
Consider the analysis of the algorithms presented for cut-through routed hypercubes and square
meshes with p processors. The running time of the various algorithms on meshes and hypercubes is
easily obtained by substituting the corresponding running times for the basic parallel communication
primitives used by the algorithms. Table 3 shows the time required for each parallel primitive
on the two-level model of computation, a hypercube of p processors and a √p × √p
mesh. The analysis is omitted to save space; similar analysis can be found in [15].
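As an illustration of how such a table is used, the following sketch models the broadcast primitive under the standard startup-plus-transfer cost model; the symbols tau and mu are our reading of the paper's startup and per-element costs, and the formulas are cut-through-routing estimates of the kind tabulated above, not measured values:

```python
import math

def broadcast_hypercube(tau, mu, p):
    # One message per dimension: (tau + mu) * log2(p).
    return (tau + mu) * math.log2(p)

def broadcast_mesh(tau, mu, p):
    # Cut-through routed sqrt(p) x sqrt(p) mesh: the transfer
    # term grows with the sqrt(p) network diameter.
    return tau * math.log2(p) + mu * math.sqrt(p)

# Once sqrt(p) exceeds log2(p), the mesh broadcast is costlier,
# which is why mesh complexities below carry extra sqrt(p) terms.
assert broadcast_hypercube(1.0, 1.0, 1024) == 20.0
assert broadcast_mesh(1.0, 1.0, 1024) == 42.0
```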
Load balancing can be achieved by using the communication pattern of the transportation
primitive [24], which involves two all-to-all personalized communications. Each processor has O(n/p)
elements to be sent out. The worst-case time of order maintaining load balance is O(τp + μn/p)
for the hypercube and O(τp + μ(n/p)√p) for the mesh when n > O(τp^2/μ). The dimension
exchange load balancing algorithm on the hypercube has a worst-case run time of O(τ log p + μ(n/p) log p),
and on the mesh it is O(τ√p + μ(n/p)√p). The global exchange load balancing algorithm has the
same time complexities as the modified order maintaining load balancing algorithm on both the
hypercube and the mesh. These costs must be added to those of the selection algorithms if analysis
of the algorithms with load balancing is desired.
From the table, the running times of all the primitives remain the same on a hypercube, and
hence the analysis and the experimental results obtained for the two-level model are valid for
hypercubes. Thus, the time complexity of all the selection algorithms is the same on the hypercube
as on the two-level model discussed in this paper. If the ratio of unit computation cost to unit
communication cost is large, i.e., the processor is much faster than the underlying communication
network, the cost of load balancing will offset its advantages, and the fast randomized algorithm
without load balancing will have superior performance in practical scenarios.
Load balancing on the mesh results in asymptotically worse time requirements. We would
expect load balancing to be useful for a small number of processors. For a large number of processors,
even one step of load balancing would dominate the overall time and hence would not be effective.
In the following, we present results for the performance on best-case and worst-case data on a mesh.
1. Deterministic Algorithms: The communication primitives used in the deterministic selection
algorithms are Gather, Broadcast and Combine. Even though Broadcast and Combine require
more time than on the two-level model, their cost is absorbed by the time required for the Gather
operation, which is identical on the mesh and the two-level model. Hence, the complexity of
the deterministic algorithms on the mesh remains the same as on the two-level model. The total
time requirements for the median of medians algorithm are O(n/p + (τ log p + μp) log n) for
the best case and O((n/p) log n + (τ log p + μp) log n) for the worst case. The bucket-based
deterministic algorithm runs in O((n/p) log n + (τ log p + μp) log n) time in the
worst case without load balancing.
2. Randomized Algorithms: The communication for the randomized algorithm includes one
PrefixSum, one Broadcast and one Combine. The communication time on a mesh for one
iteration of the randomized algorithm is O(τ log p + μ√p), making its overall time complexity
O(n/p + (τ log p + μ√p) log n) for the best case and O((n/p) log n + (τ log p + μ√p) log n) for
worst-case data.
The fast randomized algorithm involves a parallel sort of the sample, for which we use bitonic
sort. A sample of n^ε (0 < ε < 1) elements is chosen from the n elements and sorted. On the
mesh, sorting a sample of n^ε elements using bitonic sort takes O((n^ε/p) log^2 p + μ(n^ε/p)√p) time;
ε should be acceptably small to keep the sorting phase from dominating every iteration.
The run-time of fast randomized selection on the mesh is O(n/p + (τ log p + μ√p) log log n)
for best-case data. For worst-case data the time requirement would be O((n/p) log log p +
(τ log p + μ√p) log log n).
7 Weighted selection
Consider a set S of elements x_1, x_2, ..., x_n, each element x_i having a corresponding weight w_i attached to
it. The problem of weighted selection with weight metric k is to find an element x_i such that the sum of the
weights of all elements x_l < x_i is less than k, while the sum of the weights of all elements x_l ≤ x_i is
at least k. As an example, the weighted median
will be the element that divides the data set S, with sum of weights W, into two sets S_1 and S_2
with approximately equal sums of weights. Simple modifications can be made to the deterministic
algorithms to adapt them for weighted selection. In iteration j of the selection algorithms, a set
S^(j) of elements is split into two subsets S_1^(j) and S_2^(j), and a count of elements is used to choose
the subset in which the desired element can be found. Weighted selection is performed as follows.
First, the elements of S^(j) are divided into two subsets S_1^(j) and S_2^(j) as in the selection algorithm.
The sum of the weights of all the elements in subset S_1^(j) is computed. Let k_j be the weight metric
in iteration j. If k_j is greater than the sum of the weights of S_1^(j), the problem reduces to performing
weighted selection on S_2^(j) with weight metric k_{j+1} = k_j minus the sum of the weights of S_1^(j).
Otherwise, we need to perform weighted selection on S_1^(j) with k_{j+1} = k_j. This method retains
the property that a guaranteed fraction of elements is discarded at each iteration, keeping the worst-case
number of iterations at O(log n). Therefore, both the median of medians selection algorithm
and the bucket-based selection algorithm can be used for weighted selection without any change in
their run time complexities.
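A sequential sketch of this weighted-selection recurrence (our illustration, with an arbitrary deterministic pivot rather than the paper's estimated median):

```python
def weighted_select(items, k):
    """Return x_i such that the total weight of elements smaller
    than x_i is < k while the total weight up to and including
    x_i is >= k.  items: list of (value, weight); 0 < k <= total."""
    pivot = items[len(items) // 2][0]
    lower = [(v, w) for v, w in items if v < pivot]
    equal_w = sum(w for v, w in items if v == pivot)
    w_lower = sum(w for _, w in lower)
    if k <= w_lower:
        # Desired element lies in the lighter half; k unchanged.
        return weighted_select(lower, k)
    if k <= w_lower + equal_w:
        return pivot
    # Recurse on the upper half with the weight metric reduced
    # by the weight already accounted for, as in the text.
    upper = [(v, w) for v, w in items if v > pivot]
    return weighted_select(upper, k - w_lower - equal_w)

# Weighted median: half the total weight (10) is reached at value 3.
data = [(1, 1.0), (2, 1.0), (3, 4.0), (4, 2.0), (5, 2.0)]
assert weighted_select(data, 5.0) == 3
```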
The randomized selection algorithm can also be modified in the same way. However, the same
modification to fast randomized selection will not work. This algorithm works by sorting a
sample of the data set and picking two elements that, with high probability, lie on either side
of the element with rank k in sorted order. In weighted selection, the weights determine the
position of the desired element in the sorted order. Thus, one may be tempted to select a sample of
the weights. However, this does not work, since the weights of the elements must be considered in the
order of the sorted data, and a list of the elements sorted according to their weights does not make
sense. Hence, randomized selection without load balancing is the best choice for parallel weighted
selection.
8 Conclusions
In this paper, we have tried to identify the selection algorithms that are most suited for fast execution
on coarse-grained distributed memory parallel computers. After surveying various algorithms,
we have identified four algorithms and have described and analyzed them in detail. We also considered
three load balancing strategies that can be used for balancing data during the execution of
the selection algorithms.
Based on the analysis and experimental results, we conclude that randomized algorithms are
faster by an order of magnitude. If determinism is desired, the bucket-based approach is superior
to the median of medians algorithm. Of the two randomized algorithms, fast randomized selection
with load balancing delivers good performance for all types of input distributions with very little
variation in the running time. The overhead of using load balancing with well-behaved data is
insignificant. Any of the load balancing techniques described can be used without significant
variation in the running time. Randomized selection performs well for well-behaved data. There is
a large variation in the running time between best and worst-case data. Load balancing does not
improve the performance of randomized selection irrespective of the input data distribution.
9 Acknowledgements
We are grateful to Northeast Parallel Architectures Center and Minnesota Supercomputing Center
for allowing us to use their CM-5. We would like to thank David Bader for providing us a copy of
his paper and the corresponding code.
References
Deterministic selection in O(log log N) parallel time
The design and analysis of parallel algorithms
Parallel selection in O(log log n) time using O(n
An optimal algorithm for parallel selection
Practical parallel algorithms for dynamic data redistribution
Technical Report CMU-CS-90-190
Time bounds for selection
A parallel median algorithm
Introduction to algorithms.
Dynamic load balancing for distributed memory multiprocessors
Expected time bounds for selection
Selection on the Reconfigurable Mesh
An introduction to parallel algorithms
Introduction to Parallel Computing: Design and Analysis of Algorithms
Efficient computation on sparse interconnection networks
Unifying themes for parallel selection
Derivation of randomized sorting and selection algorithms
Randomized parallel selection
Programming a Hypercube Multicomputer
Efficient parallel algorithms for selection and searching on sorted matrices
Finding the median
Random Data Accesses on a Coarse-grained Parallel Machine II
Load balancing on a hypercube
Cited by:
Ibraheem Al-Furaih , Srinivas Aluru , Sanjay Goil , Sanjay Ranka, Parallel Construction of Multidimensional Binary Search Trees, IEEE Transactions on Parallel and Distributed Systems, v.11 n.2, p.136-148, February 2000
David A. Bader, An improved, randomized algorithm for parallel selection with an experimental study, Journal of Parallel and Distributed Computing, v.64 n.9, p.1051-1059, September 2004
Marc Daumas , Paraskevas Evripidou, Parallel Implementations of the Selection Problem: A Case Study, International Journal of Parallel Programming, v.28 n.1, p.103-131, February 2000 | parallel algorithms;selection;parallel computers;coarse-grained;median finding;randomized algorithms;load balancing;meshes;hypercubes |
262003

Parallel Incremental Graph Partitioning

Abstract: Partitioning graphs into equally large groups of nodes while minimizing the number of
edges between different groups is an extremely important problem in parallel computing. For
instance, efficiently parallelizing several scientific and engineering applications requires the
partitioning of data or tasks among processors such that the computational load on each node is
roughly the same, while communication is minimized. Obtaining exact solutions is computationally
intractable, since graph partitioning is NP-complete. For a large class of irregular and adaptive
data parallel applications (such as adaptive graphs), the computational structure changes from one
phase to another in an incremental fashion. In incremental graph-partitioning problems the
partitioning of the graph needs to be updated as the graph changes over time; a small number of
nodes or edges may be added or deleted at any given instant. In this paper, we use a linear
programming-based method to solve the incremental graph-partitioning problem. All the steps used by
our method are inherently parallel and hence our approach can be easily parallelized. By using an
initial solution for the graph partitions derived from recursive spectral bisection-based methods,
our methods can achieve repartitioning at considerably lower cost than can be obtained by applying
recursive spectral bisection. Further, the quality of the partitioning achieved is comparable to
that achieved by applying recursive spectral bisection to the incremental graphs from scratch.

1 Introduction
Graph partitioning is a well-known problem for which fast solutions are extremely important in parallel
computing and in research areas such as circuit partitioning for VLSI design. For instance, parallelization
of many scientific and engineering problems requires partitioning data among the processors in such a
fashion that the computation load on each node is balanced, while communication is minimized. This
is a graph-partitioning problem, where nodes of the graph represent computational tasks, and edges describe
the communication between tasks with each partition corresponding to one processor. Optimal partitioning
would allow optimal parallelization of the computations with the load balanced over various processors
and with minimized communication time. For many applications, the computational graph can be derived
only at runtime and requires that graph partitioning also be done in parallel. Since graph partitioning is
NP-complete, obtaining suboptimal solutions quickly is desirable and often satisfactory.
For a large class of irregular and adaptive data parallel applications such as adaptive meshes [2], the
computational structure changes from one phase to another in an incremental fashion. In incremental
graph-partitioning problems, the partitioning of the graph needs to be updated as the graph changes over
time; a small number of nodes or edges may be added or deleted at any given instant. A solution of the
previous graph-partitioning problem can be utilized to partition the updated graph, such that the time
required will be much less than the time required to reapply a partitioning algorithm to the entire updated
graph. If the graph is not repartitioned, it may lead to imbalance in the time required for computation on
each node and cause considerable deterioration in the overall performance. For many of these problems the
graph may be modified after every few iterations (albeit incrementally), and so the remapping must have
a lower cost relative to the computational cost of executing the few iterations for which the computational
structure remains fixed. Unless this incremental partitioning can itself be performed in parallel, it may
become a bottleneck.
Several suboptimal methods have been suggested for finding good solutions to the graph-partitioning
problem. For many applications, the computational graph is such that the vertices correspond to two-
or three-dimensional coordinates and the interaction between computations is limited to vertices that are
physically proximate. This information can be exploited to achieve the partitioning relatively quickly by
clustering physically proximate points in two or three dimensions. Important heuristics include recursive
coordinate bisection, inertial bisection, scattered decomposition, and index based partitioners [3, 6, 12, 11, 14,
16]. There are a number of methods which use explicit graph information to achieve partitioning. Important
heuristics include simulated annealing, mean field annealing, recursive spectral bisection, recursive spectral
multisection, mincut-based methods, and genetic algorithms [1, 4, 5, 7, 8, 9, 10, 13]. Since these methods use
explicit graph information, they have wider applicability and produce better-quality partitionings.
In this paper we develop methods which use explicit graph information to perform incremental graph-
partitioning. Using recursive spectral bisection, which is regarded as one of the best-known methods for
graph partitioning, our methods can partition the new graph at considerably lower cost. The quality of
partitioning achieved is close to that achieved by applying recursive spectral bisection from scratch. Further,
our algorithms are inherently parallel.
The rest of the paper is outlined as follows. Section 2 defines the incremental graph-partitioning problem.
Section 3 describes linear programming-based incremental graph partitioning. Section 4 describes a multilevel
approach to solve the linear programming-based incremental graph partitioning. Experimental results of our
methods on sample meshes are described in Section 5, and conclusions are given in Section 6.
2 Problem definition
Consider a graph G = (V, E), where V represents a set of vertices, E represents a set of undirected edges, the
number of vertices is given by |V|, and the number of edges is given by |E|. The graph-partitioning
problem can be defined as an assignment scheme M: V -> P that maps vertices to partitions. We denote
by B(q) the set of vertices assigned to a partition q, i.e., B(q) = {v : M(v) = q}.
The weight w_i corresponds to the computation cost (or weight) of the vertex v_i. The cost of an edge
w(v_1, v_2) is given by the amount of interaction between vertices v_1 and v_2. The weight of every partition
can be defined as
W(q) = Σ_{v_i ∈ B(q)} w_i.
The cost of all the outgoing edges from a partition represents the total amount of communication cost
and is given by
C(q) = Σ w(v_i, v_j) over all edges (v_i, v_j) with M(v_i) = q and M(v_j) ≠ q.
We would like to make an assignment such that the time spent by every node is minimized, i.e.,
minimize max_q (W(q) + r C(q)), where r represents the ratio of the cost of unit computation to the
cost of unit communication on a machine. Assuming computational loads are nearly balanced
(W(0) ≈ W(1) ≈ ... ≈ W(P-1)), the second term needs to be minimized. In the literature,
Σ_q C(q) has also been used to represent the communication.
Assume that a solution is available for a graph G(V, E) by using one of the many methods available in
the literature, i.e., the mapping function M is available, such that the load is nearly balanced
and the communication cost is close to optimal. Let G'(V', E') be an incremental graph of G(V, E):
V' is obtained by adding some vertices to and deleting some vertices from V, and, similarly, E' is
obtained by adding some edges to and deleting some edges from E. We would like to find a new mapping
M': V' -> P such that the new partitioning is as load balanced as possible and the communication cost
is minimized. The methods described in this paper assume that G'(V', E') is sufficiently similar to
G(V, E) that this can be achieved, i.e., the number of vertices and edges added/deleted is a small
fraction of the original number of vertices and edges.
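The quantities W(q) and C(q) above can be computed directly from a mapping; the following is an illustrative sketch (function and variable names are ours):

```python
def partition_costs(edges, weights, mapping, num_parts):
    """Compute W(q) (sum of vertex weights in partition q) and
    C(q) (total weight of edges leaving partition q).
    edges: dict mapping undirected edge (u, v) -> edge weight."""
    W = [0.0] * num_parts
    C = [0.0] * num_parts
    for v, w in weights.items():
        W[mapping[v]] += w
    for (u, v), w in edges.items():
        if mapping[u] != mapping[v]:
            # A cut edge is an outgoing edge of both endpoints'
            # partitions, so it contributes to both C values.
            C[mapping[u]] += w
            C[mapping[v]] += w
    return W, C

# Path a-b-c split across two partitions: one cut edge (b, c).
weights = {"a": 1, "b": 1, "c": 1}
edges = {("a", "b"): 1.0, ("b", "c"): 1.0}
mapping = {"a": 0, "b": 0, "c": 1}
W, C = partition_costs(edges, weights, mapping, 2)
assert W == [2.0, 1.0] and C == [1.0, 1.0]
```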
3 Incremental partitioning
In this section we formulate incremental graph partitioning in terms of linear programming. A high-level
overview of the four phases of our incremental graph-partitioning algorithm is shown in Figure 1. Some
notation is in order. Let
1. P be the number of partitions,
2. V_i represent the set of vertices in partition i, and
3. λ represent the average load for each partition, λ = |V'| / P.
The four steps are described in detail in the following sections.
Step 1: Assign the new vertices to one of the partitions (given by M').
Step 2: Layer each partition to find the closest partition for each vertex (given by L').
Step 3: Formulate the linear programming problem based on the mapping of Step 1 and balance the loads (i.e., modify M'), minimizing
the total number of changes in M'.
Step 4: Refine the mapping of Step 2 to reduce the communication cost.
Figure 1: The different steps used in our incremental graph-partitioning algorithm.
3.1 Assigning an initial partition to the new nodes
The first step of the algorithm is to assign an initial partition to the nodes of the new graph (given by
M'). A simple method for initializing M'(v) is as follows. For all vertices v ∈ V' ∩ V, let
M'(v) = M(v). For all the vertices v ∈ V' - V, let M'(v) = M(x), where x ∈ V' ∩ V and
d(v, x) = min_{y ∈ V' ∩ V} d(v, y); (7)
here d(v, x) is the shortest distance in the graph G'(V', E'). For the examples considered in this paper we assume
that G'(V', E') is connected. If this is not the case, several other strategies can be used:
1. If G(V ∩ V', E ∩ E') is connected, this graph can be used instead of G' for the calculation of M'(V).
2. If G'(V', E') is not connected, then the new nodes that are not connected to any of the old
nodes can be clustered together (into potentially disjoint clusters) and assigned to the partition that
has the least number of vertices.
For the rest of the paper we will assume that M'(v) can be calculated using the definition in (7), although
the strategies developed in this paper are, in general, independent of this mapping. Further, for ease of
presentation, we will assume that the edge and vertex weights are of unit value. All of our algorithms
can be easily modified if this is not the case. Figure 2(a) describes the mapping of each of the vertices of a
graph. Figure 2(b) describes the mapping of the additional vertices using the above strategy.
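Step 1 (the nearest-old-vertex rule of (7)) amounts to a multi-source breadth-first search from the old vertices, assuming unit edge weights and a connected G'; a sketch (names are ours):

```python
from collections import deque

def initial_mapping(adj, old_mapping):
    """Extend a partition mapping to new vertices: each new vertex
    inherits the partition of the nearest old vertex, found by a
    multi-source BFS over the incremental graph (ties broken by
    visit order, matching the paper's 'ties broken arbitrarily')."""
    mapping = dict(old_mapping)
    frontier = deque(old_mapping)      # all old vertices at distance 0
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in mapping:       # first (i.e. shortest) reach wins
                mapping[v] = mapping[u]
                frontier.append(v)
    return mapping

# Old vertices: a (partition 0) and b (partition 1); new vertex x
# hangs off a, so it inherits partition 0.
adj = {"a": ["b", "x"], "b": ["a"], "x": ["a"]}
m = initial_mapping(adj, {"a": 0, "b": 1})
assert m["x"] == 0
```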
3.2 Layering each partition
The above mapping would ordinarily generate partitions of unequal size. We would like to move vertices
from one partition to another to achieve load balancing, while keeping the communication cost as small as
possible. This is achieved by making sure that the vertices transferred between two partitions are close to
the boundary of the two partitions. We assign each vertex of a given partition to a different partition it is
close to (ties are broken arbitrarily).
[Figure 2: (a) Initial graph; (b) incremental graph (new vertices are shown by "*").]
where x is such that
min
is is the shortest distance in the graph between v and x.
A simple algorithm to perform the layering is given in Figure 3; it assumes the graph is connected. Let alpha_ij be the number of such vertices of partition i that can be moved to partition j. For the example of Figure 2 (b), the labels of all the vertices are given in Figure 4. A label 2 on a vertex in partition 1 indicates that this vertex belongs to the set counted by alpha_12.
3.3 Load balancing
Let l_ij denote the number of vertices to be moved from partition i to partition j to achieve load balance. There are several ways of achieving load balancing. However, since one of our goals is to minimize communication cost, we would like to minimize sum_{i,j} l_ij, because this would correspond to a minimization of the amount of vertex movement (or "deformity") in the original partitions. Thus the load-balancing step can be formally defined as the following linear programming problem.
Minimize
    sum_{i,j} l_ij                                            (10)
subject to
    0 <= l_ij <= alpha_ij        for all i, j,                (11)
    |V_i| - sum_j l_ij + sum_j l_ji = |V| / P    for all i,   (12)
where |V_i| is the number of vertices currently in partition i and P is the number of partitions. Constraint (12) corresponds to the load balance condition.
The above formulation is based on the assumption that changes to the original graph are small and
the initial partitioning is well balanced, hence moving the boundaries by a small amount will give balanced
partitioning with low communication cost.
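For a toy instance, the load-balancing program can be checked by exhaustive search. The paper solves it with the simplex method; brute force is used here only because the example is tiny, and the alpha values below are made up for illustration.

```python
from itertools import product

def balance_moves(sizes, alpha, target):
    """Brute-force the load-balancing program for a tiny instance:
    minimize sum of l[i][j] subject to 0 <= l[i][j] <= alpha[i][j] and
    size[i] - sum_j l[i][j] + sum_j l[j][i] == target for every i.
    (Exhaustive search is only feasible because the example is small.)
    """
    P = len(sizes)
    pairs = [(i, j) for i in range(P) for j in range(P) if i != j]
    best, best_cost = None, None
    ranges = [range(alpha[i][j] + 1) for (i, j) in pairs]
    for move in product(*ranges):
        l = dict(zip(pairs, move))
        new = [sizes[i] - sum(l[i, j] for j in range(P) if j != i)
                        + sum(l[j, i] for j in range(P) if j != i)
               for i in range(P)]
        if all(n == target for n in new):
            cost = sum(move)
            if best_cost is None or cost < best_cost:
                best, best_cost = l, cost
    return best, best_cost

# Three partitions of sizes 14, 10, 12 (target 12); alpha[i][j] bounds the
# number of boundary vertices of i labeled with j.
alpha = [[0, 3, 2], [2, 0, 2], [1, 2, 0]]
moves, cost = balance_moves([14, 10, 12], alpha, 12)
print(cost)  # 2: move two boundary vertices from partition 0 to partition 1
```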
{ map[v[j]] represents the mapping of vertex j. }
{ adjncy_i[j] represents the j-th element of the local adjacency list of partition i. }
{ xadj_i[j] represents the starting address of vertex j in the local adjacency list of partition i. }
{ S_i^(k,level) represents the set of vertices of partition i at distance level from a node in partition k. }
{ Neighbor_i represents the set of partitions which have common boundaries with partition i. }

For each partition i do
    For each vertex v[j] of partition i do
        For l := xadj_i[v[j]] to xadj_i[v[j]+1] - 1 do
            count the neighbors of v[j] in each partition map[adjncy_i[l]]
        if v[j] has a neighbor outside partition i then
            add v[j] into S_i^(tag,0)
            { where tag is the outside partition with the largest such count }
    level := 0
    repeat
        For each k in Neighbor_i do
            For each vertex v[j] in S_i^(k,level) do
                For l := xadj_i[v[j]] to xadj_i[v[j]+1] - 1 do
                    if adjncy_i[l] belongs to partition i and is not yet labeled then
                        add adjncy_i[l] into tmpS
        level := level + 1
        For each vertex v[j] in tmpS do
            add v[j] into S_i^(tag,level)
            { where tag is the label of the already-labeled neighbor closest to v[j] }
    until all vertices of partition i are labeled
    For each k in Neighbor_i do
        alpha_ik := sum over 0 <= m < level of |S_i^(k,m)|

Figure 3: Layering Algorithm
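A simplified sequential rendering of the layering step, assuming an adjacency-list graph; this is an illustrative stand-in for the algorithm of Figure 3, with arbitrary tie-breaking, and all names are ours.

```python
from collections import deque

def layer_partition(adj, mapping, i):
    """Label every vertex of partition i with the closest outside partition,
    via BFS from the partition boundary. Returns (label, alpha), where
    alpha[j] counts the vertices of partition i labeled with partition j.
    """
    label, queue = {}, deque()
    for v in (u for u in adj if mapping[u] == i):
        outside = [mapping[w] for w in adj[v] if mapping[w] != i]
        if outside:                       # boundary vertex: level 0
            label[v] = outside[0]
            queue.append(v)
    while queue:                          # grow layers inward, level by level
        v = queue.popleft()
        for w in adj[v]:
            if mapping[w] == i and w not in label:
                label[w] = label[v]
                queue.append(w)
    alpha = {}
    for v, j in label.items():
        alpha[j] = alpha.get(j, 0) + 1
    return label, alpha

# Partition 0 = {0, 1, 2}, partition 1 = {3}, partition 2 = {4}; vertex 2 is interior.
adj = {0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1], 3: [0], 4: [1]}
mapping = {0: 0, 1: 0, 2: 0, 3: 1, 4: 2}
label, alpha = layer_partition(adj, mapping, 0)
print(alpha)  # {1: 2, 2: 1}
```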
Figure 4: Labeling the nodes of a graph to the closest outside partition; (a) a microscopic view of the layering for a graph near the boundary of three partitions; (b) layering of the graph in Figure 2 (b); no edges are shown.
Figure 5: Linear programming formulation and its solution, based on the mapping of the graph in Figure 2 (b), using the labeling information in Figure 4 (b). (The figure lists the constraints in (11) and (12) and the solution obtained using the simplex method; all values l_ij not listed are zero.)
There are several approaches to solving the above linear programming problem. We decided to use the simplex method because it has been shown to work well in practice and because it can be easily parallelized. 1
The simplex formulation of the example in Figure 2 and its solution are given in Figure 5. The new partitioning is given in Figure 6.
Figure 6: The new partition of the graph in Figure 2 (b) after the Load Balancing step. (The figure legend distinguishes initial partitions from incremental partitions.)
The above set of constraints may not have a feasible solution. One approach is to relax the constraint in (11), i.e., not to impose the upper bound l_ij <= alpha_ij. Clearly, this would achieve load balance but may lead to major modifications in the mapping. Another approach is to replace the constraint in (12) by a relaxed version that allows each partition to deviate from perfect balance by up to \Delta. Assuming \Delta > 0, this would not achieve load balancing in one step, but several such steps can be applied to do so. If a feasible solution cannot be found with a reasonable value of \Delta (within an upper bound C), it would be better to start partitioning from scratch, or to solve the problem by adding only a fraction of the nodes at a given time, i.e., to solve the problem in multiple stages. Typically, such cases arise when all the new nodes correspond to a few partitions and the amount of incremental change is greater than the size of one partition.
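The multi-stage idea can be illustrated with a toy balancing loop in which each stage moves a bounded number of vertices. This is purely our illustration (the paper instead re-solves the linear program with the relaxed constraint (12)); the ring topology and the per-stage capacity are made-up assumptions.

```python
def staged_balance(sizes, capacity, target):
    """Repeatedly apply a bounded balancing step until every partition
    reaches the target size. In each stage a partition may send at most
    `capacity` vertices to its successor (standing in for the bound
    l_ij <= alpha_ij); partitions are assumed to form a ring.
    Returns the number of stages used.
    """
    sizes = list(sizes)
    P = len(sizes)
    stages = 0
    while any(s != target for s in sizes):
        for i in range(P):
            nxt = (i + 1) % P
            surplus = sizes[i] - target
            if surplus > 0:
                move = min(surplus, capacity)   # bounded per-stage movement
                sizes[i] -= move
                sizes[nxt] += move
        stages += 1
    return stages

print(staged_balance([18, 12, 12, 6], capacity=2, target=12))  # 3 stages
```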
3.4 Refinement of partitions
The formulation in the previous section achieves load balance but does not try explicitly to reduce the number
of cross-edges. The minimization term in (10) and the constraint in (11) indirectly keep the cross-edges to
a minimum under the assumption that the initial partition is good. In this section we describe a linear
programming-based strategy to reduce the number of cross-edges, while still maintaining the load balance.
This is achieved by finding all the vertices of partition i on the boundary of partitions i and j such that the cost of edges to the vertices in j is larger than the cost of edges to local vertices (Figure 7), i.e., the total cost of cross-edges will decrease by moving such a vertex from partition i to j, which may, however, affect the load
1 We have used a dense version of the simplex algorithm. The total time can potentially be reduced by using a sparse representation.
Figure 7: Choosing vertices for refinement. (a) Microscopic view of a vertex which can be moved from partition P_i to P_j, reducing the number of cross-edges (in the depicted example the vertex has more non-local edges to P_j than local edges); (b) the set of vertices with the above property in the partition of Figure 6.
balance. In the following a linear programming formulation is given that moves the vertices while keeping
the load balance.
Let M''(v) denote the mapping of each vertex after the load-balancing step. Let out(k, j) represent the number of edges of vertex k in partition M''(k) connected to partition j (j != M''(k)), and let local(k) represent the number of vertices vertex k is connected to in partition M''(k). Let b_ij represent the number of vertices in partition i which have more outgoing edges to partition j than local edges.
We would like to maximize the number of vertices moved so that moving a vertex will not increase the
cost of cross-edges. The inequality in the above definition can be changed to a strict inequality. We leave
the equality, however, since by including such vertices the number of points that can be moved can be larger
(because these vertices can be moved to satisfy load balance constraints without affecting the number of
cross-edges).
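The quantities out(k, j), local(k), and the candidate sets behind b_ij can be computed in one pass over the adjacency lists. A small illustrative sketch (the function name and the example graph are ours):

```python
def refinement_candidates(adj, mapping):
    """For each vertex, compare local edges with edges to each other
    partition; collect vertices whose move would not increase the number
    of cross-edges (out(k, j) >= local(k), as in the refinement step).
    Returns b with b[(i, j)] = list of candidate vertices of i for j.
    """
    b = {}
    for v, part in mapping.items():
        counts = {}
        for w in adj[v]:
            counts[mapping[w]] = counts.get(mapping[w], 0) + 1
        local = counts.get(part, 0)
        for j, out in counts.items():
            if j != part and out >= local:
                b.setdefault((part, j), []).append(v)
    return b

# Vertex 2 is in partition 0 with two edges into partition 1 and two local
# edges, so moving it keeps the cut unchanged and it qualifies under ">=".
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
mapping = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}
print(refinement_candidates(adj, mapping))
```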
The refinement problem can now be posed as the following linear programming problem:
Maximize
    sum_{i,j} l_ij
such that
    0 <= l_ij <= b_ij          for all i, j,     (15)
    sum_j (l_ji - l_ij) = 0    for all i.        (16)
This refinement step can be applied iteratively until the effective gain from the movement of vertices is small. After a few steps, the inequalities (out(k, j) >= local(k)) in the definition of b_ij need to be replaced by strict inequalities (out(k, j) > local(k));
Figure 8: Formulation of the refinement step using linear programming (constraint (15) and the load balancing constraint (16)) and its solution using the simplex method.
otherwise, vertices having an equal number of local and nonlocal vertices may move between boundaries
without reducing the total cost. The simplex formulation of the example in Figure 6 is given in Figure 8,
and the new partitioning after refinement is given in Figure 9.
Figure 9: The new partition of the graph in Figure 6 after the Refinement step. (The figure legend distinguishes incremental partitions from refined partitions.)
3.5 Time complexity
Let the number of vertices and the number of edges in a graph be given by V and E, respectively. The time for layering is O(V + E). Let the number of partitions be P and the number of edges in the partition graph 2 be R. The number of constraints and variables generated for linear programming are O(P + R) and O(2R), respectively. Thus the time required for one iteration of the linear programming is O((P + R)R). Assuming R is O(P), this reduces to O(P^2). The number of iterations required for linear programming is problem dependent; we will use f(P) to denote the number of iterations. Thus the time required for the linear programming is O(P^2 f(P)). This gives the total time for repartitioning as O(V + E + P^2 f(P)).
2 Each node of this graph represents a partition. An edge in the partition graph is present whenever there are any cross-edges from a node of one partition to a node of another partition.
The parallel time is considerably more difficult to analyze. We will analyze the complexity neglecting the setup overhead of coarse-grained machines. The parallel time complexity of the layering step depends on the maximum number of edges assigned to any processor. This can be approximated by O(E/P) for each level, assuming the changes to the graph are incremental and that the graph is much larger than the number of processors. The parallelization of the linear programming requires, in each iteration, a broadcast of length proportional to O(P). Assuming that a broadcast of size P requires b(P) amount of time on a parallel machine with P processors, the time complexity can be approximated by O(E/P + b(P) f(P)).
4 A multilevel approach
For small graphs a large fraction of the total time spent in the algorithm described in the previous section
will be on the linear programming formulation and its solution. Since the time required for one iteration
of the linear programming formulation is proportional to the square of the number of partitions, it can be
substantially reduced by using a multilevel approach. Consider the partitioning of an incremental graph into 16 partitions. This can be completed in two stages: partitioning the graph into 4 super partitions and then partitioning each of the 4 super partitions into 4 partitions each. Clearly, more than two stages can be used.
The advantage of this algorithm is that the time required for applying linear programming to each stage
would be much less than the time required for linear programming using only one stage. This is due to a
substantial reduction in the number of variables as well as in the constraints, which are directly dependent
on the number of partitions. However, the mapping initialization and the layering need to be performed from scratch for each level. Thus the decrease in the cost of linear programming leads to a potential increase in the time spent in layering.
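The multilevel driver itself is independent of the base partitioner. A schematic sketch with a trivial stand-in partitioner (the real algorithm would apply the linear-programming-based repartitioning at each stage; all names here are ours):

```python
def multilevel_partition(vertices, partition_fn, stages):
    """Multi-stage partitioning driver: split into stages[0] super
    partitions, then recursively split each super partition.
    partition_fn(vertices, k) -> list of k vertex lists (any base
    partitioner may be plugged in).
    stages: e.g. [4, 4] gives 16 final partitions via 4 super partitions.
    """
    if not stages:
        return [vertices]
    parts = []
    for sub in partition_fn(vertices, stages[0]):
        parts.extend(multilevel_partition(sub, partition_fn, stages[1:]))
    return parts

# Illustrative base partitioner: split a vertex list into k equal chunks.
def chunk(vs, k):
    n = len(vs)
    return [vs[t * n // k:(t + 1) * n // k] for t in range(k)]

parts = multilevel_partition(list(range(32)), chunk, [4, 4])
print(len(parts))  # 16 partitions of 2 vertices each
```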
The multilevel algorithm requires combining the partitions of the original graph into super partitions.
For our implementations, recursive spectral bisection was used as an ab initio partitioning algorithm. Due
to its recursive property it creates a natural hierarchy of partitions. Figure 10 shows a two-level hierarchy
of partitions. After the linear programming-based algorithm has been applied for repartitioning a graph
that has been adapted several times, it is possible that some of the partitions corresponding to a lower level
subtree have a small number of boundary edges between them. Since the multilevel approach results in
repartitioning with a small number of partitions at the lower levels, the linear programming formulations
may produce infeasible solutions at the lower levels. This problem can be partially addressed by reconfiguring
the partitioning hierarchy.
A simple algorithm can be used to achieve reconfiguration. It tries to group proximate partitions
to form a multilevel hierarchy. At each level it tries to combine two partitions into one larger parti-
tion. Thus the number of partitions is reduced by a factor of two at every level by using a procedure
FIND UNIQUE NEIGHBOR(P ) (Figure 11), which finds a unique neighbor for each partition such that the
number of cross-edges between them is as large as possible. This is achieved by applying a simple heuristic (Figure 12) that uses a list of all the partitions in a random order (each processor has a different order). If
more than one processor is successful in generating a feasible solution, ties are broken based on the weight
and the processor number. The result of the merging is broadcast to all the processors. In case none of the
Figure 10: A two-level hierarchy of 16 partitions.
{ P: the number of partitions }
{ Edge[i][j] represents the number of edges from partition i to partition j. }

global_success := FALSE
trial := 0
While (not global_success) and (trial < T) do
    For each processor i do
        Random_list := list of all partitions in a random order
        Mark[j] := -1 for all partitions j; Weight := 0
        FIND_PAIR(success, Mark, Weight, Edge)
    global_success := GLOBAL_OR(success)
    if (not global_success) then
        For each processor i do
            FIX_PAIR(success, Mark, Weight, Edge)
        global_success := GLOBAL_OR(success)
    if (global_success) then
        winner := FIND_WINNER(success, Weight)
        { returns the processor number with maximum Weight }
        { processor winner broadcasts Mark to all the processors }
        return(global_success)
    else
        trial := trial + 1

Figure 11: Reconstruction Algorithm
FIND_PAIR(success, Mark, Weight, Edge)
    success := TRUE
    for each partition j in Random_list do
        if Mark[j] < 0 then
            find a neighbor k of j with (Mark[k] < 0) maximizing Edge[j][k]
            if k exists then
                Mark[j] := k; Mark[k] := j
                Weight := Weight + Edge[j][k]
            else
                success := FALSE

FIX_PAIR(success, Mark, Weight, Edge)
    success := TRUE
    l := 0
    While (l < P) and (success) do
        if Mark[l] < 0 then
            if an x exists such that (Mark[x] < 0) and (x is a neighbor of l, or x is a neighbor of a neighbor of l) then
                Mark[x] := l; Mark[l] := x
                Weight := Weight + Edge[l][x]
            else
                success := FALSE
        l := l + 1

Figure 12: A high-level description of the procedures used in FIND_UNIQUE_NEIGHBOR.
processors is successful, another heuristic (Figure 12) is applied that tries to modify the partial assignments made by the first heuristic to find a neighbor for each partition. If none of the processors is able to find a feasible solution, each processor starts with another random order and the above step is iterated a constant number (L) of times. 3 Figure 13 shows the partition reconfiguration for a simple example.
If the reconfiguration algorithm fails, the multilevel algorithm can be applied with a smaller number of levels (or only one level).
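The greedy pairing pass can be sketched as follows: a simplified, sequential rendering of a FIND_PAIR-like heuristic, with illustrative edge weights of our own.

```python
def find_pair(edge, order):
    """Visit partitions in the given random order and pair each unmatched
    partition with the unmatched neighbor sharing the most cross-edges.
    edge[i][j]: number of cross-edges between partitions i and j (0 if none).
    Returns (success, mark, weight), where mark[j] is j's partner or -1.
    """
    P = len(edge)
    mark = [-1] * P
    weight = 0
    for j in order:
        if mark[j] < 0:
            candidates = [k for k in range(P)
                          if k != j and mark[k] < 0 and edge[j][k] > 0]
            if not candidates:
                return False, mark, weight     # j cannot be paired; a FIX_PAIR-style repair would run next
            k = max(candidates, key=lambda k: edge[j][k])
            mark[j], mark[k] = k, j
            weight += edge[j][k]
    return True, mark, weight

# 4 partitions; 0-1 and 2-3 share the heaviest boundaries.
edge = [[0, 5, 1, 0],
        [5, 0, 0, 1],
        [1, 0, 0, 4],
        [0, 1, 4, 0]]
ok, mark, weight = find_pair(edge, [0, 1, 2, 3])
print(ok, mark, weight)  # True [1, 0, 3, 2] 9
```

In the parallel setting, each processor would run this with its own random order, and the pairing of maximum total weight would be broadcast.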
Figure 13: A working example of the reconstruction algorithm. (a) Graph with 4 partitions; (b) partition graph; (c) adjacency lists; (d) random order lists (Random_list, one per processor); (e) partition rearrangement; (f) processor 3 broadcasts the result to the other processors; (g) hierarchy after reconfiguration.
3 In practice, we found that the algorithm never requires more than one iteration.
4.1 Time complexity
In the following we provide an analysis assuming that reconfiguration is not required; the complexity of reconfiguration will be discussed later. For the multilevel approach we assume that at each level the number of partitions generated is equal and given by k. Thus the number of levels is log_k P. The time required for layering increases to O(E log_k P). The number of linear programming formulations is O(P/k), each involving k partitions. Thus the total time for linear programming can be given by O((P/k) k^2 f(k)) = O(P k f(k)). The total time required for repartitioning is given by O(E log_k P + P k f(k)). An appropriate value of k would minimize the sum of the cost of layering and the cost of the linear programming formulation. The choice of k also depends on the quality of partitioning achieved; increasing the number of levels would, in general, have a deteriorating effect on the quality of partitioning. Thus values of k have to be chosen based on the above tradeoffs. However, the analysis suggests that for reasonably sized graphs the layering time would dominate the total time. Since the layering time is bounded by O(E log P), this time is considerably lower than applying spectral bisection-based methods from scratch.
Parallel time is considerably more difficult to analyze. The parallel time complexity of the layering step depends on the maximum number of edges any processor has at each level. This can be approximated by O(E/P) for each level, assuming the changes to the graph are incremental and that the graph is much larger than the number of processors. As discussed earlier, the parallelization of linear programming requires a broadcast of length proportional to O(k). For small values of k, each linear programming formulation has to be executed on only one processor, else the communication will dominate the total time. Thus the parallel time is proportional to O((E/P + k^2 f(k)) log_k P).
The above analysis did not take reconfiguration into account. The cost of reconfiguration requires O(kd^2) time in parallel for every iteration, where d is the average number of partitions to which every partition is connected. The total time is O(kd^2 log P) for the reconfiguration. This time should not dominate the total time required by the linear programming algorithm.
5 Experimental results
In this section we present experimental results of the linear programming-based incremental partitioning
methods presented in the previous section. We will use the term "incremental graph partitioner" (IGP) to
refer to the linear programming based algorithm. All our experiments were conducted on the 32-node CM-5
available at NPAC at Syracuse University.
Meshes
We used two sets of adaptive meshes for our experiments. These meshes were generated using the DIME
environment [15]. The initial mesh of Set A is given in Figure 14 (a). The other incremental meshes are
generated by making refinements in a localized area of the initial mesh. These meshes represent a sequence
of refinements in a localized area. The number of nodes in the meshes are 1071, 1096, 1121, 1152, and 1192,
respectively.
The partitioning of the initial mesh (1071 nodes) was determined using recursive spectral bisection. This
was the partitioning used by algorithm IGP to determine the partitioning of the incremental mesh (1096
nodes). The repartitioning of the next set of refinements (1121, 1152, and 1192 nodes, respectively) was achieved using the partitioning obtained by the IGP for the previous mesh in the sequence. These meshes are used to test whether IGP is suitable for repartitioning a mesh after several refinements.
Figure 14: Test graphs Set A: (a) an irregular graph with 1071 nodes and 3185 edges; (b) graph in (a) with 25 additional nodes; (c) graph in (b) with 25 additional nodes; (d) graph in (c) with 31 additional nodes; (e) graph in (d) with 40 additional nodes.
Figure 15: Test graphs Set B: (a) a mesh with 10166 nodes and 30471 edges; (b) mesh (a) with 48 additional nodes; (c) mesh (a) with 139 additional nodes; (d) mesh (a) with 229 additional nodes; (e) mesh (a) with 672 additional nodes.
Results
Initial Graph - Figure 14 (a)
Total Cutset Max Cutset Min Cutset
Figure 14 (b)
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 31.71 - 733 56 33
IGP 14.75 0.68 747 55 34
IGP with Refinement 16.87 0.88 730 54 34
Figure 14 (c)
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 34.05 - 732 56 34
IGP 13.63 0.73 752 54 33
IGP with Refinement 16.42 1.05 727 54 33
Figure 14 (d)
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 34.96 - 716 57 34
IGP 15.89 0.92 757 56 33
IGP with Refinement 18.32 1.28 741 56 33
Figure 14 (e)
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 38.20 - 774 63 34
IGP 15.69 0.94 815 63 34
IGP with Refinement 18.43 1.26 779 59 34
Time unit in seconds. p - parallel timing on a 32-node CM-5. s - timing on 1-node CM-5.
Figure 16: Incremental graph partitioning using linear programming and its comparison with spectral bisection from scratch for meshes in Figure 14 (Set A).
The next data set (Set B) corresponds to highly irregular meshes with 10166 nodes and 30471 edges. This data set was generated to study the effect of different amounts of new data added to the original mesh. Figures 15 (b), 15 (c), 15 (d), and 15 (e) correspond to meshes with 68, 139, 229, and 672 additional nodes over the mesh in Figure 15 (a).
The results of the one-level IGP for the Set A meshes are presented in Figure 16. The results show that, even after multiple refinements, the quality of partitioning achieved is comparable to that achieved by recursive spectral bisection from scratch; thus this method can be used for repartitioning over several stages. The time required for repartitioning is about half the time required for partitioning using RSB. The algorithm provides a speedup of around 15 to 20 on a 32-node CM-5. Most of the time spent by our algorithm is in the solution of the linear programming formulation using the simplex method. The number of variables and constraints generated by the one-level linear programming algorithm for the load-balancing step for the meshes in Figure 14 are 188 and 126, respectively.
For the multilevel approach, the linear programming formulation for each subproblem at a given level
Initial Graph - Figure 15 (a)
Total Cutset Max Cutset Min Cutset
(b) Initial assignment by IGP using the partition of Figure 15 (a')
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 800.05 - 2137 178 90
IGP before Refinement 13.90 1.01 2139 186 84
IGP after Refinement 24.07 1.83 2040 172 82
(c) Initial assignment by IGP using the partition of Figure 15 (a')
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 814.36 - 2099 166 87
IGP before Refinement 18.89 1.08 2295 219 93
IGP after Refinement 29.33 2.01 2162 206 85
(d) Initial assignment by IGP using the partition of Figure 15 (a')
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 853.35 - 2057 169 94
IGP before Refinement (2) 35.98 2.08 2418 256 92
IGP after Refinement 43.86 2.76 2139 190 85
Initial assignment by IGP using the partition of Figure 15 (a')
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 904.81 - 2158 158 94
IGP before Refinement (3) 76.78 3.66 2572 301 102
IGP after Refinement 89.48 4.39 2270 237 96
Time unit in seconds. p - parallel timing on a 32-node CM-5. s - timing on 1-node CM-5.
Figure 17: Incremental graph partitioning using linear programming and its comparison with spectral bisection from scratch for meshes in Figure 15 (Set B).
was solved by assigning a subset of the processors. Figure 19 gives the time required for the different algorithms and the quality of partitioning achieved for different numbers of levels. A 4 x 4 x 2-based repartitioning implies that the repartitioning was performed in three stages with decomposition into 4, 4, and 2 partitions, respectively. The solution quality of the multilevel algorithms shows an insignificant deterioration in the number of cross-edges and a considerable reduction in total time.
The results achieved by algorithm IGP for the Set B meshes, using the partition of the mesh in Figure 15 (a), are given in Figure 17; the resulting partitions are shown in Figure 18. The number of stages required (by choosing an appropriate value of \Delta, as described in Section 3.3) were 1, 1, 2, and 3, respectively. 4 It is worth noting that, although the load
imbalance created by the additional nodes was severe, the quality of partitioning achieved for each case was
close to that of applying recursive spectral bisection from scratch. Further, the sequential time is at least
an order of magnitude better than that of recursive spectral bisection. The CM-5 implementation improved
the time required by a factor of 15 to 20. The time required for repartitioning the meshes of Figure 15 (b) and Figure 15 (c) is close to that required for the meshes in Figure 14. The timings for the meshes in Figure 15 (d) and 15 (e) are larger because they use multiple stages. The time can be reduced by using a multilevel approach (Figure 20).
However, the time reduction is relatively small (from 24.07 seconds to 6.70 seconds for a two-level approach).
Increasing the number of levels increases the total time as the cost of layering increases. The time reduction
for the last mesh (10838 nodes) is largely due to the reduction of the number of stages used in the multilevel
algorithm (Section 3.3). For almost all cases a speedup of 15 to 25 was achieved on a 32-node CM-5.
Figures 21 and 22 show the detailed timing of the different steps for the mesh in Figure 14 (d) and the mesh in Figure 15 (b), for the sequential and parallel versions of the repartitioning algorithm, respectively.
Clearly, the time spent in reconfiguration is negligible compared to the total execution time. Also, the time
spent for linear programming in a multilevel algorithm is much less than that in a single-level algorithm.
The results also show that the time for the linear programming remains approximately the same for both
meshes, while the time for layering is proportionally larger. For the multilevel parallel algorithm, the time for
layering is comparable with the time spent on linear programming for the smaller mesh, while it dominates
the time for the larger mesh. Since the layering term grows as O(E/P) per level, the results support the complexity analysis in the previous section. The time spent on reconfiguration is negligible compared to the total time.
6 Conclusions
In this paper we have presented novel linear programming-based formulations for solving incremental graph-partitioning
problems. The quality of partitioning produced by our methods is close to that achieved by
applying the best partitioning methods from scratch. Further, the time needed is a small fraction of the
latter and our algorithms are inherently parallel. We believe the methods described in this paper are of
critical importance in the parallelization of adaptive and incremental problems.
4 The number of stages was chosen by trial and error, but it can be determined from the load imbalance.
Figure 18: (a') Partitions using RSB; (b') partitions using IGP starting from (a'); (c') partitions using IGP starting from (a'); (d') partitions using IGP starting from (a'); (e') partitions using IGP starting from (a').
Graph Level Description Time-s Time-p Total Cutset
Time unit in seconds on CM-5.
Figure 19: Incremental multilevel graph partitioning using linear programming and its comparison with single-level graph partitioning for the sequence of graphs in Figure 14.
Graph Level Description Time-s Time-p Total Cutset
Time unit in seconds on CM-5.
Figure 20: Incremental multilevel graph partitioning using linear programming and its comparison with single-level graph partitioning for the sequence of meshes in Figure 15.
Mesh in Figure 14 (d)
Level Reconfiguration Layering Linear programming Total
Figure 15 (b)
Level Reconfiguration Layering Linear programming Total
Time in seconds
Balancing. R - Refinement. T - Total.
Figure 21: Time required for the different steps of the sequential repartitioning algorithm.
Mesh in Figure 14 (d)
Level Reconfiguration Layering Linear programming Data movement Total
Figure 15 (b)
Level Reconfiguration Layering Linear programming Data movement Total
Time in seconds
Balancing. R - Refinement. T - Total.
Figure 22: Time required for the different steps of the parallel repartitioning algorithm (on a 32-node CM-5).
References
[1] Solving Problems on Concurrent Processors.
[2] Software Support for Irregular and Loosely Synchronous Problems.
[3] Heuristic Approaches to Task Allocation for Parallel Computing.
[4] Load Balancing Loosely Synchronous Problems with a Neural Network.
[5] Solving Problems on Concurrent Processors.
[6] Graphical Approach to Load Balancing and Sparse Matrix Vector Multiplication on the Hypercube.
[7] An Improved Spectral Graph Partitioning Algorithm for Mapping Parallel Computations.
[8] Multidimensional Spectral Load Balancing.
[9] Genetic Algorithms for Graph Partitioning and Incremental Graph Partitioning.
[10] Physical Optimization Algorithms for Mapping Data to Distributed-Memory Multiprocessors.
[11] Solving Finite Element Equations on Current Computers.
[12] Fast Mapping and Remapping Algorithm for Irregular and Adaptive Problems.
[13] Partitioning Sparse Matrices with Eigenvectors of Graphs.
[14] Partitioning of Unstructured Mesh Problems for Parallel Processing.
[15] DIME: Distributed Irregular Mesh Environment.
[16] Performance of Dynamic Load-Balancing Algorithm for Unstructured Mesh Calculations.
262369 | Computing Accumulated Delays in Real-time Systems. | We present a verification algorithm for duration properties of real-time systems. While simple real-time properties constrain the total elapsed time between events, duration properties constrain the accumulated satisfaction time of state predicates. We formalize the concept of durations by introducing duration measures for timed automata. A duration measure assigns to each finite run of a timed automaton a real number the duration of the run which may be the accumulated satisfaction time of a state predicate along the run. Given a timed automaton with a duration measure, an initial and a final state, and an arithmetic constraint, the duration-bounded reachability problem asks if there is a run of the automaton from the initial state to the final state such that the duration of the run satisfies the constraint. Our main result is an (optimal) PSPACE decision procedure for the duration-bounded reachability problem. | Introduction
Over the past decade, model checking [CE81, QS81] has emerged as a powerful tool for the automatic
verification of finite-state systems. Recently the model-checking paradigm has been extended to
real-time systems [ACD93, HNSY94, AFH96]. Thus, given the description of a finite-state system
together with its timing assumptions, there are algorithms that test whether the system satisfies
a specification written in a real-time temporal logic. A typical property that can be specified in
real-time temporal logics is the following time-bounded causality property:
A response is obtained whenever a ringer has been pressed continuously for 2 seconds.
Standard real-time temporal logics [AH92], however, have limited expressiveness and cannot specify
some timing properties we may want to verify of a given system. In particular, they do not allow
us to constrain the accumulated satisfaction times of state predicates. As an example, consider the
following duration-bounded causality property:
A response is obtained whenever a ringer has been pressed, possibly intermittently, for
a total duration of 2 seconds. ( )
A preliminary version of this paper appeared in the Proceedings of the Fifth International Conference on
Computer-Aided Verification (CAV 93), Springer-Verlag LNCS 818, pp. 181-193, 1993.
† Bell Laboratories, Murray Hill, New Jersey, U.S.A.
‡ Department of Computer Science, University of Crete, and Institute of Computer Science, FORTH, Greece.
Partially supported by the BRA ESPRIT project REACT.
§ Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, U.S.A. Partially
supported by the ONR YIP award N00014-95-1-0520, by the NSF CAREER award CCR-9501708, by the
NSF grants CCR-9200794 and CCR-9504469, by the AFOSR contract F49620-93-1-0056, and by the ARPA grant
NAG2-892.
To specify this duration property, we need to measure the accumulated time spent in the state
that models "the ringer is pressed." For this purpose, the concept of duration operators on state
predicates is introduced in the Calculus of Durations [CHR91]. There, an axiom system is given
for proving duration properties of real-time systems.
Here we address the algorithmic verification problem for duration properties of real-time
systems. We use the formalism of timed automata [AD94] for representing real-time systems. A timed
automaton operates with a finite control and a finite number of fictitious time gauges called clocks,
which allow the annotation of the control graph with timing constraints. The state of a timed
automaton includes, apart from the location of the control, also the real-numbered values of all
clocks. Consequently, the state space of a timed automaton is infinite, and this complicates its
analysis. The basic question about a timed automaton is the following time-bounded reachability
problem:
Given an initial state σ, a final state τ, and an interval I, is there a run of the automaton
starting in state σ and ending in state τ such that the total elapsed time of the run is
in the interval I? (†)
The solution to this problem relies on a partition of the infinite state space into finitely many regions,
which are connected with transition and time edges to form the region graph of the timed automaton
[AD94]. The states within a region are equivalent with respect to many standard questions. In
particular, the region graph can be used for testing the emptiness of a timed automaton [AD94], for
checking time-bounded branching properties [ACD93], for testing the bisimilarity of states [Cer92],
and for computing lower and upper bounds on time delays [CY91]. Unfortunately, the region graph
is not adequate for checking duration properties such as the duration-bounded causality property
(∗); that is, of two runs that start in different states within the same region, one may satisfy the
duration-bounded causality property, whereas the other one does not. Hence a new technique is
needed for analyzing duration properties.
To introduce the concept of durations in a timed automaton, we associate with every finite
run a nonnegative real number, which is called the duration of the run. The duration of a run is
defined inductively using a duration measure, which is a function that maps the control locations
to nonnegative integers: the duration of an empty run is 0; and the duration measure of a location
gives the rate at which the duration of a run increases while the automaton control resides in
that location. For example, a duration measure of 0 means that the duration of the run stays
unchanged (i.e., the time spent in the location is not accumulated), a duration measure of 1 means
that the duration of the run increases at the rate of time (i.e., the time spent in the location
is accumulated), and a duration measure of 2 means that the duration of the run increases at
twice the rate of time. The time-bounded reachability problem (y) can now be generalized to the
duration-bounded reachability problem:
Given an initial state σ, a final state τ, a duration measure, and an interval I, is there
a run of the automaton starting in state σ and ending in state τ such that the duration
of the run is in the interval I?
We show that the duration-bounded reachability problem is Pspace-complete, and we provide an
optimal solution. Our algorithm can be used to verify duration properties of real-time systems that
are modeled as timed automata, such as the duration-bounded causality property (∗).
Let us briefly outline our construction. Given a region R, a final state τ, and a path in the
region graph from R to τ, we show that the lower and upper bounds on the durations of all runs
that start at some state in R and follow the chosen path can be written as linear expressions over
the variables that represent the clock values of the start state. In a first step, we provide a recipe
for computing these so-called bound expressions. In the next step, we define an infinite graph,
the bounds graph, whose vertices are regions tagged with bound expressions that specify the set of
possible durations for any path to the final state. In the final step, we show that the infinite bounds
graph can be collapsed into a finite graph for solving the duration-bounded reachability problem.
2 The Duration-bounded Reachability Problem
Timed automata
Timed automata are a formal model for real-time systems [Dil89, AD94]. Each automaton has a
finite set of control locations and a finite set of real-valued clocks. All clocks proceed at the same
rate, and thus each clock measures the amount of time that has elapsed since it was started. A
transition of a timed automaton can be taken only if the current clock values satisfy the constraint
that is associated with the transition. When taken, the transition changes the control location of
the automaton and restarts one of the clocks.
Formally, a timed automaton A is a triple (S, X, E) with the following components:
• S is a finite set of locations;
• X is a finite set of clocks;
• E is a finite set of transitions of the form (s, t, φ, x), for a source location s ∈ S, a target
location t ∈ S, a clock constraint φ, and a clock x ∈ X. Each clock constraint is a positive
boolean combination of atomic formulas of the form y ≤ k, y < k, k ≤ y, or k < y, for a
clock y ∈ X and a nonnegative integer constant k ∈ ℕ.
A configuration of the timed automaton A is fully described by specifying the location of the control
and the values of all clocks. A clock valuation c ∈ ℝ^X is an assignment of nonnegative reals to the
clocks in X. A state σ of A is a pair (s, c) consisting of a location s ∈ S and a clock valuation c.
We write Σ for the (infinite) set of states of A. As time elapses, the values of all clocks increase
uniformly with time, thereby changing the state of A. Thus, if the state of A is (s, c), then after
time δ ∈ ℝ, assuming that no transition occurs, the state of A is (s, c + δ), where c + δ is the
clock valuation that assigns c(x) + δ to each clock x. The state of A may also change because of
a transition (s, t, φ, x) in E. Such a transition can be taken only in a state whose location is s
and whose clock valuation satisfies the constraint φ. The transition is instantaneous. After the
transition, the automaton is in a state with location t and the new clock valuation is c[x := 0]; that
is, the clock x associated with the transition is reset to the value 0, and all other clocks remain
unchanged.
The possible behaviors of the timed automaton A are defined through a successor relation on
the states of A:
Transition successor For all states (s, c) ∈ Σ and transitions (s, t, φ, x) ∈ E, if c satisfies φ, then
(s, c) →^0 (t, c[x := 0]).
Time successor For all states (s, c) ∈ Σ and time increments δ ∈ ℝ, (s, c) →^δ (s, c + δ).
A state (t, d) is a successor of the state (s, c), written (s, c) → (t, d), iff there exists a nonnegative
real δ such that (s, c) →^δ →^0 (t, d). The successor relation defines an infinite graph K(A) on the state
space Σ of A. The transitive closure →* of the successor relation → is called the reachability
relation of A.
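The transition and time successors above can be sketched in code. The encoding below (a hypothetical `Transition` record and dictionary-based clock valuations, together with the sample guard y ≤ 2 from Figure 1) is an illustrative assumption of this sketch, not notation from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    source: str      # s
    target: str      # t
    guard: object    # phi: clock valuation -> bool
    reset: str       # the single clock x that is reset

def time_succ(state, delta):
    """Time successor: (s, c) -delta-> (s, c + delta)."""
    s, c = state
    return (s, {x: v + delta for x, v in c.items()})

def trans_succ(state, e):
    """Transition successor: (s, c) -0-> (t, c[x := 0]), if c satisfies phi."""
    s, c = state
    assert s == e.source and e.guard(c)
    d = dict(c)
    d[e.reset] = 0.0
    return (e.target, d)

# A state (t, d) is a successor of (s, c) iff a time step of some length
# delta >= 0 followed by one transition leads from (s, c) to (t, d).
e = Transition("s", "t", lambda c: c["y"] <= 2, "y")
print(trans_succ(time_succ(("s", {"x": 0.0, "y": 0.0}), 1.5), e))
# ('t', {'x': 1.5, 'y': 0.0})
```

Note that the transition itself takes no time: only the time-successor step advances the clock values.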
Figure 1: Sample timed automaton
Example 1 A sample timed automaton is shown in Figure 1. The automaton has four locations
and two clocks. Each edge is labeled with a clock constraint and the clock to be reset. A state of
the automaton contains a location and real-numbered values for the clocks x and y. Some sample
pairs in the successor relation are (s, 0, 0) → ….

Depending on the application, a timed automaton may be augmented with additional components
such as initial locations, accepting locations, transition labels for synchronization with other timed
automata, and atomic propositions as location labels. It is also useful to label each location with a
clock constraint that limits the amount of time spent in that location [HNSY94]. We have chosen a
very simple definition of timed automata to illustrate the essential computational aspects of solving
reachability problems. Also, the standard definition of a timed automaton allows a (possibly empty)
set of clocks to be reset with each transition. Our requirement that precisely one clock is reset with
each transition does not affect the expressiveness of timed automata.
Clock regions and the region graph
Let us review the standard method for analyzing timed automata. The key to solving many
verification problems for a timed automaton is the construction of the so-called region graph [AD94].
The region graph of a timed automaton is a finite quotient of the infinite state graph that retains
enough information for answering certain reachability questions.
Suppose that we are given a timed automaton A and an equivalence relation ≅ on the states
Σ of A. For σ ∈ Σ, we write [σ]_≅ for the equivalence class of states that contains the state
σ. The successor relation → is extended to ≅-equivalence classes as follows: define [σ]_≅ → [τ]_≅ iff
there is a state σ′ ∈ [σ]_≅, a state τ′ ∈ [τ]_≅, and a nonnegative real δ such that σ′ →^δ →^0 τ′ and for all
nonnegative reals ε < δ, we have (σ′ + ε) ∈ [σ]_≅ ∪ [τ]_≅. The quotient graph of A with respect to
the equivalence relation ≅, written [K(A)]_≅, is a graph whose vertices are the ≅-equivalence classes
and whose edges are given by the successor relation →. The equivalence relation ≅ is stable iff
whenever σ → τ, then for all states σ′ ∈ [σ]_≅ there is a state τ′ ∈ [τ]_≅ with σ′ → τ′; it
is back stable iff whenever σ → τ, then for all states τ′ ∈ [τ]_≅ there is a state σ′ ∈ [σ]_≅ with
σ′ → τ′. The quotient graph with respect to a (back) stable equivalence relation can be used for
solving the reachability problem between equivalence classes: given two ≅-equivalence classes R_0
and R_f, is there a state σ ∈ R_0 and a state τ ∈ R_f such that σ →* τ? If the equivalence relation ≅
is (back) stable, then the answer to the reachability problem is affirmative iff there is a path from
R_0 to R_f in the quotient graph [K(A)]_≅.
The region graph of the timed automaton A is a quotient graph of A with respect to the
equivalence relation ∼ defined below. For x ∈ X, let m_x be the largest constant that the
clock x is compared to in any clock constraint of A. For δ ∈ ℝ, let ⌊δ⌋ denote the integral part of δ,
and let δ̄ denote the fractional part of δ (thus, δ = ⌊δ⌋ + δ̄).
We freely use constraints like x̄ = 0 or ⌊x⌋ ≤ k
for a clock x and a nonnegative integer constant k (e.g., a clock valuation c satisfies the
constraint ⌊x⌋ ≤ k iff ⌊c(x)⌋ ≤ k). Two states (s, c) and (t, d) of A are region-equivalent, written
(s, c) ∼ (t, d), iff the following four conditions hold:
1. s = t;
2. for each clock x ∈ X, either ⌊c(x)⌋ = ⌊d(x)⌋, or both c(x) > m_x and d(x) > m_x;
3. for all clocks x, y ∈ X, the valuation c satisfies x̄ ≤ ȳ iff the valuation d satisfies x̄ ≤ ȳ;
4. for each clock x ∈ X, the valuation c satisfies x̄ = 0 iff the valuation d satisfies x̄ = 0.
A (clock) region R ⊆ Σ is a ∼-equivalence class of states. Hence, a region is fully specified by
a location, the integral parts of all clock values, and the ordering of the fractional parts of the
clock values. For instance, if X contains three clocks, x, y, and z, then one region
contains all states (s, c) such that c satisfies … < z̄ < 1. For such a region R, we write [s; …; z̄], and we say
that R has the location s and satisfies the constraints …, etc. There are only finitely many
regions, because the exact value of the integral part of a clock x is recorded only if it is smaller than
m_x. The number of regions is bounded by |S| · n! · 2^n · Π_{x∈X}(2m_x + 2), where n is the number
of clocks. The region graph R(A) of the timed automaton A is the (finite) quotient graph of A
with respect to the region equivalence relation ∼. The region equivalence relation ∼ is stable as
well as back-stable [AD94]. Hence the region graph can be used for solving reachability problems
between regions, and also for solving time-bounded reachability problems [ACD93].
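The four conditions of region equivalence can be sketched as a direct check. For simplicity this illustrative sketch compares integral and fractional parts of all clocks, which is faithful only while no clock value exceeds its maximal constant m_x; the function and variable names are assumptions of the sketch.

```python
import math

def region_equiv(state1, state2, m):
    """Check region equivalence of (s, c) and (t, d); m[x] is the largest
    constant that clock x is compared to in any clock constraint."""
    (s, c), (t, d) = state1, state2
    if s != t:                                  # condition 1: same location
        return False
    frac = lambda v: v - math.floor(v)
    for x in c:
        # condition 2: equal integral parts, unless both values exceed m[x]
        if not (math.floor(c[x]) == math.floor(d[x])
                or (c[x] > m[x] and d[x] > m[x])):
            return False
        # condition 4: fractional part is zero in c iff it is zero in d
        if (frac(c[x]) == 0) != (frac(d[x]) == 0):
            return False
    for x in c:
        for y in c:
            # condition 3: the ordering of the fractional parts agrees
            if (frac(c[x]) <= frac(c[y])) != (frac(d[x]) <= frac(d[y])):
                return False
    return True

m = {"x": 2, "y": 2}
print(region_equiv(("s", {"x": 0.3, "y": 1.7}), ("s", {"x": 0.4, "y": 1.9}), m))  # True
print(region_equiv(("s", {"x": 0.3, "y": 1.7}), ("s", {"x": 0.8, "y": 1.6}), m))  # False
```

The second pair fails condition 3: the fractional part of x is below that of y in the first state but above it in the second.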
It is useful to define the edges of the region graph explicitly. A region R is a boundary region
iff there is some clock x such that R satisfies the constraint x̄ = 0. A region that is not a boundary
region is called an open region. For a boundary region R, we define its predecessor region pred(R)
to be the open region Q such that for all states (s, c) ∈ Q, there is a time increment δ ∈ ℝ such
that (s, c + δ) ∈ R, and for all nonnegative reals ε < δ, we have (s, c + ε) ∈ Q. Similarly, we define
the successor region succ(R) of R to be the open region Q′ such that for all states (s, c) ∈ R
there is a time increment δ ∈ ℝ such that (s, c + δ) ∈ Q′, and for all nonnegative reals 0 < ε < δ,
we have (s, c + ε) ∈ Q′. The state of a timed automaton belongs to a boundary region R only
instantaneously. Just before that instant the state belongs to pred(R), and just after that instant
the state belongs to succ(R). For example, for the boundary region R given by [s; …],
pred(R) is the open region [s; …] and succ(R) is the open region [s; …; z̄].
The edges of the region graph R(A) fall into two categories:
Transition edges If (s, c) →^0 (t, d), then there is an edge from the region [s, c]_∼ to the region [t, d]_∼.
Time edges For each boundary region R, there is an edge from pred(R) to R, and an edge from
R to succ(R).
In addition, each region has a self-loop, which can be ignored for solving reachability problems.
Duration measures and duration-bounded reachability
A duration measure for the timed automaton A is a function p from the locations of A to the
nonnegative integers. A duration constraint for A is of the form
R
is a duration
measure for A and I is a bounded interval of the nonnegative real line whose endpoints are integers
may be open, half-open, or closed).
Let p be a duration measure for A. We extend the state space of A to evaluate the integral
R
along the runs of A. An extended state of A is a pair (oe; ") consisting of a state oe of A and a
nonnegative real number ". The successor relation on states is extended as follows:
Transition successor For all extended states (s; c; ") and all transitions (s; t; '; x) such that
c satisfies ', define (s; c; '') 0
Time successor For all extended states (s; c; ") and all time increments
(s; c; ") ffi
We consider the duration-bounded reachability problem between regions: given two regions R_0 and
R_f of a timed automaton A, and a duration constraint ∫p ∈ I for A, is there a state σ ∈ R_0, a state
τ ∈ R_f, and a nonnegative real ε ∈ I such that (σ, 0) →* (τ, ε)? We refer to this duration-bounded
reachability problem using the tuple (A, R_0, R_f, ∫p ∈ I).
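The extended-state semantics can be illustrated with a small sketch that accumulates the integral of p along a run; the dictionaries, rates, and run below are assumptions of this example, not values from the paper.

```python
def time_step(ext_state, delta, p):
    """(s, c, eps) -delta-> (s, c + delta, eps + p(s) * delta)."""
    s, c, eps = ext_state
    return (s, {x: v + delta for x, v in c.items()}, eps + p[s] * delta)

def transition_step(ext_state, target, reset):
    """(s, c, eps) -0-> (t, c[x := 0], eps): transitions take no time,
    so the accumulated duration eps is unchanged."""
    s, c, eps = ext_state
    d = dict(c)
    d[reset] = 0.0
    return (target, d, eps)

p = {"s": 1, "t": 0}   # time in location s is accumulated, time in t is not
e0 = ("s", {"x": 0.0, "y": 0.0}, 0.0)
e1 = time_step(e0, 1.0, p)         # eps grows at rate 1: eps = 1.0
e2 = transition_step(e1, "t", "y") # reset y; eps unchanged
e3 = time_step(e2, 3.0, p)         # rate 0 in location t: eps stays 1.0
print(e3[2])                       # accumulated duration: 1.0
```

The total elapsed time of this run is 4.0, but the duration measure only accumulates the 1.0 time units spent in location s.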
Example 2 Recall the sample timed automaton from Figure 1. Suppose that the duration measure
p is defined by … = 1. Let the initial region R_0 be the singleton …, and
let the final region R_f be {(s, 0)}. For the duration constraint ∫p ∈ …,
the answer to the duration-bounded reachability problem is in the affirmative, and the
following sequence of successor pairs is a justification (the last component denotes the value of the
integral ∫p): (s, 0, 0, …) → ….
On the other hand, for the duration constraint ∫p ∈ …, the answer to the duration-bounded
reachability problem is negative. The reader can verify that if (s, 0, 0, …) →* …, then … 2. □
If the duration measure p is the constant function 1 (i.e., p(s) = 1 for all locations s), then
the integral ∫p measures the total elapsed time, and the duration-bounded reachability problem
between regions is called a time-bounded reachability problem. In this case, if (σ, 0) →* (τ, ε) for
some ε ∈ I, then for all states σ′ ∈ [σ]_∼ there is a state τ′ ∈ [τ]_∼ and a real number ε′ ∈ I such that
(σ′, 0) →* (τ′, ε′). Hence, the region graph suffices to solve the time-bounded reachability problem.
This, however, is not true for general duration measures.
3 A Solution to the Duration-bounded Reachability Problem
Bound-labeled regions and the bounds graph
Consider a timed automaton A, two regions R_0 and R_f, and a duration measure p. We determine
the set I of possible values of ε such that (σ, 0) →* (τ, ε) for some σ ∈ R_0 and τ ∈ R_f. To compute
the lower and upper bounds on the integral ∫p along a path of the region graph, we refine the graph
by labeling all regions with expressions that specify the extremal values of the integral.
We define an infinite graph with vertices of the form (R, L, l, U, u), where R is a region, L and
U are linear expressions over the clock variables, and l and u are boolean values. The intended
meaning of the bound expressions L and U is that in moving from a state (s, c) ∈ R to a state
in the final region R_f, the set of possible values of the integral ∫p has the infimum L and the
supremum U, both of which are functions of the current clock values c. If the bit l is 0, then the
infimum L is included in the set of possible values of the integral, and if l is 1, then L is excluded.
Similarly, if the bit u is 0, then the supremum U is included in the set of possible values of ∫p,
and if u is 1, then U is excluded. For example, if l = 0 and u = 1, then the left-closed right-open
interval [L, U) gives the possible values of the integral ∫p.
The bound expressions L and U associated with the region R have a special form. Suppose
that X = {x_1, …, x_n} is the set of clocks and that for all states (s, c) ∈ R, the clock valuation c
satisfies x̄_1 ≤ ⋯ ≤ x̄_n; that is, x_1 is the clock with the smallest fractional part in R, and
x_n is the clock with the largest fractional part. The fractional parts of all n clocks partition the
unit interval into n + 1 segments, represented by the expressions e_0 = x̄_1, e_i = x̄_{i+1} − x̄_i for
1 ≤ i < n, and e_n = 1 − x̄_n.
A bound expression for R is a positive linear combination of the expressions e_0, …, e_n; that is, a
bound expression for R has the form a_0·e_0 + ⋯ + a_n·e_n, where a_0, …, a_n are nonnegative integer
constants. We denote bound expressions by (n + 1)-tuples of coefficients and write (a_0, …, a_n) for
the bound expression a_0·e_0 + ⋯ + a_n·e_n. For a bound expression e and a clock valuation c,
we write ⟦e⟧_c to denote the result of evaluating e using the clock values given by c. When time
advances, the value of a bound expression changes at the rate a_0 − a_n. If the region R satisfies
the constraint x̄_1 = 0 (i.e., R is a boundary region), then the coefficient a_0 is irrelevant, and if R
satisfies x̄_i = x̄_{i+1}, then the coefficient a_i is irrelevant. Henceforth, we assume that all irrelevant
coefficients are set to 0.
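Under the segment convention above (e_0 = x̄_1, intermediate differences, e_n = 1 − x̄_n, a reconstruction of this sketch consistent with the stated rate a_0 − a_n), a bound expression can be evaluated numerically as follows; the function name and sample values are assumptions of the example.

```python
def eval_bound(coeffs, fracs):
    """Evaluate the bound expression (a_0, ..., a_n) over the n + 1 segments
    cut out of the unit interval by the ordered fractional parts."""
    f = sorted(fracs)                                        # f_1 <= ... <= f_n
    segs = [f[0]] + [f[i + 1] - f[i] for i in range(len(f) - 1)] + [1 - f[-1]]
    return sum(a * e for a, e in zip(coeffs, segs))

# Example with n = 3 clocks: the segments sum to 1, so the value is a
# convex combination of the coefficients.
v0 = eval_bound((2, 0, 0, 5), [0.2, 0.5, 0.9])
print(v0)   # 2*0.2 + 5*0.1 = 0.9 (up to floating-point rounding)

# Advancing time by a small d increases every fractional part by d (within
# the region), so the value changes at rate a_0 - a_n = 2 - 5 = -3:
v1 = eval_bound((2, 0, 0, 5), [0.25, 0.55, 0.95])
print(round((v1 - v0) / 0.05, 6))
```

Because the segments always sum to 1, the value of a bound expression over an open region ranges strictly between the smallest and largest relevant coefficients, which is used below when computing interval endpoints.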
A bound-labeled region (R, L, l, U, u) of the timed automaton A consists of a clock region R of
A, two bound expressions L and U for R, and two bits l, u ∈ {0, 1}. We construct B_{p,R_f}(A), the
bounds graph of A for the duration measure p and the final region R_f. The vertices of B_{p,R_f}(A) are
the bound-labeled regions of A and the special vertex R_f, which has no outgoing edges. We first
define the edges with the target R_f, and then the edges between bound-labeled regions.
The edges with the target R_f correspond to runs of the automaton that reach a state in R_f
without passing through other regions. Suppose that R_f is an open region with the duration
measure a (i.e., p(s) = a for the location s of R_f). The final region R_f is reachable from a state
(s, c) ∈ R_f by remaining in R_f for at least 0 and at most 1 − x̄_n time units. Since the integral
∫p increases at the rate a, the lower bound on the integral value over all states (s, c) ∈ R_f is 0,
and the upper bound is a · (1 − x̄_n). While the lower bound 0 is a possible value of the integral, if
a > 0, then the upper bound is only a supremum of all possible values. Hence, we add an edge in
the bounds graph to R_f from (R_f, L, 0, U, u) for L = (0, …, 0) and U = (0, …, 0, a), with u = 1
if a > 0 and u = 0 if a = 0.
If R_f is a boundary region, no time can be spent in R_f, and both bounds are 0. In this case, we
add an edge to R_f from (R_f, L, 0, U, 0) with L = U = (0, …, 0).

Figure 2:
Now let us look at paths that reach the final region R_f by passing through other regions. For
each edge from R to R′ in the region graph R(A), the bounds graph B_{p,R_f}(A) has exactly one edge
to each bound-labeled region of the form (R′, L′, l′, U′, u′) from a bound-labeled region of the
form (R, L, l, U, u). First, let us consider an example to understand the determination of the lower
bound L and the corresponding bit l (the upper bound U and the bit u are determined similarly).
Suppose that X = {x, y, z} and that the boundary region R_1, which satisfies x̄ = 0, is
labeled with the lower bound L_1 and the bit l_1. This means that starting from a
state (s, c) ∈ R_1, the lower bound on the integral ∫p for reaching some state in R_f is ⟦L_1⟧_c.
Consider the open predecessor region R_2 of R_1, which satisfies … < x̄, and let a be the duration
measure of R_2. There is a time edge from R_2 to R_1 in the region graph. We want to compute the
lower-bound label L_2 for R_2 from the lower-bound label L_1 of R_1. Starting in a state (s, c) ∈ R_2,
the state (s, c + δ) ∈ R_1 is reached after time δ = 1 − x̄.
Furthermore, from the state (s, c) ∈ R_2 the integral ∫p has the value a · (1 − x̄) before entering
the region R_1. Hence, the new lower bound is ⟦L_1⟧_{c+δ} + a · (1 − x̄),
and the label L_2 is (a_1, a_2, a_3, a_1 + a). See Figure 2. Whether the lower bound L_2 is a possible value
of the integral depends on whether the original lower bound L_1 is a possible value of the integral
starting in R_1. Thus, the bit l_2 labeling R_2 is the same as the bit l_1 labeling R_1.
Next, consider the boundary region R_3 such that R_2 is the successor region of R_3. The region
R_3 satisfies …, and there is a time edge from R_3 to R_2 in the region graph. The reader
can verify that the updated lower-bound label L_3 of R_3 is the same as L_2, namely (a_1, a_2, a_3, a_1 + a),
which can be simplified to (0, a_2, a_3, a_1 + a) because R_3 is a boundary region. See Figure 3. The
updated bit l_3 of R_3 is the same as l_2.
Figure 3:

Figure 4:
The process repeats if we consider further time edges, so let us consider a transition edge from
region R_4 to region R_3, which resets the clock y. We assume that the region R_4 is open with
duration measure b, and that R_4 satisfies … < x̄. Consider a state (t, d) ∈ R_4. Suppose
that the transition happens after time δ; then the state just before the transition is (t, d + δ). If the
state after the transition is (s, c) ∈ R_3, then c = (d + δ)[y := 0]. The lower bound L_4 corresponding
to this scenario is the value of the integral before the transition, which is b · δ, added to the value
of the lower bound L_3 at the state (s, c), which is ⟦L_3⟧_c.
To obtain the value of the lower bound L_4 at the state (t, d), we need to compute the infimum over
all choices of δ. Hence, the desired lower bound is
inf_δ (b · δ + ⟦L_3⟧_{(d+δ)[y:=0]}),
which, after substituting …, simplifies to a linear function of δ.
The infimum of the monotonic function in δ is reached at one of the two extreme points. When
δ = 0 (i.e., the transition occurs immediately), then the value of L_4 at (t, d) is ….
When δ approaches its supremum (i.e., the transition occurs as late as possible), then the value of L_4 at
(t, d) is …. Consequently, the lower-bound label L_4 for R_4
is (a_2, a_3, a_3, a_4), where a_4 is the minimum of a_1 + a and a_2 + …. See Figure 4. Finally, we need to
deduce the bit l_4, which indicates whether the lower bound L_4 is a possible value of the integral.
If a_1 + a ≤ a_2 + …, then the lower bound is obtained with δ = 0, and L_4 is possible for R_4 iff L_3
is possible for R_3; so l_4 is the same as l_3. Otherwise, the lower bound is
obtained with δ approaching its supremum, and L_4 is possible iff … and L_3 is possible for R_3; so
l_4 = ….

Figure 5:
We now formally define the edges between bound-labeled regions of the bounds graph B_{p,R_f}(A).
Suppose that the region graph R(A) has an edge from R to R′, and let a be the duration measure
of R. Then the bounds graph has an edge from (R, L, l, U, u) to (R′, L′, l′, U′, u′) iff the bound
expressions L = (a_0, …, a_n), U, L′ = (b_0, …, b_n), U′, and the bits l, u, l′, and u′ are related as
follows. There are various cases to consider, depending
on whether the edge from R to R′ is a time edge or a transition edge:
Time edge 1 R′ is a boundary region and R is an open region: let 1 ≤ k ≤ n be the
largest index such that R′ satisfies …; for all …, we have a_i = b_{i+k} and …; for all …, ….
Time edge 2 R is a boundary region and R′ is an open region: a_… = …; for all …, ….
Transition edge 1 R′ is a boundary region, R is an open region, and the clock with the k-th
smallest fractional part in R is reset:
for all …, we have a_… = …;
if b_0 ≤ a, then …;
if b_0 > a and a > 0, then …;
if … and a > 0, then ….
Transition edge 2 Both R and R′ are boundary regions, and the clock with the k-th smallest
fractional part in R is reset:
for all …, we have a_… = …;
for all k ≤ i ≤ n, we have a_… = ….
This case is illustrated in Figure 5.
This completes the definition of the bounds graph B_{p,R_f}(A).
Reachability in the bounds graph
Given a state oe = (s; c), two bound expressions L and U , and two bits l and u, we define the
(bounded) interval I(oe; L; l; U; u) of the nonnegative real line as follows: the left endpoint is
the right endpoint is [[U then the interval is left-closed, else it is left-open; if
then the interval is right-closed, else it is right-open. The following lemma states the fundamental
property of the bounds graph B p;R f (A).
A be a timed automaton, let p be a duration measure for A, and let R f be a region
of A. For every state oe of A and every nonnegative real ffi , there is a state - 2 R f such that
in the bounds graph B p;R f
(A), there is path to R f from a bound-labeled region
(R;
Proof. Consider a state oe of A and a nonnegative real ffi. Suppose (oe;
Then, by the definition of the region graph R(A), we have a sequence
of successors of extended states with oe
region graph contains an edge from the region R i+1 containing oe i+1 to the region R i containing
oe i . We claim that there exist bound-labeled regions such that (1) for all
the region component of B i is R i , (2) the bounds graph B p;R f (A) has an edge from B 0 to R f and
from B i+1 to B i for all
This claim is proved by induction on i, using the definition of the edges in
the bounds graph.
Conversely, consider a sequence of bound-labeled regions B such that the bounds graph
has an edge from B 0 to R f and from B i+1 to B i for all
(R We claim that for all
there exists - 2 R f with (oe; This is again proved by induction on i, using the definition
of the edges in the bounds graph. 2
For a bound-labeled region B = (R, L, l, U, u), let I(B) denote the union ⋃_{σ∈R} I(σ, L, l, U, u) of
intervals. It is not difficult to check that the set I(B) is a bounded interval of the nonnegative real
line with integer endpoints. The left endpoint ℓ of I(B) is the infimum of ⟦L⟧_c over all choices of
clock valuations c that are consistent with R; that is, over {c | (s, c) ∈ R}. Since all irrelevant
coefficients in the bound expression L are 0, the infimum ℓ is equal to the smallest nonzero coefficient
in L (the left endpoint is 0 if all coefficients are 0). Similarly, the right endpoint of I(B) is the
supremum of ⟦U⟧_c over all choices of c that are consistent with R, and this supremum is equal
to the largest coefficient in U. The type of the interval I(B) can be determined as follows. Let
L = (a_0, …, a_n) and U = (b_0, …, b_n).
• If l = 0 and …, then I(B) is left-closed, and otherwise I(B) is left-open.
• If u = 0 and …, then I(B) is right-closed, and otherwise I(B) is right-open.
For instance, consider the region R that satisfies … < z̄. Let …; then I(B)
is the open interval (1, 5), irrespective of the values of l and u.
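The endpoint computation described above reduces to inspecting the coefficient tuples; the sketch below assumes an open region, where every relevant segment can be made arbitrarily close to 0 or to 1, and the function name is an assumption of the example.

```python
def interval_endpoints(L, U):
    """Endpoints of I(B) for coefficient tuples L and U: the left endpoint is
    the smallest nonzero coefficient of L (0 if all coefficients are 0), and
    the right endpoint is the largest coefficient of U."""
    nonzero = [a for a in L if a != 0]
    left = min(nonzero) if nonzero else 0
    right = max(U)
    return left, right

print(interval_endpoints((0, 1, 3, 2), (0, 5, 2, 0)))  # (1, 5)
```

Intuitively, the value of a bound expression is a convex combination of its coefficients, so over an open region it ranges between the smallest attainable coefficient and the largest one.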
Lemma 2 Let A be a timed automaton, let ∫p ∈ I be a duration constraint for A, and let R_0 and R_f
be two regions of A. There are two states σ ∈ R_0 and τ ∈ R_f and a real number δ ∈ I such that
(σ, 0) →* (τ, δ) iff in the bounds graph B_{p,R_f}(A), there is a path to R_f from a bound-labeled region B
with region component R_0 and I(B) ∩ I ≠ ∅.
Hence, to solve the duration-bounded reachability problem (A, R_0, R_f, ∫p ∈ I), we construct the
portion of the bounds graph B_{p,R_f}(A) from which the special vertex R_f is reachable. This can be
done in a backward breadth-first fashion starting from the final region R_f. On a particular path
through the bounds graph, the same region may appear with different bound expressions. Although
there are infinitely many distinct bound expressions, the backward search can be terminated within
finitely many steps, because when the coefficients of the bound expressions become sufficiently large
relative to I, then their actual values become irrelevant. This is shown in the following section.
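The backward breadth-first construction just described can be sketched abstractly; here `preds` and `normalize` are placeholders for the bounds-graph predecessor computation and the collapsing step of the next subsection, and the toy graph at the end is an assumption of the example.

```python
from collections import deque

def backward_search(final_vertex, preds, normalize, is_goal):
    """Explore, backward from the final vertex, the portion of the graph from
    which that vertex is reachable. Normalizing each discovered vertex to a
    canonical representative keeps the explored set finite."""
    seen = {final_vertex}
    queue = deque([final_vertex])
    while queue:
        b = queue.popleft()
        if is_goal(b):
            return True
        for pred in preds(b):
            pred = normalize(pred)
            if pred not in seen:
                seen.add(pred)
                queue.append(pred)
    return False

# Toy usage on a 4-vertex graph; integers stand in for bound-labeled regions
# and vertex 0 stands in for the special vertex R_f.
edges = {1: [0], 2: [1], 3: [1]}   # v -> successors toward vertex 0
preds = lambda v: [u for u, succs in edges.items() if v in succs]
print(backward_search(0, preds, normalize=lambda v: v, is_goal=lambda v: v == 3))
# True
```

In the actual algorithm, `is_goal` would test whether a vertex has clock region R_0 and an interval I(B) that intersects I, and `normalize` would be the m-constrained collapse γ.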
Collapsing the bounds graph
Given a nonnegative integer constant m, we define an equivalence relation ≅_m over bound-labeled
regions as follows. For two nonnegative integers a and b, define a ≅_m b iff either a = b, or both
a > m and b > m. For two bound expressions e = (a_0, …, a_n) and f = (b_0, …, b_n), define e ≅_m f
iff for all 0 ≤ i ≤ n, a_i ≅_m b_i. For two bound-labeled regions B_1 = (R_1, L_1, l_1, U_1, u_1) and
B_2 = (R_2, L_2, l_2, U_2, u_2), define B_1 ≅_m B_2 iff the following four conditions hold:
1. R_1 = R_2;
2. L_1 ≅_m L_2 and U_1 ≅_m U_2;
3. either l_1 = l_2, or some coefficient in L_1 is greater than m;
4. either u_1 = u_2, or some coefficient in U_1 is greater than m.
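The coefficient-wise part of this equivalence can be sketched directly; the function names are assumptions of the example.

```python
def coeff_equiv(a, b, m):
    """Two nonnegative integer coefficients are m-equivalent iff they are
    equal or both exceed m."""
    return a == b or (a > m and b > m)

def expr_equiv(e, f, m):
    """Two bound expressions are m-equivalent iff their coefficients are
    pointwise m-equivalent."""
    return len(e) == len(f) and all(coeff_equiv(a, b, m) for a, b in zip(e, f))

print(expr_equiv((1, 7, 0), (1, 9, 0), 5))   # True: 7 and 9 both exceed 5
print(expr_equiv((1, 7, 0), (1, 5, 0), 5))   # False: 5 does not exceed 5
```

Coefficients above the threshold m are thus interchangeable, which is what allows the infinite bounds graph to be collapsed.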
The following lemma states that the equivalence relation ≅_m on bound-labeled regions is back
stable.
Lemma 3 If the bounds graph B_{p,R_f}(A) contains an edge from a bound-labeled region B_1 to a bound-labeled
region B′_1, and B′_1 ≅_m B′_2, then there exists a bound-labeled region B_2 such that B_1 ≅_m B_2
and the bounds graph contains an edge from B_2 to B′_2.
Proof. Consider two bound-labeled regions B′_1 and B′_2 such that B′_1 ≅_m B′_2. Let R′ be the clock
region of B′_1 and of B′_2, and let R be a
clock region such that the region graph R(A) has an edge from R to R′. Then there is a unique
bound-labeled region B_1 = (R, L_1, l_1, U_1, u_1) such that the bounds graph B_{p,R_f}(A) has an edge
from B_1 to B′_1, and there is a unique bound-labeled region B_2 = (R, L_2, l_2, U_2, u_2) such that the
bounds graph has an edge from B_2 to B′_2. It remains to be shown that B_1 ≅_m B_2.
There are 4 cases to consider according to the rules for edges of the bounds graph. We consider
only the case corresponding to Transition edge 2. This corresponds to the case when R′ is a
boundary region, R is an open region, and the clock with the k-th smallest fractional part in R is
reset. Let the duration measure in R be a. We will establish that L_1 ≅_m L_2 and that either l_1 = l_2 or
some coefficient in L_1 is greater than m; the case of upper bounds is similar.
Let L_1 = (a_0, …, a_n), L_2 = (b_0, …, b_n), L′_1 = (a′_0, …, a′_n), and L′_2 = (b′_0, …, b′_n).
According to the rule, for all …, we have a_… = …. It follows that for ….
We have 4 cases to consider. (i) a_n = a′_n and b_n = b′_n. Since a′_n ≅_m b′_n, we have a_n ≅_m b_n.
In this case, l_1 = l′_1 and l_2 = l′_2. If l′_1 ≠ l′_2, then some coefficient of L′_1 exceeds m (R′ is a
boundary region). Each coefficient a′_j of L′_1 equals … or a_j, and thus some coefficient of L_1 also exceeds
m. (ii) a_n = a′_n + a and …. In this case, we have a′_n > m. It follows that the values a_n and b_n
both exceed m. Hence, a_n > m and b_n > m. Since at least one coefficient of L_1 is at least m,
there is no requirement that l_1 = l_2 (indeed, they can be different). The cases (iii) a_n = a′_n and
b_n = b′_n + a, and (iv) a_n = a′_n + a and b_n = b′_n + a have a similar analysis. □
Since the equivalence relation ≅_m is back stable, for checking reachability between bound-labeled
regions in the bounds graph B_{p,R_f}(A), it suffices to look at the quotient graph [B_{p,R_f}(A)]_{≅_m}. The
following lemma indicates a suitable choice for the constant m for solving a duration-bounded
reachability problem.
Lemma 4 Consider two bound-labeled regions B_1 and B_2 and a bounded interval I ⊆ ℝ with integer
endpoints. If B_1 ≅_m B_2 for the right endpoint m of I, then I ∩ I(B_1) ≠ ∅ iff I ∩ I(B_2) ≠ ∅.
Proof. Consider bound-labeled regions B_1 = (R, L_1, l_1, U_1, u_1) and B_2 = (R, L_2, l_2, U_2, u_2) such
that B_1 ≅_m B_2. It is easy to check that the left endpoints of I(B_1) and I(B_2) are either equal or
both exceed m (see the rules for determining the left endpoint). We need to show that when the
left endpoints are at most m, either both I(B_1) and I(B_2) are left-open or both are left-closed. If
l_1 = l_2, this is trivially true. Suppose l_1 ≠ l_2; then we know that some coefficient of L_1 and of L_2
exceeds m. Since the left endpoint is m or smaller, we know that both L_1 and L_2 have at least
two nonzero coefficients. In this case, both the intervals are left-open irrespective of the bits l_1 and
l_2. A similar analysis of right endpoints shows that either both the right endpoints exceed m, or
both are at most m, are equal, and both the intervals are either right-open or right-closed. □
A bound expression e is m-constrained, for a nonnegative integer m, iff all coefficients in e are
at most m + 1. Clearly, for every bound expression e, there exists a unique m-constrained bound
expression fl(e) such that e - =m fl(e). A bound-labeled region m-constrained
iff (1) both L and U are m-constrained, (2) if some coefficient of L is m+ 1, then l = 0, and (3) if
some coefficient of U is m for every bound-labeled region B, there exists
a unique m-constrained bound-labeled region fl(B) such that B - =m fl(B). Since no two distinct
m-constrained bound-labeled regions are - =m -equivalent, it follows that every - =m -equivalence class
contains precisely one m-constrained bound-labeled region. We use the m-constrained bound-
labeled regions to represent the - =m -equivalence classes.
The number of m-constrained expressions over n clocks is (m+2) n+1 . Hence, for a given region
R, the number of m-constrained bound-labeled regions of the form (R; L; l; U; u) is 4 \Delta (m+2) 2(n+1) .
From the bound on the number of clock regions, we obtain a bound on the number of m-constrained
bound-labeled regions of A, or equivalently, on the number of - =m -equivalence classes of bound-
labeled regions.
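The first count can be checked by brute force: encoding a bound expression over n clocks by its n + 1 coefficients, each ranging over 0, ..., m + 1 (an assumed encoding), enumeration agrees with the closed form (m + 2)^(n+1):

```python
from itertools import product

def count_m_constrained_exprs(n, m):
    """Enumerate the n + 1 coefficient slots, each in 0..m+1, and count."""
    return sum(1 for _ in product(range(m + 2), repeat=n + 1))
```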
Lemma 5 Let A be a timed automaton with location set S and clock set X such that n is the number
of clocks, and no clock x is compared to a constant larger than m x . For every nonnegative integer
m, the number of m-constrained bound-labeled regions of A is at most 4 · (m + 2)^(2(n+1)) times the
number of clock regions of A.
Consider the duration-bounded reachability problem (R 0 , R f , ∫p ∈ I), and let m be the
right endpoint of the interval I. By Lemma 5, the number of m-constrained bound-labeled regions
is exponential in the length of the problem description. By combining Lemmas 2, 3, and 4, we
obtain the following exponential-time decision procedure for solving the given duration-bounded
reachability problem.
Theorem 1 Let m be the right endpoint of the interval I ⊆ R. The answer to the duration-
bounded reachability problem (R 0 , R f , ∫p ∈ I) is affirmative iff there exists a finite sequence
B 1 , B 2 , ..., B k of m-constrained bound-labeled regions of A such that
1. the bounds graph B p;R f (A) contains an edge to R f from some bound-labeled region B with
γ(B) = B 1 ;
2. for all 1 ≤ i < k, the bounds graph B p;R f (A) contains an edge to B i from some bound-labeled
region B with γ(B) = B i+1 ;
3. and the clock region of B k is R 0 and I(B k ) ∩ I ≠ ∅.
Hence, when constructing, in a backward breadth-first fashion, the portion of the bounds graph
B p;R f (A) from which the special vertex R f is reachable, we need to explore only m-constrained
bound-labeled regions. For each m-constrained bound-labeled region B i , we first construct all
predecessors of B i . The number of predecessors of B i is finite, and corresponds to the number
of predecessors of the clock region of B i in the region graph R(A). Each predecessor B of B i
that is not an m-constrained bound-labeled region is replaced by the ≅ m -equivalent m-constrained
region γ(B). The duration-bounded reachability property holds if a bound-labeled region B with
clock region R 0 and I(B) ∩ I ≠ ∅ is found. If the search terminates otherwise, by generating no new m-constrained
bound-labeled regions, then the answer to the duration-bounded reachability problem is negative.
The time complexity of the search is proportional to the number of m-constrained bound-labeled
regions, which is given in Lemma 5. The search can be performed in polynomial space, because the
representation of an m-constrained bound-labeled region and its predecessor computation require
only space polynomial in the length of the problem description.
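The backward breadth-first search just described can be sketched generically. Here `predecessors`, `gamma`, and `accept` are hypothetical callbacks standing in for the bounds-graph predecessor computation, the γ normalization, and the termination test (clock region R 0 with I(B) ∩ I ≠ ∅); none of them is implemented here:

```python
from collections import deque

def backward_reachable(targets, predecessors, gamma, accept):
    """Explore only m-constrained regions, normalizing each predecessor."""
    seen = set()
    queue = deque(gamma(B) for B in targets)
    while queue:
        B = queue.popleft()
        if B in seen:
            continue
        seen.add(B)
        if accept(B):
            return True              # the reachability property holds
        for P in predecessors(B):
            queue.append(gamma(P))   # replace P by its m-constrained representative
    return False                     # no new m-constrained regions: answer negative
```

Termination follows from the finite number of m-constrained regions, exactly as argued above.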
Corollary 1 The duration-bounded reachability problem for timed automata can be decided in
Pspace.
The duration-bounded reachability problem for timed automata is Pspace-hard, because already
the (unbounded) reachability problem between clock regions is Pspace-hard [AD94].
We solved the duration-bounded reachability problem between two specified clock regions. Our
construction can be used for solving many related problems. First, it should be clear that the
initial and/or final region can be replaced either by a specific state with rational clock values, or
by a specific location (i.e., a set of clock regions). For instance, suppose that we are given an
initial state σ, a final state τ, a duration constraint ∫p ∈ I, and we are asked to decide whether
τ is reachable from σ with accumulated duration in I. Assuming σ and τ assign rational values to all clocks,
we can choose an appropriate time unit so that the regions [σ] and [τ] are singletons. It follows
that the duration-bounded reachability problem between rational states is also solvable in Pspace.
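The time-unit change for rational states can be illustrated with a small sketch; the encoding of a state as a tuple of `Fraction` clock values is an assumption (interval endpoints and clock bounds of the automaton would have to be rescaled by the same factor):

```python
from fractions import Fraction
from math import lcm

def rescale(states):
    """Multiply all clock values by the lcm of their denominators."""
    values = [Fraction(v) for state in states for v in state]
    unit = lcm(*(v.denominator for v in values)) if values else 1
    return unit, [tuple(Fraction(v) * unit for v in state) for state in states]
```

After rescaling, every clock value is an integer, so each state lies in a singleton clock region.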
A second example of a duration property we can decide is the following. Given a real-time
system modeled as a timed automaton, and nonnegative integers m, a, and b, we sometimes want
to verify that in every time interval of length m, the system spends at least a and at most b
accumulated time units in a given set of locations. For instance, for a railroad crossing similar to
the one that appears in various papers on real-time verification [AHH96], our algorithm can be
used to check that "in every interval of 1 hour, the gate is closed for at most 5 minutes." The
verification of this duration property, which depends on various gate delays and on the minimum
separation time between consecutive trains, requires the accumulation of the time during which the
gate is closed.
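For illustration only, the accumulated-time quantity can be evaluated on a single given run, modeled as (location, dwell time) pairs with a 0/1 duration measure over a watched set of locations; this checks one run, not the exhaustive verification the algorithm above performs:

```python
def accumulated_duration(run, watched):
    """Sum the dwell times spent in the watched locations along one run."""
    return sum(dwell for loc, dwell in run if loc in watched)

def within_bounds(run, watched, a, b):
    """True iff the accumulated time in the watched locations lies in [a, b]."""
    d = accumulated_duration(run, watched)
    return a <= d <= b
```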
As a third, and final, example, recall the duration-bounded causality property ( ) from the
introduction. Assume that each location of the timed automaton is labeled with atomic propositions
such as q, denoting that the ringer is pressed, and r, denoting the response. The duration measure
p is defined so that p(s) = 1 if q is a label of s, and p(s) = 0 otherwise. The labeling of the locations
with atomic propositions is extended to regions and bound-labeled regions. The desired duration-
bounded causality property, then, does not hold iff there is an initial region R 0 , a final region R f
labeled with r, and a bound-labeled region B = (R 0 ; L; l; U; u) with I(B) ∩ I ≠ ∅, and in
the bounds graph B p;R f , there is a path from B to R f that passes only through regions that are
not labeled with r.
The duration-bounded reachability problem has been studied, independently, in [KPSY93] also.
The approach taken there is quite different from ours. First, the problem is solved in the case of
discrete time, where all transitions of a timed automaton occur at integer time values. Next, it is
shown that the cases of discrete (integer-valued) time and dense (real-valued) time have the same
answer, provided the following two conditions are met: (1) the clock constraints of timed automata
use only positive boolean combinations of non-strict inequalities (i.e., inequalities involving ≤ and ≥),
and (2) the duration constraint is one-sided (i.e., it has the form ∫p ≤ N for a nonnegative integer
N). The first requirement ensures that the runs of a timed automaton are closed under
digitization (i.e., rounding of real-numbered transition times relative to an arbitrary, but fixed
fractional part ε ∈ [0, 1) [HMP92]). The second requirement rules out duration constraints such
as 2 ≤ ∫p ≤ 3. The approach of proving that the discrete-time and the dense-time
answers to the duration-bounded reachability problem coincide gives a simpler solution than ours,
and it also admits duration measures that assign negative integers to some locations. However, both
requirements (1) and (2) are essential for this approach. We also note that for timed automata
with a single clock, [KPSY93] gives an algorithm for checking more complex duration constraints,
such as ∫p ≤ ∫p′, which relate
different duration measures p and p′.
Instead of equipping timed automata with duration measures, a more general approach extends
timed automata with variables that measure accumulated durations. Such variables, which are
called integrators or stop watches, may advance in any given location either with time derivative
1 (like a clock) or with time derivative 0 (not changing in value). Like clocks, integrators can be
reset with transitions of the automaton, and the constraints guarding the automaton transitions
can test integrator values. The reachability problem between the locations of a timed automaton
with integrators, however, is undecidable [ACH+95]; even a single integrator can
cause undecidability [HKPV95]. Still, in many cases of practical interest, the reachability problem
for timed automata with integrators can be answered by a symbolic execution of the automaton.
In contrast to the use of integrators, whose real-numbered values are part of the automaton state,
we achieved decidability by separating duration constraints from the system and treating them as
properties. This distinction between strengthening the model and strengthening the specification
language with the duration constraints is essential for the decidability of the resulting verification
problem. The expressiveness of specification languages can be increased further. For example,
it is possible to define temporal logics with duration constraints or integrators. The decidability
of the model-checking problem for such logics remains an open problem. For model checking a
given formula, we need to compute the characteristic set, which contains the states that satisfy the
formula. In particular, given an initial region R 0 , a final state τ, and a duration constraint
∫p ∈ I, we need to compute the set Q 0 ⊆ R 0 of states σ ∈ R 0 for which there exists a real number
t ∈ I such that τ is reachable from σ with accumulated duration t. Each bound-labeled region
(R 0 ; L; l; U; u) from which R f is reachable in
the bounds graph B p;R f contributes the subset {σ ∈ R 0 | I(σ; L; l; U; u) ∩ I ≠ ∅} to Q 0 . In general,
there are infinitely many such contributions, possibly all singletons, and we know of no description
of Q 0 that can be used to decide the model-checking problem. By contrast, over discrete time, the
characteristic sets for formulas with integrators can be computed [BES93]. Also, over dense time,
the characteristic sets can be approximated symbolically [AHH96].
Acknowledgements
We thank Sergio Yovine for a careful reading of the manuscript.
References
Model checking in dense real time.
A theory of timed automata.
The benefits of relaxing punctuality.
Logics and models of real time: a survey.
Automatic symbolic verification of embedded systems.
On model checking for real-time properties with durations
Design and synthesis of synchronization skeletons using branching-time temporal logic
Decidability of bisimulation equivalence for parallel timer processes.
A calculus of durations.
Minimum and maximum delay problems in real-time systems
Timing assumptions and verification of finite-state concurrent systems
What's decidable about hybrid automata?
What good are digital clocks?
Symbolic model checking for real-time systems
Integration graphs: a class of decidable hybrid systems.
Specification and verification of concurrent systems in CESAR.
Cited by:
Nicolas Markey, Jean-François Raskin. Model checking restricted sets of timed paths. Theoretical Computer Science, v.358 n.2, p.273-292, 7 August 2006.
Yasmina Abdeddaïm, Eugene Asarin, Oded Maler. Scheduling with timed automata. Theoretical Computer Science, v.354 n.2, p.272-300, 28 March 2006.

Keywords: real-time systems; duration properties; formal verification; model checking
262521
Compile-Time Scheduling of Dynamic Constructs in Dataflow Program Graphs

Abstract—Scheduling dataflow graphs onto processors consists of assigning actors to processors, ordering their execution within the processors, and specifying their firing time. While all scheduling decisions can be made at runtime, the overhead is excessive for most real systems. To reduce this overhead, compile-time decisions can be made for assigning and/or ordering actors on processors. Compile-time decisions are based on known profiles available for each actor at compile time. The profile of an actor is the information necessary for scheduling, such as the execution time and the communication patterns. However, a dynamic construct within a macro actor, such as a conditional and a data-dependent iteration, makes the profile of the actor unpredictable at compile time. For those constructs, we propose to assume some profile at compile-time and define a cost to be minimized when deciding on the profile under the assumption that the runtime statistics are available at compile-time. Our decisions on the profiles of dynamic constructs are shown to be optimal under some bold assumptions, and expected to be near-optimal in most cases. The proposed scheduling technique has been implemented as one of the rapid prototyping facilities in Ptolemy. This paper presents the preliminary results on the performance with synthetic examples.

I. Introduction
A dataflow graph representation, either as a programming
language or as an intermediate representation
during compilation, is suitable for programming multiprocessors
because parallelism can be extracted automatically
from the representation [1], [2]. Each node, or actor, in a
dataflow graph represents either an individual program instruction
or a group thereof to be executed according to
the precedence constraints represented by arcs, which also
represent the flow of data. A dataflow graph is usually
made hierarchical. In a hierarchical graph, an actor itself
may represent another dataflow graph: it is called a macro
actor.
Particularly, we define a data-dependent macro actor, or
data-dependent actor, as a macro actor of which the execution
sequence of the internal dataflow graph is data dependent
(cannot be predicted at compile time). Some examples
are macro actors that contain dynamic constructs such as
conditionals, data-dependent iteration, and recursion. Actors
are said to be data-independent if not data-dependent.
The scheduling task consists of assigning actors to pro-
cessors, specifying the order in which actors are executed on
each processor, and specifying the time at which they are
S. Ha is with the Department of Computer Engineering, Seoul National
University, Seoul, 151-742, Korea. e-mail: sha@comp.snu.ac.kr
E. Lee is with the Department of Electrical Engineering and Computer
Science, University of California at Berkeley, Berkeley, CA
94720, USA. e-mail: eal@ohm.eecs.berkeley.edu
executed. These tasks can be performed either at compile
time or at run time [3]. In the fully-dynamic scheduling,
all scheduling decisions are made at run time. It has the
flexibility to balance the computational load of processors
in response to changing conditions in the program. In case
a program has a large amount of non-deterministic behav-
ior, any static assignment of actors may result in very poor
load balancing or poor scheduling performance. Then, the
fully dynamic scheduling would be desirable. However, the
run-time overhead may be excessive; for example it may
be necessary to monitor the computational loads of processors
and ship the program code between processors via
networks at run time. Furthermore, it is not usually practical
to make globally optimal scheduling decision at run
time.
In this paper, we focus on the applications with a moderate
amount of non-deterministic behavior such as DSP
applications and graphics applications. Then, the more
scheduling decisions are made at compile time the better
in order to reduce the implementation costs and to make
it possible to reliably meet any timing constraints.
While compile-time processor scheduling has a very rich
and distinguished history [4], [5], most efforts have been
focused on deterministic models: the execution time of
each actor T i on a processor P k is fixed and there are no
data-dependent actors in the program graph. Even in this
restricted domain of applications, algorithms that accomplish
an optimal scheduling have combinatorial complexity,
except in certain trivial cases. Therefore, good heuristic
methods have been developed over the years [4], [6], [7],
[8]. Also, most of the scheduling techniques are applied to
a completely expanded dataflow graph and assume that an
actor is assigned to a processor as an indivisible unit. It
is simpler, however, to treat a data-dependent actor as a
schedulable indivisible unit. Regarding a macro actor as
a schedulable unit greatly simplifies the scheduling task.
Prasanna et al [9] schedule the macro dataflow graphs hierarchically
to treat macro actors of matrix operations as
schedulable units. Then, a macro actor may be assigned to
more than one processor. Therefore, new scheduling techniques
to treat a macro actor as a schedulable unit were
devised.
Compile-time scheduling assumes that static information
about each actor is known. We define the profile of
an actor as the static information about the actor necessary
for a given scheduling technique. For example, if we
use a list scheduling technique, the profile of an actor is
simply the computation time of the actor on a processor.
The communication requirements of an actor with other
actors are included in the profile if the scheduling tech-
HA AND LEE: COMPILE-TIME SCHEDULING OF DYNAMIC CONSTRUCTS IN DATAFLOW PROGRAM GRAPHS 769
nique requires that information. The profile of a macro
actor would be the number of the assigned processors and
the local schedule of the actor on the assigned processors.
For a data-independent macro actor such as a matrix op-
eration, the profile is deterministic. However, the profile
of a data-dependent actor of dynamic construct cannot be
determined at compile time since the execution sequence
of the internal dataflow subgraph varies at run time. For
those constructs, we have to assume the profiles somehow
at compile-time.
The main purpose of this paper is to show how we can
define the profiles of dynamic constructs at compile-time.
A crucial assumption we rely on is that we can approximate
the runtime statistics of the dynamic behavior at compile-
time. Simulation may be a proper method to gather these
statistics if the program is to be run on an embedded DSP
system. Sometimes, the runtime statistics could be given
by the programmer for graphics applications or scientific
applications.
By optimally choosing the profile of the dynamic con-
structs, we will minimize the expected schedule length of a
program assuming the quasi-static scheduling. In figure 1,
actor A is a data-dependent actor. The scheduling result is
shown with a gantt chart, in which the horizontal axis indicates
the scheduling time and the vertical axis indicates
the processors. At compile time, the profile of actor A is
assumed. At run time, the schedule length of the program
varies depending on the actual behavior of actor A. Note
that the pattern of processor availability before actor B
starts execution is preserved at run time by inserting idle
time. Then, after actor A is executed, the remaining static
schedule can be followed. This scheduling strategy is called
quasi-static scheduling that was first proposed by Lee [10]
for DSP applications. The strict application of the quasi-static
scheduling requires that the synchronization between
actors is guaranteed at compile time so that no run-time
synchronization is necessary as long as the pattern of processor
availability is consistent with the scheduled one. It
is generally impractical to assume that the exact run-time
behaviors of actors are known at compile time. Therefore,
synchronization between actors is usually performed at run
time. In this case, it is not necessary to enforce the pattern
of processor availability by inserting idle time. Instead, idle
time will be inserted when synchronization is required to
execute actors. When the execution order of the actors is
not changed from the scheduled order, the actual schedule
length obtained from run-time synchronization is proven to
be not much different from what the quasi-static scheduling
would produce [3]. Hence, our optimality criterion for the
profile of dynamic constructs is based on the quasi-static
scheduling strategy, which makes analysis simpler.
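The padding of figure 1 can be sketched numerically, assuming an availability pattern is given as a list of per-processor free times; `restore_pattern` is a hypothetical helper, not part of the cited scheduling frameworks:

```python
def restore_pattern(actual_free, scheduled_free):
    """Per-processor resume times and inserted idle, preserving the pattern."""
    # Shift the scheduled pattern just far enough that no processor would
    # resume before it is actually free, then pad every processor up to it.
    shift = max(0, max(a - s for a, s in zip(actual_free, scheduled_free)))
    resume = [s + shift for s in scheduled_free]
    idle = [r - a for r, a in zip(resume, actual_free)]
    return resume, idle
```

When the dynamic actor runs long, the other processors absorb the idle time; when it runs short, its own processor idles, as in figure 1 (c) and (d).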
II. Previous Work
All of the deterministic scheduling heuristics assume that
static information about the actors is known. But almost
none have addressed how to define the static information
of data-dependent actors. The pioneering work on this issue
was done by Martin and Estrin [11]. They calculated
Fig. 1. (a) A dataflow graph consists of five actors among which actor
A is a data-dependent actor. (b) Gantt chart for compile-time
scheduling assuming a certain execution time for actor A. (c) At
run time, if actor A takes longer, the second processor is padded
with no-ops and (d) if actor A takes less, the first processor is
idled to make the pattern of processor availability same as the
scheduled one (dark line) in the quasi-static scheduling.
the mean path length from each actor to a dummy terminal
actor as the level of the actor for list scheduling. For exam-
ple, if there are two possible paths divided by a conditional
construct from an actor to the dummy terminal actor, the
level of the actor is a sum of the path lengths weighted by
the probability with which the path is taken. Thus, the
levels of actors are based on statistical distribution of dynamic
behavior of data-dependent actors. Since this is expensive
to compute, the mean execution times instead are
usually used as the static information of data-dependent
actors [12]. Even though the mean execution time seems a
reasonable choice, it is by no means optimal. In addition,
both approaches have the common drawback that a data-dependent
actor is assigned to a single processor, which is
a severe limitation for a multiprocessor system.
Two groups of researchers have proposed quasi-static
scheduling techniques independently: Lee [10] and Loeffler
et al [13]. They developed methods to schedule conditional
and data-dependent iteration constructs respectively. Both
approaches allow more than one processor to be assigned to
dynamic constructs. Figure 2 shows a conditional and compares
three scheduling methods. In figure 2 (b), the local
schedules of both branches are shown, where two branches
are scheduled on three processors while the total
number of processors is 4.
In Lee's method, we overlap the local schedules of both
branches and choose the maximum termination for each
processor. For hard real-time systems, it is the proper
choice. Otherwise, it may be inefficient if either one branch
is more likely to be taken and the size of the likely branch
is much smaller. On the other hand, Loeffler takes the
local schedule of the more likely branch as the profile of the
conditional. This strategy is inefficient if both branches
are equally likely to be taken and the size of the assumed
branch is much larger. Finally, a conditional evaluation can
be replaced with a conditional assignment to make the construct
static; the graph is modified as illustrated in figure 2
(c). In this scheme, both true and false branches are sched-
770 IEEE TRANSACTIONS ON COMPUTERS, VOL. 46, NO. 7, JULY 1997
Fig. 2. Three different schedules of a conditional construct. (a) An
example of a conditional construct that forms a data-dependent
actor as a whole. (b) Local deterministic schedules of the two
branches. (c) A static schedule by modifying the graph to use
conditional assignment. (d) Lee's method to overlap the local
schedules of both branches and to choose the maximum for each
processor. (e) Loeffler's method to take the local schedule of the
branch which is more likely to be executed.
uled and the result from one branch is selected depending
on the control boolean. An immediate drawback is inefficiency
which becomes severe when one of the two branches
is not a small actor. Another problem occurs when the
unselected branch generates an exception condition such
as divide-by-zero error. All these methods on conditionals
are ad-hoc and not appropriate as a general solution.
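For illustration, the two profiles can be contrasted on local schedules given as per-processor finishing times; the function names and the 0.5 probability threshold are assumptions of this sketch:

```python
def lee_profile(true_sched, false_sched):
    """Overlap the two branch schedules and keep the per-processor maximum."""
    return [max(a, b) for a, b in zip(true_sched, false_sched)]

def loeffler_profile(true_sched, false_sched, p_true):
    """Take the local schedule of the branch more likely to be executed."""
    return true_sched if p_true >= 0.5 else false_sched
```

Lee's profile bounds the worst case of both branches; Loeffler's gambles on the likely branch, which is why each is inefficient in the situations described above.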
Quasi-static scheduling is very effective for a data-dependent
iteration construct if the construct can make
effective use of all processors in each cycle of the iteration.
It schedules one iteration and pads with no-ops to make
the pattern of processor availability at the termination the
same as the pattern of the start (figure 3). (Equivalently,
all processors are occupied for the same amount of time
in each iteration). Then, the pattern of processor availability
after the iteration construct is independent of the
number of iteration cycles. This scheme breaks down if the
construct cannot utilize all processors effectively.
Fig. 3. A quasi-static scheduling of a data-dependent iteration con-
struct. The pattern of processor availability is independent of the
number of iteration cycles.
The recursion construct has not yet been treated successfully
in any statically scheduled data flow paradigm.
Recently, a proper representation of the recursion construct
has been proposed [14]. But it is not explained
how to schedule the recursion construct onto multiproces-
sors. With finite resources, careless exploitation of the parallelism
of the recursion construct may cause the system to
deadlock.
In summary, dynamic constructs such as conditionals,
data-dependent iterations, and recursions, have not been
treated properly in past scheduling efforts, either for static
scheduling or dynamic scheduling. Some ad-hoc methods
have been introduced but proven unsuitable as general so-
lutions. Our earlier result with data-dependent iteration [3]
demonstrated that a systematic approach to determine the
profile of data-dependent iteration actor could minimize
the expected schedule length. In this paper, we extend our
analysis to general dynamic constructs.
In the next section, we will show how dynamic constructs
are assigned their profiles at compile-time. We also prove
the given profiles are optimal under some unrealistic as-
sumptions. Our experiments enable us to expect that our
decisions are near-optimal in most cases. Sections 4, 5, and 6
contain examples with data-dependent iteration, recur-
sion, and conditionals respectively to show how the profiles
of dynamic constructs can be determined with known
runtime statistics. We implement our technique in the
Ptolemy framework [15]. The preliminary simulation results
will be discussed in section 7. Finally, we discuss the
limits of our method and mention the future work.
III. Compile-Time Profile of Dynamic
Constructs
Each actor should be assigned its compile-time profile
for static scheduling. Assuming a quasi-static scheduling
strategy, the proposed scheme is to decide the profile of a
construct so that the average schedule length is minimized
assuming that all actors except the dynamic construct are
data-independent. This objective is not suitable for a hard
real-time system as it does not bound the worst case be-
havior. We also assume that all dynamic constructs are
uncorrelated. With this assumption, we may isolate the effect
of each dynamic construct on the schedule length sep-
arately. In case there are inter-dependent actors, we may
group those actors as another macro actor, and decide the
optimal profile of the large actor. Even though the decision
of the profile of the new macro actor would be complicated
in this case, the approach is still valid. For nested dynamic
constructs, we apply the proposed scheme from the inner
dynamic construct first. For simplicity, all examples in this
paper will have only one dynamic construct in the dataflow
graph.
The run-time cost of an actor i, C i , is the sum of the total
computation time devoted to the actor and the idle time
due to the quasi-static scheduling strategy over all proces-
sors. In figure 1, the run-time cost of a data-dependent
actor A is the sum of the lightly (computation time) and
darkly shaded areas after actor A or C (immediate idle
time after the dynamic construct). The schedule length of
a certain iteration can be written as
schedule length = (Σ i C i + R) / T,
where T is the total number of processors in the system,
and R is the rest of the computation including all idle time
that may result both within the schedule and at the end.
Therefore, we can minimize the expected schedule length
by minimizing the expected cost of the data-dependent ac-
tor or dynamic construct if we assume that R is independent
of our decisions for the profile of actor i. This assumption
is unreasonable when precedence constraints make R
dependent on our choice of profile. Consider, for example,
a situation where the dynamic construct is always on the
critical path and there are more processors than we can
effectively use. Then, our decision on the profile of the
construct will directly affect the idle time at the end of
the schedule, which is included in R. On the other hand,
if there is enough parallelism to make effective use of the
unassigned processors and the execution times of all actors
are small relative to the schedule length, the assumption is
valid. Realistic situations are likely to fall between these
two extremes.
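The accounting above can be stated in one line: the total work of all actors plus the remaining computation and idle time R, divided by the number of processors, gives the makespan. A minimal sketch:

```python
def schedule_length(costs, R, T):
    """Makespan implied by the accounting: (sum of actor costs + R) / T."""
    return (sum(costs) + R) / T
```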
To select the optimal compile-time profile of actor i, we
assume that the statistics of the runtime behavior is known
at compile-time. The validity of this assumption varies to
large extent depending on the application. In digital signal
processing applications where a given program is repeatedly
executed with data stream, simulation can be useful
to obtain the necessary information. In general, however,
we may use a well-known distribution, for example uniform
or geometric distribution, which makes the analysis simple.
Using the statistical information, we choose the profile to
give the least expected cost at runtime as the compile-time
profile.
The profile of a data-dependent actor is a local schedule
which determines the number of assigned processors and
computation times taken on the assigned processors. The
overall algorithm of profile decision is as follows. We assume
that the dynamic behavior of actor i is expressed with
parameter k and its distribution p(k).
// T is the total number of processors.
// N is the number of processors assigned to the actor.
for N = 1 to T {
    // A(N,k) is the actor cost with parameter N, k.
    // p(k) is the probability of parameter k.
    expected cost E(N) = sum over k of p(k) * A(N,k);
}
select the N, and the corresponding local schedule, that minimize E(N).
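A runnable sketch of this selection loop, specialized to a data-dependent iteration actor. The cost model A(N, k) used below is an assumption for illustration: with x assumed cycles, N assigned processors, and per-cycle time t(N), an actual cycle count k beyond x blocks all T processors for the extra time. The function names are likewise assumptions:

```python
def expected_cost(T, N, x, t, p, k_max):
    """Expected run-time cost of the iteration actor for profile (N, x)."""
    total = 0.0
    for k in range(k_max + 1):
        cost = N * t(N) * x                # time reserved by the assumed profile
        if k > x:
            cost += T * t(N) * (k - x)     # overrun delays all T processors
        total += p(k) * cost
    return total

def best_profile(T, t, p, k_max):
    """Try every (N, x) pair and keep the one with least expected cost."""
    best = None
    for N in range(1, T + 1):
        for x in range(k_max + 1):
            c = expected_cost(T, N, x, t, p, k_max)
            if best is None or c < best[0]:
                best = (c, N, x)
    return best   # (expected cost, processors N, assumed cycles x)
```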
In the next section, we will illustrate the proposed
scheme with data-dependent iteration, recursion, and conditionals
respectively to show how profiles are decided with
runtime statistics.
IV. Data Dependent Iteration
In a data-dependent iteration, the number of iteration
cycles is determined at runtime and cannot be known at
compile-time. Two possible dataflow representations for
data-dependent iteration are shown in figure 4 [10].
The numbers adjacent to the arcs indicate the number
of tokens produced or consumed when an actor fires [2]. In
figure 4 (a), since the upsample actor produces M tokens
each time it fires, and the iteration body consumes only one
token when it fires, the iteration body must fire M times
for each firing of the upsample actor. In figure 4 (b), the
Fig. 4. Data-dependent iteration can be represented using either
of the dataflow graphs shown. The graph in (a) is used when the
number of iterations is known prior to the commencement of the
iteration, and (b) is used otherwise.
number of iterations need not be known prior to the commencement
of the iteration. Here, a token coming in from
above is routed through a "select" actor into the iteration
body. The "D" on the arc connected to the control input
of the "select" actor indicates an initial token on that arc
with value "false". This ensures that the data coming into
the "F" input will be consumed the first time the "select"
actor fires. After this first input token is consumed, the
control input to the "select" actor will have value "true"
until function t() indicates that the iteration is finished by
producing a token with value "false". During the itera-
tion, the output of the iteration function f() will be routed
around by the "switch" actor, again until the test function
t() produces a token with value "false". There are many
variations on these two basic models for data-dependent
iteration.
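Functionally, the select/switch loop of figure 4 (b) behaves like a while loop. The sketch below fixes one plausible firing order of t() relative to f(), which is a modeling assumption, since the figure leaves the exact token ordering open:

```python
def iterate(x, body, test):
    data = x                # "select" passes the external token in first
    while test(data):       # control token "true": "switch" routes back around
        data = body(data)   # one firing of the iteration function f()
    return data             # control token "false": the value exits the loop
```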
The previous work [3] considered a subset of data-dependent
iterations, in which simultaneous execution of
successive cycles is prohibited as in figure 4 (b). In figure 4
(a), there is no such restriction, unless the iteration body
itself contains a recurrence. Therefore, we generalize the
previous method to permit overlapped cycles when successive
iteration cycles are invokable before the completion of
an iteration cycle. Detection of the intercycle dependency
from a sequential language is the main task of the parallel
compiler to maximize the parallelism. A dataflow represen-
tation, however, reveals the dependency rather easily with
the presence of delay on a feedback arc.
We assume that the probability distribution of the number
of iteration cycles is known or can be approximated
at compile time. Let the number of iteration cycles be a
random variable I with known probability mass function
p(i). For simplicity, we set the minimum possible value of
I to be 0. We let the number of assigned processors be
N and the total number of processors be T . We assume a
blocked schedule as the local schedule of the iteration body
to remove the unnecessary complexity in all illustrations,
although the proposed technique is applicable to the overlap
execution schedule [16]. In a blocked schedule, all assigned
processors are assumed to be available, or synchronized, at the
beginning. Thus, the execution time of one iteration cycle with
N assigned processors is t_N, as displayed in figure 5 (a). We
denote by s_N the time that must
772 IEEE TRANSACTIONS ON COMPUTERS, VOL. 46, NO. 7, JULY 1997
elapse in one iteration before the next iteration is enabled.
This time could be zero, if there is no data dependency
between iterations. Given the local schedule of one iteration
cycle, we decide on the assumed number of iteration
cycles, xN , and the number of overlapped cycles kN . Once
the two parameters, xN and kN , are chosen, the profile of
the data-dependent iteration actor is determined as shown
in figure 5 (b). The subscript N of t N , s N , xN and kN
represents that they are functions of N , the number of the
assigned processors. For brevity, we will omit the subscript
N for the variables without confusion. Using this profile of
the data-dependent macro actor, global scheduling is performed
to make a hierarchical compilation. Note that the
pattern of processor availability after execution of the construct
is different from that before execution. We do not
address how to schedule the iteration body in this paper
since it is the standard problem of static scheduling.
Fig. 5. (a) A blocked schedule of one iteration cycle of a data-dependent
iteration actor. A quasi-static schedule is constructed
using a fixed assumed number x of cycles in the iteration. The
cost of the actor is the sum of the dotted area (execution time)
and the dark area (idle time due to the iteration). Panels (b)-(d)
display three possible cases depending on the actual number of cycles i.
According to the quasi-static scheduling policy, three
cases can happen at runtime. If the actual number of cycles
coincides with the assumed number of iteration cycles,
the iteration actor causes no idle time and the cost of the
actor consists only of the execution time of the actor. Otherwise,
some of the assigned processors will be idled if the
iteration takes fewer than x cycles (figure 5 (c)), or else the
other processors as well will be idled (figure 5 (d)). The
expected cost of the iteration actor, C(N; k; x), is a sum
of the individual costs weighted by the probability mass
function of the number of iteration cycles. The expected
cost becomes
C(N,k,x) = Σ_{i=0}^{x} p(i) N t x + Σ_{i=x+1}^{∞} p(i) [ N t x + T t ⌈(i−x)/k⌉ ].   (2)

By combining the first term with the first element of the
second term, this reduces to

C(N,k,x) = N t x + T t Σ_{i=x+1}^{∞} p(i) ⌈(i−x)/k⌉.   (3)
Our method is to choose three parameters (N , k, and
x) in order to minimize the expected cost in equation (3).
First, we assume that N is fixed. Since C(N,k,x) is a decreasing
function of k with fixed N, we select the maximum
possible number for k. The number k is bounded by two
ratios: T/N and t/s. The latter constraint is necessary to avoid
any idle time between iteration cycles on a processor. As
a result, k is set to be

k = min( ⌊T/N⌋, ⌊t/s⌋ ).
The next step is to determine the optimal x. If a value x is
optimal, the expected cost is not decreased if we vary x by
+1 or −1. Therefore, we obtain the following inequalities:

C(N,k,x+1) ≥ C(N,k,x)  and  C(N,k,x−1) ≥ C(N,k,x).   (5)

Since t is positive, from inequality (5),

Σ_{j=0}^{∞} p(x+jk) ≥ N/T ≥ Σ_{j=0}^{∞} p(x+1+jk).   (6)
If k is equal to 1, the above inequality becomes the same as
inequality (5) in [3], which shows that the previous work is
a special case of this more general method.
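These selection rules are easy to check numerically. The following sketch (the function and variable names are ours, and it assumes the reconstructed forms of equation (3) and inequality (6); the probability mass function is supplied as a dictionary) picks the smallest x satisfying the inequality and cross-checks it against direct minimization of the expected cost:

```python
import math

def expected_cost(N, T, t, k, x, pmf):
    # Eq. (3) as reconstructed: C = N*t*x + T*t * sum_{i>x} p(i)*ceil((i-x)/k)
    tail = sum(p * math.ceil((i - x) / k) for i, p in pmf.items() if i > x)
    return N * t * x + T * t * tail

def optimal_x(N, T, k, pmf):
    # Smallest x with sum_j p(x + 1 + j*k) <= N/T  (inequality (6)).
    imax = max(pmf)
    for x in range(imax + 1):
        g = sum(p for i, p in pmf.items() if i > x and (i - x - 1) % k == 0)
        if g <= N / T:
            return x
    return imax

# Example: iteration count uniform on 1..7, N = 1 of T = 4 processors, k = 2.
pmf = {i: 1 / 7 for i in range(1, 8)}
x_star = optimal_x(1, 4, 2, pmf)
```

For this pmf, both the inequality test and a direct scan of `expected_cost` select the same assumed number of iteration cycles.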
We have now determined the optimal values of x and k for
a given number N. How to choose the optimal N is the next
question we have to answer. Since t is not a simple function
of N, no closed form for the N minimizing C(N,k,x) exists,
unfortunately. However, we may use exhaustive search
through all possible values N and select the value minimizing
the cost in polynomial time. Moreover, our experiments
show that the search space for N is often reduced
significantly using some criteria.
Our experiments show that the method is relatively insensitive
to the approximated probability mass function for I.
Using some well-known distributions which have nice
mathematical properties for the approximation, we can reduce
the summation terms in (3) and (6) to closed forms.
Let us consider a geometric probability mass function with
parameter q as the approximated distribution of the number
of iteration cycles. This models a class of asymmetric
bell-shaped distributions. The geometric probability mass
function means that for any non-negative integer r,

p(r) = (1−q) q^r.

To use inequality (6), we find

Σ_{j=0}^{∞} p(x+jk) = (1−q) q^x / (1 − q^k).
HA AND LEE: COMPILE-TIME SCHEDULING OF DYNAMIC CONSTRUCTS IN DATAFLOW PROGRAM GRAPHS 773
Therefore, from inequality (6), the optimal value of x
satisfies

q^x ≥ N(1−q^k) / (T(1−q)) ≥ q^{x+1}.

Using floor notation, we can obtain the closed form for the
optimal value as follows:

x = ⌊ log_q( N(1−q^k) / (T(1−q)) ) ⌋.

Furthermore, equation (3) is simplified by using the fact

Σ_{i=x+1}^{∞} p(i) ⌈(i−x)/k⌉ = q^{x+1} / (1 − q^k),

getting

C(N,k,x) = N t x + T t q^{x+1} / (1 − q^k).
Now, we have all simplified formulas for the optimal profile
of the iteration actor. Similar simplification is possible
also with uniform distributions [17]. If k equals 1, our
results coincide with the previous result reported in [3].
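As a sanity check on the geometric-case closed form, the short sketch below (parameter values are arbitrary and the names are ours; it assumes the reconstructed cost of equation (3)) compares the floor-of-logarithm formula for x with a direct search over a truncated geometric pmf:

```python
import math

q, N, T, t, k = 0.8, 2, 8, 1.0, 2

# Closed form: x = floor( log_q( N(1-q^k) / (T(1-q)) ) )
x_closed = math.floor(math.log(N * (1 - q**k) / (T * (1 - q)), q))

# Truncated geometric pmf p(r) = (1-q) q^r and the generic cost of eq. (3).
pmf = {r: (1 - q) * q**r for r in range(400)}

def cost(x):
    tail = sum(p * math.ceil((i - x) / k) for i, p in pmf.items() if i > x)
    return N * t * x + T * t * tail

x_search = min(range(51), key=cost)
```

The simplified tail q^{x+1}/(1−q^k) agrees with the truncated sum to within the (negligible) truncation error, and both routes select the same x.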
V. Recursion
Recursion is a construct which instantiates itself as a part
of the computation if some termination condition is not sat-
isfied. Most high level programming languages support this
construct since it makes a program compact and easy to
understand. However, the number of self-instantiations,
called the depth of recursion, is usually not known at
compile-time since the termination condition is calculated
at run-time. In the dataflow paradigm, recursion can be
represented as a macro actor that contains a SELF actor
(figure 6). A SELF actor simply represents an instance of
a subgraph within which it sits.
If the recursion actor has only one SELF actor, the function
of the actor can be identically represented by a data-dependent
iteration actor as shown in figure 4 (b) in the
previous section. This includes as a special case all tail recursive
constructs. Accordingly, the scheduling decision for
the recursion actor will be the same as that of the translated
data-dependent iteration actor. In a generalized recursion
construct, we may have more than one SELF actor. The
number of SELF actors in a recursion construct is called
the width of the recursion. In most real applications, the
width of the recursion is no more than two. A recursion
construct with width 2 and depth 2 is illustrated in figure 6
(b) and (c). We assume that all nodes of the same depth in
the computation tree have the same termination condition.
We will discuss the limitation of this assumption later. We
also assume that the run-time probability mass function of
the depth of the recursion is known or can be approximated
at compile-time.
The potential parallelism of the computation tree of
a generalized recursion construct may be huge, since all
nodes at the same depth can be executed concurrently.
The maximum degree of parallelism, however, is usually
not known at compile-time. When we exploit the parallelism
of the construct, we should consider the resource
limitations. We may have to restrict the parallelism in order
not to deadlock the system. Restricting the parallelism
in case the maximum degree of parallelism is too large has
been recognized as a difficult problem to be solved in a dynamic
dataflow system. Our approach proposes an efficient
solution by taking the degree of parallelism as an additional
component of the profile of the recursion construct.
Suppose that the width of the recursion construct is k.
Let the depth of the recursion be a random variable I with
known probability mass function p(i). We denote the degree
of parallelism by d, which means that the descendants
at depth d in the computation graph are assigned to different
processor groups. A descendent recursion construct
at depth d is called a ground construct (figure 7 (a)). If
we denote the size of each processor group by N , the total
number of processors devoted to the recursion becomes
N k^d. Then, the profile of a recursion construct is defined
by three parameters: the assumed depth of recursion x, the
degree of parallelism d, and the size of a processor group
N . Our approach optimizes the parameters to minimize
the expected cost of the recursion construct. An example
of the profile of a recursion construct is displayed in figure 7
(b).
Let τ be the sum of the execution times of actors a, c,
and h in figure 6, and let τ_o be the sum of the execution
times of actors a and b. Then, the schedule length l_x of a
ground construct becomes

l_x = τ (k^{x−d} − 1)/(k − 1) + τ_o k^{x−d}

when x ≥ d. At run time, some processors will be idled if
the actual depth of recursion is different from the assumed
depth of recursion, which is illustrated in figure 7 (c) and
(d). When the actual depth of recursion i is smaller than
the assumed depth x, the assigned processors are idled.
Otherwise, the other processors as well are idled. Let R be
the sum of the execution times of the recursion outside the
ground constructs. This basic cost R is equal to

R = N τ (k^d − 1)/(k − 1).

For i ≤ x, the runtime cost C_1 becomes

C_1 = R + N k^d l_x,

assuming that x is not less than d. For i > x, the cost C_2
becomes

C_2 = R + N k^d l_x + T (l_i − l_x).

Therefore, the expected cost of the recursion construct,
C(N,x,d), is the sum of the run-time costs weighted by the
probability mass function:

C(N,x,d) = Σ_{i=0}^{x} p(i) C_1 + Σ_{i=x+1}^{∞} p(i) C_2.
[Figure 6 here. Its panel (a) lists the recursion pseudocode: function f(x) — if test(x) is TRUE, return; else return h(f(c1(x)), f(c2(x))).]
Fig. 6. (a) An example of a recursion construct and (b) its dataflow representation. The SELF actor represents the recursive call. (c) The
computation tree of a recursion construct with two SELF actors when the depth of the recursion is two.
Fig. 7. (a) The reduced computation graph of a recursion construct of width 2 when the degree of parallelism is 2. (b) The profile of the
recursion construct. The schedule length of the ground construct is a function of the assumed depth of recursion x and the degree of
parallelism d. A quasi-static schedule is constructed depending on the actual depth i of the recursion in (c) for i < x and in (d) for i > x.
First, we assume that N is fixed. Since the expected
cost is a decreasing function of d, we select the maximum
possible number for d. The number d is bounded by the
processor constraint: N k^d ≤ T. Since we assume that the
assumed depth of recursion x is greater than the degree of
parallelism d, the optimal value for d is

d = min( ⌊log_k(T/N)⌋, x ).   (18)
Next, we decide the optimal value for x from the observation
that if x is optimal, the expected cost is not decreased
when x is varied by +1 or −1. Rearranging the resulting
inequalities, we get the following:

Σ_{i=x}^{∞} p(i) ≥ N k^d / T ≥ Σ_{i=x+1}^{∞} p(i).   (20)
Note the similarity of inequality (20) with that for data-dependent
iterations (6). In particular, if k is 1, the two
formulas are equivalent as expected. The optimal values
d and x depend on each other as shown in (18) and (20).
We may need to use iterative computations to obtain the
optimal values of d and x, starting from d = ⌊log_k(T/N)⌋.
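The processor constraint N k^d ≤ T can be evaluated with integer arithmetic, which avoids the rounding pitfalls of a floating-point logarithm. A minimal sketch (the function name is ours):

```python
def max_parallelism_depth(N, T, k):
    """Largest d with N * k**d <= T (the processor constraint behind (18)),
    computed by repeated multiplication rather than a float log."""
    d = 0
    while N * k ** (d + 1) <= T:
        d += 1
    return d
```

For example, with processor groups of size N = 2, T = 16 processors in total, and recursion width k = 2, descendants can be split across groups down to depth d = 3.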
Let us consider an example in which the probability mass
function for the depth of the recursion is geometric with
parameter q. At each execution of depth i of the recursion,
we proceed to depth i+1 with probability q or terminate at
depth i with probability 1−q. From inequality (20), the
optimal x satisfies

q^x ≥ N k^d / T ≥ q^{x+1}.

As a result, x becomes

x = ⌊ log_q( N k^d / T ) ⌋.
Up to now, we have assumed that N is fixed. Since τ is a
transcendental function of N, the dependency of the expected
cost upon the size of a processor group N is not clear. Instead,
we examine all possible values for N, calculate the expected
cost for each, and choose the optimal N giving the minimum
cost. The complexity of this procedure is still polynomial and
is usually reduced significantly since the search space of N
can be pruned by some criteria. In case of a geometric
distribution for the depth of the recursion, the expected cost
is likewise reduced to a closed form.
In case the number of child functions is one (k = 1), our
simplified formulas with a geometric distribution coincide
with those for data-dependent iterations, except for an
overhead term to detect the loop termination.
Recall that our analysis is based on the assumption that
all nodes of the same depth in the computation tree have
the same termination condition. This assumption roughly
approximates a more realistic one, which we call the
independence assumption: that all nodes of the same depth
have equal probability of terminating the recursion, and
that they are independent of each other. Our assumption
treats this common probability as the probability that all
nodes of the same depth terminate the recursion together.
The expected number of nodes at a certain depth is the
same in both assumptions even though they describe different
behaviors. Under the independence assumption, the
shape of the profile would be the same as shown in figure 7:
the degree of parallelism d is maximized. Moreover, all recursion
processors have the same schedule length for the
ground constructs. However, the optimal schedule length
l_x of the ground construct would be different. The length
l_x is proportional to the number of executions of the recursion
constructs inside a ground construct. This number can
be any integer under the independence assumption, while
it belongs to a restricted subset under our assumption.
Since the probability mass function for this number is likely
to be too complicated under the independence assumption,
we sacrifice performance by choosing a sub-optimal schedule
length under the simpler assumption.
VI. Conditionals
Decision making capability is an indispensable requirement
of a programming language for general applications,
and even for signal processing applications. A dataflow
representation for an if-then-else and the local schedules of
both branches are shown in figure 2 (a) and (b).
We assume that the probability p_1 with which the
"TRUE" branch (branch 1) is selected is known. The
"FALSE" branch (branch 2) is selected with probability
p_2 = 1 − p_1. Let t_ij be the finishing time of the local schedule
of the i-th branch on the j-th processor, and let t̄_j
be the finishing time on the j-th processor in the optimal
profile of the conditional construct. We determine the optimal
values {t̄_j} to minimize the expected runtime cost of
the construct. When the i-th branch is selected, the cost
becomes

C_i = Σ_{j=1}^{N} t̄_j + T max(0, max_j (t_ij − t̄_j)).

Therefore, the expected cost C(N) is

C(N) = Σ_i p_i C_i.

It is not feasible to obtain closed form solutions for t̄_j
because the max function is non-linear and discontinuous.
Instead, a numerical algorithm is developed.
1. Initially, take the maximum finish time of both branch
schedules for each processor according to Lee's method
[10]: t̄_j = max_i t_ij.
2. Define α_i = max_j (t_ij − t̄_j); initially, all α_i ≤ 0.
The variable α_i represents the excessive cost
per processor over the expected cost when branch i is
selected at run time. We define the bottleneck processors
of branch i as the processors {j} that satisfy
t_ij − t̄_j = α_i. For all branches {i}, repeat the
next step.
3. Choose the set of bottleneck processors, Θ_i, of branch
i only. If we decrease t̄_j by δ for all j ∈ Θ_i, the
variation of the expected cost becomes ΔC. While this
variation is negative, decrease t̄_j until the set Θ_i needs to be
updated. Update Θ_i and repeat step 3.
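For small instances, the profile {t̄_j} can also be found by exhaustive search. The sketch below assumes a particular cost form — reserved time on the N assigned processors plus idle time on all T processors when the selected branch overruns the profile — which is our reading of the cost above, not necessarily the paper's exact expression; all names are ours:

```python
from itertools import product

def profile_cost(profile, finish, probs, T):
    # Assumed cost: reserved time on the assigned processors plus, when
    # branch i overruns the profile, idle time on all T processors.
    c = sum(profile)
    for t_i, p in zip(finish, probs):
        overflow = max(max(tij - tbar for tij, tbar in zip(t_i, profile)), 0.0)
        c += T * p * overflow
    return c

def best_profile(finish, probs, T):
    # Candidate per-processor values are taken from the branch finish times
    # (a coarse grid; fine for small N and few branches).
    candidates = [sorted({t_i[j] for t_i in finish}) for j in range(len(finish[0]))]
    return min(product(*candidates),
               key=lambda pr: profile_cost(pr, finish, probs, T))
```

With two branches on N = 2 processors, finish times [[4, 10], [8, 6]], branch probabilities (0.3, 0.7), and T = 4, the search confirms that the initial profile t̄_j = max_i t_ij of step 1 is already optimal for this instance.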
Now, we consider the example shown in figure 2. Suppose
the branch probability is 0.7. The initial profile in our algorithm
is the same as Lee's profile, as shown in figure 8 (a), which
happens to be the same as Loeffler's profile in this specific
example. The optimal profile determined by our algorithm is
displayed in figure 8 (b).
Fig. 8. Generation of the optimal profile for the conditional construct
in figure 2. (a) initial profile (b) optimal profile.
We generalized the proposed algorithm to the M-way
branch realized by a case construct. To implement an M-way
branch, we prefer using a case construct to nested
if-then-else constructs. Generalization of the proposed algorithm
and the proof of optimality are beyond the scope of this
paper; for a detailed discussion, refer to [17]. For a given
number of assigned processors, the proposed algorithm determines
the optimal profile. To obtain the optimal number
of assigned processors, we compute the total expected cost
for each number and choose the minimum.
VII. An Example
The proposed technique to schedule data-dependent actors
has been implemented in Ptolemy, which is a heterogeneous
simulation and prototyping environment
developed at U.C. Berkeley [15]. One of the key
objectives of Ptolemy is to allow many different computational
models to coexist in the same system. A domain is
a set of blocks that obey a common computational model.
An example of a mixed-domain system is shown in figure 9.
The synchronous dataflow (SDF) domain contains all data-independent
actors and performs compile-time scheduling.
Two branches of the conditional constructs are represented
as SDF subsystems, so their local schedules are generated
by a static scheduler. Using the local schedules of both
branches, the dynamic dataflow (DDF) domain executes the
proposed algorithm to obtain the optimal profile of the conditional
construct. The topmost SDF domain system regards
the DDF domain as a macro actor with the assumed
profile when it performs the global static scheduling.
Fig. 9. An example of mixed domain system. The topmost level of
the system is an SDF domain. A dynamic construct (if-then-else)
is in the DDF domain, which in turn contains two subsystems in
the SDF domain for its branches.
We apply the proposed scheduling technique to several
synthetic examples as preliminary experiments. These experiments
do not serve as a full test or proof of generality
of the technique. However, they verify that the proposed
technique can make better scheduling decisions than other
simple but ad-hoc decisions on dynamic constructs in many
applications. The target architecture is assumed to be a
shared bus architecture with 4 processors, in which communication
can be overlapped with computation.
To test the effectiveness of the proposed technique, we
compare it with the following scheduling alternatives for
the dynamic constructs.
Method 1. Assign all processors to each dynamic construct.
Method 2. Assign only one processor to each dynamic construct.
Method 3. Apply a fully dynamic scheduling ignoring all overhead.
Method 4. Apply a fully static scheduling.
Method 1 corresponds to the previous research on quasi-static
scheduling technique made by Lee [10] and by Loeffler
et al. [13] for data-dependent iterations. Method 2 approximately
models the situation when we implement each
dynamic construct as a single big atomic actor. To simulate
the third model, we list all possible outcomes, each of which
can be represented as a data-independent macro actor.
With each possible outcome, we replace the dynamic construct
with a data-independent macro actor and perform
fully-static scheduling. The scheduling result from Method
3 is unrealistic since it ignores all the overhead of the
fully dynamic scheduling strategy. Nonetheless, it will give
a yardstick to measure the relative performance of other
scheduling decisions. By modifying the dataflow graphs,
we may use fully static scheduling in Method 4. For a conditional
construct, we may evaluate both branches and select
one by a multiplexor actor. For a data-dependent iteration
construct, we always perform the worst case number
of iterations. For comparison, we use the average schedule
length of the program as the performance measure.
As an example, consider a For construct of data-dependent
iteration as shown in figure 10. The number
inside each actor represents the execution length. To increase
the parallelism, we pipelined the graph at the beginning
of the For construct.
The scheduling decisions to be made for the For construct
are how many processors to be assigned to the iteration
body and how many iteration cycles to be scheduled
explicitly. We assume that the number of iteration cycles
is uniformly distributed between 1 and 7. To determine the
optimal number of assigned processors, we compare the expected
total cost as shown in table I. Since the iteration
body can utilize two processors effectively, the expected
total cost of the first two columns is very close. However,
the scheduler determines that assigning one processor
is slightly better. Rather than parallelizing the iteration
body, the scheduler automatically parallelizes the iteration
cycles. If we change the parameters, we may want to parallelize
the iteration body first and the iteration cycles next.
Fig. 10. An example with a For construct at the top level. The
subsystems associated with the For construct are also displayed.
The number inside an actor represents the execution length of
the actor.
The proposed technique considers the tradeoffs of parallelizing
inner loops or parallelizing outer loops in a nested
loop construct, which has been the main problem of parallelizing
compilers for sequential programs. The resulting
Gantt chart for this example is shown in figure 11.
Fig. 11. A Gantt chart display of the scheduling result over 4 processors
from the proposed scheduling technique for the example in
figure 10. The profile of the For construct is identified.
If the number of iteration cycles at run time is less than
or equal to 3, the schedule length of the example is the same
as the schedule period, 66. If it is greater than 3, the schedule
length will increase. Therefore, the average schedule
length of the example becomes 79.9. The average schedule
lengths from other scheduling decisions are compared in table
II. The proposed technique outperforms other realistic
methods and achieves 85% of the ideal schedule length by
Method 3. In this example, assigning 4 processors to the
iteration body (Method 1) worsens the performance since
it fails to exploit the intercycle parallelism. Confining the
dynamic construct in a single actor (Method 2) gives the
worst performance as expected since it fails to exploit both
intercycle parallelism and the parallelism of the iteration
body. Since the range of the number of iteration cycles is
not big, assuming the worst case iteration (Method 4) is
not bad.
This example, however, reveals a shortcoming of the proposed
technique. If we assign 2 processors to the iteration
body to exploit the parallelism of the iteration body as well
as the intercycle parallelism, the average schedule length
becomes 77.7, which is slightly better than the scheduling
result by the proposed technique. When we calculate the
expected total cost to decide the optimal number of processors
to assign to the iteration body, we do not account
for the global effect of the decision. Since the difference
of the expected total costs between the proposed technique
and the best scheduling was not significant, as shown in table
I, this non-optimality of the proposed technique could
be anticipated. To improve the performance of the proposed
technique, we can add a heuristic: if the expected
total cost for some assigned number of processors is not
significantly greater than the optimal one, we also perform
scheduling with that number and compare the result with
that of the proposed technique, choosing the better schedule.
The search for the assumed number of iteration cycles
for the optimal profile is not faultless either, since the proposed
technique finds a local optimum. The proposed technique
selects 3 as the assumed number of iteration cycles.
It is proved, however, that the best assumed number is
2 in this example even though the performance difference
is negligible. Although the proposed technique is not always
optimal, it is certainly better than any of the other
scheduling methods demonstrated in table II.
Experiments with other dynamic constructs as well as
nested constructs have been performed, with similar
results: the proposed technique outperforms other
ad-hoc decisions. The resulting quasi-static schedule could
be at least 10% faster than the other existing scheduling
decisions, while being as little as 15% slower than
an ideal (and highly unrealistic) fully-dynamic schedule.
In a nested dynamic construct, the compile-time profile of
the inner dynamic construct affects that of the outer dynamic
construct. In general, there is a trade-off between
exploiting parallelism of the inner dynamic construct first
and that of the outer construct first. The proposed technique
resolves this conflict automatically. Refer to [17] for
detailed discussion.
Let us assess the complexity of the proposed scheme. If
the number of dynamic constructs including all nested ones
is D and the number of processors is N, the total number
of profile decision steps is of order ND, i.e., O(ND). Determining
the optimal profile also consumes O(ND) time units.
Therefore, the overall complexity is of order ND. The
memory requirements are of the same order of magnitude as
the number of profiles to be maintained, which is also of order
ND.
VIII. Conclusion
As long as the data-dependent behavior is not dominating
in a dataflow program, the more scheduling decisions
are made at compile time the better, since we can reduce
the hardware and software overhead for scheduling at run
time. For compile-time decision of task assignment and/or
ordering, we need the static information, called profiles,
of all actors. Most heuristics for compile-time decisions
assume the static information of all tasks, or use ad-hoc
approximations. In this paper, we propose a systematic
method to decide on profiles for each dynamic construct.
We define the compile-time profile of a dynamic construct
as an assumed local schedule of the body of the dynamic
TABLE I
The expected total cost of the For construct as a function of the number of assigned processors

Number of assigned processors    1        2        3        4
Expected total cost              129.9    135.9    177.9    N/A

TABLE II
Performance comparison among several scheduling decisions

                          Proposed   Method 1   Method 2   Method 3   Method 4
Average schedule length   79.7       90.9       104.3      68.1       90
% of ideal                0.85       0.75       0.65       1.0        0.76
construct. We define the cost of a dynamic construct and
choose its compile-time profile in order to minimize the expected
cost. The cost of a dynamic construct is the sum
of execution length of the construct and the idle time on
all processors at run-time due to the difference between
the compile-time profile and the actual run-time profile.
We discussed in detail how to compute the profile of three
kinds of common dynamic constructs: conditionals, data-dependent
iterations, and recursion.
To compute the expected cost, we require that the statistical
distribution of the dynamic behavior, for example the
distribution of the number of iteration cycles for a data-dependent
iteration, must be known or approximated at
compile-time. For the particular examples we used for ex-
periments, the performance does not degrade rapidly as
the stochastic model deviates from the actual program be-
havior, suggesting that a compiler can use fairly simple
techniques to estimate the model.
We implemented the technique in Ptolemy as a part of
a rapid prototyping environment. We illustrated the effectiveness
of the proposed technique with a synthetic example
in this paper and with many other examples in [17]. The
results are only a preliminary indication of the potential in
practical applications, but they are very promising. While
the proposed technique makes locally optimal decisions for
each dynamic construct, it is shown that the proposed technique
is effective when the amount of data dependency from
a dynamic construct is small. But, we admittedly cannot
quantify at what level the technique breaks down.
Acknowledgments
The authors gratefully thank the anonymous
reviewers for their helpful suggestions. This research
is part of the Ptolemy project, which is supported by the
Advanced Research Projects Agency and the U.S. Air Force
(under the RASSP program, contract F33615-93-C-1317),
the State of California MICRO program, and the following
companies: Bell Northern Research, Cadence, Dolby, Hitachi,
Lucky-Goldstar, Mentor Graphics, Mitsubishi, Motorola,
NEC, Philips, and Rockwell.
References
"Data Flow Languages"
"Synchronous Data Flow"
"Compile-Time Scheduling and Assignment of Dataflow Program Graphs with Data-Dependent Iteration"
"Deterministic Processor Scheduling"
"Multiprocessor Scheduling to Account for Interprocessor Communications"
"A General Approach to Mapping of Parallel Computations Upon Multiprocessor Architecture"
"Task Allocation and Scheduling Models for Multiprocessor Digital Signal Processing"
"Hierarchical Compilation of Macro Dataflow Graphs for Multiprocessors with Local Memory"
"Recurrences, Iteration, and Conditionals in Statically Scheduled Block Diagram Languages"
"Path Length Computation on Graph Models of Computations"
"The Effect of Operation Scheduling on the Performance of a Data Flow Computer"
"Hierar- chical Scheduling Systems for Parallel Architectures"
"TDFL: A Task-Level Dataflow Language"
"Ptolemy: A Framework for Simulating and Prototyping Heterogeneous Sys- tems"
"Program Partitioning for a Reconfigurable Multiprocessor System"
"Compile-Time Scheduling of Dataflow Program Graphs with Dynamic Constructs,"
--TR
--CTR
D. Ziegenbein , K. Richter , R. Ernst , J. Teich , L. Thiele, Representation of process mode correlation for scheduling, Proceedings of the 1998 IEEE/ACM international conference on Computer-aided design, p.54-61, November 08-12, 1998, San Jose, California, United States
Karsten Strehl , Lothar Thiele , Dirk Ziegenbein , Rolf Ernst , Jrgen Teich, Scheduling hardware/software systems using symbolic techniques, Proceedings of the seventh international workshop on Hardware/software codesign, p.173-177, March 1999, Rome, Italy
Jack S. N. Jean , Karen Tomko , Vikram Yavagal , Jignesh Shah , Robert Cook, Dynamic Reconfiguration to Support Concurrent Applications, IEEE Transactions on Computers, v.48 n.6, p.591-602, June 1999
Yury Markovskiy , Eylon Caspi , Randy Huang , Joseph Yeh , Michael Chu , John Wawrzynek , Andr DeHon, Analysis of quasi-static scheduling techniques in a virtualized reconfigurable machine, Proceedings of the 2002 ACM/SIGDA tenth international symposium on Field-programmable gate arrays, February 24-26, 2002, Monterey, California, USA
Chanik Park , Sungchan Kim , Soonhoi Ha, A dataflow specification for system level synthesis of 3D graphics applications, Proceedings of the 2001 conference on Asia South Pacific design automation, p.78-84, January 2001, Yokohama, Japan
Thies , Michal Karczmarek , Janis Sermulins , Rodric Rabbah , Saman Amarasinghe, Teleport messaging for distributed stream programs, Proceedings of the tenth ACM SIGPLAN symposium on Principles and practice of parallel programming, June 15-17, 2005, Chicago, IL, USA
Jin Hwan Park , H. K. Dai, Reconfigurable hardware solution to parallel prefix computation, The Journal of Supercomputing, v.43 n.1, p.43-58, January 2008
Praveen K. Murthy , Etan G. Cohen , Steve Rowland, System canvas: a new design environment for embedded DSP and telecommunication systems, Proceedings of the ninth international symposium on Hardware/software codesign, p.54-59, April 2001, Copenhagen, Denmark
L. Thiele , K. Strehl , D. Ziegenbein , R. Ernst , J. Teich,
FunState | macro actor;dynamic constructs;dataflow program graphs;profile;multiprocessor scheduling |
262549 | Singular and Plural Nondeterministic Parameters. | "The article defines algebraic semantics of singular (call-time-choice) and plural (run-time-choice)(...TRUNCATED) | "Introduction\nThe notion of nondeterminism arises naturally in describing concurrent systems. Vario(...TRUNCATED) | many-sorted algebra;sequent calculus;nondeterminism;algebraic specification |
262588 | "Adaptive Multilevel Techniques for Mixed Finite Element Discretizations of Elliptic Boundary Value (...TRUNCATED) | "We consider mixed finite element discretizations of linear second-order elliptic boundary value pro(...TRUNCATED) | "Introduction\n.\nIn this work, we are concerned with adaptive multilevel techniques for the efficie(...TRUNCATED) | mixed finite elements;multilevel preconditioned CG iterations;a posteriori error estimator |
262640 | Decomposition of Gray-Scale Morphological Templates Using the Rank Method. | "Convolutions are a fundamental tool in image processing. Classical examples of two dimensio(...TRUNCATED) | "INTRODUCTION\nBoth linear convolution and morphological methods are\nwidely used in image processing(...TRUNCATED) | structuring element;morphology;morphological template;template rank;convolution;template decomposit(...TRUNCATED)
263203 | The Matrix Sign Function Method and the Computation of Invariant Subspaces. | "A perturbation analysis shows that if a numerically stable procedure is used to compute the matrix (...TRUNCATED) | "Introduction\n. If $A \in \mathbb{R}^{n \times n}$ has no eigenvalue on the imaginary axis, then the\nmatrix sign f(...TRUNCATED) | matrix sign function;invariant subspaces;perturbation theory
263207 | An Analysis of Spectral Envelope Reduction via Quadratic Assignment Problems. | "A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size was d(...TRUNCATED) | "Introduction\n. We provide a raison d'être for a novel spectral algorithm to reduce\nthe envelope (...TRUNCATED) | sparse matrices;quadratic assignment problems;reordering algorithms;2-sum problem;envelope reductio(...TRUNCATED)
263210 | Perturbation Analyses for the QR Factorization. | "This paper gives perturbation analyses for $Q_1$ and $R$ in the QR factorization $A=Q_1R$, $Q_1^TQ_(...TRUNCATED) | "Introduction\n. The QR factorization is an important tool in matrix computations\n(see for example (...TRUNCATED) | matrix equations;pivoting;condition estimation;QR factorization;perturbation analysis |