CSCC73H3 Lecture Notes (Computer Science, University of Toronto Scarborough)
Vassos Hadzilacos, Fall

Computer Science C73 September 28, 2010
Scarborough Campus University of Toronto
Solutions for Homework Assignment #1
Answer to Question 1. The greedy algorithm that minimises the maximum lateness does not also
minimise the sum of latenesses. A counterexample follows:
Job Length Deadline
a 3 1
b 1 2
The greedy algorithm that minimises maximum lateness orders the jobs in increasing deadline. In this
example, the algorithm orders a before b, resulting in lateness of 2 for job a and 2 for job b, so the sum of
latenesses for the two jobs is 4. In the schedule where b is before a, however, job b has lateness 0 and job
a has lateness 3, so the sum of latenesses of the two jobs is 3.
A clue that the algorithm that minimises maximum lateness does not also minimise the sum of latenesses
is provided by the proof of optimality for the former. A crucial step in that proof, where we showed that
fixing an inversion does not increase the maximum lateness, does not apply to the sum of the latenesses.
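As a quick check of the arithmetic in this counterexample, the two schedules can be evaluated with a short script (the job encoding and helper name are illustrative, not part of the original solution):

```python
# Each job maps to (length, deadline); the counterexample from Question 1.
jobs = {"a": (3, 1), "b": (1, 2)}

def latenesses(order):
    """Return the lateness of each job when the jobs run in the given order."""
    t, late = 0, {}
    for name in order:
        length, deadline = jobs[name]
        t += length                       # job finishes at time t
        late[name] = max(0, t - deadline)
    return late

edf = latenesses(["a", "b"])   # increasing-deadline (greedy) order
alt = latenesses(["b", "a"])   # the alternative schedule
print(edf, sum(edf.values()))  # {'a': 2, 'b': 2} 4
print(alt, sum(alt.values()))  # {'b': 0, 'a': 3} 3
```

The greedy order minimises the maximum lateness (2 versus 3) while the alternative order has the smaller sum (3 versus 4), exactly as claimed.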
Answer to Question 2. a. A counterexample: let I = [0,16], with intervals I1 = [2,14], I2 = [0,9], and
I3 = [7,16].
In this case, the proposed greedy algorithm would start by putting I1 into the set of intervals kept, and
would therefore be forced to include all three intervals I1, I2, and I3 to cover I, while the intervals I2 and
I3 suffice.
b. We maintain a subset A of the intervals {I1, I2, ..., In} that cover an initial segment of I = [S,F], from
S to some intermediate point C, S ≤ C ≤ F. Initially A = ∅ and C = S. In each stage we greedily
expand A by an interval Ij whose left endpoint is ≤ C and whose right endpoint extends as far to the right
as possible, and we update C accordingly. The algorithm is shown below in pseudocode.
let I = [S,F] and I_i = [s_i, f_i] for i ∈ [1..n]
A := ∅
C := S
while C < F do
    let j ∈ [1..n] be s.t. s_j ≤ C and ∀k ∈ [1..n], if s_k ≤ C then f_k ≤ f_j
    A := A ∪ {I_j}
    C := f_j
return A
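The pseudocode translates directly into Python. The sketch below assumes the intervals are given as (s, f) pairs and that they do collectively cover [S, F]; the function and variable names are mine:

```python
def greedy_cover(S, F, intervals):
    """Cover [S, F] with as few of the given (s, f) intervals as possible.

    Assumes the intervals collectively cover [S, F]; otherwise the loop
    may fail to make progress.
    """
    A = []   # chosen intervals
    C = S    # right end of the covered prefix [S, C]
    while C < F:
        # Among intervals starting at or before C, pick the one that
        # reaches farthest to the right.
        candidates = [(s, f) for (s, f) in intervals if s <= C]
        best = max(candidates, key=lambda iv: iv[1])
        A.append(best)
        C = best[1]
    return A

# The counterexample from part (a): I = [0, 16].
print(greedy_cover(0, 16, [(2, 14), (0, 9), (7, 16)]))  # [(0, 9), (7, 16)]
```

On the part (a) instance, the algorithm first takes (0, 9), the only interval whose left endpoint is ≤ 0, then (7, 16), matching the two-interval cover identified above.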
Correctness. We prove the correctness of this algorithm using the “promising set” approach. Specifi-
cally, we will prove that the set of intervals in A is always contained in some optimal set of intervals A*.
(We say that a subset of {I1,...,In} is optimal if it covers I using the smallest number of intervals in that
set.) The following invariants can be proved using a straightforward induction, which we omit:
(a) If A ≠ ∅, the set of intervals in A covers [S,C].
(b) If A ≠ ∅, then C is the maximum right endpoint of all intervals in A.
We now prove the following
Key Invariant. The set of intervals in A is contained in some optimal set of intervals.
The proof is by induction on the iteration number. Let A_t and C_t be the values of variables A and C,
respectively, at the end of iteration t. (By convention, “the end of iteration 0” refers to the point just
before the ﬁrst iteration of the loop.)
The basis is trivially true, since A_0 = ∅. Suppose the invariant holds after iteration t, and let A*
be an optimal set that contains A_t. We will now prove that the invariant remains true after iteration
t + 1 (if there is such an iteration). Let I_j = [s_j, f_j] be the interval added to A in iteration t + 1. That is,
A_{t+1} = A_t ∪ {I_j}. If I_j ∈ A* then obviously A_{t+1} ⊆ A* and we are done. So, suppose I_j ∉ A*. Then A* − A_t
must contain some interval I_{j′} = [s_{j′}, f_{j′}], j′ ∈ [1..n], such that s_{j′} ≤ C_t. (If not, all intervals in A* − A_t
start strictly after C_t and, by Invariant (b) above, all intervals in A_t end no later than C_t. Therefore, the
intervals in A* don’t cover all of I, contradicting that A* is an optimal set.) By the algorithm, I_j is such
that f_j ≥ f_{j′}. Therefore (A* − {I_{j′}}) ∪ {I_j} covers I; furthermore this set has the same number of intervals
as A*, so it is also optimal. Finally, A_{t+1} = A_t ∪ {I_j} ⊆ (A* − {I_{j′}}) ∪ {I_j}. So, A_{t+1} is contained in an
optimal set, as wanted.
By the exit condition, at the end of the loop C ≥ F, so by Invariant (a) above, the set of intervals in A
covers I. By the Key Invariant, at the end of the algorithm, the set of intervals in A is contained in some
optimal set. Since this set covers I, it is an optimal set. So, if the loop terminates, the algorithm returns
an optimal set.
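The exchange argument can also be sanity-checked empirically by comparing the size of the greedy cover against an exhaustive search over subsets on small random instances. The helper names and the instance generator below are illustrative, not from the original solution:

```python
import random
from itertools import combinations

def greedy_cover_size(S, F, intervals):
    """Number of intervals the greedy algorithm from the text uses."""
    C, count = S, 0
    while C < F:
        C = max(f for (s, f) in intervals if s <= C)
        count += 1
    return count

def covers(S, F, subset):
    """Check whether the union of a subset of intervals contains [S, F]."""
    C, changed = S, True
    while changed and C < F:
        changed = False
        for (s, f) in subset:
            if s <= C and f > C:
                C, changed = f, True
    return C >= F

def optimal_cover_size(S, F, intervals):
    """Smallest covering subset, found by brute force over all subsets."""
    for k in range(1, len(intervals) + 1):
        if any(covers(S, F, sub) for sub in combinations(intervals, k)):
            return k

random.seed(0)
for _ in range(200):
    ivs = [tuple(sorted(random.sample(range(0, 11), 2))) for _ in range(5)]
    if not covers(0, 10, ivs):
        continue  # skip instances that do not cover [0, 10]
    assert greedy_cover_size(0, 10, ivs) == optimal_cover_size(0, 10, ivs)
print("greedy matched brute force on all covering instances")
```

This does not replace the proof, of course; it only confirms on small instances that the greedy cover is never larger than the brute-force optimum.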
It remains to prove that the loop terminates. To see this, note that in each iteration of the loop an
interval in {I1,...,In} is added to A. So, if the loop does not terminate earlier, after n iterations all
intervals will have been added to A. Since the entire set of intervals {I1,...,In} covers I, by the time all
intervals have been added to A (if not earlier), the intervals in A cover I and so, by Invariants (a) and (b),
the loop terminates.
Time complexity. As just argued, the loop terminates after at most n iterations. In each iteration, we
can ﬁnd the next interval to add to A in O(n) time by scanning the given list of n intervals to ﬁnd the one
whose left endpoint is at most C and whose right endpoint is as large as possible. (By using cleverer data
structures you learned in CSCB63, this can actually be done in O(logn) time: We can store the intervals
in the nodes of an augmented balanced search tree, using the left endpoint of the interval as the node’s
key and also storing in each node the maximum right endpoint of all intervals in the subtree rooted at
that node.) Thus the algorithm can be implemented in O(n²) time (or, with the cleverer data structure,
in O(nlogn) time).
Answer to Question 3. a. This strategy is not optimal, as the following counterexample shows. Suppose
we have two canoeists, of heights 1 and 4, and two paddles of length 3 and 6.
The greedy strategy under consideration assigns the paddle of length 3 to the canoeist of height 4, and
the paddle of length 6 to the canoeist of height 1. The average penalty of this assignment is
(1/2)[(4 − 3) + (6 − 1)] = 3. The other assignment, where the paddle of length 3 goes to the canoeist of
height 1 and the paddle of length 6 goes to the canoeist of height 4, has average penalty
(1/2)[(3 − 1) + (6 − 4)] = 2. Thus, the greedy strategy under consideration is not always optimal.
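The penalty arithmetic can be checked mechanically. The sketch below assumes the penalty for one canoeist is the absolute difference between height and paddle length, as in the worked numbers above; the function name and assignment encoding are mine:

```python
def average_penalty(heights, paddles, assignment):
    """Mean |height - paddle length| over all canoeists, where
    assignment[i] is the index of the paddle given to canoeist i.
    (Absolute-difference penalty assumed, matching the worked example.)"""
    n = len(heights)
    return sum(abs(heights[i] - paddles[assignment[i]]) for i in range(n)) / n

heights, paddles = [1, 4], [3, 6]
print(average_penalty(heights, paddles, [1, 0]))  # greedy from (a): 3.0
print(average_penalty(heights, paddles, [0, 1]))  # sorted matching: 2.0
```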
b. This strategy always produces an optimal assignment. We will use the exchange method to prove this.
Recall that we represent an assignment of paddles to canoeists by a permutation π of 1,2,...,n, where
π(i) is the paddle assigned to canoeist i. We define an inversion of such a permutation to be a pair
i, j such that h_i < h_j but p_{π(i)} > p_{π(j)}. That is, an inversion corresponds to a situation where a shorter
canoeist is given a longer paddle than a taller canoeist.
The greedy algorithm under consideration produces an assignment with no inversions. To prove that
such an assignment is optimal, it suﬃces to prove the following two facts:
Fact 1. Any two assignments that have no inversions have the same average penalty.
Fact 2. There is an optimal assignment that has no inversions.
To prove Fact 1, note that two assignments that have no inversions can only differ in the assignment
of paddles to canoeists of the same shoulder height, or the assignment to canoeists of paddles of the same
length. It is immediate from the formula for the average penalty that any two such assignments have the
same average penalty.
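The inversion-free assignment of part (b) is obtained by sorting both lists and pairing in order. A brief sketch, again assuming the absolute-difference penalty from part (a) (function names are mine):

```python
def sorted_matching(heights, paddles):
    """Pair the i-th shortest canoeist with the i-th shortest paddle.
    Returns a list of (height, paddle) pairs; by construction this
    assignment has no inversions."""
    return list(zip(sorted(heights), sorted(paddles)))

def has_inversion(pairs):
    """True if some shorter canoeist holds a strictly longer paddle
    than some taller canoeist."""
    return any(h1 < h2 and p1 > p2
               for (h1, p1) in pairs for (h2, p2) in pairs)

matching = sorted_matching([4, 1], [6, 3])
print(matching)                 # [(1, 3), (4, 6)]
print(has_inversion(matching))  # False
```

On the part (a) instance this produces exactly the assignment with average penalty 2, while the swapped pairing (1, 6), (4, 3) contains an inversion.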