Bubble-up (Node i): Time O(log n); used by Insert.

while (p(i) exists) and (value(p(i)) < value(i)) {
    exchange value(i) and value(p(i));
    i ← p(i);
}
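As a sketch, the routine above on a 0-indexed Python list used as a max-heap (indices shift by one relative to the 1-indexed pseudocode; `bubble_up` and `parent` are illustrative names):

```python
def parent(i):
    """Parent of node i in a 0-indexed array heap."""
    return (i - 1) // 2

def bubble_up(heap, i):
    """Restore the max-heap property by moving heap[i] up. O(log n)."""
    while i > 0 and heap[parent(i)] < heap[i]:
        heap[i], heap[parent(i)] = heap[parent(i)], heap[i]
        i = parent(i)

# Insert = append at the end, then bubble up.
h = [9, 5, 6, 3]
h.append(8)
bubble_up(h, len(h) - 1)
print(h)  # [9, 8, 6, 3, 5]
```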

Bubble-down (Node i): Time O(log n); used by ExtractMax.

while (i has children) {
    let j be the child of i with the largest value;
    if value(j) > value(i) then
        exchange value(i) and value(j);
        i ← j;
    else
        break;
}
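The same routine as a 0-indexed Python sketch (`bubble_down` is an illustrative name; `n` lets a prefix of the array be treated as the heap, which Heapsort below relies on):

```python
def bubble_down(heap, i, n=None):
    """Restore the max-heap property by moving heap[i] down. O(log n)."""
    if n is None:
        n = len(heap)
    while 2 * i + 1 < n:                      # while i has children
        j = 2 * i + 1                         # left child
        if j + 1 < n and heap[j + 1] > heap[j]:
            j += 1                            # right child is larger
        if heap[j] > heap[i]:
            heap[i], heap[j] = heap[j], heap[i]
            i = j
        else:
            break

h = [3, 9, 6, 5, 1]
bubble_down(h, 0)
print(h)  # [9, 5, 6, 3, 1]
```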

Heapsort

/* Heapify: Θ(n) */

for i := n/2 downto 1
    Bubble-down(i)

/* ExtractMax one at a time */
while (n > 1) {
    Swap(A[n], A[1]);
    n--;
    Bubble-down(1);
}
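Putting the two phases together as a 0-indexed Python sketch (`heapsort` and `bubble_down` are illustrative names):

```python
def bubble_down(a, i, n):
    """Sift a[i] down within a[0..n-1] to restore the max-heap property."""
    while 2 * i + 1 < n:
        j = 2 * i + 1
        if j + 1 < n and a[j + 1] > a[j]:
            j += 1
        if a[j] > a[i]:
            a[i], a[j] = a[j], a[i]
            i = j
        else:
            break

def heapsort(a):
    n = len(a)
    # Heapify: Θ(n) — bubble down every internal node, bottom up.
    for i in range(n // 2 - 1, -1, -1):
        bubble_down(a, i, n)
    # ExtractMax one at a time: Θ(n log n).
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        bubble_down(a, 0, end)

data = [5, 2, 9, 1, 7, 3]
heapsort(data)
print(data)  # [1, 2, 3, 5, 7, 9]
```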

QuickSort(int[] A, int left, int right):
    if left < right
        int i := Partition(A, left, right);
        QuickSort(A, left, i-1);
        QuickSort(A, i+1, right);

Partition(A, left, right):
    pivot := A[right];
    i := left - 1;
    for j := left to right
        if A[j] <= pivot
            i := i + 1;
            swap(A[i], A[j]);
    return i;
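Both routines translate directly to Python (a sketch; note this Lomuto-style Partition runs the loop through `right` inclusive, so the pivot ends up at index `i` and is returned):

```python
def partition(a, left, right):
    """Lomuto partition with pivot a[right]; returns the pivot's final index."""
    pivot = a[right]
    i = left - 1
    for j in range(left, right + 1):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    return i

def quicksort(a, left, right):
    if left < right:
        i = partition(a, left, right)
        quicksort(a, left, i - 1)
        quicksort(a, i + 1, right)

data = [3, 8, 1, 9, 2, 7]
quicksort(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 7, 8, 9]
```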

Running time of Partition: Θ(right − left + 2).

Auxiliary space: Θ(1).

QuickSortRecurseOnSmallerPart(int[] A, int left, int right):
    int myLeft := left, myRight := right;
    while (myLeft < myRight) do
        int i := Partition(A, myLeft, myRight);
        if (i - myLeft) < (myRight - i) then
            QuickSortRecurseOnSmallerPart(A, myLeft, i-1);
            myLeft := i+1;
        else
            QuickSortRecurseOnSmallerPart(A, i+1, myRight);
            myRight := i-1;
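A Python sketch of the same idea: recursing only into the smaller side (and looping on the larger one) bounds the recursion depth by O(log n) even when partitions are unbalanced.

```python
def partition(a, left, right):
    """Lomuto partition with pivot a[right]; returns the pivot's final index."""
    pivot = a[right]
    i = left - 1
    for j in range(left, right + 1):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    return i

def quicksort_smaller_first(a, left, right):
    # Recurse into the smaller part; handle the larger part in the loop.
    while left < right:
        i = partition(a, left, right)
        if i - left < right - i:
            quicksort_smaller_first(a, left, i - 1)
            left = i + 1
        else:
            quicksort_smaller_first(a, i + 1, right)
            right = i - 1

data = [5, 9, 1, 8, 2]
quicksort_smaller_first(data, 0, len(data) - 1)
print(data)  # [1, 2, 5, 8, 9]
```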

RandomPermute(A, n):
    for i ← 1 to n
        exchange(A[i], A[random(i, n)])
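A 0-indexed sketch of this Fisher–Yates shuffle using `random.randint`, whose bounds are inclusive, matching random(i, n):

```python
import random

def random_permute(a):
    """Permute a uniformly at random in place (Fisher–Yates)."""
    n = len(a)
    for i in range(n):
        j = random.randint(i, n - 1)   # random(i, n) in the 1-indexed pseudocode
        a[i], a[j] = a[j], a[i]

a = list(range(10))
random_permute(a)
print(sorted(a))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] — same multiset, new order
```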

Theorem

The height of an AVL tree with n nodes is O(log n).

Proof: Let n_h denote the least number of nodes of an AVL tree of height h.

Recurrence relation: n_i = n_{i−1} + n_{i−2} + 1.

Solving the recurrence: n_i = F_{i+3} − 1 = ⌊φ^{i+3}/√5 + 1/2⌋ − 1, where φ = (1+√5)/2 is the golden ratio.

Therefore, n_i = Θ(φ^i).

Hence n ≥ n_h = Θ(φ^h), which implies h = O(log n).
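The recurrence and its Fibonacci closed form can be checked numerically (a sketch; `min_avl_nodes` and `fib` are illustrative names, with base cases n_0 = 1, n_1 = 2):

```python
def min_avl_nodes(h):
    """Least number of nodes in an AVL tree of height h."""
    n = [1, 2]                             # n_0 = 1, n_1 = 2
    for i in range(2, h + 1):
        n.append(n[i - 1] + n[i - 2] + 1)  # n_i = n_{i-1} + n_{i-2} + 1
    return n[h]

def fib(k):
    a, b = 0, 1                            # F_0 = 0, F_1 = 1
    for _ in range(k):
        a, b = b, a + b
    return a

# Check n_h = F_{h+3} - 1 for small heights.
for h in range(20):
    assert min_avl_nodes(h) == fib(h + 3) - 1
print(min_avl_nodes(5))  # 20
```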

Extendable hashing: search

Given a key k, compute h(k) = a1a2 . . . aL.

◮ Probe block B(a1 . . . ad).
◮ Perform a binary search among the items in the block.

If the directory resides in internal memory, we need only a single memory block transfer.

Insert(x) is done as follows:

◮ Search for x to find the proper block to insert

◮ If the block has space then done

◮ If the block is full (i.e., already has S keys) and k < d, perform a block split:

◮ Split the block into two blocks.

◮ Separate items according to the (k+1)st bit

◮ Update references

◮ Increase the local depth k in those blocks.

◮ Repeat until room or k = d.

◮ If the block is full and k = d, perform a directory split

◮ Double the size of the directory (d := d + 1).

◮ Update references appropriately.

◮ Go back to the previous cases.
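The steps above can be exercised with a toy in-memory sketch (illustrative names `Block` and `ExtendibleHash`; as a simplification it indexes the directory by the low-order d bits of Python's built-in `hash` rather than a d-bit prefix, and capacity plays the role of S):

```python
class Block:
    def __init__(self, depth):
        self.depth = depth                  # local depth k
        self.items = []

class ExtendibleHash:
    def __init__(self, capacity=2):         # capacity = S keys per block
        self.d = 1                          # global depth
        self.capacity = capacity
        self.dir = [Block(1), Block(1)]     # directory of 2^d references

    def _index(self, key):
        return hash(key) & ((1 << self.d) - 1)   # low-order d bits

    def search(self, key):
        return key in self.dir[self._index(key)].items

    def insert(self, key):
        while True:
            b = self.dir[self._index(key)]
            if key in b.items:
                return
            if len(b.items) < self.capacity:     # room: done
                b.items.append(key)
                return
            if b.depth == self.d:                # directory split: double it
                self.dir = self.dir + self.dir
                self.d += 1
            k = b.depth                          # block split on the (k+1)st bit
            b0, b1 = Block(k + 1), Block(k + 1)
            for it in b.items:
                (b1 if (hash(it) >> k) & 1 else b0).items.append(it)
            for i in range(len(self.dir)):       # update references
                if self.dir[i] is b:
                    self.dir[i] = b1 if (i >> k) & 1 else b0

t = ExtendibleHash(capacity=2)
for key in range(16):
    t.insert(key)
print(all(t.search(key) for key in range(16)), t.search(99))  # True False
```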

◮ The hash table is much smaller than the number of stored keys and

therefore may fit in the main memory.

◮ Most operations should access only O (1) leaf pages.

◮ To make more space, we only add a block. Rarely do we have to copy the hash table, and we never have to move all items in the dictionary (in contrast to open addressing).

◮ Main disadvantage: we use more space than actually needed (but one can show that if keys are uniformly distributed then pages are expected to be 69% full).

◮ We can implement dictionaries such that all operations take O(1)

expected run time.

◮ Hashing with chaining:

◮ keep load factor constant.

◮ no explicit need to rehash

◮ Hashing with open addressing

◮ keep load factor much smaller than 1.

◮ rehashing is required often.

◮ fast computation (linear hashing) vs. better average time (double

hashing)

◮ deletion is problematic.

◮ no space is wasted.

◮ We depend strongly on a good hash function; otherwise hashing performs terribly.

◮ Hashing can be extended naturally to situations where data does not

reside in main memory.

◮ In practice, usually hashing is preferred over balanced trees.

Priority queue sort

for i ← 1 . . . n do
    PQ.insert(A[i]);
for i ← n . . . 1 do
    A[i] := PQ.extractMax();
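A Python sketch using `heapq` as the priority queue; `heapq` is a min-heap, so keys are negated to emulate extractMax:

```python
import heapq

def pq_sort(a):
    pq = []
    for x in a:                    # n inserts
        heapq.heappush(pq, -x)     # negate: heapq is a min-heap
    for i in range(len(a) - 1, -1, -1):
        a[i] = -heapq.heappop(pq)  # extractMax, filling A from the back
    return a

print(pq_sort([4, 1, 3, 2]))  # [1, 2, 3, 4]
```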

QuickSort Running time details

Rotations

Predecessor: the right-most node in the left subtree.

Successor: the left-most node in the right subtree.

Flooring and ceiling

3/2: floor = 1, ceiling = 2
5/2: floor = 2, ceiling = 3
7/2: floor = 3, ceiling = 4
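In Python, floor division and a negation trick reproduce these values without floating point:

```python
import math

# Floor and ceiling of m/2 for the rows above.
for m in (3, 5, 7):
    print(m, m // 2, -(-m // 2))   # floor via //, ceiling via -(-m // b)

# math.floor / math.ceil on the float quotients agree:
assert math.floor(7 / 2) == 3 and math.ceil(7 / 2) == 4
```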

Summation equations

◮ Unsuccessful Search(x):
◮ Computing the hash value h(x) takes Θ(1).
◮ The list T[h(x)] is searched to its end.
◮ By the uniform hashing assumption, x is equally likely to be hashed to any of the M buckets; therefore the expected time to search unsuccessfully for x is the expected length of a list, which is α.
◮ Hence an unsuccessful search has expected time Θ(1 + α).

◮ Successful Search(x):

◮ Suppose k1, . . . , kn are the n keys in the hash table and they were inserted in that order.
◮ Computing the hash value h(x) takes Θ(1).
◮ The list T[h(x)] is searched to the position where x lies.
◮ Since elements are inserted at the front of the list, in a search for x = ki we look at those of ki+1, . . . , kn which are hashed to the same bucket as x. The expected number of these is (n − i)/M.
◮ Therefore, the expected length of a successful search is Θ(1 + α).
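Averaging the (n − i)/M terms over all n keys gives the standard Θ(1 + α) bound; a sketch of the summation:

```latex
\frac{1}{n}\sum_{i=1}^{n}\Bigl(1+\frac{n-i}{M}\Bigr)
  = 1+\frac{1}{nM}\sum_{i=1}^{n}(n-i)
  = 1+\frac{n-1}{2M}
  = \Theta(1+\alpha), \qquad \alpha = \frac{n}{M}.
```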
