CISC 235 (Robin W Dawes, Queen's University) – Lectures 4, 5, 6

Let's look at a simple example of determining the Theta

classification of a problem. The problem we will look at is

evaluating a polynomial

f(x) = c(n)*x^n + c(n-1)*x^(n-1) + ... + c(2)*x^2 + c(1)*x^1 + c(0)

(In the line above I am using c(n), c(n-1) etc to represent c with

subscript n, c with subscript n-1, etc.)

First, we can observe that any algorithm that solves this must at

the very least read or otherwise receive the values of x and the n+1

coefficients. Thus we can easily see that every algorithm for this

problem must be in Omega(n).

Consider the simple algorithm I will call BFI_Poly:

BFI_Poly(x, c[n] ... c[0]):
    value = c[0]
    for i = 1 .. n:
        power = 1
        for j = 1 .. i:
            power *= x
        value += c[i]*power
    return value

BFI_Poly() clearly runs in O(n^2) time (you should verify this if it
is not already familiar).
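As a concrete sketch, here is BFI_Poly in Python (assuming the coefficients are passed as a list `c` where `c[i]` is the coefficient of x^i):

```python
def bfi_poly(x, c):
    # c = [c0, c1, ..., cn]; c[i] is the coefficient of x^i
    value = c[0]
    for i in range(1, len(c)):
        power = 1
        for _ in range(i):        # recompute x^i from scratch: O(i) work
            power *= x
        value += c[i] * power
    return value                  # total work 1 + 2 + ... + n, i.e. O(n^2)
```

For example, `bfi_poly(2, [5, 0, 3])` evaluates 3x^2 + 5 at x = 2, giving 17.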

This is where algorithm (and data-structure) designers start to get

excited. We have a problem with a lower bound of Omega(n),

and an algorithm that is in O(n^2) ... can we either increase the

lower bound, or decrease the upper bound?

It turns out that for this problem we can decrease the upper bound

by finding a better algorithm - namely, Horner's rule:

Horners_Poly(x, c[n] ... c[0]):
    value = c[n]
    for i = n-1 .. 0:
        value = value*x + c[i]
    return value

You should be able to verify that Horners_Poly correctly evaluates

f(x) and that it runs in O(n) time.
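A Python sketch of Horner's rule, under the same coefficient-list convention as before (`c[i]` is the coefficient of x^i):

```python
def horners_poly(x, c):
    # c = [c0, c1, ..., cn]; start from the leading coefficient
    value = c[-1]
    for i in range(len(c) - 2, -1, -1):
        value = value * x + c[i]   # one multiply and one add per coefficient
    return value                   # n multiplies and n adds total: O(n)
```

Note the single loop: each coefficient costs exactly one multiplication and one addition, which is where the O(n) bound comes from.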


(As a side-issue, can you find an easy way to modify BFI_Poly so

that it also runs in O(n) time?)

Now we are in clover - the upper bound on our algorithm exactly

matches the lower bound on the problem. We can now say that the

problem is in Theta(n). This really is very good news - it means

we have found an algorithm for this problem that cannot be beat!

Well ... sort of.

It means our algorithm has the lowest possible complexity. There

may be another algorithm with the same complexity and a lower

value of c, the constant multiplier. This is what we see when we

compare mergesort and Quicksort: they have the same O(n*log n)

complexity, but Quicksort is faster in general because it has a

lower constant multiple. (Yes, of course I know that Quicksort

has worst-case O(n^2) complexity the way it is normally

implemented. It is actually possible to modify Quicksort so that

you can guarantee O(n*log n) performance but hardly anyone

bothers because the pathological situations that give rise to the

O(n^2) performance are very rare.)

At this point we closed our introduction of the Omega and Theta

complexity classifications, and finally turned our attention to a data

structure: the stack
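A stack is a last-in, first-out (LIFO) container with two core operations: push and pop. A minimal sketch in Python, backed by a plain list (an illustration, not a library class):

```python
class Stack:
    """Minimal LIFO stack backed by a Python list."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)      # add to the top

    def pop(self):
        return self._items.pop()      # remove and return the top (IndexError if empty)

    def is_empty(self):
        return len(self._items) == 0
```

Both push and pop run in O(1) time (amortized, for the list-backed version shown here).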

Consider the problem of evaluating an arithmetic expression such

as 3 + 4 * 7 + 8 * (3 - 1)

Most people in North America have been taught that parentheses

have highest precedence, followed by exponentiation, then

multiplication and division, then addition and subtraction, so

the expression above evaluates to 3 + 28 + 16 = 47

But these precedence rules are completely arbitrary. For example,

we could keep the rule about parentheses but do everything else in

simple left-to-right order ... which would give 114 ... or right-to-

left order ... which would give 95 ... or give addition higher

precedence than multiplication ... which would give

210 (assuming I have done the calculations properly)

The notation we have used here to write down the expression is

called infix notation because the operators (*, +, etc) are placed

between the operands (3, 4, 7, etc)

In order to evaluate an infix expression correctly we need to know

exactly what rules of precedence were assumed by the person who

created the expression. Wouldn't it be wonderful if there were a

universal way to represent an expression so that no matter what

rules of precedence are in use, the method of evaluating the

expression is always the same?

There is! It is called postfix notation, and it was invented in 1924

by Jan Łukasiewicz ... because he was Polish, this is sometimes

called Reverse Polish (or Polish postfix) notation. In postfix notation, operators come

after their operands, so "3 * 4" (infix) becomes "3 4 *" (postfix)

The expression we started with : 3 + 4 * 7 + 8 * (3 - 1) can be

written as 3 4 7 * + 8 3 1 - * +

We can evaluate this in simple left-to-right order ... we keep going

until we hit an operator (the *) and then we apply it to the two

numbers just before it: 4 * 7 = 28, and we put the result in place

where the 4 7 * was, so now the expression is 3 28 + 8 3 1 - * +

The next thing we see is the +, which we apply to the numbers just

in front of it (3 and 28) and put the result back in the expression,

giving

31 8 3 1 - * +

The next operator we find is -, which we apply to 3 1. The

expression is now

31 8 2 * +

The next operator is *, applied to the 8 2. This gives

31 16 +

We apply the + to the numbers before it, giving a final result of 47

... which is exactly the result we expect from the original infix

expression.
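The left-to-right procedure described above is exactly a stack algorithm: push each number, and when an operator arrives, pop its two operands, apply the operator, and push the result back. A Python sketch (assuming the expression is already split into a list of numbers and operator strings):

```python
def eval_postfix(tokens):
    """Evaluate a postfix expression given as a list of numbers and operator strings."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()            # the right operand is on top
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(tok)          # a number: just push it
    return stack.pop()                 # the single remaining value is the answer
```

Running it on the example from the notes, `eval_postfix([3, 4, 7, '*', '+', 8, 3, 1, '-', '*', '+'])` reproduces the hand evaluation and returns 47.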
