
CISC 235 Lecture Notes - Lecture 2: Random-Access Machine, Virtual Memory, Memory Address


Department: Computing
Course Code: CISC 235
Professor: Robin W Dawes
Lecture: 2

What criterion should we use to choose an appropriate data
structure for an application?
How about "I already understand data structure A, and I don't
understand data structure B"? .... umm, no.
Perhaps "There is a built-in module for data structure A, but I
would have to code data structure B myself"? .... fail!
What could be left? What could be right?
Why, computational complexity, of course! We will prefer
structure A to structure B if A has a lower order of complexity for
the operations we need in our particular application.
Before we discuss computational complexity, we need to agree on
the type of computer we are dealing with. In particular, we need to
clarify which operations can be completed in constant time.
The R.A.M. model of computation
We will use the Random Access Machine model, which can be
summarized as follows:
- sequential operations only (i.e. no parallel computation)
- all fundamental operations take constant time, including:
  - +, -, *, / and comparisons for integers and floating
    point numbers
  - comparisons on booleans
  - comparisons and type conversions on characters
  - execution control
  - accessing a memory address
  - assigning a value to a variable
- no hardware accelerators (such as caches or virtual
memory) are employed
It is important to note that this model implies an upper limit on the
number of digits in any number. This is true of virtually all
programming languages.
The RAM model does not assume constant time operations on
strings. A string is considered to be a data structure consisting of a
sequence of characters.
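For example, comparing two strings character by character takes time
proportional to their length in the worst case. A minimal Python sketch
of this idea (my own illustration, not from the notes):

def strings_equal(s, t):
    # Character-by-character comparison: O(min(len(s), len(t))) in the worst case.
    if len(s) != len(t):
        return False
    for a, b in zip(s, t):  # may have to inspect every character
        if a != b:
            return False
    return True

# Two long strings that differ only in the last character force ~1000 comparisons:
print(strings_equal("a" * 1000 + "x", "a" * 1000 + "y"))  # False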
I'm assuming we are all familiar with "big O" complexity
classification:
Let f(n) and g(n) be non-negative valued functions on the set of
non-negative numbers. f(n) is in O(g(n)) if there are constants
c > 0 and n0 such that f(n) <= c*g(n) for all n >= n0.
The significance of this is that as n gets large, the growth-rate of
f(n) is no greater than the growth rate of g(n). In other words, the
growth of g(n) is an upper bound on the growth of f(n).
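As a concrete illustration (my own, not from the notes): f(n) = 3n^2 + 5n
is in O(n^2), since with c = 4 and n0 = 5 we have 3n^2 + 5n <= 4n^2
whenever n >= 5. A quick Python sanity check of the inequality (a spot
check only, not a proof, since big O is an asymptotic claim):

def f(n): return 3*n*n + 5*n
def g(n): return n*n

c, n0 = 4, 5
# Spot-check f(n) <= c*g(n) for a range of n >= n0
print(all(f(n) <= c * g(n) for n in range(n0, 10000)))  # True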
There are several complexity classes that we encounter
frequently. Here is a table listing the most common ones.
Dominant Term                        Big-O class
c (a constant)                       O(1)
c * log n                            O(log n)
c * n                                O(n)
c * n * log n                        O(n * log n)
c * n^2                              O(n^2)
c * n^3                              O(n^3)
c * n^k where k is a constant > 3    O(n^k)
c * k^n where k is a constant > 1    O(k^n)
c * n!                               O(n!)
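To get a feel for how quickly these classes separate, here is a small
Python sketch (my own, not from the notes) that tabulates a few of the
dominant terms for increasing n:

import math

# Tabulate some common growth rates to see how quickly they diverge.
print(f"{'n':>4} {'log n':>8} {'n log n':>10} {'n^2':>8} {'2^n':>22}")
for n in (8, 16, 32, 64):
    print(f"{n:>4} {math.log2(n):>8.1f} {n * math.log2(n):>10.0f} "
          f"{n**2:>8} {2**n:>22}")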
Combinations of Functions
If f1(n) is in O(g1(n)) and f2(n) is in O(g2(n)), then:
- f1(n) + f2(n) is in O(max(g1(n), g2(n)))
- f1(n) * f2(n) is in O(g1(n) * g2(n))
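For example (my own illustration): if f1(n) = 3n is in O(n) and
f2(n) = 2n^2 is in O(n^2), then f1(n) + f2(n) = 2n^2 + 3n is in
O(max(n, n^2)) = O(n^2), and f1(n) * f2(n) = 6n^3 is in O(n * n^2) = O(n^3).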
So far this should all be very familiar. But O classification is just
the smallest first step in the field of computational
complexity. There are many other ways of grouping functions
together based on the resources (time and/or space) they
require. We will consider two more: Omega classification and
Theta classification.
Consider this function:
def A(n):
    # Prints n stars when n is even; does nothing when n is odd.
    if n % 2 == 0:
        for i in range(1, n + 1):
            print('*')
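For instance, A(4) prints four stars while A(5) prints nothing at all,
so the amount of work the function does depends on more than just the
size of n.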