CS 341 Class Notes
University of Waterloo, Computer Science
Instructor: Forbes Burkowski
Fall term lecture
Dynamic Programming
Dynamic programming was invented by Richard Bellman in the early 1950s while he worked for the RAND Corporation on multistage decision processes.
Origins of the term "Dynamic Programming" (Bellman, 1950s):
"What title, what name, could I choose? In the first place I was interested in planning,
in decision making, in thinking. But planning is not a good word for various reasons. I
decided therefore to use the word, programming. I wanted to get across the idea that
this was dynamic, this was multistage, this was time-varying... I thought, let's kill two
birds with one stone. Let's take a word that has an absolutely precise meaning, namely
dynamic, in the classical physical sense. It also has a very interesting property as an
adjective, and that is it's impossible to use the word, dynamic, in a pejorative sense.
Try thinking of some combination that will possibly give it a pejorative meaning. It's
impossible. Thus, I thought dynamic programming was a good name. It was something
not even a Congressman could object to."
Dynamic Programming Overview
Dynamic programming solves optimization problems. The overall strategy underlying the algorithm is that the given problem can be solved by first solving smaller subproblems that have the same structure. Two features characterize these subproblems:
1. Unlike divide-and-conquer algorithms, which rely on subproblems being independent, dynamic programming handles problems that have overlapping subproblems.
2. We can apply a recursive strategy: we find an optimal solution of each subproblem by first finding optimal solutions for its contained subproblems.
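As a quick illustration of feature 1 (overlapping subproblems), here is a hypothetical Python sketch using the classic Fibonacci recurrence — the example and the call counting are mine, not from the notes. The same subproblem is re-solved many times by plain recursion, while memoization solves each of the 21 distinct subproblems exactly once:

```python
from functools import lru_cache

call_count = 0

def fib_plain(n):
    """Naive recursion: overlapping subproblems are recomputed."""
    global call_count
    call_count += 1
    if n < 2:
        return n
    return fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Each subproblem is solved once; its stored value is reused."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

fib_plain(20)
print(call_count)     # tens of thousands of calls for only 21 distinct subproblems
print(fib_memo(20))   # same answer, 21 cached subproblems
```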
Coins Again
We are given an unlimited number of coins of various denominations: d_1, d_2, ..., d_m.
We have the same objective as before: pay out a given sum S with the smallest number of coins possible.
The denomination system presents us with three possibilities:
1. The d_i values are such that a greedy strategy can pay out the sum S in an optimal fashion.
2. The d_i values are such that a greedy strategy cannot pay out the sum S in an optimal fashion, although such an optimal payout does exist.
3. The d_i values are such that no strategy can pay out the amount S.
(Trivial example: m = 1, d_1 = a 2 cent coin, and S is an odd amount.)
A Recursive Strategy for the Coin Problem
Find an algorithm that produces an optimal solution for any denomination system, or an indication that no solution exists.
Notation:
Let coins[i] be the smallest number of coins required to pay out the sum i.
We assume coins[i] is undetermined if no solution exists.
Note that coins[0] = 0 (no coins are needed).
Idea behind the solution:
Sequentially try all possibilities for the first coin paid out. If the first coin paid out is d_j, we then recursively solve the change problem for S - d_j.
So, we can solve the problem recursively, but with the added constraint that we always ask for the minimum number of coins.
These considerations lead to the following recursion:
coins[i] = 1 + min_{j : d_j <= i} { coins[i - d_j] }
This recursion will be at the heart of our dynamic
programming algorithm.
There are two approaches that can be used in working with this recursion:
1. We can insist that the recursion defines a recursive function that simply calls itself to get the optimal values associated with the smaller problems. (This is not dynamic programming.)
OR
2. We can consider coins to be an array that stores the optimal values associated with the smaller problems.
Just to eliminate it from any further discussion, we start by showing that the first approach is extremely bad!
A Recursive Function for the Coin Problem
We start with: coins[i] = 1 + min_{j : d_j <= i} { coins[i - d_j] }.
Recalling that recursive calls were used in divide and conquer, we go for the following implementation:
D&C for CP Pseudo-code 1:
function coins(i):
    if (i = 0) then return 0;   // base case
    // recursion:
    min := infinity;            // use a very large number here
    for j := 1 to m do
        if (d[j] <= i) then
            smaller_sol := coins(i - d[j]);
            if smaller_sol < min then min := smaller_sol;
    return 1 + min;             // stays "infinite" if no denomination fits
The main program calls coins(S).
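The pseudo-code above can be written as a short Python sketch — the function name, the denomination list, and the use of math.inf for "undetermined" are my choices, not from the notes:

```python
import math

def coins_recursive(i, d):
    """Direct translation of pseudo-code 1.

    i: the sum to pay out; d: list of denominations.
    Returns the minimum number of coins, or math.inf if no payout exists.
    """
    if i == 0:
        return 0  # base case: nothing left to pay
    best = math.inf
    for dj in d:
        if dj <= i:
            smaller = coins_recursive(i - dj, d)
            if smaller < best:
                best = smaller
    return 1 + best  # stays infinite if no denomination fits

# With denominations {1, 4, 5}, greedy pays 8 as 5+1+1+1 (4 coins),
# but the recursion finds 4+4 (2 coins).
print(coins_recursive(8, [1, 4, 5]))
```

Note this also illustrates possibility 2 from the previous section: for S = 8 and denominations {1, 4, 5}, greedy is suboptimal but an optimal payout exists.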
Analysis:
The execution time is exponential in S.
Consider the recursion tree for the calling of
coins(S) with denominations 1 and 2.
T(S) = T(S-1) + T(S-2)
This is the Fibonacci recurrence, so the execution time grows like ((1 + sqrt(5))/2)^S.
If n is the bit length of the input, then S can be as large as 2^n, so the execution time is hyper-exponential with respect to the input bit length!
Why is this? Observe that many of the recursive
calls are redundant.
For example, coins(2) is called several times.
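We can make this redundancy concrete by instrumenting the naive recursion to count how often each subproblem is visited — a hypothetical sketch (the counting wrapper is mine, not from the notes), using denominations {1, 2} as in the analysis above:

```python
from collections import Counter

def count_calls(S, d=(1, 2)):
    """Run the naive recursion for sum S and count visits per subproblem."""
    calls = Counter()

    def coins(i):
        calls[i] += 1
        if i == 0:
            return 0
        # same recursion as pseudo-code 1, written compactly
        return 1 + min(coins(i - dj) for dj in d if dj <= i)

    coins(S)
    return calls

calls = count_calls(10)
print(calls[2])   # coins(2) is re-solved dozens of times for S = 10
```

The per-subproblem counts themselves follow the Fibonacci pattern, which is exactly why the total work is exponential in S.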
Dynamic Programming Solution
Recursive calls worked for divide and conquer because the same subproblems did not reappear in several areas of the recursion tree.
Here is a better way to use the recursion formula. Consider pseudo-code 2:
NOTE: coins[ ] is now an array.
coins[0] := 0;
for i := 1 to S do          // do minimal payout for all values <= S
    min := infinity;
    for j := 1 to m do
        if d[j] <= i and coins[i - d[j]] < min then
            min := coins[i - d[j]];
    coins[i] := 1 + min;    // stays "infinite" if i is unreachable
return coins[S];
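Pseudo-code 2 translates into a bottom-up Python sketch as follows — again the function name and math.inf convention are mine, not from the notes. Each subproblem is now solved exactly once, so the running time is O(S * m):

```python
import math

def coins_dp(S, d):
    """Bottom-up version of pseudo-code 2.

    coins[i] holds the minimum number of coins paying out sum i,
    or math.inf when no payout for i exists.
    """
    coins = [0] + [math.inf] * S  # coins[0] = 0: no coins needed
    for i in range(1, S + 1):
        best = math.inf
        for dj in d:
            if dj <= i and coins[i - dj] < best:
                best = coins[i - dj]
        coins[i] = 1 + best  # stays infinite if i is unreachable
    return coins[S]

print(coins_dp(8, [1, 4, 5]))  # optimal payout 4 + 4
print(coins_dp(7, [2]))        # inf: odd sum, only 2-cent coins
```

Unlike the recursive function, this loop touches each value coins[0..S] once, turning the exponential recursion tree into a table of S + 1 entries.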