
# CS341 Lecture Notes - Dynamic Programming

Department: Computer Science
Course Code: CS341
Professor: Forbes Burkowski

## Dynamic Programming
Dynamic programming was invented by Richard Bellman in the early 1950s while he worked for the RAND Corporation on "multistage decision processes".
Origins of the term “Dynamic Programming” (1950):
> "What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning is not a good word for various reasons. I decided therefore to use the word 'programming.' I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying. I thought, let's kill two birds with one stone. Let's take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it's impossible to use the word dynamic in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to."
## Dynamic Programming Overview
Dynamic programming solves optimization problems. The overall strategy underlying the algorithm is that the given problem can be solved by first solving smaller subproblems that have the same structure. Two features characterize these subproblems:

1. Unlike divide-and-conquer, which relies on subproblems being independent, dynamic programming handles problems whose subproblems overlap.
2. We can apply a recursive strategy: we find an optimal solution of each subproblem by first finding optimal solutions for its contained subproblems.
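As a small illustration of overlapping subproblems (our own example, not from the notes), consider the Fibonacci recursion: the naive function solves the same subproblems over and over, while caching each result makes every subproblem get solved only once.

```python
from functools import lru_cache

calls = 0  # counts how many recursive calls the naive version makes

def fib_naive(n):
    """Plain recursion: the subproblems overlap heavily."""
    global calls
    calls += 1
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Same recursion, but each distinct subproblem is solved once."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(20), calls)  # 6765, reached after 21891 calls
print(fib_memo(20))          # 6765, with only 21 distinct subproblems
```

The cached version is exactly the "store the optimal values of smaller problems" idea that dynamic programming builds on.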


## Coins Again
We are given an unlimited number of coins of various denominations: d1, d2, …, dm.
We have the same objective as before: pay out a given sum S with the smallest number of coins possible.
The denomination "system" presents us with three possibilities:

- The di values are such that a greedy strategy can pay out the sum S in an optimal fashion.
- The di values are such that a greedy strategy cannot pay out the sum S in an optimal fashion, although such an optimal payout does exist.
- The di values are such that no strategy can pay out the amount S.
  (Trivial example: m = 1, d1 = a 2-cent coin, and S is an odd amount.)
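To make the second possibility concrete, here is a small Python check; the denomination system {1, 3, 4} is our own example, not one from the notes. Greedy pays 6 as 4 + 1 + 1, but 3 + 3 is optimal.

```python
def greedy_coins(S, denoms):
    """Repeatedly pay out the largest denomination that still fits."""
    count = 0
    for d in sorted(denoms, reverse=True):
        count += S // d
        S %= d
    return count if S == 0 else None  # None: greedy could not finish

def optimal_coins(S, denoms):
    """Brute-force minimum coin count (fine for tiny S)."""
    if S == 0:
        return 0
    best = None
    for d in denoms:
        if d <= S:
            sub = optimal_coins(S - d, denoms)
            if sub is not None and (best is None or sub + 1 < best):
                best = sub + 1
    return best  # None: no payout of S exists

print(greedy_coins(6, [1, 3, 4]))   # 3 coins: 4 + 1 + 1
print(optimal_coins(6, [1, 3, 4]))  # 2 coins: 3 + 3
print(optimal_coins(5, [2]))        # None: odd sum, only 2-cent coins
```

The last call is the trivial third possibility from the notes: an odd sum with only a 2-cent coin has no payout at all.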
## A Recursive Strategy for the Coin Problem
Find an algorithm that produces an optimal solution for any denomination system, or an indication that no solution exists.

Notation:
- Let coins[i] be the smallest number of coins required to pay out the sum i.
- We assume coins[i] will be "undetermined" if no solution exists.
- Note that coins[0] = 0 (no coins needed).

Idea behind the solution: sequentially try all possibilities for the first coin paid out. If the coin paid out is di, we then recursively solve the change problem for S - di. So, we can solve the problem recursively, but with the added constraint that we always ask for the minimum number of coins.


## A Recursive Strategy for the Coin Problem
These considerations lead to the following recursion:

    coins[i] = 1 + min{ coins[i - d[j]] : d[j] <= i }

This recursion will be at the heart of our dynamic programming algorithm.
There are two approaches that can be used in working with this recursion:

1. We can insist that the recursion defines a recursive function that simply calls itself to get the optimal values associated with the smaller problems. (This is not dynamic programming.)

OR

2. We can consider coins to be an array that stores the optimal values associated with the smaller problems.

Just to eliminate it from any further discussion, we start by showing that the first approach is inefficient.
## A Recursive Function for the Coin Problem
Recalling that recursive calls were used in divide and conquer, we adopt the following implementation:

D&C for CP Pseudo-code 1:

```
function coins(i):
    if i = 0 then return 0              // base case: no coins needed
    // recursion: try every denomination as the first coin paid out
    min := infinity                     // use a very large number here
    for j := 1 to m do
        if d[j] <= i then
            smaller_sol := coins(i - d[j])
            if smaller_sol < min then min := smaller_sol
    return 1 + min                      // stays "infinite" (undetermined) if no coin fits
```
The main program calls coins(S).
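The second approach, where coins is an array filled in from small sums to large, is the dynamic-programming version. The following bottom-up Python sketch is our own rendering of the recursion, not code taken from the notes:

```python
import math

def coins_table(S, denoms):
    """Bottom-up DP: coins[i] = smallest number of coins paying out i.

    coins[i] stays infinity ("undetermined") when no payout of i exists.
    """
    coins = [math.inf] * (S + 1)
    coins[0] = 0  # base case: no coins needed
    for i in range(1, S + 1):
        for d in denoms:
            # d is a candidate for the first coin paid out toward i
            if d <= i and coins[i - d] + 1 < coins[i]:
                coins[i] = coins[i - d] + 1
    return coins[S]  # math.inf signals "no solution"

print(coins_table(6, [1, 3, 4]))  # 2  (3 + 3)
print(coins_table(5, [2]))        # inf: odd sum, only 2-cent coins
```

Each coins[i] is computed once, so the running time is O(S * m) rather than the exponential blowup of the plain recursive function.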