Tuesday, March 12, 2013
Good news: computers are really fast at arithmetic
Bad news: Computer arithmetic is not exact!
Good enough much of the time, but not always.
Must understand shortcomings and how to cope with them.
Every time the computer stores something, it boils down to lots and lots of bits (binary digits, base 2)
Any storage device is sub-divided into billions of microscopic on/off switches: 1/0.
- Almost always sorted into groups of 8, called a byte
- Most numeric types use a fixed number of bits. If that's not enough, we have a problem
- A sequence of bits, interpreted as a binary number (base 2)
- 21 in decimal: 2x10^1 + 1x10^0
- In binary: 1x2^4 (16) + 0x2^3 (8) + 1x2^2 (4) + 0x2^1 (2) + 1x2^0 (1)
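The expansion above can be checked directly in Python (a quick sketch; `bin` shows a number's base-2 digits):

```python
# Sketch: verifying the base-2 expansion of 21.
n = 21
print(bin(n))  # 0b10101

# Sum the place values, matching the expansion above:
value = 1 * 2**4 + 0 * 2**3 + 1 * 2**2 + 0 * 2**1 + 1 * 2**0
print(value)   # 21
```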
- Python ints are usually 32 bits long, one bit used for sign, so range is -2^31 to 2^31-1
○ Range is not symmetric because 0 takes up one of the non-negative bit patterns
- Integer overflow or underflow:
○ -2147483648 to 2147483647
○ Trying to store a value that's too big: overflow
○ Too small: underflow
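Modern Python ints don't overflow on their own, but you can see the 32-bit limits by forcing a value into exactly 4 bytes with the standard `struct` module (a sketch; `'<i'` means a little-endian signed 32-bit int):

```python
import struct

# A signed 32-bit int holds -2**31 .. 2**31 - 1.
struct.pack('<i', 2**31 - 1)   # fits: the largest 32-bit value
try:
    struct.pack('<i', 2**31)   # one too big: overflow
except struct.error as e:
    print('overflow:', e)
```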
- Other languages (& older versions of Python):
○ Python 2.1: 9999 ** 8 raises OverflowError: integer exponentiation
- Python long type: integer values with no restrictions on size.
○ Python 2.2 through 2.7: 9999 ** 8 = a really long number with an L at the end
○ Python 3: removed the distinction between int and long
Do not get in the habit of assuming this for every language
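In Python 3 the lecture's example just works, because ints grow as needed (note that exponentiation is `**`; `^` is XOR):

```python
# Python 3: one unified int type with no size limit.
big = 9999 ** 8
print(big)              # a 32-digit number

# Far beyond what a signed 32-bit int could hold:
print(big > 2**31 - 1)  # True
```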
Floating Point (Decimal)
Recall scientific notation (base 10):
- Example: 120000 = .12 x 10^6
○ .12 is the mantissa, 6 is the exponent
○ Normalized form: no digits before the decimal point, and the mantissa does not start with zero
- 21 in base 10 = 10101 in base 2 = .10101 x 2^101 (the exponent is also written in base 2: 101 = 5)
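The same idea in Python (a sketch; `0b10101` is 21, and dividing by 2**5 shifts the binary point five places):

```python
# 21 = 10101 (base 2) = 0.10101 x 2**5
mantissa = 0b10101 / 2**5   # 0.65625, i.e. binary 0.10101
print(mantissa * 2**5)      # 21.0

# The bad news from the top of the lecture: some decimal fractions
# have no finite base-2 expansion, so arithmetic is not exact.
print(0.1 + 0.2 == 0.3)     # False
print(0.1 + 0.2)            # 0.30000000000000004
```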