# Chapter 9. Random Numbers
This chapter describes algorithms for the generation of pseudorandom numbers with
both uniform and normal distributions.
## 9.1 Pseudorandom Numbers
Here is an interesting number:
0.814723686393179
This is the ﬁrst number produced by the Matlab random number generator with
its default settings. Start up a fresh Matlab, set format long, type rand, and it’s
the number you get.
If all Matlab users, all around the world, all on diﬀerent computers, keep
getting this same number, is it really “random”? No, it isn’t. Computers are (in
principle) deterministic machines and should not exhibit random behavior. If your
computer doesn’t access some external device, like a gamma ray counter or a clock,
then it must really be computing pseudorandom numbers. Our favorite deﬁnition
was given in 1951 by Berkeley professor D. H. Lehmer, a pioneer in computing and,
especially, computational number theory:
A random sequence is a vague notion ... in which each term is unpre-
dictable to the uninitiated and whose digits pass a certain number of
tests traditional with statisticians ...
## 9.2 Uniform Distribution
Lehmer also invented the multiplicative congruential algorithm, which is the basis for
many of the random number generators in use today. Lehmer’s generators involve
three integer parameters, a, c, and m, and an initial value, x_0, called the seed. A
sequence of integers is defined by

x_{k+1} = (a·x_k + c) mod m.
The operation “mod m” means take the remainder after division by m. For example,
with a = 13, c = 0, m = 31, and x_0 = 1, the sequence begins with
1, 13, 14, 27, 10, 6, 16, 22, 7, 29, 5, 3,....
What's the next value? Well, it looks pretty unpredictable, but you've been
initiated. So you can compute (13 × 3) mod 31, which is 8. The first 30 terms in
the sequence are a permutation of the integers from 1 to 30 and then the sequence
repeats itself. It has a period equal to m − 1.
If a pseudorandom integer sequence with values between 0 and m is scaled
by dividing by m, the result is ﬂoating-point numbers uniformly distributed in the
interval [0,1]. Our simple example begins with
0.0323, 0.4194, 0.4516, 0.8710, 0.3226, 0.1935, 0.5161,....
There is only a ﬁnite number of values, 30 in this case. The smallest value is 1/31;
the largest is 30/31. Each one is equally probable in a long run of the sequence.
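As an illustration, the toy generator above can be sketched in a few lines of Python (the book's own codes are MATLAB M-files; the function name `lehmer_sequence` here is ours):

```python
# A sketch (in Python, not this book's MATLAB) of Lehmer's multiplicative
# congruential algorithm with the toy parameters above:
# a = 13, c = 0, m = 31, seed x_0 = 1.
def lehmer_sequence(a, c, m, x0, count):
    """Return x_0, x_1, ..., x_{count-1} from x_{k+1} = (a*x_k + c) mod m."""
    xs = [x0]
    for _ in range(count - 1):
        xs.append((a * xs[-1] + c) % m)
    return xs

xs = lehmer_sequence(13, 0, 31, 1, 31)
assert xs[:12] == [1, 13, 14, 27, 10, 6, 16, 22, 7, 29, 5, 3]
assert xs[12] == (13 * 3) % 31 == 8           # the "next value"
assert sorted(xs[:30]) == list(range(1, 31))  # a permutation of 1..30
assert xs[30] == xs[0]                        # period m - 1 = 30

# Dividing by m scales the integers to floats in (0, 1).
u = [x / 31 for x in xs]
```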
In the 1960s, the Scientific Subroutine Package (SSP) on IBM mainframe
computers included a random number generator named RND or RANDU. It was a
multiplicative congruential with parameters a = 65539, c = 0, and m = 2^31. With
a 32-bit integer word size, arithmetic mod 2^31 can be done quickly. Furthermore,
because a = 2^16 + 3, the multiplication by a can be done with a shift and an addition.
Such considerations were important on the computers of that era, but they gave
the resulting sequence a very undesirable property. The following relations are all
taken mod 2^31:

x_{k+2} = (2^16 + 3)·x_{k+1} = (2^16 + 3)^2·x_k
        = (2^32 + 6·2^16 + 9)·x_k
        = [6·(2^16 + 3) − 9]·x_k.

Hence

x_{k+2} = 6·x_{k+1} − 9·x_k for all k.
As a result, there is an extremely high correlation among three successive random
integers of the sequence generated by RANDU.
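The correlation is easy to confirm numerically. Here is a sketch in Python (not the SSP Fortran, and not the book's randssp M-file) that generates RANDU's sequence and checks the identity on every triple of successive terms:

```python
# RANDU: a = 65539 = 2^16 + 3, c = 0, m = 2^31.  Seeds were taken to be odd.
M = 2 ** 31
A = 65539

def randu_sequence(seed, count):
    """Return the first `count` terms of x_{k+1} = (A * x_k) mod M."""
    xs = [seed]
    for _ in range(count - 1):
        xs.append((A * xs[-1]) % M)
    return xs

xs = randu_sequence(1, 1000)
# Every triple of successive terms satisfies x_{k+2} = 6*x_{k+1} - 9*x_k (mod M),
# so consecutive triples, scaled to the unit cube, fall on a few planes.
assert all(xs[k + 2] == (6 * xs[k + 1] - 9 * xs[k]) % M for k in range(998))
```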
We have implemented this defective generator in an M-ﬁle randssp. A demon-
stration program randgui tries to compute π by generating random points in a cube
and counting the fraction that actually lie within the inscribed sphere. With these
M-ﬁles on your path, the statement
randgui(@randssp)
will show the consequences of the correlation of three successive terms. The resulting
pattern is far from random, but it can still be used to compute π from the ratio of
the volumes of the cube and sphere.
For many years, the Matlab uniform random number function, rand, was
also a multiplicative congruential generator. The parameters were
a = 7^5 = 16807,
c = 0,
m = 2^31 − 1 = 2147483647.
These values are recommended in a 1988 paper by Park and Miller [11].
This old Matlab multiplicative congruential generator is available in the M-
ﬁle randmcg. The statement
randgui(@randmcg)
shows that the points do not suﬀer the correlation of the SSP generator. They
generate a much better “random” cloud within the cube.
Like our toy generator, randmcg and the old version of the Matlab function
rand generate all real numbers of the form k/m for k = 1,...,m − 1. The smallest
and largest are 0.00000000046566 and 0.99999999953434. The sequence repeats
itself after m − 1 values, which is a little over 2 billion numbers. A few years ago,
that was regarded as plenty. But today, an 800 MHz Pentium laptop can exhaust
the period in less than half an hour. Of course, to do anything useful with 2 billion
numbers takes more time, but we would still like to have a longer period.
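A sketch of this generator in Python (the names here are ours; the book provides it as the M-file randmcg). The final assertion is the correctness check Park and Miller proposed in their paper: starting from seed 1, the 10,000th iterate should be 1043618065.

```python
# The Lehmer generator with the Park-Miller "minimal standard" parameters
# used by the old MATLAB rand (a Python illustration, not the book's code).
A = 7 ** 5          # 16807
M = 2 ** 31 - 1     # 2147483647, a prime

def minstd(x):
    """One step of x_{k+1} = (A * x_k) mod M, for x in 1..M-1."""
    return (A * x) % M

# Scaled values k/M lie strictly inside (0, 1):
print(1 / M)        # smallest, about 0.00000000046566
print((M - 1) / M)  # largest, about 0.99999999953434

# Park and Miller's published check: seed 1, 10,000 steps.
x = 1
for _ in range(10000):
    x = minstd(x)
assert x == 1043618065
```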
In 1995, version 5 of Matlab introduced a completely diﬀerent kind of random
number generator. The algorithm is based on work of George Marsaglia, a professor
at Florida State University and author of the classic analysis of random number
generators, “Random numbers fall mainly in the planes” [6].
Marsaglia’s generator [9] does not use Lehmer’s congruential algorithm. In
fact, there are no multiplications or divisions at all. It is speciﬁcally designed to
produce ﬂoating-point values. The results are not just scaled integers. In place of a
single seed, the new generator has 35 words of internal memory or state. Thirty-two
of these words form a cache of ﬂoating-point numbers, z, between 0 and 1. The
remaining three words contain an integer index i, which varies between 0 and 31, a
single random integer j, and a “borrow” ﬂag b. This entire state vector is built up
a bit at a time during an initialization phase. Diﬀerent values of j yield diﬀerent
initial states.
The generation of the ith ﬂoating-point number in the sequence involves a
“subtract-with-borrow” step, where one number in the cache is replaced with the
diﬀerence of two others:
z_i = z_{i+20} − z_{i+5} − b.

The three indices, i, i+20, and i+5, are all interpreted mod 32 (by using just their
last five bits). The quantity b is left over from the previous step; it is either zero or
a small positive value. If the computed z_i is positive, b is set to zero for the next
step. But if the computed z_i is negative, it is made positive by adding 1.0 before it
is saved and b is set to 2^−53 for the next step. The quantity 2^−53, which is half of
the Matlab constant eps, is called one ulp because it is one unit in the last place
for floating-point numbers slightly less than 1.
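A simplified sketch of this subtract-with-borrow step, written in Python for illustration (the real generator builds its 32-word cache bit by bit during initialization; here the cache is seeded from Python's own generator, and the extra integer j and the XOR step described below are omitted):

```python
import random

ULP = 2.0 ** -53          # one unit in the last place just below 1.0

def make_swb(seed=0):
    """Return a step function for a simplified subtract-with-borrow generator."""
    rng = random.Random(seed)
    z = [rng.random() for _ in range(32)]   # the 32-word cache
    i, b = 0, 0.0                           # cache index and borrow flag
    def step():
        nonlocal i, b
        t = z[(i + 20) % 32] - z[(i + 5) % 32] - b
        if t < 0.0:       # negative: make positive, set the borrow to one ulp
            t += 1.0
            b = ULP
        else:
            b = 0.0
        z[i] = t          # replace one number in the cache
        i = (i + 1) % 32
        return t
    return step

step = make_swb()
sample = [step() for _ in range(10000)]
# Every result is an integer multiple of one ulp, the defect noted below.
assert all(x / ULP == round(x / ULP) for x in sample)
assert 0.45 < sum(sample) / len(sample) < 0.55
```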
By itself, this generator would be almost completely satisfactory. Marsaglia
has shown that it has a huge period; almost 2^1430 values would be generated before
it repeated itself. But it has one slight defect. All the numbers are the results of
ﬂoating-point additions and subtractions of numbers in the initial cache, so they
are all integer multiples of 2^−53. Consequently, many of the floating-point numbers
in the interval [0,1] are not represented.
The ﬂoating-point numbers between 1/2 and 1 are equally spaced with a
spacing of one ulp, and our subtract-with-borrow generator will eventually generate
all of them. But numbers less than 1/2 are more closely spaced and the generator
would miss most of them. It would generate only half of the possible numbers in
the interval [1/4,1/2], only a quarter of the numbers in [1/8,1/4], and so on. This
is where the quantity j in the state vector comes in. It is the result of a separate,
independent, random number generator based on bitwise logical operations. The
floating-point fraction of each z_i is XORed with j to produce the result returned by
the generator. This breaks up the even spacing of the numbers less than 1/2. It is
now theoretically possible to generate all the floating-point numbers between 2^−53
and 1 − 2^−53. We're not sure if they are all actually generated, but we don't know
of any that can’t be.
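Python's math.ulp (available since Python 3.9) confirms the spacing argument behind this fix:

```python
import math

# Spacing of double-precision numbers halves with each binade below 1.
assert math.ulp(0.75) == 2.0 ** -53   # numbers in [1/2, 1) are one ulp apart
assert math.ulp(0.3) == 2.0 ** -54    # twice as dense in [1/4, 1/2)
assert math.ulp(0.2) == 2.0 ** -55    # four times as dense in [1/8, 1/4)

# So a generator limited to multiples of 2^-53 can reach every double in
# [1/2, 1) but only every other one in [1/4, 1/2), and so on.
```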
Figure 9.1 shows what the new generator is trying to accomplish. For this
graph, one ulp is equal to 2^−4 instead of 2^−53.
Figure 9.1. Uniform distribution of floating-point numbers.
The graph depicts the relative frequency of each of the ﬂoating-point numbers.
A total of 32 ﬂoating-point numbers are shown. Eight of them are between 1/2 and
1, and they are all equally likely to occur. There are also eight numbers between
1/4 and 1/2, but, because this interval is only half as wide, each of them should
occur only half as often. As we move to the left, each subinterval is half as wide as
the previous one, but it still contains the same number of ﬂoating-point numbers,
so their relative frequencies must be cut in half. Imagine this picture with 2^53
numbers in each of 2^32 smaller intervals and you will see what the new random
number generator is doing.
With the additional bit ﬁddling, the period of the new generator becomes
something like 2^1492. Maybe we should call it the Christopher Columbus generator.
In any case, it will run for a very long time before it repeats itself.
## 9.3 Normal Distribution
Almost all algorithms for generating normally distributed random numbers are
based on transformations of uniform distributions. The simplest way to gener-
ate an m-by-n matrix with approximately normally distributed elements is to use
the expression
sum(rand(m,n,12),3) - 6
This works because R = rand(m,n,p) generates a three-dimensional uniformly dis-
tributed array and sum(R,3) sums along the third dimension. The result is a
two-dimensional array with elements drawn from a distribution with mean p/2 and
variance p/12 that approaches a normal distribution as p increases. If we take
p = 12, we get a pretty good approximation to the normal distribution and we
get the variance to be equal to one without any additional scaling. There are two
diﬃculties with this approach. It requires twelve uniforms to generate one normal,
so it is slow. And the ﬁnite p approximation causes it to have poor behavior in the
tails of the distribution.
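A quick Python experiment (an illustration, not the book's code) confirms both the calibration and the bounded tails:

```python
import random

# sum(rand(m,n,12),3) - 6 in miniature: add twelve uniforms, subtract 6.
# Each uniform has mean 1/2 and variance 1/12, so the sum of p = 12 of them
# has mean 6 and variance 1; subtracting 6 centers it at zero.
rng = random.Random(42)

def approx_normal():
    return sum(rng.random() for _ in range(12)) - 6.0

samples = [approx_normal() for _ in range(100000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
assert abs(mean) < 0.02 and abs(var - 1.0) < 0.03

# The tail defect: a true normal exceeds 6 with tiny probability, but this
# approximation can never produce a value outside [-6, 6].
assert all(-6.0 <= s <= 6.0 for s in samples)
```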
Older versions of Matlab—before Matlab 5—used the polar algorithm.
This generates two values at a time. It involves ﬁnding a random point in the
unit circle by generating uniformly distributed points in the [−1,1] × [−1,1] square
and rejecting any outside the circle. Points in the square are represented by vectors
with two components. The rejection portion of the code is
r = Inf;
while r > 1
    u = 2*rand(2,1)-1;
    r = u'*u;
end
For each point accepted, the polar transformation
v = sqrt(-2*log(r)/r)*u
produces a vector with two independent normally distributed elements. This algo-
rithm does not involve any approximations, so it has the proper behavior in the tails
of the distribution. But it is moderately expensive. Over 21% of the uniform num-
bers are rejected if they fall outside of the circle, and the square root and logarithm
calculations contribute signiﬁcantly to the cost.
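For illustration, here is the complete polar algorithm in Python (a sketch, not the book's MATLAB; the final check uses the fact that the acceptance probability is the area ratio π/4 ≈ 0.785):

```python
import math
import random

rng = random.Random(1)
trials = 0            # counts candidate points, to measure the rejection rate

def polar_pair():
    """Return two independent standard normal samples via the polar algorithm."""
    global trials
    while True:
        trials += 1
        u1 = 2.0 * rng.random() - 1.0     # candidate point in [-1,1] x [-1,1]
        u2 = 2.0 * rng.random() - 1.0
        r = u1 * u1 + u2 * u2
        if 0.0 < r <= 1.0:                # keep only points inside the circle
            f = math.sqrt(-2.0 * math.log(r) / r)
            return f * u1, f * u2

samples = []
for _ in range(50000):
    samples.extend(polar_pair())

mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples)
assert abs(mean) < 0.02 and abs(var - 1.0) < 0.03
# Acceptance probability is pi/4, so a bit over 21% of points are rejected.
assert abs(50000 / trials - math.pi / 4) < 0.01
```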
Beginning with Matlab 5, the normal random number generator randn uses a
sophisticated table lookup algorithm, also developed by George Marsaglia. Marsaglia
calls his approach the ziggurat algorithm. Ziggurats are ancient Mesopotamian ter-
raced temple mounds that, mathematically, are two-dimensional step functions. A
one-dimensional ziggurat underlies Marsaglia’s algorithm. 6 Chapter 9. Random Numbers
Marsaglia has reﬁned his ziggurat algorithm over the years. An early version
is described in Knuth’s classic The Art of Computer Programming [5]. The version
used in Matlab is described by Marsaglia and W. W. Tsang in [7]. A Fortran
version is described in [2, sect. 10.7]. A more recent version is available in the
online electronic Journal of Statistical Software [8]. We describe this recent version
here because it is the most elegant. The version actually used in Matlab is more
complicated, but is based on the same ideas and is just as eﬀective.
The probability density function, or pdf, of the normal distribution is the
bell-shaped curve

f(x) = c·e^(−x²/2),

where c = 1/(2π)^(1/2) is a normalizing constant that we can ignore. If we generate
random points (x,y), uniformly distributed in the plane, and reject any of them that
do not fall under this curve, the remaining x’s form our desired normal distribution.
The ziggurat algorithm covers the area under the pdf by a slightly larger area with
n sections. Figure 9.2 has n = 8; actual code might use n = 128. The top n − 1
sections are rectangles. The bottom section is a rectangle together with an inﬁnite
tail under the graph of f(x). The right-hand edges of the rectangles are at the
points z_k, k = 2,...,n, shown with circles in the picture. With f(z_1) = 1 and
f(z_{n+1}) = 0, the height of the kth section is f(z_k) − f(z_{k+1}). The key idea is to
choose the z_k so that all n sections, including the unbounded one on the bottom,
have the same area. There are other algorithms that approximate the area under
the pdf with rectangles. The distinguishing features of Marsaglia’s algorithm are
the facts that the rectangles are horizontal and have equal areas.
Figure 9.2. The ziggurat algorithm.
For a specified number, n, of sections, it is possible to solve a transcendental
equation to find z_n, the point where the infinite tail meets the last rectangular
section. In our picture with n = 8, it turns out that z_n = 2.34. In an actual code
with n = 128, z_n = 3.4426. Once z_n is known, it is easy to compute the common
area of the sections and the other right-hand endpoints z_k. It is also possible to
compute σ_k = z_{k−1}/z_k, which is the fraction of each section that lies underneath
the section above it. Let's call these fractional sections the core of the ziggurat.
The right-hand edge of the core is the dotted line in our picture. The computation
of these z_k's and σ_k's is done in initialization code that is run only once.
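The initialization can be sketched in Python (an illustration under the text's setup, not the book's code: n = 128 sections, with z_n = 3.4426 carried to the commonly published value 3.442619855899; the variable names are ours):

```python
import math

# Ziggurat initialization sketch for the unnormalized pdf f(x) = exp(-x^2/2).
n = 128
f = lambda x: math.exp(-0.5 * x * x)
finv = lambda y: math.sqrt(-2.0 * math.log(y))   # inverse of f on [0, 1]
tail_area = lambda x: math.sqrt(math.pi / 2.0) * math.erfc(x / math.sqrt(2.0))

z = [0.0] * (n + 1)        # z[1..n]; z[1] stays 0 since f(z_1) = 1
z[n] = 3.442619855899      # solution of the transcendental equation for n = 128

# Common area of every section: the bottom rectangle plus the infinite tail.
v = z[n] * f(z[n]) + tail_area(z[n])

# Walk up the ziggurat: a rectangle of width z_k and area v has height
# v / z_k, so f(z_{k-1}) = f(z_k) + v / z_k.
fk = f(z[n])
for k in range(n, 1, -1):
    fk = fk + v / z[k]
    if k > 2:
        z[k - 1] = finv(fk)
# The loop ends with fk = f(z_1), which comes out equal to 1 precisely
# because z_n solves the transcendental equation mentioned above.

# sigma_k = z_{k-1}/z_k, the fraction of section k under the section above;
# the top section has no core, so its entry is 0.
sigma = [0.0] + [z[k - 1] / z[k] for k in range(2, n + 1)]
```

The average of the σ_k's is the probability that the fast path described next succeeds, which is how the "over 97%" figure below can be checked numerically.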
After the initialization, normally distributed random numbers can be com-
puted very quickly. The key portion of the code computes a single random integer,
j, between 1 and n and a single uniformly distributed random number, u, between
−1 and 1. A check is then made to see if u falls in the core of the jth section. If it
does, then we know that u·z_j is the x-coordinate of a point under the pdf and this
value can be returned as one sample from the normal distribution. The code looks
something like this.
j = ceil(128*rand);
u = 2*rand-1;
if abs(u) < sigma(j)
r = u*z(j);
return
end
Most of the σ_j's are greater than 0.98, and the test is true over 97% of the time.
One normal random number can usually be computed from one random integer, one
random uniform, an if-test, and a multiplication. No square roots or logarithms are
required. The point determined by j and u will fall outside the core less than 3% of
the time. This happens if j = 1 because the top section has no core, if j is between
2 and n−1 and the random point is in one of the little rectangles covering the graph
of f(x), or if j = n and the point is in the inﬁnite tail. In these cases, additional
computations involving logarithms, exponentials, and more uniform samples are
required.
It is important to realize that, even though the ziggurat step function only
approximates the probability density function, the resulting distribution is exactly
normal. Decreasing n decreases the amount of storage required for the tables and
increases the fraction of time that extra computation is required, but does not aﬀect
the accuracy. Even with n = 8, we would have to do the more costly corrections
almost 23% of the time.
