–If C consumes faster than P produces, C will have to wait.
–Communication rate varies over time (sometimes faster, sometimes slower).
•A buffer allows more than one element to be "in communication" at the same time.
•This potentially reduces the amount of time either program A or B has to wait for the other.
•It is particularly effective if A and/or B produce/consume data in bursts.
•Buffers are built using queues, because:
–In order to preserve communication order, data must be removed from the buffer in the same order in which it was put in.
–We aren't allowed to read from an empty buffer, or write to a full buffer.
•Buffers are used anywhere the flow of data needs to be "smoothed":
–data transfer ("input/output") within a computer system (e.g., console I/O, file I/O)
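The buffer behavior described above can be sketched with Python's standard-library queue.Queue, which is a bounded FIFO with exactly these blocking rules (get blocks on an empty queue, put blocks on a full one). The producer/consumer function names and the buffer size are illustrative choices, not part of the notes.

```python
import queue
import threading

# Bounded FIFO buffer: put() blocks when full, get() blocks when empty.
buf = queue.Queue(maxsize=4)

def producer(items):
    for item in items:
        buf.put(item)          # waits here if the consumer has fallen behind

def consumer(n, out):
    for _ in range(n):
        out.append(buf.get())  # waits here if the producer has fallen behind

data = list(range(10))
received = []
p = threading.Thread(target=producer, args=(data,))
c = threading.Thread(target=consumer, args=(len(data), received))
p.start(); c.start()
p.join(); c.join()

# Because the buffer is a queue, communication order is preserved:
# received comes out in exactly the order data went in.
```

Note that maxsize=4 lets up to four elements be "in communication" at once, which is what smooths out bursts on either side.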
2.2 Process Scheduling
•Queues are very common in operating systems for process scheduling in multiprocessing systems.
•What is a multiprocessing system?
–Multiple programs all running "at the same time", sharing the same CPU(s).
–If there is one CPU and several programs, are they really all running at the same time? (No: the CPU switches rapidly between them, giving the illusion of simultaneity.)
•What is process scheduling?
–When, and for how long, (and on which processor) does a program get to run?
–In other words, process scheduling seeks to share limited CPU resources between multiple processes
in a fair way.
•Operating systems manage processes by storing all of the information about a process in a structure
called a process control block (PCB).
•In the simplest form of scheduling, the process control blocks are placed on a queue. When a PCB
gets to the front of the queue, that process gets to run.
•New processes are enqueued on the runnable queue.
•When a process makes it to the front of the runnable queue, it runs for a fixed amount of time, or until it releases the CPU on its own (whichever comes first).
•A process may release the CPU early if it terminates, or enters an I/O wait state, etc.
•A process that uses its full time gets re-enqueued on the tail of the runnable queue.
•Processes that stop to wait for I/O are enqueued on an I/O wait queue (depending on kind of I/O).
•Processes that are dequeued from an I/O wait queue (because their I/O is done) are enqueued on an auxiliary runnable queue. This queue has priority over the normal runnable queue.
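The scheduling scheme above can be sketched with two collections.deque queues. This is a toy model under stated assumptions: a "PCB" is reduced to a (name, remaining work) pair, the quantum is an arbitrary constant, and no new I/O waits occur during the run; real PCBs record registers, memory maps, open files, and so on.

```python
from collections import deque

QUANTUM = 2  # fixed time slice per turn (arbitrary illustrative value)

# Toy PCBs: [process name, units of work remaining].
runnable = deque([["A", 5], ["B", 3]])
aux_runnable = deque([["C", 1]])  # processes whose I/O just completed
schedule = []                     # order in which processes receive the CPU

while runnable or aux_runnable:
    # The auxiliary runnable queue has priority over the normal one.
    q = aux_runnable if aux_runnable else runnable
    pcb = q.popleft()             # process at the front gets to run
    schedule.append(pcb[0])
    pcb[1] -= QUANTUM             # run for at most one time slice
    if pcb[1] > 0:
        runnable.append(pcb)      # used its full quantum: back to the tail
```

C runs first despite arriving via the auxiliary queue, and A and B then alternate in round-robin fashion until each finishes.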