CIS 415 Operating Systems - Spring 2003

Discussion: Week 10 session 2

Solutions

Banker's Algorithm

Current Allocation:
    R1  R2  R3  R4  R5  R6
P1   0   1   0   2   0   1
P2   0   0   0   1   0   1
P3   0   0   1   2   3   5
P4   0   1   0   0   0   0
P5   1   1   1   1   0   0
P6   0   0   1   0   0   0
Maximum Request:
    R1  R2  R3  R4  R5  R6
P1   1   3   0   6   2   5
P2   0   3   3   7   2   5
P3   1   2   3   4   5   6
P4   0   1   0   1   0   0
P5   1   2   1   2   0   0
P6   0   2   2   5   0   0
Still needs:
    R1  R2  R3  R4  R5  R6
P1   1   2   0   4   2   4
P2   0   3   3   6   2   4
P3   1   2   2   2   2   1
P4   0   0   0   1   0   0
P5   0   1   0   1   0   0
P6   0   2   1   5   0   0
Total System Resources:
R1  R2  R3  R4  R5  R6
 1   3   4   7   5   8

Available system resources:
R1  R2  R3  R4  R5  R6
 0   0   1   1   2   1

Is this state safe? If so, give a possible safe execution order. If not, list the processes that deadlock.
Yes - here is one safe execution order, tracing the available-resources vector (R1..R6) after each process completes and releases its allocation:

P4 completes - 0 1 1 1 2 1
P5 completes - 1 2 2 2 2 1
P3 completes - 1 2 3 4 5 6
P1 completes - 1 3 3 6 5 7
P2 completes - 1 3 3 7 5 8
P6 completes - 1 3 4 7 5 8

Can we safely allocate 1 unit of R3 to P2? Why or why not?
No - P4 and P5 can still complete, but after they finish only one unit of R3 remains free, so P3 can never get the two it still needs, and P1, P2, and P6 each still need more R4 than would then be available. Granting the request leaves P1, P2, P3, and P6 deadlocked.
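
For reference, here is a small Python sketch of the safety check (not part of the handout; the variable names are mine, and the matrices are copied from the tables above). It reproduces the safe order for the original state and reports the state after granting one unit of R3 to P2 as unsafe.

def is_safe(available, allocation, need):
    """Return a safe completion order, or None if the state is unsafe."""
    work = list(available)
    finished = [False] * len(allocation)
    order = []
    progress = True
    while progress:
        progress = False
        for i, (alloc, still_needed) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(still_needed, work)):
                # Process i can run to completion, then releases everything it holds.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                order.append(f"P{i + 1}")
                progress = True
    return order if all(finished) else None

allocation = [[0,1,0,2,0,1], [0,0,0,1,0,1], [0,0,1,2,3,5],
              [0,1,0,0,0,0], [1,1,1,1,0,0], [0,0,1,0,0,0]]
need       = [[1,2,0,4,2,4], [0,3,3,6,2,4], [1,2,2,2,2,1],
              [0,0,0,1,0,0], [0,1,0,1,0,0], [0,2,1,5,0,0]]

print(is_safe([0,0,1,1,2,1], allocation, need))  # ['P4', 'P5', 'P3', 'P1', 'P2', 'P6']

# Pretend we grant one unit of R3 to P2: adjust P2's rows and the available vector.
alloc2 = [row[:] for row in allocation]
need2 = [row[:] for row in need]
alloc2[1][2] += 1
need2[1][2] -= 1
print(is_safe([0,0,0,1,2,1], alloc2, need2))     # None - only P4 and P5 could finish
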

Virtual memory

Suppose we have the following memory and processes:

  • Memory capacity: 8 frames
  • Process P1: 3 pages, initially loaded in frames 0..2
  • Process P2: 5 pages, initially loaded in frames 3..7
  • Process P3: 4 pages, completely swapped out for now.
Reference string (pages): P2.1 P3.0 P2.0 P2.3 P3.1 P1.1 P1.2 P2.2 P2.1

Trace the references using the LRU, CLOCK and OPT policies, indicating each reference with a checkmark, each page fault with a star at the top of the column, and the new page in the correct frame beneath it. Use the lowest frame number to break a tie. The CLOCK pointer starts at frame 5.

LRU (a '#' means recently used - assume higher frame numbers are more recent):
I will use an X to show a memory reference.

Frame    P2.1  P3.0  P2.0  P2.3  P3.1  P1.1  P1.2  P2.2  P2.1
               *                 *     *     *     *
0 P1.0#                          P3.1
1 P1.1         P3.0
2 P1.2#                                P1.1
3 P2.0               X
4 P2.1   X                                               X
5 P2.2#                                      P1.2
6 P2.3#                    X
7 P2.4#                                            P2.2
Page fault rate: 5/9 = 56%
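
For comparison, here is a small Python sketch of the LRU trace (not part of the original solution). The initial recency order is an assumption chosen to match the '#' marks above: the unmarked frames 1, 3, and 4 are treated as least recently used, with higher frame numbers counting as more recent. It reports the same 5 faults.

frames = ["P1.0", "P1.1", "P1.2", "P2.0", "P2.1", "P2.2", "P2.3", "P2.4"]
recency = [1, 3, 4, 0, 2, 5, 6, 7]     # frame numbers, least recently used first
refs = ["P2.1", "P3.0", "P2.0", "P2.3", "P3.1", "P1.1", "P1.2", "P2.2", "P2.1"]

faults = 0
for page in refs:
    if page in frames:                 # hit: promote this frame to most recently used
        frame = frames.index(page)
        event = "hit"
    else:                              # fault: replace the least recently used frame
        frame = recency[0]
        frames[frame] = page
        faults += 1
        event = "FAULT"
    recency.remove(frame)
    recency.append(frame)
    print(f"{page}: {event} -> frame {frame}")

print(f"{faults}/{len(refs)} page faults")   # 5/9, matching the trace above
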
CLOCK (a '#' means the use bit is set):
I will use an O to indicate resetting a use bit.

Frame      P2.1   P3.0   P2.0   P2.3   P3.1   P1.1   P1.2   P2.2   P2.1
                  *                    *      *             *
  0 P1.0#         O                           ->            P2.2#
  1 P1.1          P3.0#                                     ->
  2 P1.2#         ->                   O             X#
  3 P2.0                 X#            O
  4 P2.1   X#                          O                           X#
->5 P2.2#         O                    P3.1#
  6 P2.3#         O             X#     ->     O
  7 P2.4#         O                           P1.1#
Page fault rate: 4/9 = 44%
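
Here is the corresponding CLOCK sketch (again not part of the original solution). The use bits are initialized from the '#' marks above and the pointer starts at frame 5, as the exercise specifies. It reports the same 4 faults.

frames = ["P1.0", "P1.1", "P1.2", "P2.0", "P2.1", "P2.2", "P2.3", "P2.4"]
use = [1, 0, 1, 0, 0, 1, 1, 1]         # use bits from the '#' marks above
hand = 5                               # the CLOCK pointer starts at frame 5
refs = ["P2.1", "P3.0", "P2.0", "P2.3", "P3.1", "P1.1", "P1.2", "P2.2", "P2.1"]

faults = 0
for page in refs:
    if page in frames:                 # hit: just set the use bit
        use[frames.index(page)] = 1
        print(f"{page}: hit")
        continue
    faults += 1
    while use[hand]:                   # sweep: clear use bits until one is already 0
        use[hand] = 0
        hand = (hand + 1) % len(frames)
    frames[hand] = page                # replace the victim, set its use bit,
    use[hand] = 1                      # and advance the pointer past it
    print(f"{page}: FAULT -> frame {hand}")
    hand = (hand + 1) % len(frames)

print(f"{faults}/{len(refs)} page faults")   # 4/9, matching the trace above
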
OPT:

Frame    P2.1  P3.0  P2.0  P2.3  P3.1  P1.1  P1.2  P2.2  P2.1
               *                 *
0 P1.0         P3.0              P3.1
1 P1.1                                 X
2 P1.2                                       X
3 P2.0               X
4 P2.1   X                                               X
5 P2.2                                             X
6 P2.3                     X
7 P2.4
Page fault rate: 2/9 = 22%
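
And the OPT sketch (not part of the original solution): evict the page whose next use is farthest in the future, treating pages that are never referenced again as farthest of all, with ties broken by the lowest frame number as the exercise specifies. It reports the same 2 faults.

frames = ["P1.0", "P1.1", "P1.2", "P2.0", "P2.1", "P2.2", "P2.3", "P2.4"]
refs = ["P2.1", "P3.0", "P2.0", "P2.3", "P3.1", "P1.1", "P1.2", "P2.2", "P2.1"]

def next_use(page, future):
    """Index of the next reference to this page, or infinity if it is never used again."""
    return future.index(page) if page in future else float("inf")

faults = 0
for i, page in enumerate(refs):
    if page in frames:
        print(f"{page}: hit")
        continue
    faults += 1
    future = refs[i + 1:]
    # Farthest next use wins; -f makes the lowest frame number win ties.
    victim = max(range(len(frames)), key=lambda f: (next_use(frames[f], future), -f))
    frames[victim] = page
    print(f"{page}: FAULT -> frame {victim}")

print(f"{faults}/{len(refs)} page faults")   # 2/9, matching the trace above
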

DMA

The slides showed three examples of DMA configurations:

  1. DMA, CPU and I/O devices are on the same bus
  2. I/O devices are each attached to one of n DMA controllers, which are attached to the CPU/memory bus. One DMA controller may serve more than one device.
  3. There are two buses - one for I/O devices, and one for the CPU and memory. A single DMA controller bridges the buses.
Which style steals the most bus cycles from the CPU?
Style 1 steals two bus cycles for every word transferred, because each word crosses the shared bus twice: once from the device to the DMA controller and once from the controller to memory. The other two styles steal only one cycle per word. However, style 2 can have multiple transfers going at once, which collectively take a larger share of the bus cycles.

Which style is best for multiple I/O devices that may transfer large blocks at the same time?
Style 2, with its multiple independent controllers, can handle multiple transfers at once with a minimum number of cycles per word transferred.
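
As a rough back-of-the-envelope check (the block size here is a made-up number, not from the slides), the per-transfer difference looks like this:

# Hypothetical transfer of one 4096-word block.
words = 4096

# Style 1: each word crosses the shared bus twice (device -> DMA controller -> memory).
style1_cycles = 2 * words
# Styles 2 and 3: the device-to-controller hop is off the CPU/memory bus,
# so each word steals only one cycle on that bus.
style2_or_3_cycles = 1 * words

print(style1_cycles, style2_or_3_cycles)   # 8192 vs. 4096 stolen cycles
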

Buffering

Can you think of an application for which unbuffered I/O makes sense?
Applications for which blocking would degrade performance, or which must be able to handle I/O at any time.
Example: Games (especially reading input devices)

What advantage does single buffering have over unbuffered I/O?
This is where DMA comes in. Single buffering allows the transfer to be done by the DMA controller rather than the CPU, freeing the CPU to service other processes while the I/O is in progress.
Example: Any data transfer to/from disk.

What advantage does double buffering have over single buffering?
The program can be processing one buffer of data at the same time as more is coming into the other. This is generally done with applications that transfer a stream of data rather than a fixed-length block, so there is no single blocking request to wait on.
Examples: Computer games (drawing one frame while displaying another), sound mixers (mixing one buffer while playing another), sound recorders (saving one buffer while sampling another).

What advantage does circular buffering have over double buffering?
Circular buffering does not speed up transfers per se, because it still takes the same amount of time to read/process data. The attraction is that circular buffers can handle bursts, where the I/O momentarily comes faster than the program can handle and overruns the current buffer. The more buffers available, the larger the burst that can be handled.
Example: Network drivers (sort of) - packets can come in bursts.
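
Here is a minimal circular-buffer sketch (not from the handout; the class and names are mine) showing how the extra slots absorb a burst: the device can run ahead of the program by up to the number of buffers in the ring, and double buffering is just the two-slot special case.

from collections import deque

class Ring:
    def __init__(self, slots):
        self.free = slots              # empty buffers available to the device
        self.full = deque()            # filled buffers waiting for the program

    def produce(self, item):           # device fills the next free buffer
        if self.free == 0:
            return False               # overrun: the burst exceeded the ring
        self.free -= 1
        self.full.append(item)
        return True

    def consume(self):                 # program drains the oldest filled buffer
        if not self.full:
            return None
        self.free += 1
        return self.full.popleft()

ring = Ring(slots=4)
burst = [ring.produce(pkt) for pkt in range(6)]   # a 6-packet burst into a 4-slot ring
print(burst)   # [True, True, True, True, False, False] - the last two packets overrun
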

File Storage

How many levels of indexing are required to store a 2GB file if blocks are 1K in length and block pointers are 4 bytes long? Use 1GB = 2^30 bytes and 1K = 2^10 bytes.
The data in the file occupies 2^31 / 2^10 = 2^21 blocks (2GB = 2 * 2^30 = 2^31 bytes).
Each index block can index 2^10 / 4 = 2^8 blocks. Adding a level multiplies this number by 2^8.
It takes 3 levels to address at least 2^21 blocks (2^8 * 2^8 = 2^16 < 2^21, but 2^8 * 2^8 * 2^8 = 2^24 > 2^21).
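
The arithmetic can be checked with a few lines of Python (not part of the original solution):

block_size = 2 ** 10                     # 1K blocks
pointer_size = 4                         # 4-byte block pointers
file_size = 2 * 2 ** 30                  # 2GB file

file_blocks = file_size // block_size    # 2^21 data blocks
fanout = block_size // pointer_size      # 2^8 pointers per index block

levels = 0
reachable = 1
while reachable < file_blocks:           # each level multiplies the reach by the fanout
    reachable *= fanout
    levels += 1

print(levels)   # 3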

Created by: Tim Singer June 3, 2003