
29/10/20

Memory Overview

Computers typically have memory hierarchies:

  • Registers
  • L1/L2/L3 cache
  • Main memory (RAM)
  • Disks

Memory higher in the hierarchy is faster, more expensive and volatile; memory lower in the hierarchy is slower, cheaper and non-volatile.

The operating system provides a memory abstraction for the user. Without it, memory is simply one linear array of bytes/words.

OS Responsibilities

  • Allocate/de-allocate memory when requested by processes, keep track of all used/unused memory.
  • Distribute memory between processes and simulate an indefinitely large memory space: the OS must create the illusion of infinite main memory, since processes assume they have access to all of it.
  • Control access when multi-programming is applied.
  • Transparently move data from memory to disk and vice versa.

Partitioning


Contiguous memory management

Allocates memory in one single block without any holes or gaps.

Non-contiguous memory management models

Memory is allocated in multiple blocks, or segments, which may not be placed next to each other in physical memory.

Mono-programming: one single partition for user processes.

Multi-programming with fixed partitions

  • Fixed equal sized partitions
  • Fixed non-equal sized partitions

Multi-programming with dynamic partitions

Mono-programming

  • Only one single user process is in memory/executed at any point in time.

  • A fixed region of memory is allocated to the OS and kernel; the remaining memory is reserved for a single process

  • This process has direct access to physical memory (no address translation takes place)

  • Every process is allocated a contiguous block of memory (no holes or gaps)

  • One process is allocated the entire memory space and the process is always located in the same address space.

  • No protection between different user processes is required. There is also no protection between the running process and the OS, so the process can sometimes access pieces of the OS it's not meant to.

  • Overlays enable the programmer to use more memory than available.


Shortcomings of mono-programming

  • Since a process has direct access to physical memory, it may have access to the OS memory.
  • The OS can be seen as a process - so we have two processes anyway.
  • Low utilisation of hardware resources (CPU, I/O devices, etc.)
  • Mono-programming is unacceptable as multi-programming is expected on modern machines

Direct memory access and mono-programming are common in basic embedded systems and modern consumer electronics, e.g. washing machines, microwaves, cars, etc.

Simulating Multi-Programming

We can simulate multi-programming through swapping

  • Swap process out to the disk and load a new one (context switches would become time consuming)

Why Multi-Programming is better theoretically

  • There are n processes in memory
  • A process spends p percent of its time waiting for I/O
  • CPU utilisation is calculated as 1 minus the probability that all processes are waiting for I/O at the same time
  • The probability that all n processes are waiting for I/O is p^n
  • Therefore CPU utilisation is given by 1 - p^n


With an I/O wait time of 20%, almost 100% CPU utilisation can be achieved with four processes (1 - 0.2^4).

With an I/O wait time of 90%, 10 processes can achieve about 65% CPU utilisation (1 - 0.9^10).
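These numbers can be checked directly with the formula above; a minimal sketch in Python (the function name is illustrative):

```python
def cpu_utilisation(p: float, n: int) -> float:
    """CPU utilisation with n processes, each waiting for I/O
    a fraction p of the time (processes assumed independent)."""
    return 1 - p ** n

# 20% I/O wait, four processes: almost full utilisation
print(round(cpu_utilisation(0.2, 4), 4))   # 0.9984

# 90% I/O wait, ten processes: roughly 65%
print(round(cpu_utilisation(0.9, 10), 4))  # 0.6513
```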

CPU utilisation goes up with the number of processes and down as the I/O wait fraction increases.

(graph: CPU utilisation against the number of processes, for different I/O wait times)

Assume that:

  • A computer has one megabyte of memory
  • The OS takes up 200k, leaving room for four 200k processes

Then:

  • If we have an I/O wait time of 80%, then we will achieve just under 60% CPU utilisation (1 - 0.8^4)
  • If we add another megabyte of memory, it allows us to run another five processes. We can now achieve about 87% CPU utilisation (1 - 0.8^9)
  • If we add another megabyte of memory (14 processes), CPU utilisation increases to around 96% (1 - 0.8^14)
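The memory-upgrade argument can be reproduced with the same formula. This sketch assumes the figures above: a 200k OS, 200k processes, and a megabyte treated as 1000k (which is what makes four processes fit in the first megabyte); variable names are illustrative:

```python
def cpu_utilisation(p, n):
    return 1 - p ** n

OS_KB, PROC_KB, IO_WAIT = 200, 200, 0.8   # figures from the example above

for total_kb in (1000, 2000, 3000):       # one, two, three "megabytes"
    n = (total_kb - OS_KB) // PROC_KB     # processes that fit: 4, 9, 14
    print(n, round(cpu_utilisation(IO_WAIT, n), 2))
# prints: 4 0.59 / 9 0.87 / 14 0.96
```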

Caveats

  • This model assumes that all processes are independent, which is not true in practice.
  • More complex models could be built using queuing theory, but this simplistic model is still useful for approximate predictions.

Fixed Size Partitions

  • Divide memory into static, contiguous and equal sized partitions that have a fixed size and location.
    • Any process can take any partition (as long as it's large enough)
    • Allocation of fixed equal sized partitions to processes is trivial
    • Very little overhead and simple implementation
    • The OS keeps a track of which partitions are being used and which are free.

Disadvantages

  • Partitions may be unnecessarily large
    • Low memory utilisation
    • Internal fragmentation
  • Overlays must be used if a program does not fit into a partition (a burden on the programmer)
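The bookkeeping described above is simple enough to sketch. This illustrative Python fragment (the partition count and 200 KB size are made up) shows both the trivial allocation and where internal fragmentation comes from:

```python
PARTITION_KB = 200
partitions = [None] * 4  # None = free; otherwise holds the process size

def allocate(size_kb):
    """Return the index of the first free partition, or None.
    Any process fits in any partition, as long as it is small enough."""
    if size_kb > PARTITION_KB:
        return None  # too big: the programmer would need overlays
    for i, used in enumerate(partitions):
        if used is None:
            partitions[i] = size_kb
            return i
    return None

idx = allocate(120)
internal_frag = PARTITION_KB - partitions[idx]  # unused space inside the partition
print(idx, internal_frag)  # 0 80
```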

Fixed Partitions of non-equal size

  • Divide memory into static and non-equal sized partitions that have fixed size and location
    • Reduces internal fragmentation
    • The allocation of processes to partitions must be carefully considered.

process_alloc

One private queue per partition:

  • Assigns each process to the smallest partition that it would fit in.
  • Reduces internal fragmentation.
  • Can reduce memory utilisation (e.g. lots of small jobs result in unused large partitions)

A single shared queue:

  • Increased internal fragmentation as small processes are allocated into big partitions.
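The private-queue policy above can be sketched in a few lines; the partition sizes and process names here are invented for illustration:

```python
partitions = [100, 200, 400, 800]            # KB, sorted ascending
queues = {size: [] for size in partitions}   # one private queue per partition

def enqueue(pid, size_kb):
    """Private-queue policy: send the process to the queue of the
    smallest partition it fits in."""
    for part in partitions:
        if size_kb <= part:
            queues[part].append(pid)
            return part
    raise ValueError("process too large for any partition")

enqueue("A", 90)    # -> 100 KB queue
enqueue("B", 150)   # -> 200 KB queue
enqueue("C", 90)    # -> 100 KB queue; the 400/800 KB partitions sit idle
print({k: v for k, v in queues.items() if v})  # {100: ['A', 'C'], 200: ['B']}
```

With lots of small jobs, the small-partition queues grow while the large partitions stay unused, which is exactly the memory-utilisation problem noted above.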