30/10/20

## Relocation and Protection

```c
#include <stdio.h>   // printf
#include <unistd.h>  // sleep

int iVar = 0;

int main() {
    int i = 0;
    while (i < 10) {
        iVar++;
        sleep(2);
        printf("Address: %p; Value: %d\n", (void *)&iVar, iVar);
        i++;
    }
    return 0;
}
// If running the code twice (simultaneously):
// Will the same or different addresses be displayed for iVar?
// Will the value of iVar in the first run influence the value of iVar in the second run?
```

The same address will be displayed in both runs, because each process sees its own logical address space. A process does not know where it sits in physical memory: the address it prints is relative to the start of its own address space. When the program is run twice, the two processes are loaded into different physical memory regions, yet both print the same logical address, and neither run's value of `iVar` affects the other's. [explanation 8:05]

## Relocation

When a program is run, it does not know in advance which partition it will occupy.

* The program **cannot** simply **generate static addresses** (e.g. for jump instructions) that are absolute.
* **Addresses should be relative to where the program has been loaded**.
* Relocation must be **solved** by any **operating system** that allows **processes to run at changing memory locations**.

**Protection**: once two programs can be in memory at the same time, protection must be enforced so that neither can access the other's memory.

![relative_mmu](/lectures/osc/assets/F.png)

**Logical address**: a memory address as seen by the process.

* It is independent of the current physical memory assignment.
* It is relative to the start of the program.

**Physical address**: an actual location in main memory.

The **logical address space** must be **mapped** onto the **machine's physical address space**.

### Static Relocation

Addresses are fixed at compile time, so the process has to be loaded at the same location every single time (impractical).

### Dynamic Relocation

This happens at load time:

* An **offset is added to every logical address** to account for its physical location in memory.
* **Slows down the loading** of a process and does not account for **swapping**.

### Dynamic Relocation at Run-Time

Two special-purpose registers are maintained in the CPU (the **MMU**), containing a **base address** and a **limit**:

> * The **base register** stores the **start address** of the partition.
> * The **limit register** holds the **size** of the partition.
>
> At **run-time**:
>
> * The base register is added to the **logical (relative) address** to generate the physical address.
> * The resulting address is **compared** against the **limit register**; this bounds-check confines the process to its own partition.
>
> NOTE: This requires **hardware support** (which didn't exist in the early days).

#### Dynamic Partitioning

**Fixed partitioning** results in **internal fragmentation**:

> An exact match between the requirements of the process and the available partitions **may not exist**.
>
> * This means the partition may **not be used in its entirety**.

**Dynamic partitioning**:

> * A **variable number of partitions** whose **size** and **starting address** can **change over time**.
> * A process is allocated the **exact amount** of **contiguous memory** it requires, thereby preventing internal fragmentation.

![dynamic_partitioning](/lectures/osc/assets/H.png)

**Swapping** holds some of the **processes** on the drive and **shuttles processes** between the drive and main memory as necessary.

Reasons for **swapping**:

> * Some processes only **run occasionally**.
> * We have more **processes** than **partitions**.
> * A process's **memory requirements** may have **changed**.
> * The **total amount of memory required** by the processes **exceeds the available memory**.

For any given process, we might not know the exact **memory requirements** in advance, because processes may contain dynamically growing parts.
> Solution (??):
>
> Allocate the current requirements **AND** a 'bit extra'.
>
> The 'bit extra' tries to account for the dynamic nature of the process.
>
> If the process outgrows its partition, it is swapped out to the drive and later allocated a new, larger partition.

![External Fragmentation](assets/I.png)

##### External Fragmentation

> * Swapping a process out of memory **creates a 'hole'**.
> * A new process may not **use the entire 'hole'**, leaving a small **unused block**.
> * A new process may be **too large for a given 'hole'**.
>
> The **overhead** of memory **compaction** to **recover holes** can be **prohibitive** and requires **dynamic relocation**.

##### Allocation Structures

**Bitmaps**:

> * The simplest data structure that can be used is a **bitmap**.
> * **Memory is split into blocks**, e.g. of 4 KB each.
> * A bitmap is set up so that each **bit is 0** if the memory block is free and 1 if the **block is in use**.
> * 32 MB of memory / 4 KB blocks = 8192 bitmap entries.
> * 8192 bits occupy 1 KB of storage (8192 / 8 = 1024 bytes).
> * The size of the bitmap depends on the **size of the memory** and the **size of the allocation unit**.
> * To find a hole of, say, 128 KB, a run of **32 adjacent bits set to 0** must be found.
> * This is typically a slow operation, and the longer it takes, the lower the CPU utilisation.
> * A **trade-off exists** between the **size of the bitmap** and the **size of the blocks**:
>   * Small blocks make the bitmap large and searching it slow.
>   * Larger blocks increase internal fragmentation.
> * **Bitmaps are rarely used** because of this trade-off.

**Linked lists**: a more **sophisticated data structure** is required to deal with a **variable number** of **free and used partitions**.

> * A linked list consists of a **number of entries** (links).
> * Each link **contains data items**, e.g.
>   the **start of the memory block**, its **size**, and a flag marking it free or allocated.
> * It also contains a pointer to the next link.

![Memory Management with linked lists](/lectures/osc/assets/J.png)

![Linked lists for finding free memory](/lectures/osc/assets/K.png)