05/11/20
## Dynamic Partitioning Management
### Allocating Available Memory
#### First Fit
> * First fit scans **from the start** of the linked list until it finds the first link that can fit the process
> * If the requested space is **exactly the same size** as the 'hole', all of the space is allocated
> * Otherwise the free link is split into two:
>     * The first entry is set to the **size requested** and marked **used**
>     * The second entry is set to the **remaining size** and marked **free**.
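A minimal C sketch of first fit over such a linked list, assuming a hypothetical `mem_link` node that records each partition's start, size and used flag (the names and fields are illustrative, not from the lecture):

```c
#include <stdlib.h>

/* Hypothetical free-list node: each link records where a partition starts,
 * how big it is, and whether it is currently in use. */
struct mem_link {
    size_t start;           /* start address of the partition */
    size_t size;            /* size of the partition in bytes */
    int    used;            /* 0 = free hole, 1 = allocated   */
    struct mem_link *next;  /* next link in the list          */
};

/* First fit: scan from the head of the list and take the first free hole
 * that is large enough. If the hole is bigger than requested, split it
 * into a used part and a remaining free part. */
struct mem_link *first_fit(struct mem_link *head, size_t request)
{
    for (struct mem_link *cur = head; cur != NULL; cur = cur->next) {
        if (cur->used || cur->size < request)
            continue;                    /* keep scanning */

        if (cur->size > request) {
            /* Split: insert a new link holding the left-over space. */
            struct mem_link *rest = malloc(sizeof *rest);
            if (rest == NULL)
                return NULL;
            rest->start = cur->start + request;
            rest->size  = cur->size - request;
            rest->used  = 0;
            rest->next  = cur->next;
            cur->next   = rest;
            cur->size   = request;
        }
        cur->used = 1;                   /* exact fit, or the first half */
        return cur;
    }
    return NULL;                         /* no hole large enough */
}
```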
#### Next Fit
> * The next fit algorithm maintains a record of where it got to last time and restarts its search from there
> * This gives all of memory an even chance of being allocated (first fit concentrates on the start of the list)
> * However, simulations show that it actually performs worse than first fit
> * This is because a side effect of **first fit** is that it leaves larger partitions towards the end of memory, which is useful for larger processes.
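Next fit only changes where the scan begins. A rough sketch, reusing the illustrative `mem_link` node from the first-fit sketch and keeping the resume point in a hypothetical `rover` pointer (splitting an oversized hole is omitted here, as it is identical to `first_fit`):

```c
/* Next fit: like first fit, but remember where the previous search
 * stopped (the "rover") and resume from there, wrapping around once. */
static struct mem_link *rover = NULL;    /* survives across calls */

struct mem_link *next_fit(struct mem_link *head, size_t request)
{
    if (head == NULL)
        return NULL;
    if (rover == NULL)
        rover = head;

    struct mem_link *start = rover;
    do {
        if (!rover->used && rover->size >= request) {
            struct mem_link *hit = rover;
            rover = rover->next ? rover->next : head;  /* resume after the hit */
            hit->used = 1;       /* splitting of a larger hole as in first_fit() */
            return hit;
        }
        rover = rover->next ? rover->next : head;      /* wrap around */
    } while (rover != start);

    return NULL;                 /* searched the whole list */
}
```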
#### Best Fit
> * The best fit algorithm always **searches the entire linked list** to find the smallest hole that's big enough to fit the memory requirements of the process.
> * It is **slower** than first fit
> * It also results in more wasted memory: as an exact-sized hole is unlikely to be found, splitting leaves tiny (and useless) holes.
>
> Complexity: $O(n)$
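A sketch of the full-list scan, again using the illustrative `mem_link` node from above; the smallest adequate hole is remembered and allocated at the end:

```c
/* Best fit: walk the entire list and remember the smallest free hole
 * that is still large enough, then allocate from it. */
struct mem_link *best_fit(struct mem_link *head, size_t request)
{
    struct mem_link *best = NULL;

    for (struct mem_link *cur = head; cur != NULL; cur = cur->next) {
        if (cur->used || cur->size < request)
            continue;
        if (best == NULL || cur->size < best->size)
            best = cur;                  /* tighter fit found */
        if (best->size == request)
            break;                       /* cannot do better than exact */
    }

    if (best != NULL)
        best->used = 1;                  /* split as in first_fit() if larger */
    return best;
}
```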
#### Worst Fit
> Tiny holes are created when best fit splits an empty partition.
>
> * The **worst fit algorithm** finds the **largest available empty partition** and splits it.
> * The **left over partition** is hopefully **still useful**
> * However, simulations show that this method **isn't very good**.
>
> Complexity: $O(n)$
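The scan is the same as best fit with the comparison reversed; a minimal sketch using the same illustrative `mem_link` node:

```c
/* Worst fit: identical scan to best fit, but keep the LARGEST free hole
 * so that the left-over piece after splitting stays usable. */
struct mem_link *worst_fit(struct mem_link *head, size_t request)
{
    struct mem_link *worst = NULL;

    for (struct mem_link *cur = head; cur != NULL; cur = cur->next) {
        if (!cur->used && cur->size >= request &&
            (worst == NULL || cur->size > worst->size))
            worst = cur;                 /* bigger hole found */
    }

    if (worst != NULL)
        worst->used = 1;                 /* then split off the remainder */
    return worst;
}
```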
#### Quick Fit
> * Quick fit maintains a **list of commonly used sizes**
> * For example, a separate list for each of 4 KB, 8 KB, 12 KB etc. holes
> * Odd sized holes can either go into the nearest size or into a special separate list.
> * This is much **faster than the other solutions**; however, similar to **best fit**, it creates **many tiny holes**.
> * Finding neighbours for **coalescing** (combining empty partitions) becomes more difficult & time consuming.
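A rough sketch of the size-class idea, with illustrative 4 KB/8 KB/12 KB classes and a fallback to the earlier `first_fit` sketch for odd sizes (all names here are hypothetical):

```c
/* Quick fit: keep a separate free list per common request size, e.g.
 * 4 KB, 8 KB and 12 KB, plus one list for everything else. The size
 * classes below are illustrative, not a fixed rule. */
#define KB 1024
static const size_t quick_sizes[] = { 4 * KB, 8 * KB, 12 * KB };
#define N_CLASSES (sizeof quick_sizes / sizeof quick_sizes[0])

static struct mem_link *quick_list[N_CLASSES]; /* one list per class */
static struct mem_link *misc_list;             /* odd-sized holes    */

/* Map a request to its size class, or return -1 for "odd" sizes. */
static int size_class(size_t request)
{
    for (size_t i = 0; i < N_CLASSES; i++)
        if (request == quick_sizes[i])
            return (int)i;
    return -1;
}

/* Allocation just pops the head of the matching list; odd sizes fall
 * back to a first-fit style scan of the miscellaneous list. */
struct mem_link *quick_fit(size_t request)
{
    int c = size_class(request);
    if (c >= 0 && quick_list[c] != NULL) {
        struct mem_link *hit = quick_list[c];
        quick_list[c] = hit->next;
        hit->used = 1;
        return hit;
    }
    return first_fit(misc_list, request);  /* see the earlier sketch */
}
```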
### Coalescing
Coalescing (joining together) takes place when **two adjacent entries** in the linked list become free.
* Both neighbours are examined when a **block is freed**
* If either (or both) are also **free** then the two (or three) **entries are combined** into one larger block by adding up the sizes
* The earlier block in the linked list gives the **start point**
* The **separate links are deleted** and a **single link inserted**.
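A sketch of freeing a block and coalescing it with its neighbours, using the same illustrative `mem_link` list (the caller is assumed to pass the preceding link, since the list in these sketches is singly linked):

```c
#include <stdlib.h>

/* Free a block and merge any adjacent free entries into one larger hole.
 * `prev` is the link immediately before `blk` (NULL if blk is the head). */
void free_and_coalesce(struct mem_link *prev, struct mem_link *blk)
{
    blk->used = 0;

    /* Merge with the following link if it is free. */
    if (blk->next != NULL && !blk->next->used) {
        struct mem_link *nxt = blk->next;
        blk->size += nxt->size;          /* sizes are added together     */
        blk->next  = nxt->next;          /* delete the separate link     */
        free(nxt);
    }

    /* Merge with the preceding link if it is free: the earlier block
     * keeps its start address and absorbs this one. */
    if (prev != NULL && !prev->used) {
        prev->size += blk->size;
        prev->next  = blk->next;
        free(blk);
    }
}
```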
### Compacting
Even with coalescing happening automatically, **free blocks** may still be **distributed across memory**
> * Compacting can be used to gather the scattered free blocks into one contiguous region by moving the used blocks together
> * However, compacting is more **difficult and time consuming** to implement than coalescing.
> * Each **process is swapped** out & **free space coalesced**.
> * Processes are swapped back in at the lowest available location.
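A simplified sketch of compaction over a flat array modelling physical memory, assuming the illustrative `mem_link` list is ordered by start address (a real system must also update each process's base register):

```c
#include <string.h>   /* memmove */

/* Slide every used partition down to the lowest free address so that
 * all free space ends up as one hole at the top of memory.
 * `memory` is a hypothetical flat array modelling physical memory. */
void compact(unsigned char *memory, struct mem_link *head)
{
    size_t next_free = 0;   /* lowest address not yet occupied */

    for (struct mem_link *cur = head; cur != NULL; cur = cur->next) {
        if (!cur->used)
            continue;
        if (cur->start != next_free) {
            /* Move the process image down and record its new base. */
            memmove(memory + next_free, memory + cur->start, cur->size);
            cur->start = next_free;
        }
        next_free += cur->size;
    }
    /* The links describing the old holes would now be rebuilt into a
     * single free entry covering [next_free, end of memory). */
}
```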
## Paging
Paging uses the principles of **fixed partitioning** and **code re-location** to devise a new **non-contiguous management scheme**
> * Memory is split into much **smaller blocks** and **one or multiple blocks** are allocated to a process (e.g. an 11 KB process would take 3 blocks of 4 KB)
> * These blocks **do not have to be contiguous in main memory**, but **the process still perceives them to be contiguous**
> * Benefits:
>     * **Internal fragmentation** is reduced to the **last block only** (e.g. in the previous example only 3 KB of the third block is used)
>     * There is **no external fragmentation**, since physical blocks are **stacked directly onto each other** in main memory.
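A small worked example of the block calculation above (4 KB pages, 11 KB process): ceiling division gives the number of blocks, and the leftover in the last block is the internal fragmentation.

```c
#include <stdio.h>

int main(void)
{
    const unsigned page_size    = 4 * 1024;   /* 4 KB pages    */
    const unsigned process_size = 11 * 1024;  /* 11 KB process */

    /* Ceiling division: number of pages needed to hold the process. */
    unsigned pages = (process_size + page_size - 1) / page_size;   /* 3 */

    /* Internal fragmentation: wasted space in the last page only.  */
    unsigned wasted = pages * page_size - process_size;            /* 1 KB */

    printf("pages needed: %u, internal fragmentation: %u bytes\n",
           pages, wasted);
    return 0;
}
```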
![Paging in main memory](/lectures/osc/assets/L.png)
![Paging processor's view](/lectures/osc/assets/M.png)
![Paging](/lectures/osc/assets/P.png)
A **page** is a **small block** of **contiguous memory** in the **logical address space** (as seen by the process)
* A **frame** is a **small contiguous block** in **physical memory**.
* Pages and frames (usually) have the **same size**:
    * The size is usually a power of 2.
    * Sizes range between 512 bytes and 1 GB (4 KB pages & frames are the most common).
**Logical address** (page number, offset within page) needs to be **translated** into a **physical address** (frame number, offset within frame)
* Multiple **base registers** will be required
* Each logical page needs a **separate base register** that specifies the start of the associated frame
* i.e. a **set of base registers** has to be maintained for each process
* The base registers are stored in the **page table**
![Mapping page tables](/lectures/osc/assets/Q.png)
The page table can be seen as a **function**, that **maps the page number** of the logical address **onto the frame number** of the physical address
$$
frameNumber = f(pageNumber)
$$
* The **page number** is used as an **index to the page table** that lists the **location of the associated frame**.
* It is the OS' duty to maintain a list of **free frames**.
![address translation](/lectures/osc/assets/R.png)
We can see that the **only difference** between the logical address and the physical address is the **4 left-most bits** (the **page number and frame number**). As **pages and frames are the same size**, the **offset value is the same for both**.
This allows for **more optimisation**, which is important as this translation needs to be **run for every memory read/write**.
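A minimal sketch of this translation in C, assuming 4 KB pages so the offset occupies the lower 12 bits and, to match the 4-bit example above, a 16-entry page table; the table contents here are made up:

```c
#include <stdint.h>
#include <stdio.h>

/* Split a 16-bit logical address into a 4-bit page number (the left-most
 * bits) and a 12-bit offset. The widths are assumptions chosen to match
 * the example above, not a general rule. */
#define OFFSET_BITS 12
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

/* Hypothetical per-process page table: entry i holds the frame number
 * that page i is mapped to (acting as that page's base register). */
static const uint16_t page_table[16] = { 5, 2, 7, 0 /* ... */ };

uint32_t translate(uint16_t logical)
{
    uint16_t page   = logical >> OFFSET_BITS;   /* left-most 4 bits    */
    uint16_t offset = logical & OFFSET_MASK;    /* unchanged by paging */

    uint32_t frame = page_table[page];          /* frame = f(page)     */
    return (frame << OFFSET_BITS) | offset;     /* physical address    */
}

int main(void)
{
    uint16_t logical = (1u << OFFSET_BITS) | 0x123;  /* page 1, offset 0x123 */
    printf("logical 0x%04x -> physical 0x%05x\n", logical, translate(logical));
    return 0;
}
```

Only the page-number bits change during translation; the offset bits are copied straight through, exactly as the figure above shows.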