13/11/20

## Virtual Memory & Potential Problems

#### Page Replacement

##### Second chance

> * If a page at the front of the list has **not been referenced**, it is **evicted**.
> * If the reference bit is set, the page is **placed at the end** of the list and its reference bit is cleared.
> * This works better than FIFO and is relatively simple.
> * It is **costly to implement**, as the list is constantly changing.
> * It can degrade to FIFO if all pages were initially referenced.

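The list manipulation described above can be sketched as a short simulation. This is an illustrative sketch, not from the notes; the function name, the fault-counting return value, and the choice to load new pages with a clear reference bit are my own assumptions:

```python
from collections import deque

def second_chance(frames, references):
    """Simulate second-chance replacement over a page reference
    string and return the number of page faults (sketch only)."""
    queue = deque()   # FIFO order: front = oldest page
    ref_bit = {}      # page -> reference bit
    faults = 0
    for page in references:
        if page in ref_bit:        # hit: just set the reference bit
            ref_bit[page] = 1
            continue
        faults += 1
        if len(queue) == frames:
            # Referenced pages get a second chance: move to the back
            # with the bit cleared, then re-examine the new front.
            while ref_bit[queue[0]]:
                victim = queue.popleft()
                ref_bit[victim] = 0
                queue.append(victim)
            evicted = queue.popleft()   # front page with bit 0
            del ref_bit[evicted]
        queue.append(page)
        ref_bit[page] = 0   # assumption: new pages start with bit clear
    return faults
```

Note how, when every resident page has its bit set, the loop cycles the whole list once and then evicts the original front page, which is exactly the FIFO degradation mentioned above.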
##### Clock Replacement Algorithm

> The second chance implementation can be improved by **maintaining the page list as a circle**.
>
> * A **pointer** points to the last visited page.
> * In this form the algorithm is called the **one-handed clock**.
> * It is faster, but can still be **slow if the list is long**.
> * The **time spent maintaining** the list is **reduced**, since pages stay in place and only the pointer moves.


##### Not Recently Used (NRU)

> For NRU, **referenced** and **modified** bits are kept in the page table.
>
> * Referenced bits are set to 0 at the start, and **reset periodically**.
>
> There are four different **page classes** in NRU:
>
> 1. Class 0: not referenced recently, not modified
> 2. Class 1: not referenced recently, modified
> 3. Class 2: referenced recently, not modified
> 4. Class 3: referenced recently, modified
>
> **Page table entries** are inspected upon every **page fault**. This could be implemented in the following way:
>
> 1. Find a page from **class 0** to be removed.
> 2. If step 1 fails, scan again looking for **class 1**. During this scan, set the reference bit to 0 on each page that is bypassed.
> 3. If step 2 fails, start again from step 1 (pages from classes 2 and 3 will now have moved to class 0 or 1, because their reference bits were cleared during step 2).
>
> The NRU algorithm provides **reasonable performance** and is easy to understand and implement.

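The four classes map directly onto the two bits. A tiny sketch with hypothetical helper names:

```python
def nru_class(referenced, modified):
    """Class number = 2*R + M, matching classes 0-3 above."""
    return 2 * int(referenced) + int(modified)

def pick_victim(pages):
    """Evict some page from the lowest non-empty class.
    `pages` maps page -> (referenced, modified) bits (sketch only)."""
    return min(pages, key=lambda p: nru_class(*pages[p]))
```

A real implementation would scan the page table as described in the steps above rather than sort all entries, but the class ordering is the same.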
##### Least Recently Used (LRU)

> Least recently used **evicts the page** that has **not been used for the longest** time.
>
> * The OS must keep track of when a page was last used.
> * Every page table entry contains a field for the counter.
> * This is **not cheap to implement**, as we need to maintain a **list of pages** **sorted** in the order in which they have been used.
>
> This algorithm can be **implemented in hardware** using a **counter** that is incremented after each instruction; the counter value is copied into the page table entry on every reference, and the page with the lowest value is evicted.



This looks similar to the FIFO algorithm; however, when a page is used it is treated as if it has just arrived, so it moves to the back of the eviction order.

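That behaviour — a FIFO queue where a hit sends the page to the back — can be sketched with an ordered dictionary, as a software stand-in for the hardware counter (names are illustrative):

```python
from collections import OrderedDict

def lru(frames, references):
    """LRU simulation: every reference moves the page to the
    'most recent' end, so the least recently used page is always
    at the front. Returns the number of page faults (sketch only)."""
    cache = OrderedDict()
    faults = 0
    for page in references:
        if page in cache:
            cache.move_to_end(page)        # used now: as if it just arrived
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)  # evict least recently used
            cache[page] = True
    return faults
```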
### Resident Set

How many pages should be allocated to each process?

* **Small resident sets** allow **more processes to be kept in memory** => improved CPU utilisation.
* **Small resident sets** may result in **more page faults**.
* **Large resident sets** may **no longer reduce** the **page fault rate** (**diminishing returns**).

A trade-off exists between the **sizes of the resident sets** and **system utilisation**.

Resident set sizes may be **fixed** or **variable** (adjusted at run-time).

* For **variable-sized** resident sets, **replacement policies** can be:
  * **Local**: a page of the same process is replaced
  * **Global**: a page can be taken away from a **different process**
* Variable-sized sets require **careful evaluation of their size** when a **local scope** is used (often based on the **working set** or the **page fault rate**)

### Working Set

The **resident set** comprises the set of pages of the process that are in memory (they have a corresponding frame).

The **working set** is the subset of the resident set that is actually needed for execution.

* The **working set** $W(t, k)$ comprises the set of pages referenced in the last $k$ **virtual time units for the process** ($k$ is the working set window).
* $k$ can be defined in terms of **memory references** or **actual process time**:
  * the set of the most recently used pages, or
  * the set of pages used within a pre-specified time interval.
* The **working set size** can be used as a guide for the number of frames that should be allocated to a process.



The working set is a **function of time** $t$:

* Processes **move between localities**; hence, the pages included in the working set **change over time**.
* **Stable** intervals alternate with intervals of **rapid change**.

$|W(t,k)|$ therefore varies with time. Specifically:

$$
1 \le |W(t,k)| \le \min(k, N)
$$

where $N$ is the total number of pages of the process. All the maths is saying is that the size of the working set can be as small as **one** page or as large as **all the pages in the process**.

Choosing the right value for $k$ is important:

* Too **small**: inaccurate, pages are missing
* Too **large**: too many unused pages present
* **Infinity**: all pages of the process are in the working set

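Read directly from the definition, $W(t, k)$ over a reference string can be computed as follows ($k$ is measured here in memory references; the function name is my own):

```python
def working_set(references, t, k):
    """W(t, k): the distinct pages referenced in the last k
    references up to virtual time t (t counts references,
    1-indexed). A literal reading of the definition (sketch)."""
    return set(references[max(0, t - k):t])
```

The bound above holds by construction: a window of $k$ references touches at most $\min(k, N)$ distinct pages, and at least one.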
Working sets can be used to guide the **size of the resident sets**

* Monitor the working set.
* Remove pages from the resident set that are not in the working set.

The working set is costly to maintain => the **page fault frequency (PFF)** can be used as an approximation, adjusting $k$ based on the observed fault rate:

* If the PFF is high -> increase $k$.
* If the PFF is very low -> decrease $k$, freeing frames so that other processes can have more pages.

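The two rules above can be sketched as a simple controller. The thresholds and step size here are invented for illustration; real systems tune these empirically:

```python
def adjust_window(k, pff, high=0.05, low=0.01, step=1):
    """PFF-driven window adjustment: grow k (allocate more frames)
    when faults are frequent, shrink it when they are rare.
    Thresholds are illustrative, not from the notes."""
    if pff > high:        # faulting too often: working set underestimated
        return k + step
    if pff < low:         # hardly faulting: frames can be given back
        return max(1, k - step)
    return k              # fault rate acceptable: leave k alone
```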
#### Global Replacement

> Global replacement policies can select frames from the entire set of frames (they can be taken from other processes).
>
> * Frames are **allocated dynamically** to processes.
> * Processes cannot control their own page fault frequency: the PFF of one process is **influenced by other processes**.

#### Local Replacement

> Local replacement policies can only select frames that are allocated to the current process.
>
> * Every process has a **fixed fraction of memory**.
> * The **locally oldest page** is not necessarily the **globally oldest page**.

Windows uses a variable-sized approach with local replacement. Page replacement algorithms can be combined with either policy.

### Paging Daemon

It is more efficient to **proactively** keep a number of **free frames** available for **future page faults**.

* Otherwise, when a page fault occurs, we may first have to **find a page** to evict and **write it to the drive** (if it has been modified).

Many systems have a background process called a **paging daemon**.

* This process **runs at periodic intervals**.
* It inspects the state of the frames and, if too few are free, it **selects pages to evict** (using a page replacement algorithm).

Paging daemons can be combined with **buffering** (free and modified lists) => write out the modified pages **but keep them in main memory** when possible.

**Buffering**: preemptively writing modified pages to the disk, so that when a page fault occurs we do not lose the time taken to write to disk.

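One pass of such a daemon might look like this sketch. Here `candidates` stands in for whatever page replacement algorithm supplies victims, and `evict` for the write-back step; all names and the watermark idea's exact form are my own assumptions:

```python
def paging_daemon_pass(free_frames, low_watermark, candidates, evict):
    """If too few frames are free, evict pages until the watermark
    is reached. `candidates` yields victim pages (chosen by a page
    replacement algorithm); `evict(page)` writes the page back if
    modified and frees its frame. Returns frames freed (sketch)."""
    freed = 0
    while free_frames + freed < low_watermark:
        evict(next(candidates))   # write to disk if dirty, free frame
        freed += 1
    return freed
```

A real daemon would run this pass periodically in the background, which is exactly why faults can then be served from the free list without waiting on disk writes.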
### Thrashing

Assume **all available frames are in active use** and a new page needs to be loaded:

* The page that is evicted will have to be **reloaded soon afterwards**.

**Thrashing** occurs when pages are **swapped out** and then **loaded back in immediately**.

#### Causes of Thrashing

* The degree of multi-programming is too high, i.e. the total **demand** (the sum of all working set sizes) **exceeds supply** (the available frames).
* An individual process is allocated **too few pages**.

Thrashing can be prevented by **using good page replacement algorithms**, reducing the **degree of multi-programming**, or adding more memory.

The **page fault frequency** can be used to detect that a system is thrashing.


> * CPU utilisation is too low => the scheduler **increases the degree of multi-programming**.
> * Frames are allocated to new processes and taken away from existing processes.
> * I/O requests queue up as a consequence of the resulting page faults, lowering CPU utilisation further.
>
> This is a positive feedback cycle.

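The demand-versus-supply cause above reduces to a one-line check. This is only a sketch of the condition; real kernels use richer heuristics such as fault-rate monitoring:

```python
def thrashing_risk(working_set_sizes, available_frames):
    """True when total demand (the sum of all working set sizes)
    exceeds supply (the available frames) -- the overcommit
    condition described above (sketch only)."""
    return sum(working_set_sizes) > available_frames
```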
When all of this comes together, this is how memory management works in a modern computer.