09/10/20

Multi-level scheduling algorithms

Nothing stops us from using a different scheduling algorithm for the queue at each priority level.

  • Feedback queues allow priorities to change dynamically, i.e. jobs can move between queues:
    1. Move to a lower priority queue if too much CPU time is used.
    2. Move to a higher priority queue to prevent starvation and avoid priority inversion.

Exam 2013: Explain how you would prevent starvation in a priority queue algorithm?

(figure: priority inversion, thread A holds resource X needed by higher-priority threads B and C)

The solution is to temporarily boost thread A's priority level; this lets A do what it wants to do and release resource X, so that B and C can run.

Priority boosting helps avoid priority inversion.
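A common answer to the exam question above is aging: tasks that have waited too long are gradually promoted until they are guaranteed to run. A minimal Python sketch (field names and values are illustrative, not any real kernel's API):

```python
def age_priorities(ready, boost=1, ceiling=0):
    """Aging sketch: on each scheduling tick, raise the priority of
    every waiting task a little. Here a LOWER number means a HIGHER
    priority, so aging decreases the value towards the ceiling;
    eventually every waiting task reaches the top and cannot starve."""
    for task in ready:
        task["priority"] = max(ceiling, task["priority"] - boost)
```

Calling this once per tick means a low-priority task's wait time is bounded: after at most `priority / boost` ticks it sits at the highest level.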

Defining characteristics of feedback queues

  1. The number of queues
  2. The scheduling algorithms used for individual queues
  3. Migration policy between queues
  4. Initial access to the queues

Feedback queues are highly configurable and offer significant flexibility.
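The four characteristics above can be made concrete with a small simulation. This is a sketch of a multi-level feedback queue, not any particular OS implementation; the quanta, job fields, and demotion rule are illustrative assumptions:

```python
from collections import deque

class MLFQ:
    """Multi-level feedback queue sketch.

    Level 0 is the highest priority. A job that uses its whole time
    slice is assumed to be CPU-bound and is demoted one level
    (the migration policy); new jobs enter the top queue
    (the initial access policy); each queue is FIFO with its own
    quantum (the per-queue scheduling algorithm)."""

    def __init__(self, quanta=(2, 4, 8)):
        self.quanta = quanta                      # time slice per level
        self.queues = [deque() for _ in quanta]   # one FIFO queue per level

    def add(self, job):
        self.queues[0].append(job)                # new jobs start at the top

    def step(self):
        """Run the front job of the highest non-empty queue for one slice."""
        for level, q in enumerate(self.queues):
            if q:
                job = q.popleft()
                used = min(job["remaining"], self.quanta[level])
                job["remaining"] -= used
                if job["remaining"] > 0:
                    # used the whole slice: demote (bounded by the last level)
                    self.queues[min(level + 1, len(self.queues) - 1)].append(job)
                return job["name"], level, used
        return None                               # nothing left to run
```

A 5-unit job first runs for the level-0 quantum of 2, is demoted, then finishes its remaining 3 units within the level-1 quantum of 4.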

Windows 7

An interactive system using a pre-emptive scheduler with dynamic priority levels.

There are two priority classes, each covering 16 priority levels.

  1. Real time processes/threads have a fixed priority level. (These are the most important)
  2. Variable processes/threads can have their priorities boosted temporarily.

Round robin is used within each queue.

(figure: Windows 7 priority classes and levels)

If a couple of threads are given the highest priority level, the machine can freeze, because lower priority threads are starved.

Scheduling in Linux

Process scheduling has evolved over different versions of Linux to account for multiple processors/cores, processor affinity, and load balancing between cores.

Linux distinguishes between two types of tasks for scheduling:

  1. Real time tasks (to be POSIX compliant)
    1. Real time FIFO tasks
    2. Real time Round Robin tasks
  2. Time sharing tasks using a pre-emptive approach (similar to variable in Windows)

The most recent scheduling algorithm in Linux for time-sharing tasks is the Completely Fair Scheduler (CFS).

Real-time FIFO tasks have the highest priority and are scheduled with an FCFS approach, with pre-emption if a higher-priority job shows up.

Real time round robin tasks are preemptable by clock interrupts and have a time slice associated with them.

Neither approach can guarantee hard deadlines.

Time sharing tasks

The CFS (completely fair scheduler) divides the CPU time between all processes and threads.

If all N processes/threads have the same priority, each is allocated a time slice equal to 1/N of the available CPU time.

The length of the time slice and the available CPU time are based on the targeted latency (every process/thread should run at least once within this interval).

If N is very large, the context switch time becomes dominant; hence a lower bound on the time slice is imposed by the minimum granularity.

A process/thread's time slice can be no less than the minimum granularity.
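The slice computation above is just a division with a floor. A sketch in Python, with illustrative millisecond values for the targeted latency and minimum granularity (the real kernel defaults differ and are tunable):

```python
def cfs_timeslice(n_tasks, targeted_latency=20.0, min_granularity=3.0):
    """Equal-priority CFS time slice sketch (values in ms, illustrative).

    Each of N tasks gets targeted_latency / N, but never less than
    the minimum granularity, so context switching cannot dominate."""
    return max(targeted_latency / n_tasks, min_granularity)
```

With 4 tasks each gets 20/4 = 5 ms; with 100 tasks the 0.2 ms fair share is floored at the 3 ms minimum granularity, so the targeted latency is no longer met.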

A weighting scheme is used to take different priorities into account.

(figure: CFS weighting of CPU time by priority)

The tasks with the lowest proportional amount of "used CPU time" are selected first (shorter tasks are picked first if the weights Wi are equal).
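That selection rule can be written directly as a minimum over a weighted key. This is a sketch of the idea only; the field names are illustrative and do not correspond to the kernel's red-black-tree bookkeeping:

```python
def pick_next(tasks):
    """CFS pick-next sketch: choose the task whose used CPU time,
    divided by its weight Wi, is smallest; if those ratios tie,
    the task with less remaining work is preferred."""
    return min(tasks, key=lambda t: (t["used"] / t["weight"], t["remaining"]))
```

A heavier weight makes a task's used time count for less, so higher-priority tasks are selected more often, which is exactly the proportional sharing described above.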

Shared Queues

A single or multi-level queue is shared between all CPUs.

| Pros | Cons |
| --- | --- |
| Automatic load balancing | Contention for the queues (locking is needed) |
| | Cache becomes invalid when moving to a different CPU |

Windows will allocate the highest priority threads to the individual CPUs/cores.
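The contention cost in the table above comes from the fact that every CPU must take the same lock to pull work. A minimal Python sketch of a shared run queue (class and method names are illustrative):

```python
import threading
from collections import deque

class SharedRunQueue:
    """Sketch of one run queue shared by all CPUs: any CPU may pull
    the next task (automatic load balancing), but every enqueue and
    dequeue must take the lock, which is the contention cost."""

    def __init__(self):
        self._lock = threading.Lock()
        self._q = deque()

    def enqueue(self, task):
        with self._lock:
            self._q.append(task)

    def dequeue(self):
        with self._lock:
            return self._q.popleft() if self._q else None
```

With private per-CPU queues the lock would rarely be contended, which is the trade-off the next section describes.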

Private Queues

Each CPU has a private set of queues.

| Pros | Cons |
| --- | --- |
| CPU affinity is automatically satisfied | Less load balancing (one queue could have 1000 tasks while another has 4) |
| Contention for a shared queue is minimised | |

Related vs. Unrelated threads

Related: multiple threads that communicate with one another and ideally run together.

Unrelated: processes/threads that are independent, possibly started by different users running different programs.

(figure: related vs. unrelated threads across CPUs)

Related threads belong to the same process and cooperate, e.g. they exchange messages or share information.

The aim is to have related threads running at the same time, across multiple CPUs, as much as possible.

Space Sharing

N threads are allocated to N dedicated CPUs

N threads are kept waiting until N CPUs are available

Non pre-emptive, i.e. blocking calls result in idle CPUs.

N can be dynamically adjusted to match processor capacity.

Gang Scheduling

Time slices are synchronised and the scheduler groups threads together to run simultaneously

A pre-emptive algorithm

Blocking threads result in idle CPUs (if a thread blocks, the rest of the time slice is unused due to the time slice synchronisation across all CPUs).
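Gang scheduling can be pictured as a table of synchronised time slices, one row per slice and one column per CPU, with each slice devoted to a single gang. A sketch (all names illustrative; gangs larger than the CPU count would need extra slices, which this sketch omits):

```python
def gang_schedule(gangs, n_cpus):
    """Gang-scheduling sketch: each synchronised time slice runs the
    threads of one gang together, one thread per CPU; CPUs without a
    thread from that gang sit idle for the whole slice."""
    slices = []
    for gang in gangs:
        # pad the gang's threads with idle slots up to the CPU count
        slices.append((gang + ["idle"] * n_cpus)[:n_cpus])
    return slices
```

The idle slots make the cost visible: a two-thread gang on three CPUs wastes a third of its slice, and a blocked thread would waste its slot in the same way.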