08/10/20
---
A process consists of two **fundamental** units
1. Resources
- A logical address space containing the process image (program, data, heap, stack)
- Files, I/O devices, I/O channels
2. Execution trace, i.e. an entity that gets executed
A process can share its resources between multiple execution traces, e.g. multiple threads running in the same resource environment.
![Image](/lectures/osc/assets/9.png)
Every thread has its own *execution context* (e.g. program counter, stack, registers).
All threads have **access** to the process' **shared resources**
>e.g. Files; if one thread opens a file then all threads have access to it
>
>Same with global variables, memory etc
Similar to processes, threads have:
**States**, **transitions** and a **thread control block**
![Image](/lectures/osc/assets/a.png)
The *registers*, *stack* and *state* are all specific to each thread. When a context switch occurs they must be stored in the **thread control block**.
Threads incur less overhead to create/terminate/switch than processes do. This is because the address space remains the same for threads of the same process.
>When switching from thread A to thread B, the computer doesn't need to worry about updating the memory management unit as they're using the same memory layout.
>
>This makes switching threads very quick
Some CPUs have direct **hardware support** for **multi-threading**.
>With hyper threading and multi-threading, the thread's execution context isn't saved to the thread control block. Instead the CPU stops using one thread and starts using another.
>
>This decreases overhead as the execution context doesn't need to be saved and reloaded.
1. **Inter-thread communication** is easier and faster than **inter-process** communication (threads share memory by default)
2. **No protection boundaries** are required in the address space (threads are cooperating, they belong to the same user and have the same goal)
3. Synchronisation has to be considered carefully.
If you opened Word and Excel, you wouldn't want them running as threads of one process, because Word shouldn't have access to the memory Excel is using. However, with just Word open, the spell checker and graphics libraries can all run as threads since they work towards a common goal.
### Why use threads
1. Multiple **related activities** apply to the **same resources**; these resources should be accessible to all of them.
2. Processes often contain multiple **blocking tasks**
1. I/O operations (the thread blocks, an interrupt marks completion)
2. Memory access: page faults result in blocking
Such activities should be carried out in parallel on threads, e.g. web servers, word processors, processing large data volumes, etc.
**User** threads - exist entirely in user space; the OS doesn't need to do anything.
>**Thread management** (creating, destroying, scheduling, thread control block manipulation) is carried out in user space with the help of a user library.
>
>The process maintains a thread table managed by the run-time system without the kernel's knowledge (similar to a process table and used for thread switching)
**Kernel** threads - the OS creates a thread on the user's behalf and hands it to the user.
**Hybrid** implementations - a mix of both; this is what is used in Windows 10
![Image](/lectures/osc/assets/d.png)
**Pros and cons of user threads**
| Pros | Cons |
| ----------- | ----------- |
| Threads in user space don't require mode switches | Blocking system calls suspend all running threads |
| Full control over the thread scheduler | No true parallelism (the process is still scheduled on a single CPU) |
| OS independent | Clock interrupts (user threads are non-preemptive) |
| - | Page faults result in blocking the process|
User threads all run behind the same memory management unit state, so if one thread accesses memory that isn't loaded in the MMU, the resulting page fault blocks the entire process, and page faults occur often.
**Kernel Threads**
The kernel manages the threads; the user application accesses threading facilities through an **API** and **system calls**
>The **thread table** is in the kernel, containing the thread control blocks.
>
>If a thread blocks, the kernel chooses a thread from the same or different process.
Advantages:
>**True parallelism** can be achieved
>No run time system needed
However, frequent **mode switches** take place, resulting in lower performance.
![Image](/lectures/osc/assets/e.png)
Kernel threads are slower to create and synchronise than user-level threads; however, user-level threads cannot exploit parallelism.
**Hybrid Implementation**
>User threads are **multiplexed** onto kernel threads
>
>Kernel sees and schedules the kernel threads
>
>User application sees user threads and creates/schedules these (an unrestricted number)
![Image](/lectures/osc/assets/f.png)
Thread libraries provide an API for managing threads
Thread libraries can be implemented:
>Entirely in user space (user threads)
>
>Based on system calls (relying on the kernel)
Examples of thread APIs include **POSIX PThreads**, Windows threads and Java threads
`pthread_create` - Create new thread
`pthread_exit` - Exit existing thread
`pthread_join` - Wait for thread with ID
`pthread_yield` - Release CPU
`pthread_attr_init` - Thread Attributes (e.g. priority)
`pthread_attr_destroy` - Release Attributes
`$ man pthread_create` displays the manual page
![img](/lectures/osc/assets/g.png)
```
HELLO from thread 10
HELLO from thread 10
HELLO from thread 10
etc
```
This is because by the time the thread is created `i` has already iterated to 10.
You cannot guarantee the first thread you create will be the first to run.