notes/docs/lectures/osc/06_concurrency3.md
John Gatward, 2026-03-25 11:15:39 +00:00

16/10/20

Mutexes

Mutexes are an approach to mutual exclusion provided by the operating system. A mutex contains a Boolean lock variable that indicates availability.

The lock variable is set to true if the lock is available

Two atomic functions are used to manipulate the mutex

  1. acquire() - called before entering a critical section; sets the Boolean to false.
  2. release() - called after exiting the critical section; sets the Boolean to true again.
acquire() {
	while(!available); // busy wait
	available = false;
}
release() { 
    available = true; 
}

acquire() and release() must be atomic instructions.

  • No interrupts should occur between reading and setting the lock.
  • If interrupts can occur, the following sequence is possible, leaving both threads holding the lock:
T_i => lock available
...                                 T_j => lock available
...                                 T_j sets lock
T_i sets lock                       ...

The process/thread that acquires the lock must release the lock - in contrast to semaphores.

Pros:

  • Context switches can be avoided.
  • Efficient on multi-core systems when locks are held for a short time.

Cons:

  • Calls to acquire() result in busy waiting - very poor performance on single-CPU systems.
  • A thread can waste its entire time slice busy waiting.


Semaphores

Semaphores are an approach for mutual exclusion and process synchronisation provided by the operating system.

  • They contain an integer variable
  • We distinguish between binary (0-1) and counting semaphores (0-N)

Two atomic functions are used to manipulate semaphores:

  1. wait() - called when a resource is acquired; the counter is decremented.
  2. signal() / post() - called when the resource is released; the counter is incremented.

Strictly positive values indicate that the semaphore is available; a negative value's magnitude is the number of processes/threads waiting.

//Definition of a Semaphore
typedef struct {
	int value;
	struct process * list;
} semaphore;
wait(semaphore * S) {
	S->value--;
	if(S->value < 0) {
		add process to S->list
		block(); // system call
    }
}
post(semaphore * S) {
	S->value++;
	if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P); // system call
	}
}
Thread N
...
...
wait(&s)
...
(wakeup)
...
post(&s)

Calling wait() will block the process when the internal counter is negative (no busy waiting)

  1. The process joins the blocked queue
  2. The process/thread state is changed from running to blocked
  3. Control is transferred to the process scheduler.

Calling post() removes a process/thread from the blocked queue if the counter is less than or equal to 0.

  1. The process/thread state is changed from blocked to ready.
  2. Different queuing strategies can be employed to remove processes/threads, e.g. FIFO.

The negative value of the semaphore is the number of processes waiting for the resource.

block() and wakeup() are system calls provided by the OS.

post() and wait() must be atomic.

This can be achieved through the use of mutexes (or by disabling interrupts on a single-CPU system).

Busy waiting is moved from the critical section to wait() and post() (which are short anyway - the original critical sections themselves are usually much longer)

We must lock accesses to the value variable.

post(semaphore * S) {
    // lock the mutex
	S->value++;
	if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P); // system call
	}
    //unlock mutex
}

Semaphores put your code to sleep. Mutexes apply busy waiting to user code.

Semaphores within the same process can be declared as global variables of the type sem_t

  • sem_init() - initialises the value of the semaphore.
  • sem_wait() - decrements the value of the semaphore.
  • sem_post() - increments the values of the semaphore.


Synchronising code does result in a performance penalty

  • Synchronise only when necessary

  • Synchronise as few instructions as possible (synchronising unnecessary instructions will delay others from entering their critical section)

void * calc(void * increments) {
    int i, temp = 0;
    // unsynchronised work on a local variable
    for(i = 0; i < *((int*) increments); i++) {
    	temp++;
    }
    // synchronise only the update of the shared variable sum
    sem_wait(&s);
    sum += temp;
    sem_post(&s);
    return NULL;
}

Figure: optimised calc function

Starvation

Poorly designed queueing approaches (e.g. LIFO) may result in fairness violations.

Deadlock

Two or more processes/threads are waiting indefinitely for an event that can be caused only by one of the waiting processes or threads.

Priority Inversion

Priority inversion happens when a high-priority process (H) has to wait for a resource currently held by a low-priority process (L).

Priority inversion can happen in chains, e.g. H waits for L to release a resource while L is preempted by a medium-priority process M.

This can be avoided by implementing priority inheritance to boost L to H's priority.

The Producer and Consumer Problem

  • Producer(s) and consumer(s) share N buffers (an array), each capable of holding one item, like a printer queue.
    • The buffer can be of bounded (size N) or unbounded size.
    • There can be one or multiple consumers and/or producers.
  • The producer(s) add items and go to sleep if the buffer is full (only for a bounded buffer).
  • The consumer(s) remove items and go to sleep if the buffer is empty.

The simplest version of this problem has one producer, one consumer and a buffer of unbounded size.

  • A counter (index) variable keeps track of the number of items in the buffer.
  • It uses two binary semaphores:
    • sync synchronises access to the buffer (counter) which is initialised to 1.
    • delay_consumer ensures that the consumer goes to sleep when there are no items available, initialised to 0.


It is obvious that any manipulation of the counter (items) will have to be synchronised.

Race conditions will still exist:

When the consumer has exhausted the buffer (items == 0) it should go to sleep, but the producer increments items before the consumer checks it:

  • Consumer has removed the last element
  • The producer adds a new element
  • The consumer should have gone to sleep but no longer will
  • The consumer consumes non-existing elements

Solutions:

  • Move the consumer's if statement inside the critical section.

Producers and Consumers Problem

The previous code (one consumer, one producer) is made to work by storing the value of items.

A different variant of the problem has n consumers and m producers and a fixed buffer size N. The solution is based on 3 semaphores:

  1. sync - used to enforce mutual exclusion for the buffer

  2. empty - keeps track of the number of empty buffers, initialised to N

  3. full - keeps track of the number of full buffers initialised to 0.

empty and full are counting semaphores and represent resources.
