Chapter 5: Process Synchronization
 Background
 The Critical-Section Problem
 Peterson’s Solution
 Hardware-based solutions
 Semaphore-based solutions
 Classic Problems of Synchronization
 Monitors – a language-level solution
 Synchronization Examples
Background
 Concurrent access to shared data may result in data inconsistency.
 Maintaining data consistency requires mechanisms to ensure the
orderly execution of cooperating processes.
 Bounded-buffer problem:
   - There are N buffers.
   - Two types of processes, namely producers and consumers, have access to the N buffers.
   - Producers produce items and place the items in empty buffers.
   - Consumers consume the items placed in the buffers.
   - Producers and consumers need to be synchronized to access the shared resource (the buffers).
An Incorrect Solution to the Bounded-Buffer Problem
 Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0;
An Incorrect Solution to …
 A Producer process
item nextProduced;
while (1) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing: buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
An Incorrect Solution to …
 A Consumer process
item nextConsumed;
while (1) {
    while (counter == 0)
        ;   /* do nothing: buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextConsumed */
}
An Incorrect Solution to …
 The statements
counter++;
counter--;
must be performed atomically.
 Atomic operation means an operation that completes in its entirety
without interruption.
An Incorrect Solution to …
 The statement “counter++” may be implemented in machine
language as:
register1 = counter
register1 = register1 + 1
counter = register1
 The statement “counter--” may be implemented as:
register2 = counter
register2 = register2 – 1
counter = register2
An Incorrect Solution to …
 If both the producer and consumer attempt to update the buffer
concurrently, the assembly language statements may get
interleaved.
 Interleaving depends upon how the producer and consumer
processes are scheduled.
An Incorrect Solution to …
 Assume counter is initially 5. One interleaving of statements is:
producer: register1 = counter (register1 = 5)
producer: register1 = register1 + 1 (register1 = 6)
consumer: register2 = counter (register2 = 5)
consumer: register2 = register2 – 1 (register2 = 4)
producer: counter = register1 (counter = 6)
consumer: counter = register2 (counter = 4)
 The value of counter may be either 4 or 6, whereas the correct result should be 5.
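This lost update is easy to reproduce on a real machine. The sketch below is illustrative and not from the slides: two POSIX threads play the roles of producer and consumer, incrementing and decrementing an unprotected shared counter, and the final value is usually not the expected 0. Compile with gcc -pthread.

    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 1000000

    int counter = 0;                     /* shared and intentionally unprotected */

    void *increment(void *arg) {
        for (int i = 0; i < ITERATIONS; i++)
            counter++;                   /* load, add, store: not atomic */
        return NULL;
    }

    void *decrement(void *arg) {
        for (int i = 0; i < ITERATIONS; i++)
            counter--;
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, increment, NULL);
        pthread_create(&c, NULL, decrement, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        printf("counter = %d (expected 0)\n", counter);   /* usually nonzero */
        return 0;
    }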
Race Condition
 Race condition: The situation where several processes access
and manipulate shared data concurrently. The final value of the
shared data depends upon which process finishes last.
 To guard against inconsistencies that would arise due to race
conditions, concurrent processes must be synchronized.
The Critical-Section Problem
 n processes all competing to use some shared data
 Each process has a code segment, called critical section, in
which the shared data is accessed.
 The critical-section problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
Requirements of a Solution to the Critical-Section Problem
1. Mutual Exclusion. If process Pi is executing in its critical section,
then no other processes can be executing in their critical sections.
2. Progress. If no process is executing in its critical section and
there exist some processes that wish to enter their critical section,
then the selection of the process that will enter the critical section
next cannot be postponed indefinitely.
3. Bounded Waiting. A bound must exist on the number of times
other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before
that request is granted.
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the n processes.
Support for Solving the Critical-Section Problem
 No support: it is the programmer's responsibility to ensure mutually exclusive access to shared resources when several processes are allowed to access a resource concurrently.
 Hardware support: some hardware instructions are provided to support the programmer (e.g., the TestAndSet and Swap instructions).
 Operating system support: the operating system supports the declaration of data structures and operations on those data structures (e.g., semaphores).
 High-level language support: the language provides data structures and operations on them to help with synchronization (e.g., critical regions, monitors, serializers, etc.).
Initial Attempts to Solve the Problem Without Any Support
 Only 2 processes, P0 and P1
 General structure of process Pi (the other process is Pj, where j = 1 - i):
do {
entry section
critical section
exit section
remainder section
} while (1);
 Processes may share some common variables to
synchronize their actions.
Algorithm 1
 Shared variables:
   int turn;    // initially turn = 0
   (turn == i) ⇒ Pi can enter its critical section
 Process Pi:
   do {
       while (turn != i)
           ;    // loop while it is not my turn
       critical section
       turn = j;
       remainder section
   } while (1);
 Satisfies mutual exclusion, but not the progress requirement, because it requires strict alternation of the two processes.
Algorithm 2
 Shared variables:
   boolean flag[2];    // initially flag[0] = flag[1] = false
   flag[i] = true ⇒ Pi is ready to enter its critical section
 Process Pi:
   do {
       flag[i] = true;
       while (flag[j])
           ;
       critical section
       flag[i] = false;
       remainder section
   } while (1);
 Satisfies mutual exclusion, but still does not satisfy the progress requirement: if flag[i] == true and flag[j] == true, both processes wait forever and neither enters its critical section.
Algorithm 3 (Peterson’s Solution)
 Combines the ideas of Algorithms 1 and 2.
 Process Pi:
   do {
       flag[i] = true;
       turn = j;
       while (flag[j] && turn == j)
           ;
       critical section
       flag[i] = false;
       remainder section
   } while (1);
 Meets all three requirements; solves the critical-section problem
for two processes.
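On modern hardware, a literal C translation of Peterson's algorithm can fail because compilers and CPUs may reorder ordinary loads and stores. The sketch below goes beyond the slides: it uses C11 sequentially consistent atomics and POSIX threads, and the names (enter_region, leave_region, worker) are mine. Compile with gcc -pthread.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    atomic_bool flag[2];
    atomic_int  turn;
    int shared_counter = 0;              /* protected by Peterson's algorithm */

    void enter_region(int i) {
        int j = 1 - i;
        atomic_store(&flag[i], true);    /* I am ready to enter          */
        atomic_store(&turn, j);          /* but I let the other go first */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                            /* busy wait                    */
    }

    void leave_region(int i) {
        atomic_store(&flag[i], false);
    }

    void *worker(void *arg) {
        int i = (int)(long)arg;
        for (int k = 0; k < 1000000; k++) {
            enter_region(i);
            shared_counter++;            /* critical section */
            leave_region(i);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, worker, (void *)0L);
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %d (expected 2000000)\n", shared_counter);
        return 0;
    }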
Synchronization Hardware
 Many systems provide hardware support for synchronizing
critical section code
 Uniprocessors – could disable interrupts
 Currently running code would execute without
preemption
 Generally too inefficient on multiprocessor systems
 Modern machines provide special atomic hardware
instructions
 Atomic = non-interruptible
 Either test memory word and set a value
 Or swap contents of two memory words
TestAndSet Instruction
 Definition:
boolean TestAndSet (boolean *target)
{
boolean rv = *target;
*target = TRUE;
return rv;
}
Solution using TestAndSet
 Shared boolean variable lock, initialized to FALSE.
 Solution:
   do {
       while (TestAndSet(&lock))
           ;    /* do nothing */
       /* critical section */
       lock = FALSE;
       /* remainder section */
   } while (TRUE);
Note: this solution ensures mutual exclusion and progress, but does not meet the bounded-waiting requirement.
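A close user-space counterpart of TestAndSet is C11's atomic_flag_test_and_set. The spinlock sketch below is an illustration (the names acquire and release are mine), not part of the slides:

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;    /* clear = unlocked */

    void acquire(void) {
        while (atomic_flag_test_and_set(&lock))
            ;                               /* spin until the previous value was clear */
    }

    void release(void) {
        atomic_flag_clear(&lock);
    }

    /* usage:
     *     acquire();
     *     ... critical section ...
     *     release();
     */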
Swap Instruction
 Definition:
void Swap (boolean *a, boolean *b)
{
boolean temp = *a;
*a = *b;
*b = temp;
}
Solution using Swap
 Shared Boolean variable lock, initialized to FALSE; each process has a local Boolean variable key.
 Solution:
   do {
       key = TRUE;
       while (key == TRUE)
           Swap(&lock, &key);
       /* critical section */
       lock = FALSE;
       /* remainder section */
   } while (TRUE);
Note: this solution also does not meet the bounded-waiting requirement.
A Solution Satisfying All Three Criteria
 Shared data (all initialized to false):
   boolean lock;
   boolean waiting[n];    // n is the total number of processes
 Process Pi (key and j are local variables):
   do {
       waiting[i] = true;
       key = true;
       while (waiting[i] && key)
           key = TestAndSet(&lock);
       waiting[i] = false;
       critical section
       j = (i + 1) % n;
       while ((j != i) && !waiting[j])
           j = (j + 1) % n;
       if (j == i)                // no process is waiting for access to the CS
           lock = false;
       else
           waiting[j] = false;    // signal a waiting process
       remainder section
   } while (1);
Semaphores
 All of the above solutions require busy waiting (spin locks)
 Goal : To design and implement a synchronization tool that does
not require busy waiting.
 Semaphore S – an integer variable
 A semaphore can only be accessed via two indivisible (atomic)
operations
   wait(S):
       while (S <= 0)
           ;    // no-op (busy wait)
       S--;
   signal(S):
       S++;
Solution to the Critical-Section Problem Using a Semaphore
 Shared data:
   semaphore mutex;    // initially mutex = 1
 Process Pi:
do {
wait(mutex);
critical section
signal(mutex);
remainder section
} while (1);
This still does not satisfy the bounded-waiting property. Another problem with all of the solutions described so far is that a process trying to enter its critical section has to busy-wait in a loop, which wastes CPU cycles.
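In practice, POSIX exposes counting semaphores through <semaphore.h>. The sketch below is illustrative (the worker function and thread setup are mine); a semaphore initialized to 1 plays the role of mutex:

    #include <pthread.h>
    #include <semaphore.h>

    sem_t mutex;                       /* binary semaphore used as a lock */

    void *worker(void *arg) {
        sem_wait(&mutex);              /* wait(mutex)   */
        /* ... critical section ... */
        sem_post(&mutex);              /* signal(mutex) */
        /* ... remainder section ... */
        return NULL;
    }

    int main(void) {
        sem_init(&mutex, 0, 1);        /* shared between threads, initial value 1 */
        /* create and join worker threads here */
        sem_destroy(&mutex);
        return 0;
    }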
A Semaphore Implementation that Prevents Busy-Waiting
 New definition of a semaphore: define it as a record
   typedef struct {
       int value;
       struct process *L;    // list of processes waiting on the semaphore
   } semaphore;
 Assume two simple operations:
   - block() suspends the process that invokes it.
   - wakeup(P) resumes the execution of a blocked process P.
A Semaphore Implementation
 Semaphore operations are now defined as follows (recall that these operations are atomic):
   wait(S):
       S.value--;
       if (S.value < 0) {
           add this process to S.L;
           block();
       }
   signal(S):
       S.value++;
       if (S.value <= 0) {
           remove a process P from S.L;
           wakeup(P);
       }
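A user-level version of this blocking semaphore can be sketched with a POSIX mutex and condition variable. This is an illustration with names of my own choosing; unlike the record above, value never goes negative, so the waiters are tracked by the condition-variable queue rather than by a negative count:

    #include <pthread.h>

    typedef struct {
        int value;
        pthread_mutex_t lock;
        pthread_cond_t  nonzero;       /* signalled when value becomes positive */
    } csem_t;

    void csem_init(csem_t *s, int value) {
        s->value = value;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->nonzero, NULL);
    }

    void csem_wait(csem_t *s) {
        pthread_mutex_lock(&s->lock);
        while (s->value <= 0)                        /* block instead of spinning */
            pthread_cond_wait(&s->nonzero, &s->lock);
        s->value--;
        pthread_mutex_unlock(&s->lock);
    }

    void csem_signal(csem_t *s) {
        pthread_mutex_lock(&s->lock);
        s->value++;
        pthread_cond_signal(&s->nonzero);            /* wake up one waiter, if any */
        pthread_mutex_unlock(&s->lock);
    }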
Semaphore as a General Synchronization Tool
 To execute statement B in Pj only after statement A is executed in Pi:
   - Use a semaphore flag with flag.value initialized to 0.
   - Code:
         Pi:                  Pj:
             A                    wait(flag);
             signal(flag);        B
Deadlock and Starvation
 Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
 Example: let S and Q be two semaphores initialized to 1.
       P0:                  P1:
           wait(S);             wait(Q);
           wait(Q);             wait(S);
           ...                  ...
           signal(S);           signal(Q);
           signal(Q);           signal(S);
 Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
Priority inversion
 What happens when a higher-priority process needs to read or modify kernel data that are currently being accessed by a lower-priority process, and the lower-priority process is preempted in favor of the higher-priority one? The higher-priority process is forced to wait for the lower-priority one to finish with the data; this is priority inversion, and it happens often in real-time systems.
 Solution (priority-inheritance protocol): all processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources in question.
 This protocol was designed by L. Sha, R. Rajkumar and J. P. Lehoczky.
 See the related story about NASA's Mars Pathfinder probe (1997), which landed a robot, the Sojourner rover, on Mars and ran into exactly this kind of software bug. Information about it can be found at: http://users.ece.cmu.edu/~raj/mars.html or at
http://research.microsoft.com/en-us/um/people/mbj/Mars_Pathfinder/Mars_Pathfinder.html
Classical Problems of Synchronization
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem
Bounded-Buffer Problem
 Shared data
semaphore full, empty, mutex;
Initially:
full.value = 0, empty.value = n, mutex.value = 1
Bounded-Buffer Problem - Producer Process
Producer Process:
do {
…
produce an item in nextp
…
wait(empty);
wait(mutex);
…
add nextp to buffer
…
signal(mutex);
signal(full);
} while (1);
Bounded-Buffer Problem - Consumer Process
Consumer Process:
do {
wait(full);
wait(mutex);
…
remove an item from buffer to nextc
…
signal(mutex);
signal(empty);
…
consume the item in nextc
…
} while (1);
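For a runnable counterpart, below is a sketch of the same scheme using POSIX semaphores and two threads. It is illustrative rather than the book's code: the buffer holds ints, and BUFFER_SIZE, ITEMS and the print statement are arbitrary choices. Compile with gcc -pthread.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define BUFFER_SIZE 10
    #define ITEMS       100

    int buffer[BUFFER_SIZE];
    int in = 0, out = 0;

    sem_t empty;     /* counts empty slots, initially BUFFER_SIZE */
    sem_t full;      /* counts filled slots, initially 0          */
    sem_t mutex;     /* protects buffer, in, out; initially 1     */

    void *producer(void *arg) {
        for (int i = 0; i < ITEMS; i++) {
            int item = i;                    /* "produce an item in nextp" */
            sem_wait(&empty);
            sem_wait(&mutex);
            buffer[in] = item;               /* "add nextp to buffer" */
            in = (in + 1) % BUFFER_SIZE;
            sem_post(&mutex);
            sem_post(&full);
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&full);
            sem_wait(&mutex);
            int item = buffer[out];          /* "remove an item from buffer to nextc" */
            out = (out + 1) % BUFFER_SIZE;
            sem_post(&mutex);
            sem_post(&empty);
            printf("consumed %d\n", item);   /* "consume the item in nextc" */
        }
        return NULL;
    }

    int main(void) {
        sem_init(&empty, 0, BUFFER_SIZE);
        sem_init(&full, 0, 0);
        sem_init(&mutex, 0, 1);

        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }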
Readers-Writers Problem
 Shared data:
   int readcount;
   semaphore mutex;    // to synchronize access to readcount
   semaphore wrt;      // to ensure mutual exclusion among writers
 Initially: mutex.value = 1, wrt.value = 1, readcount = 0
 Writer process:
   wait(wrt);
       ...
       writing is performed
       ...
   signal(wrt);
Readers-Writers Problem - Reader Process
Reader process:
   wait(mutex);           // synchronize access to readcount
   readcount++;
   if (readcount == 1)    // the first reader waits until the current writer (if any) finishes,
       wait(wrt);         // and thereby blocks all incoming writers
   signal(mutex);
       ...
       reading is performed
       ...
   wait(mutex);
   readcount--;
   if (readcount == 0)    // if there are no more readers,
       signal(wrt);       // wake up a waiting writer
   signal(mutex);
A problem with this solution: writers may starve. Try to modify the solution to prevent starvation.
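POSIX threads provide this pattern directly as a readers-writer lock. The sketch below is illustrative (reader and writer are my own function names); note that whether waiting writers are preferred over new readers is left to the implementation, so the starvation question above does not disappear automatically:

    #include <pthread.h>

    pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
    int shared_data;

    void *reader(void *arg) {
        pthread_rwlock_rdlock(&rw);    /* many readers may hold the lock at once */
        int copy = shared_data;        /* reading is performed */
        (void)copy;
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    void *writer(void *arg) {
        pthread_rwlock_wrlock(&rw);    /* exclusive access */
        shared_data++;                 /* writing is performed */
        pthread_rwlock_unlock(&rw);
        return NULL;
    }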
Dining-Philosophers Problem
 Shared data
semaphore chopstick[5];
Initially, chopstick[i].value =1, for i=0,1,2,3,4.
Dining-Philosophers Problem
 Philosopher i:
   do {
       wait(chopstick[i]);
       wait(chopstick[(i+1) % 5]);
           ...
           eat
           ...
       signal(chopstick[i]);
       signal(chopstick[(i+1) % 5]);
           ...
           think
           ...
   } while (1);
 Problem with this solution: deadlock. If all five philosophers pick up their left chopstick at the same time, each waits forever for the right one; a deadlock-free variant is sketched below.
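One standard fix, shown here as an illustrative sketch rather than the book's solution, is to impose a global ordering on the chopsticks: every philosopher always picks up the lower-numbered chopstick first, so a circular wait cannot form.

    #include <semaphore.h>

    #define N 5
    sem_t chopstick[N];                 /* each initialized to 1, e.g. sem_init(&chopstick[i], 0, 1) */

    void philosopher(int i) {
        int left   = i;
        int right  = (i + 1) % N;
        int first  = left < right ? left : right;    /* lower-numbered chopstick first   */
        int second = left < right ? right : left;    /* higher-numbered chopstick second */
        for (;;) {
            sem_wait(&chopstick[first]);
            sem_wait(&chopstick[second]);
            /* eat */
            sem_post(&chopstick[second]);
            sem_post(&chopstick[first]);
            /* think */
        }
    }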
Monitors (Language-Level Support)
 A high-level synchronization construct that allows the safe sharing of an abstract data type among concurrent processes.
   monitor monitor-name
   {
       shared variable declarations

       procedure body P1 (…) {
           ...
       }
       procedure body P2 (…) {
           ...
       }
       procedure body Pn (…) {
           ...
       }

       {
           initialization code
       }
   }
Semantics of Monitors
 Only one process can be active inside the monitor at a time; i.e., when one process is executing a procedure of the monitor, no other process can be executing any procedure of that monitor.
 To allow a process to wait within the monitor, condition variables must be declared, as in
   condition x, y;
 A condition variable can only be used with the operations wait, signal and queue.
   - x.wait() suspends the process invoking it and places it in the queue associated with x, until another process invokes x.signal().
   - x.signal() resumes exactly one suspended process in the queue associated with x. If there is no process in the queue associated with x, the signal operation has no effect.
   - x.queue() returns true or false depending on whether a process is waiting in the queue associated with x. It has no other effect on x.
Schematic View of a Monitor
Monitor With Condition Variables
Dining Philosophers Example
monitor dp
{
    enum {thinking, hungry, eating} state[5];
    condition self[5];

    void pickup(int i)     // defined on the following slides
    void putdown(int i)    // defined on the following slides
    void test(int i)       // defined on the following slides

    void init() {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
}
Dining Philosophers
void pickup(int i) {
    state[i] = hungry;
    test(i);
    if (state[i] != eating)
        self[i].wait();
}

void putdown(int i) {
    state[i] = thinking;
    // test left and right neighbors
    test((i + 4) % 5);
    test((i + 1) % 5);
}
Dining Philosophers
void test(int i) {
if ( (state[(i+ 4) % 5] != eating) &&
(state[i] == hungry) &&
(state[(i + 1) % 5] != eating)) {
state[i] = eating;
self[i].signal();
}
}
Philosopher i:
dp.pickup(i);
eat; // Critical section
dp.putdown(i);
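C has no built-in monitors, but the same structure can be emulated with one pthread mutex acting as the monitor lock and one condition variable per philosopher. The sketch below is an illustration with names of my own choosing (dp_init must be called once before the philosopher threads start); the while loop around pthread_cond_wait also guards against spurious wakeups:

    #include <pthread.h>

    #define N 5
    enum pstate { THINKING, HUNGRY, EATING };

    static enum pstate state[N];
    static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;  /* the monitor's implicit lock   */
    static pthread_cond_t  self[N];                                   /* one condition per philosopher */

    void dp_init(void) {
        for (int i = 0; i < N; i++) {
            state[i] = THINKING;
            pthread_cond_init(&self[i], NULL);
        }
    }

    static void test(int i) {
        if (state[(i + 4) % N] != EATING &&
            state[i] == HUNGRY &&
            state[(i + 1) % N] != EATING) {
            state[i] = EATING;
            pthread_cond_signal(&self[i]);         /* self[i].signal() */
        }
    }

    void pickup(int i) {
        pthread_mutex_lock(&monitor_lock);         /* enter the monitor */
        state[i] = HUNGRY;
        test(i);
        while (state[i] != EATING)                 /* self[i].wait()    */
            pthread_cond_wait(&self[i], &monitor_lock);
        pthread_mutex_unlock(&monitor_lock);       /* leave the monitor */
    }

    void putdown(int i) {
        pthread_mutex_lock(&monitor_lock);
        state[i] = THINKING;
        test((i + 4) % N);                         /* test left and right neighbors */
        test((i + 1) % N);
        pthread_mutex_unlock(&monitor_lock);
    }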
Synchronization in Linux
 The simplest synchronization primitive is the atomic integer, represented by the opaque data type atomic_t (a data type is opaque if it can only be read or modified through accessor routines). Usage of this data type (a user-space analogue using C11 atomics is sketched below):
   atomic_t counter;
   int value;
   atomic_set(&counter, 10);
   atomic_add(5, &counter);
   atomic_sub(10, &counter);
   atomic_inc(&counter);
   value = atomic_read(&counter);
 mutex_lock() and mutex_unlock(): similar to the wait and signal operations on semaphores, for protecting critical sections within the kernel.
 Spinlocks and semaphores: will talk more about these when I assign the programming assignment.
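For comparison, here is a hedged user-space sketch of the same sequence with C11 <stdatomic.h> (this is not the kernel API, just an analogue):

    #include <stdatomic.h>
    #include <stdio.h>

    int main(void) {
        atomic_int counter;
        int value;

        atomic_store(&counter, 10);        /* atomic_set(&counter, 10)  */
        atomic_fetch_add(&counter, 5);     /* atomic_add(5, &counter)   */
        atomic_fetch_sub(&counter, 10);    /* atomic_sub(10, &counter)  */
        atomic_fetch_add(&counter, 1);     /* atomic_inc(&counter)      */
        value = atomic_load(&counter);     /* atomic_read(&counter)     */

        printf("value = %d\n", value);     /* prints 6 */
        return 0;
    }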
Solaris 2 Synchronization
 Implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing.
 Uses adaptive mutexes for efficiency when protecting data from short code segments.
   - An adaptive mutex starts as a standard semaphore implemented as a spinlock. These are used to protect short critical sections (i.e., fewer than a few hundred instructions).
   - If the lock is held by a thread that is currently running on another CPU, the waiting thread spins until the lock becomes available, because the thread holding the lock is likely to finish soon.
   - If the thread holding the lock is not currently in the run state, the waiting thread blocks and goes to sleep until it is awakened when the lock is released.
   - So, on a uniprocessor system, a thread always sleeps if it encounters a held lock, because the thread holding the lock cannot be running while another thread is testing the lock.
 Condition variables and semaphores are used for protecting longer critical sections.
   - If the desired lock is already held, the process issues a wait and sleeps.
   - When a thread frees the lock, it issues a signal to the next sleeping thread in the queue.
   - The extra cost of putting a thread to sleep and waking it up, and the associated context switches, is less than the cost of wasting several hundred instructions spinning.
Solaris 2 Synchronization…
 Readers-writers locks are used to protect data that are accessed frequently but usually in a read-only manner. They are also used for protecting long sections of code.
 Uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or a readers-writers lock.
   - A turnstile is a queue structure containing threads blocked on a lock.
 To prevent the priority-inversion problem, turnstiles are organized according to a priority-inheritance protocol.
   - This means that if a lower-priority thread currently holds a lock on which a higher-priority thread is blocked, the thread holding the lock temporarily inherits the priority of the higher-priority thread. Upon releasing the lock, it reverts to its original priority.
Windows 2000 Synchronization
 Uses interrupt masks to protect access to global resources on uniprocessor systems.
 Uses spinlocks on multiprocessor systems.
   - Just as in Solaris 2, the kernel uses spinlocks only to protect short code segments.
   - Moreover, the kernel ensures that a thread will never be preempted while holding a spinlock.
 Also provides dispatcher objects for synchronization outside the kernel.
   - Using a dispatcher object, a thread can synchronize according to several different mechanisms, including mutexes, semaphores and events.
   - Shared data can be protected by requiring a thread to gain ownership of a mutex to access the data and to release ownership when it is finished.
   - Events are a synchronization mechanism that acts much like a condition variable.
 Dispatcher objects may be signaled or nonsignaled.
   - A signaled state indicates that the object is available and a thread will not block when acquiring it.
   - A nonsignaled state indicates that the object is not available and a thread will block when attempting to acquire it.
   - When the state of a dispatcher object changes to signaled, the kernel checks whether any threads are waiting on the object; if so, it moves one or more of them from the waiting state to the ready state, where they can resume execution.
An Alternative Approach – Transactional Memory
 A concept borrowed from databases. A memory transaction is a sequence of memory read-write operations that is atomic: if all operations in the transaction complete, the transaction is committed; otherwise the operations must be aborted and rolled back.
 Consider the following example, which protects an update with a semaphore P:
   void update() {
       wait(P);
       /* modify data */
       signal(P);
   }
 As we know, synchronization mechanisms used improperly can lead to deadlock. Instead, a new language feature such as atomic{S}, which takes advantage of transactional memory, can be added to the programming language. The update() function above can then be written as
   void update() {
       atomic {
           /* modify data */
       }
   }
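One concrete realization of this idea, offered here only as an illustration, is GCC's experimental transactional-memory extension: assuming a compiler built with -fgnu-tm support (libitm), a block marked __transaction_atomic plays the role of atomic{S}:

    /* Assumes GCC transactional-memory support; compile with: gcc -fgnu-tm tm_example.c */
    int balance = 100;

    void update(int amount) {
        __transaction_atomic {        /* the whole block commits or rolls back atomically */
            balance += amount;        /* modify data */
        }
    }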
Transactional memory …
 An advantage of transactional memory is that it relieves the developer from worrying about the details of synchronization.
 Transactional memory can be implemented either in software or in hardware.
   - Software transactional memory: implemented entirely in software. The compiler inserts appropriate synchronization code to manage each transaction, by determining where statements may run concurrently and where specific low-level locking is required.
   - Hardware transactional memory: uses hardware cache hierarchies and cache-coherence protocols to manage and resolve conflicts involving shared data residing in separate processors' caches.
 Bad news: transactional memory has not yet been widely implemented. A significant amount of research is being done in this area.
Alternative approaches – OpenMP
 We saw earlier that OpenMP provides a set of compiler directives. Any code following the directive #pragma omp parallel is identified as a parallel region and is executed by a number of threads equal to the number of processing cores.
 Along with #pragma omp parallel, OpenMP provides the directive #pragma omp critical, which marks the code region following it as a critical section in which only one thread can be active at a time (see the sketch below).
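A minimal illustrative sketch combining the two directives (compile with gcc -fopenmp); each thread increments the shared counter inside the critical section, so the final value equals the number of threads:

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        int counter = 0;

        #pragma omp parallel              /* typically one thread per processing core */
        {
            #pragma omp critical          /* only one thread at a time executes this block */
            {
                counter++;
            }
        }

        printf("counter = %d\n", counter);   /* equals the number of threads in the team */
        return 0;
    }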