
Process Synchronization

Inter-process communication (IPC)

Shared Memory Method

Example: Producer-Consumer problem

There are two processes: a Producer and a Consumer. The Producer produces items and the Consumer consumes them. The two processes share a common memory region known as a buffer, where the items produced by the Producer are stored and from which the Consumer takes them when needed.

There are two versions of this problem. In the unbounded buffer problem, the Producer can keep producing items and there is no limit on the size of the buffer. In the bounded buffer problem, the Producer can produce only up to a fixed number of items before it must wait for the Consumer to consume some of them. We will discuss the bounded buffer problem.

First, the Producer and the Consumer share some common memory, and then the Producer starts producing items. If the number of unconsumed items equals the size of the buffer, the Producer waits until the Consumer consumes some of them. Similarly, the Consumer first checks for the availability of an item: if no item is available, the Consumer waits for the Producer to produce one; if items are available, the Consumer consumes them.
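
Below is a minimal sketch of the bounded buffer version in C, using a pthread mutex and condition variables for the waiting described above (the buffer size of 5 and the fixed number of items are arbitrary choices for illustration, not part of the original problem statement):

#include <pthread.h>
#include <stdio.h>

#define BUFFER_SIZE 5           /* bound on the shared buffer */

static int buffer[BUFFER_SIZE];
static int count = 0;           /* number of unconsumed items */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg)
{
    (void)arg;                  /* unused */
    for (int item = 1; item <= 20; item++) {
        pthread_mutex_lock(&lock);
        while (count == BUFFER_SIZE)            /* buffer full: wait */
            pthread_cond_wait(&not_full, &lock);
        buffer[count++] = item;                 /* produce an item */
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;                  /* unused */
    for (int i = 0; i < 20; i++) {
        pthread_mutex_lock(&lock);
        while (count == 0)                      /* buffer empty: wait */
            pthread_cond_wait(&not_empty, &lock);
        int item = buffer[--count];             /* consume an item */
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}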

Message passing method

In this method, processes communicate with each other without using any kind of shared memory. If two processes p1 and p2 want to communicate with each other, they proceed as follows:

  • Establish a communication link (if a link already exists, no need to establish it again.)
  • Start exchanging messages using basic primitives. We need at least two primitives:
    – send(message, destination) or send(message)
    – receive(message, host) or receive(message)
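
As a concrete illustration, here is a minimal sketch of message passing between a parent and a child process using a POSIX pipe as the communication link; write() plays the role of send(message) and read() plays the role of receive(message). This is only one possible realization of the primitives above:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                        /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                   /* child: the receiver (p2) */
        close(fd[1]);
        char msg[64];
        ssize_t n = read(fd[0], msg, sizeof(msg) - 1);   /* receive(message) */
        if (n > 0) { msg[n] = '\0'; printf("p2 received: %s\n", msg); }
        close(fd[0]);
    } else {                          /* parent: the sender (p1) */
        close(fd[0]);
        const char *msg = "hello from p1";
        write(fd[1], msg, strlen(msg));                  /* send(message) */
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}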

What is the critical section problem?

Rules of Critical Section

Peterson's Algorithm For Critical Section Problem

  • This is a software-based solution to the Critical Section Problem.
  • It may not work correctly on modern architectures, because compilers and CPUs can reorder the loads and stores it relies on unless memory barriers or atomic operations are used.
  • It is for 2 processes that alternate execution between their critical section and remainder section. Say, P0 is the first process and P1 is the second process.
  • The 2 processes share 2 data items with each other.

    int turn
    boolean flag[2]

  • turn – It indicates whose turn it is to enter the critical section.
  • flag array – It tells whether a process is ready to enter its critical section. flag[0] corresponds to process P0: if flag[0] = true, then P0 is ready to execute in its critical section. Similarly, flag[1] corresponds to process P1: if flag[1] = true, then P1 is ready to execute in its critical section.

Structure of process Pi

while (true)
{
    flag[i] = true;      // entry section: Pi is interested
    turn = j;            // j = 1 - i, i.e. the other process
    while (flag[j] == true && turn == j)
        ;                // busy-wait while the other process is interested and it is its turn

    // critical section

    flag[i] = false;     // exit section
}

P0

while (true)
{
    flag[0] = true;
    turn = 1;
    while (flag[1] == true && turn == 1)
        ;

    // critical section

    flag[0] = false;
}

P1

while (true)
{
    flag[1] = true;
    turn = 0;
    while (flag[0] == true && turn == 0)
        ;

    // critical section

    flag[1] = false;
}
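
Below is a minimal runnable sketch of Peterson's algorithm for two threads, written with C11 atomics so that the entry-protocol loads and stores are not reordered by the compiler or CPU (the use of stdatomic.h and pthreads here is an assumption for illustration; the pseudocode above leaves it implicit):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_bool flag[2];   /* flag[i] == true: thread i wants the CS */
static atomic_int  turn;      /* whose turn it is to wait */
static long counter = 0;      /* shared data protected by the critical section */

static void lock(int i)
{
    int j = 1 - i;                         /* the other thread */
    atomic_store(&flag[i], true);          /* entry section */
    atomic_store(&turn, j);
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                                  /* busy-wait */
}

static void unlock(int i)
{
    atomic_store(&flag[i], false);         /* exit section */
}

static void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        lock(i);
        counter++;                         /* critical section */
        unlock(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}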
                    
                

Semaphores in Process Synchronization

  • Semaphores, proposed by Dijkstra in 1965, are a synchronization mechanism used to coordinate the activities of multiple processes in a computer system. A semaphore is simply an integer variable that is shared between processes (or threads). Semaphores are used to enforce mutual exclusion, avoid race conditions and implement synchronization between processes.
  • Semaphores provide two operations: wait (P) and signal (V). The wait operation decrements the value of the semaphore, and the signal operation increments the value of the semaphore. When the value of the semaphore is zero, any process that performs a wait operation will be blocked until another process performs a signal operation.
  • Semaphores are used to implement critical sections, which are regions of code that must be executed by only one process at a time. By using semaphores, processes can coordinate access to shared resources, such as shared memory or I/O devices.
  • Semaphores are of two types:
    1. Binary Semaphore:
      This is also known as a mutex lock. It can have only two values - 0 and 1. Its value is initialized to 1. It is used to implement the solution of critical section problems with multiple processes.
    2. Counting Semaphore:
      Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.

Now let us see how it does so.

  • First, look at two operations that can be used to access and change the value of the semaphore variable.

P(Semaphore s)
{
    while (s == 0)
        ;           /* busy-wait while s is 0, i.e. until the semaphore becomes positive */
    s = s - 1;
}

V(Semaphore s)
{
    s = s + 1;
}

Some points regarding the P and V operations:

  1. P operation is called wait, sleep, or down operation, and V operation is also called signal, wake-up, or up operation.
  2. Both operations are atomic, and the semaphore s is initialized to 1 (for a binary semaphore; a counting semaphore is initialized to the number of resource instances). Here atomic means that the read, modify and update of the variable happen together with no preemption, i.e. no other operation that might change the variable is performed between the read, modify and update.
  3. A critical section is surrounded by both operations to implement process synchronization.

Now, let us see how it implements mutual exclusion.

  • Let there be two processes P1 and P2, and a semaphore s initialized to 1. Suppose P1 enters its critical section by performing the P operation, so the value of s becomes 0. If P2 now wants to enter its critical section, it performs the P operation on s, finds s = 0, and waits. When P1 leaves its critical section, it performs the V operation, s becomes 1 again, and P2 can proceed.
  • This way mutual exclusion is achieved.
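
A minimal sketch of this mutual exclusion with POSIX unnamed semaphores, where sem_wait() and sem_post() stand in for the P and V operations above (unlike the busy-waiting pseudocode, the library blocks the waiting thread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t s;                 /* binary semaphore, initialized to 1 */
static int shared_counter = 0;  /* shared data protected by the semaphore */

static void *worker(void *arg)
{
    (void)arg;                  /* unused */
    for (int k = 0; k < 100000; k++) {
        sem_wait(&s);           /* P operation: enter critical section */
        shared_counter++;       /* critical section */
        sem_post(&s);           /* V operation: leave critical section */
    }
    return NULL;
}

int main(void)
{
    sem_init(&s, 0, 1);         /* 0: shared between threads, initial value 1 */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d (expected 200000)\n", shared_counter);
    sem_destroy(&s);
    return 0;
}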

Limitations:

  1. One of the biggest limitations of semaphores is priority inversion.
  2. Deadlock: if semaphores are used incorrectly, for example when a process waits for a signal that another process will never send because it is itself blocked, the processes may block indefinitely.
  3. The operating system has to keep track of all calls to wait and signal on the semaphore.

The main problem with this semaphore implementation is that it requires busy waiting: if a process is in the critical section, then other processes trying to enter the critical section keep waiting until the critical section is free. While a process waits, it continuously checks the semaphore value (look at the line while(s == 0); in the P operation) and wastes CPU cycles.

Advantages of Semaphore

  • A simple and effective mechanism for process synchronization.
  • Supports coordination between multiple processes.
  • Provides a flexible and robust way to manage shared resources.
  • It can be used to implement critical sections in a program.
  • It can be used to avoid race conditions.

Disadvantages of Semaphore

  • It can lead to performance degradation due to overload associated with wait and signal operations.
  • Can result in deadlock if used incorrectly.
  • It can cause performance issues in a program if not used properly.
  • It can be difficult to debug and maintain.
  • It can be prone to race conditions and other synchronization problems if not used correctly.
  • It can be vulnerable to certain types of attacks, such as denial of service attacks.

Introduction of Deadlock in operating system

Examples of deadlock

  1. The system has 2 tape drives. P1 and P2 each hold one tape drive and each needs the other one.
  2. Semaphores A and B, each initialized to 1; processes P0 and P1 reach deadlock as follows:
    • P0 executes wait(A) and is then preempted.
    • P1 executes wait(B).
    • P0 now executes wait(B) and blocks, while P1 executes wait(A) and blocks.
    • Neither process can proceed, so P0 and P1 are in deadlock.
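
A minimal sketch of the second example with two POSIX semaphores and two threads; whether it actually deadlocks on a particular run depends on scheduling, and the usleep() calls only make the interleaving described above more likely:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t A, B;                /* both initialized to 1 */

static void *p0(void *arg)
{
    (void)arg;
    sem_wait(&A);                 /* P0: wait(A) */
    usleep(1000);                 /* give P1 time to grab B */
    printf("P0 holds A, waiting for B...\n");
    sem_wait(&B);                 /* blocks: B is held by P1 */
    printf("P0 got both\n");
    sem_post(&B);
    sem_post(&A);
    return NULL;
}

static void *p1(void *arg)
{
    (void)arg;
    sem_wait(&B);                 /* P1: wait(B) */
    usleep(1000);                 /* give P0 time to grab A */
    printf("P1 holds B, waiting for A...\n");
    sem_wait(&A);                 /* blocks: A is held by P0 */
    printf("P1 got both\n");
    sem_post(&A);
    sem_post(&B);
    return NULL;
}

int main(void)
{
    sem_init(&A, 0, 1);
    sem_init(&B, 0, 1);
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);       /* with the sleeps above, this join will likely hang: deadlock */
    pthread_join(t1, NULL);
    return 0;
}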

Deadlock can arise if the following four conditions hold simultaneously (Necessary Conditions)

Methods for handling deadlock

There are three ways to handle deadlock

1. Deadlock Prevention

  • Deadlock prevention aims to design the system in a way that it eliminates one or more necessary conditions for deadlock to occur. By ensuring that at least one of the four necessary conditions—mutual exclusion, hold and wait, no preemption, and circular wait—is not satisfied, deadlocks can be prevented. This approach requires careful system design and resource management.

Let's see how we can prevent each of the conditions.

  1. Mutual exclusion: Mutual exclusion refers to the condition where resources can be accessed by only one process at a time. To prevent this condition, you can implement techniques such as resource sharing or resource partitioning. Resource sharing involves allowing multiple processes to access a resource simultaneously, as long as their operations on the resource do not interfere with each other. Resource partitioning involves dividing resources into smaller units that can be allocated to different processes independently. By doing so, processes can access the resource concurrently without violating mutual exclusion.
  2. Hold and Wait: Hold and wait refers to the condition where a process holds a resource while waiting to acquire additional resources. To prevent hold and wait, you can use the "resource allocation only when all resources are available" approach. This means that a process must request and acquire all the required resources before it begins execution, ensuring that it does not hold any resources while waiting for others. Alternatively, you can employ the "resource allocation in a predetermined order" approach, where processes must request and acquire resources in a specific order to avoid holding resources and waiting indefinitely.
  3. No Preemption: No preemption refers to the condition where resources cannot be forcibly taken away from a process. To prevent this condition, you can introduce resource preemption. Resource preemption involves forcibly reclaiming resources from one process to allocate them to another. By allowing resource preemption, you can break resource deadlock situations by taking resources from one process and reallocating them to another process that needs them. However, preemption should be used with caution as it can introduce complexities and potential performance issues.
  4. Circular Wait: Circular wait refers to the condition where a circular chain of processes exists, with each process holding a resource that the next process in the chain needs. To prevent circular wait, you can introduce a technique called "resource ordering" or "resource numbering." In this approach, all resources are assigned a unique number, and processes are required to request resources in increasing order of their numbers. By enforcing a strict ordering of resource requests, circular wait situations are eliminated.
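
As an illustration of point 4, here is a minimal sketch of resource ordering with pthread mutexes: every resource carries a number, and a helper always acquires the lower-numbered resource first, so a circular wait cannot form (the numbering scheme and helper functions are assumptions for illustration):

#include <pthread.h>
#include <stdio.h>

/* Each resource gets a fixed order number. */
typedef struct {
    int order;                  /* unique resource number */
    pthread_mutex_t mtx;
} resource_t;

static resource_t r1 = { 1, PTHREAD_MUTEX_INITIALIZER };
static resource_t r2 = { 2, PTHREAD_MUTEX_INITIALIZER };

/* Always lock the lower-numbered resource first, regardless of
 * the order in which the caller names them. */
static void lock_both(resource_t *a, resource_t *b)
{
    if (a->order > b->order) { resource_t *t = a; a = b; b = t; }
    pthread_mutex_lock(&a->mtx);
    pthread_mutex_lock(&b->mtx);
}

static void unlock_both(resource_t *a, resource_t *b)
{
    pthread_mutex_unlock(&a->mtx);
    pthread_mutex_unlock(&b->mtx);
}

static void *worker_a(void *arg)
{
    (void)arg;
    lock_both(&r1, &r2);        /* asks in order r1, r2 */
    printf("worker_a holds both resources\n");
    unlock_both(&r1, &r2);
    return NULL;
}

static void *worker_b(void *arg)
{
    (void)arg;
    lock_both(&r2, &r1);        /* asks in order r2, r1, but still locks r1 first */
    printf("worker_b holds both resources\n");
    unlock_both(&r2, &r1);
    return NULL;
}

int main(void)
{
    pthread_t ta, tb;
    pthread_create(&ta, NULL, worker_a, NULL);
    pthread_create(&tb, NULL, worker_b, NULL);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    return 0;
}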

2. Deadlock avoidance

  • In deadlock avoidance, a request for a resource is granted only if the resulting state of the system does not lead to deadlock. The state of the system is continuously checked for safe and unsafe states.
  • In order to avoid deadlocks, each process must tell the OS the maximum number of resources it may request to complete its execution.
  • The simplest and most useful approach states that the process should declare the maximum number of resources of each type it may ever need. The Deadlock avoidance algorithm examines the resource allocations so that there can never be a circular wait condition.

3. Deadlock ignorance

  • If a deadlock is very rare, then let it happen and reboot the system. This is the approach that both Windows and UNIX take. With deadlock ignorance, performance is better than with the above two methods.

Banker's Algorithm in OS

Advantages

  1. It helps the OS to manage and control process requests for each type of resource in the computer system.
  2. Deadlock avoidance: The Banker's algorithm is designed to avoid deadlock by examining the resource allocation state and determining if granting a resource request will result in a safe state. It ensures that resources are allocated to processes in a way that guarantees the avoidance of deadlock. This proactive approach helps maintain system stability and prevents the occurrence of deadlocks.
  3. Resource Optimization: The Banker's algorithm optimizes resource utilization by allocating resources based on the maximum needs of processes and available resources. It ensures that resources are allocated efficiently without wasting them, as it considers both the current resource allocation state and future resource requests. This leads to better resource utilization and overall system performance.
  4. Safety Guarantee: The Banker's algorithm provides a safety guarantee by only granting resource requests that will maintain a safe state. A safe state means that the system can satisfy the resource needs of all processes, allowing them to complete their execution without entering a deadlock state. The algorithm's safety guarantee ensures that the system can progress and complete its tasks without being stuck in a deadlock situation.
  5. Avoidance of Starvation: The Banker's algorithm prevents the occurrence of resource starvation. By considering the maximum resource needs of processes and allocating resources in a way that maintains a safe state, the algorithm ensures that every process eventually receives the resources it requires. This prevents situations where certain processes are continuously denied resources and unable to make progress, leading to starvation.
  6. Flexibility and Adaptability: The Banker's algorithm can dynamically respond to resource requests and release events. It can handle varying resource demands and dynamically adjust the resource allocation to prevent deadlock. This flexibility allows the algorithm to adapt to changing system conditions and ensure that resource allocation remains deadlock-free.

Disadvantages of Banker's algorithm

  1. Resource-Heavy: The Banker's algorithm requires significant computational overhead and bookkeeping. It needs to maintain information about the maximum resource needs, allocated resources, and available resources for each process. This can lead to increased memory and processing requirements, especially in large-scale systems with numerous processes and resources.
  2. Strict Resource Requirement Information: The Banker's algorithm relies on accurate and complete information regarding the maximum resource needs of processes. This information must be known in advance or provided by the processes. If the maximum resource needs are underestimated or not accurately known, the algorithm may result in unnecessary resource allocation delays or even false deadlock detection.
  3. Lack of Dynamic Resource Allocation: The Banker's algorithm assumes a static resource allocation environment where the maximum resource needs of processes are fixed and known in advance. It may not be suitable for systems with dynamic resource demands or where processes' resource needs change during runtime. Adapting the algorithm to handle dynamic resource allocation can be complex and may require additional mechanisms.
  4. Resource Holding Time and Efficiency: The Banker's algorithm follows a conservative approach by requiring a process to request and hold all the resources it needs before starting its execution. This can lead to inefficient resource utilization, especially when a process holds resources for an extended period, even if it is not actively using them. Consequently, resource holding time can increase, impacting system performance.
  5. Limited Preemption Support: The Banker's algorithm does not inherently support resource preemption, which involves forcibly reclaiming resources from one process and reallocating them to another. Preemption is a useful mechanism to resolve deadlocks by breaking resource dependencies. However, implementing preemption within the context of the Banker's algorithm can be challenging and may require additional measures.

When working with the Banker's algorithm, the OS needs to know three things:

  1. The maximum amount of each resource that each process may request. This is denoted by the [MAX] request.
  2. The amount of each resource that each process is currently holding. This is denoted by the [ALLOCATED] resources.
  3. The number of instances of each resource currently available in the system. This is denoted by the [AVAILABLE] resources.

The following data structures are used in the Banker's algorithm.

Suppose n is the number of processes and m is the number of resource types in the computer system.

  1. Available: An array of length m that gives the number of available instances of each resource type. Available[j] = k means that k instances of resource type R[j] are available in the system.
  2. Max: An n x m matrix that defines the maximum demand of each process. Max[i][j] = k means that process P[i] may request at most k instances of resource type R[j].
  3. Allocation: An n x m matrix that gives the number of resources of each type currently allocated to each process. Allocation[i][j] = k means that process P[i] is currently allocated k instances of resource type R[j].
  4. Need: An n x m matrix that gives the remaining resource need of each process. Need[i][j] = k means that process P[i] may need k more instances of resource type R[j] to complete its work. Need[i][j] = Max[i][j] - Allocation[i][j].
  5. Finish: A vector of length n. It holds a Boolean value (true/false) for each process, indicating whether the process has been allocated its requested resources and has released all of them after finishing its task.
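
A minimal sketch of the safety check of the Banker's algorithm in C, built directly on the data structures above; the concrete numbers in main() are taken from the worked example that follows, and the printed safe sequence matches the one derived there:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define N 5   /* number of processes */
#define M 3   /* number of resource types */

bool is_safe(int available[M], int max[N][M], int allocation[N][M],
             int safe_seq[N])
{
    int need[N][M], work[M];
    bool finish[N] = { false };

    /* Need[i][j] = Max[i][j] - Allocation[i][j] */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - allocation[i][j];
    memcpy(work, available, sizeof(work));

    int count = 0;
    while (count < N) {
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = false; break; }
            if (ok) {                      /* P[i] can finish: release its resources */
                for (int j = 0; j < M; j++) work[j] += allocation[i][j];
                finish[i] = true;
                safe_seq[count++] = i;
                found = true;
            }
        }
        if (!found) return false;          /* no process can proceed: unsafe */
    }
    return true;
}

int main(void)
{
    int available[M]     = { 3, 3, 2 };
    int max[N][M]        = { {7,5,3}, {3,2,2}, {9,0,2}, {2,2,2}, {4,3,3} };
    int allocation[N][M] = { {0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2} };
    int seq[N];

    if (is_safe(available, max, allocation, seq)) {
        printf("Safe sequence:");
        for (int i = 0; i < N; i++) printf(" P%d", seq[i] + 1);
        printf("\n");                      /* prints P2 P4 P5 P1 P3 for this data */
    } else {
        printf("System is not in a safe state\n");
    }
    return 0;
}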

Example:

Consider a system that contains five processes P1, P2, P3, P4, P5 and three resource types A, B and C. Resource type A has 10 instances, B has 5 instances and C has 7 instances. The current state of the system is:

Process    Allocation      Max      Available 
           A   B   C     A  B  C     A  B  C
   P1      0   1   0     7  5  3     3  3  2
   P2      2   0   0     3  2  2
   P3      3   0   2     9  0  2
   P4      2   1   1     2  2  2
   P5      0   0   2     4  3  3

Answer the following questions using the banker's algorithm:

  1. What is the content of the Need matrix?
  2. Determine whether the system is in a safe state.
  3. If process P1 makes an additional resource request of (1, 0, 0), can the system grant this request immediately?

Ans 1.

Content of the need matrix is as follows:
Need[i] = Max[i] - Allocation[i]
Need for P1: (7, 5, 3) - (0, 1, 0) = 7, 4, 3
Need for P2: (3, 2, 2) - (2, 0, 0) = 1, 2, 2
Need for P3: (9, 0, 2) - (3, 0, 2) = 6, 0, 0
Need for P4: (2, 2, 2) - (2, 1, 1) = 0, 1, 1
Need for P5: (4, 3, 3) - (0, 0, 2) = 4, 3, 1

Hence, we have constructed the Need matrix.

Ans 2.

Apply the Banker's algorithm
Available resources of A, B and C are 3, 3 and 2.
Now we check, for each process, whether its Need can be satisfied by the Available resources.

Steps:

  1. For process P1:
    • P1 Need <= Available
    • 7, 4, 3 <= 3, 3, 2 condition is false
    • So, we examine another process P2.
  2. For process P2:
    • P2 Need <= Available
    • 1, 2, 2 <= 3, 3, 2 condition is true
    • New available = available + Allocation of P2
    • (3, 3, 2) + (2, 0, 0) → 5, 3, 2
    • Similarly, we examine another process P3.
  3. For process P3:
    • P3 Need <= Available
    • 6, 0, 0 <= 5, 3, 2 condition is false
    • Similarly, we examine another process P4.
  4. For process P4:
    • P4 Need <= Available
    • 0, 1, 1 <= 5, 3, 2 condition is true
    • New available = available + Allocation of P4
    • (5, 3, 2) + (2, 1, 1) → 7, 4, 3
    • Similarly, we examine another process P5.
  5. For process P5:
    • P5 Need <= Available
    • 4, 3, 1 <= 7, 4, 3 condition is true
    • New available = available + Allocation of P5
    • (7, 4, 3) + (0, 0, 2) → 7, 4, 5
    • Now, we again examine each type of resource request for process P1 and P3.
  6. For process P1:
    • P1 Need <= Available
    • 7, 4, 3 <= 7, 4, 5 condition is true
    • New available = available + Allocation of P1
    • (7, 4, 5) + (0, 1, 0) → 7, 5, 5
    • Similarly, we examine the remaining process P3.
  7. For process P3:
    • P3 Need <= Available
    • 6, 0, 0 <= 7, 5, 5 condition is true
    • New available = available + Allocation of P3
    • (7, 5, 5) + (3, 0, 2) → 10, 5, 7
Hence, the system is in a safe state, and a safe sequence is P2, P4, P5, P1, P3.

Ans 3.

To grant the request (1, 0, 0) for process P1, we first check that Request <= Need for P1, i.e. (1, 0, 0) <= (7, 4, 3), and that Request <= Available, i.e. (1, 0, 0) <= (3, 3, 2); both conditions are true. We then pretend to grant the request: Available becomes (2, 3, 2), Allocation of P1 becomes (1, 1, 0) and Need of P1 becomes (6, 4, 3). Running the safety algorithm on this new state still yields the safe sequence P2, P4, P5, P1, P3, so the system can grant the request immediately.

Deadlock Recovery

1. Process termination

  • To eliminate the deadlock, we can simply kill one or more processes. For this, we use two methods:
    1. Abort all the deadlocked processes:
      Aborting all the processes will certainly break the deadlock, but at great expense. The deadlocked processes may have computed for a long time, and the results of those partial computations must be discarded and will probably have to be recomputed later.
    2. Abort one process at a time until the deadlock is eliminated:
      Abort one deadlocked process at a time until the deadlock cycle is eliminated from the system. This method may incur considerable overhead, because after aborting each process we have to run a deadlock detection algorithm to check whether any processes are still deadlocked.

Advantages of Process Termination

  • It is a simple method for breaking a deadlock.
  • It ensures that the deadlock will be resolved quickly, as all processes involved in the deadlock are terminated simultaneously.
  • It frees up resources that were being used by the deadlocked processes, making those resources available for other processes.

Disadvantages of Process Termination:

  • It can result in the loss of data and other resources that were being used by the terminated processes.
  • It may cause further problems in the system if the terminated processes were critical to the system’s operation.
  • It may result in a waste of resources, as the terminated processes may have already completed a significant amount of work before being terminated.

2. Resource Preemption:

  • To eliminate deadlocks using resource preemption, we preempt some resources from processes and give those resources to other processes. This method will raise three issues:
    1. Selecting a victim:
      We must determine which resources and which processes are to be preempted, and in what order, so as to minimize the cost.
    2. Rollback:
      We must determine what should be done with the process from which resources are preempted. One simple idea is total rollback. That means abort the process and restart it.
    3. Starvation:
      In a system, it may happen that the same process is always picked as a victim. As a result, that process will never complete its designated task. This situation is called starvation and must be avoided. One solution is that a process may be picked as a victim only a finite number of times.

Advantages of Resource Preemption:

  • It can help in breaking a deadlock without terminating any processes, thus preserving data and resources.
  • It is more efficient than process termination as it targets only the resources that are causing the deadlock.
  • It can potentially avoid the need for restarting the system.

Disadvantages of Resource Preemption:

  • It may lead to increased overhead due to the need for determining which resources and processes should be preempted.
  • It may cause further problems if the preempted resources were critical to the system’s operation.
  • It may cause delays in the completion of processes if resources are frequently preempted.

Previous Year Questions

Q- What do you mean by semaphore? Explain its types with C struct. Explain implementation of wait() and signal.

Q- What is deadlock? Explain the necessary conditions for its occurrence. How can deadlock be removed from a process?

Q- Consider a system that contains five processes P1, P2, P3, P4, P5 and three resource types A, B and C. Resource type A has 10 instances, B has 5 instances and C has 7 instances:

                    
Process    Allocation      Max      Available 
           A   B   C     A  B  C     A  B  C
   P1      0   1   0     7  5  3     3  3  2
   P2      2   0   0     3  2  2     
   P3      3   0   2     9  0  2     
   P4      2   1   1     2  2  2     
   P5      0   0   2     4  3  3    
                  
                
Answer the following questions using the Banker's algorithm:
  1. What is the content of the Need matrix?
  2. Determine whether the system is in a safe state.
  3. If process P1 makes an additional resource request of (1, 0, 0), can the system grant this request immediately?

Q- What do you understand by a critical section? Discuss whether Peterson's solution satisfies the requirements of a mechanism to control access to a critical section.

Q- Write short notes on the following:

  1. Deadlock
  2. Starvation
  3. Semaphores

Find all the safe sequences possible in the below resource allocation scenario: {total available resources are (R1, R2, R3) = (3, 3, 4)}

                
Process    Allocated        Max Need     Available 
           R1   R2   R3    R1  R2  R3    R1  R2  R3
   P1      0    1    0     7   5   3     3   3   4
   P2      2    0    2     3   2   2     
   P3      3    0    2     9   0   2     
   P4      2    1    1     2   2   2     
              
            
