
Process Management in OS

What is process?
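A process is a program in execution. A program by itself is a passive entity stored on disk; it becomes a process when it is loaded into main memory and begins executing, together with its current activity: the program counter, register contents, stack, and allocated resources.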

Process States
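A process typically passes through the following states:

  1. New: the process is being created.
  2. Ready: the process is loaded into main memory and is waiting to be assigned the CPU.
  3. Running: instructions of the process are being executed on the CPU.
  4. Waiting (Blocked): the process is waiting for some event, such as the completion of an I/O operation.
  5. Terminated: the process has finished execution and its PCB is released.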

Attributes of a process

  • The attributes of a process are used by the operating system to create a process control block (PCB) for each process.
  • This is also called the context of the process. The attributes stored in the PCB are described below.
    1. Process ID: When a process is created, a unique ID is assigned to it, which is used to identify the process uniquely within the system.
    2. Program counter: The program counter stores the address of the next instruction to be executed, saved at the point where the process was suspended. The CPU uses this address when the execution of the process is resumed.
    3. Process State: From creation to completion, the process goes through various states: new, ready, running, and waiting.
    4. Priority: Every process has its own priority. The process with the highest priority among all processes gets the CPU first. This is also stored in the process control block.
    5. General Purpose Registers: Every process has its own set of registers, which hold the data generated during the execution of the process.
    6. List of open files: During execution, every process uses some files, which need to be present in main memory. The OS maintains a list of open files in the PCB.
    7. List of open devices: The OS also maintains a list of all devices opened during the execution of the process.
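As a rough mental model, the PCB can be pictured as a kernel data structure. A minimal sketch in C follows; the field names and sizes are illustrative assumptions, not taken from any real kernel (Linux's actual PCB, task_struct, is far more elaborate):

```c
#include <stdint.h>

/* A minimal, illustrative process control block (PCB).
 * Field names and types are teaching assumptions, not a real kernel's layout. */

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

#define MAX_OPEN_FILES 16

typedef struct pcb {
    int           pid;                        /* unique process ID               */
    proc_state_t  state;                      /* current process state           */
    int           priority;                   /* scheduling priority             */
    uint64_t      program_counter;            /* next instruction to execute     */
    uint64_t      registers[16];              /* saved general-purpose registers */
    int           open_files[MAX_OPEN_FILES]; /* descriptors of open files       */
    struct pcb   *parent;                     /* link to the parent's PCB        */
} pcb_t;
```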

Process Scheduling

Long term scheduler

  • The long-term scheduler is also known as the job scheduler. It chooses processes from the pool (secondary memory) and places them in the ready queue maintained in primary memory.
  • This scheduler decides which programs should be executed from the pool of programs waiting in the job queue.
  • The long-term scheduler considers factors such as system utilization, available memory, and the types of programs that are currently running. It decides which programs should be admitted to the system, and it loads them into memory for execution.

Short term scheduler

  • Once a process is admitted to the system, the short-term scheduler (also known as the CPU scheduler) takes over.
  • The short-term scheduler is responsible for selecting which process should be executed next from the pool of processes that are currently waiting in the ready queue.
  • This scheduler is faster and runs far more frequently than the long-term scheduler, operating on a millisecond time scale. The short-term scheduler considers factors such as process priority, remaining burst time, and the amount of CPU time that each process has consumed so far.

Medium term scheduler

  • The medium-term scheduler is a less common type of scheduler that is used in some operating systems.
  • The medium-term scheduler is responsible for temporarily removing processes from the memory when there is a shortage of memory resources. It does this by swapping out some of the processes from the memory and storing them on the hard disk. This frees up memory resources for other processes to use. Later, when the swapped-out processes are needed again, the medium-term scheduler swaps them back into memory.

Process Queues

  1. Job Queue
    • Initially, all processes are stored in the job queue.
    • It is maintained in secondary memory.
    • The long-term scheduler (job scheduler) picks some of the processes and puts them in primary memory.
  2. Ready Queue
    • The ready queue is maintained in primary memory.
    • The short-term scheduler picks a job from the ready queue and dispatches it to the CPU for execution.
  3. Waiting Queue
    • When a process needs an I/O operation in order to complete its execution, the OS changes the state of the process from running to waiting.
    • The context (PCB) associated with the process is stored in the waiting queue and is used by the processor once the process finishes its I/O.

Various Times related to Process:

  1. Arrival Time
    • The time at which the process enters into the ready queue.
  2. Burst Time
    • The total amount of time required by the CPU to execute the whole process is called the Burst Time.
    • This does not include the waiting time.
    • The burst time cannot be known exactly before a process actually executes, so scheduling algorithms based purely on burst time cannot be implemented exactly in reality.
  3. Completion Time
    • The time at which the process enters into the completion state or the time at which the process completes its execution.
  4. Turn around time
    • The total amount of time spent by the process from its arrival to its completion.
  5. Waiting Time
    • The total amount of time for which the process waits for the CPU to be assigned.
  6. Response Time
    • The difference between the arrival time and the time at which the process first gets the CPU.
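These quantities are related by simple identities:

  • Turnaround Time = Completion Time − Arrival Time
  • Waiting Time = Turnaround Time − Burst Time
  • Response Time = (time the process first gets the CPU) − Arrival Time

For example (values assumed), a process that arrives at t = 2, first gets the CPU at t = 5, and completes at t = 12 with a burst time of 6 has turnaround time 12 − 2 = 10, waiting time 10 − 6 = 4, and response time 5 − 2 = 3.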

CPU Scheduling

(Figure: context switching between two processes.)
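When the CPU switches from process P0 to process P1, the kernel saves the state of P0 (program counter, registers, and other PCB fields) into P0's PCB, then reloads the previously saved state from P1's PCB and resumes P1. The switch itself is pure overhead: no useful user-level work is done while it takes place.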

What is saved in the Process Control Block?

  • The OS maintains a process control block during the lifetime of the process.
  • The process control block is a data structure used by the operating system to store all information related to a process.
  • The process control block is deleted when the process is terminated or killed.
  • The following information is saved in the PCB and changes with the state of the process:
    1. Process ID → A unique identifier for the process.
    2. Process State → The current state of the process (running, waiting, etc.)
    3. Pointer → Pointers to the PCBs of the parent and child processes, and other relevant data structures.
    4. Priority → The priority of the process in relation to other processes in the system.
    5. Program Counter → The address of the next instruction to be executed.
    6. CPU Registers → The values of the CPU registers for the process.
    7. I/O status information → Information about the process's open files, pending I/O requests, and other I/O-related data.
    8. Accounting Information → The amount of CPU time used, clock time since process creation, and other information used for process accounting.

Different criteria used for CPU scheduling in an operating system:

  1. CPU utilization: This refers to the percentage of time that the CPU is being used. A scheduling algorithm that aims to maximize CPU utilization will try to keep the CPU busy as much as possible.
  2. Throughput: This refers to the number of processes that are completed per unit time. A scheduling algorithm that aims to maximize throughput will try to maximize the number of processes that are completed in a given amount of time.
  3. Turnaround Time: This is the amount of time it takes for a process to complete, from the moment it enters the ready queue until it finishes executing. A scheduling algorithm that aims to minimize turnaround time will try to minimize the time that processes spend waiting in the ready queue.
  4. Waiting Time: This is the amount of time that a process spends waiting in the ready queue. A scheduling algorithm that aims to minimize waiting time will try to minimize the amount of time that processes spend waiting in the ready queue.
  5. Response Time: This is the amount of time it takes for a process to start responding after a request has been made. A scheduling algorithm that aims to minimize response time will try to minimize the time that processes spend waiting in the ready queue before they start executing.

The purpose of a Scheduling algorithm

  1. Maximum CPU utilization
  2. Fair allocation of CPU
  3. Maximum throughput
  4. Minimum turnaround time
  5. Minimum waiting time
  6. Minimum response time

The following algorithms can be used to schedule processes:

  1. First Come First Serve
  2. Shortest Job First
  3. Round Robin

First Come First Serve CPU Process Scheduling in OS

First Come First Serve

  • FCFS is the simplest and oldest CPU process scheduling algorithm.
  • FCFS allows processes to execute in a linear manner.
  • This means that whichever process enters the ready queue first is executed first.
  • This shows that the FCFS algorithm follows the FIFO principle.
  • The FCFS algorithm can be executed in a preemptive or non-preemptive manner.

Preemptive Approach

  • Here the OS allots the resources to a process for a predetermined period of time.
  • The process transitions from running state to ready state or from waiting state to ready state during resource allocation.
  • This switching happens because the CPU may give precedence to other processes and preempt the currently active process in favor of a higher-priority one.

Non Preemptive Approach

  • Here the resource cannot be withdrawn from a process before the process has finished running.
  • Resources are switched only when the running process finishes or transitions to the waiting state.

Convoy Effect in FCFS

  • The convoy effect is a phenomenon that occurs in the FCFS scheduling algorithm.
  • It arises when FCFS is used in a non-preemptive way.
  • Non-preemptive means that once a process or job starts executing, the operating system must run it to completion.
  • The next process does not start executing until the current process's remaining burst time reaches zero.
  • In terms of the OS, non-preemptive scheduling means the CPU is completely dedicated to the process that started first, and a new process executes only after the older process finishes.
  • In some cases, this allots the CPU to one process for too long, because in the non-preemptive FCFS approach processes are chosen in serial order. A shorter process stuck behind a larger one takes too long to complete its execution, so the waiting time (WT), turnaround time (TAT), and completion time (CT) become very high.
  • If the first process is large or its completion time is very high, the convoy effect occurs in the FCFS algorithm.
  • Suppose a longer job takes (effectively) infinite time to complete. Then the remaining processes have to wait for the same infinite time. Because of this convoy effect created by the longer job, starvation of the waiting processes increases very rapidly; this is the biggest disadvantage of FCFS CPU process scheduling.
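  • A small numeric illustration (values assumed): if P1 (burst 40) arrives just before P2 (burst 2) and P3 (burst 2), FCFS makes P2 wait 40 units and P3 wait 42 units, a total of 82 units of waiting. Had the short jobs run first, P1 would wait only 4 units and total waiting would be just 6 units.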

Characteristics of FCFS CPU Process Scheduling

  1. Implementation is simple.
  2. It does not cause any complications while in use.
  3. It can adopt both non-preemptive and preemptive strategies.
  4. It runs each process in the order in which it is received.
  5. Arrival time is used as the selection criterion for processes.

Advantages of FCFS CPU Process Scheduling

  1. In order to allocate processes, it uses the FIFO queue.
  2. The FCFS CPU scheduling process is straightforward and easy to implement.
  3. In FCFS scheduling there is no chance of process starvation, since every process is eventually served in arrival order.
  4. As there is no consideration of process priority, it is an equitable algorithm.

Disadvantages of FCFS CPU Process Scheduling

  • FCFS CPU Scheduling Algorithm has Long Waiting Time.
  • FCFS CPU scheduling favors CPU-bound processes over I/O-bound processes.
  • In FCFS there is a chance of occurrence of Convoy Effect.
  • Because FCFS is so straightforward, it often isn't very effective. Extended waiting periods go hand in hand with this: all other processes are left idle while the CPU is busy with one time-consuming process.

Problems on the FCFS CPU Scheduling Algorithm

To see how FCFS is solved in the non-preemptive approach, here is a minimal simulation sketch in C that computes completion time (CT), turnaround time (TAT), and waiting time (WT) for each process; the arrival and burst times are assumed for illustration.
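```c
#include <stdio.h>

/* Minimal non-preemptive FCFS simulation.
 * Arrival and burst times below are assumed for illustration. */

typedef struct {
    const char *name;
    int arrival;   /* arrival time   */
    int burst;     /* CPU burst time */
} proc_t;

int main(void) {
    proc_t p[] = { {"P1", 0, 7}, {"P2", 2, 4}, {"P3", 4, 1}, {"P4", 5, 4} };
    int n = sizeof p / sizeof p[0];
    int clock = 0;
    double total_wt = 0, total_tat = 0;

    printf("Proc  AT  BT  CT  TAT  WT\n");
    for (int i = 0; i < n; i++) {          /* processes already sorted by arrival */
        if (clock < p[i].arrival)          /* CPU idles until the process arrives */
            clock = p[i].arrival;
        clock += p[i].burst;               /* run the whole burst: non-preemptive */
        int tat = clock - p[i].arrival;    /* turnaround = completion - arrival   */
        int wt  = tat - p[i].burst;        /* waiting = turnaround - burst        */
        total_tat += tat;
        total_wt  += wt;
        printf("%-4s %3d %3d %3d %4d %3d\n",
               p[i].name, p[i].arrival, p[i].burst, clock, tat, wt);
    }
    printf("Average TAT = %.2f, Average WT = %.2f\n", total_tat / n, total_wt / n);
    return 0;
}
```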

Shortest Job First (SJF) Scheduling

Advantages of SJF

  • Maximum throughput
  • Minimum average waiting and turnaround time

Disadvantages of SJF

  • It may suffer from starvation.
  • It is not implementable because the exact burst time for a process can't be known in advance.

Non-Preemptive Shortest Job First CPU Scheduling Algorithm (a code sketch follows the list below):

  • Advantages:
    • SJF is better than the FCFS algorithm as it reduces the average waiting time.
    • SJF is generally used for long-term scheduling.
    • It is suitable for jobs running in batches, where run times are already known.
    • SJF is provably optimal in terms of average turnaround time.
  • Disadvantages:
    • SJF may cause very long turnaround times or starvation.
    • In SJF, a job's run time must be known in advance, but it is often hard to predict.
    • It can be complicated to predict the length of the upcoming CPU request.
    • Long jobs may starve if short jobs keep arriving, which hurts rather than helps average turnaround time.
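A minimal non-preemptive SJF simulation sketch in C: at each scheduling decision it picks, among the processes that have already arrived, the one with the shortest burst and runs it to completion. The arrival and burst times are assumed for illustration.

```c
#include <stdio.h>

/* Minimal non-preemptive SJF simulation.
 * Arrival and burst times are assumed for illustration. */

typedef struct {
    const char *name;
    int arrival, burst, done;
} proc_t;

int main(void) {
    proc_t p[] = { {"P1", 0, 7, 0}, {"P2", 2, 4, 0}, {"P3", 4, 1, 0}, {"P4", 5, 4, 0} };
    int n = sizeof p / sizeof p[0];
    int clock = 0, completed = 0;
    double total_wt = 0, total_tat = 0;

    while (completed < n) {
        int best = -1;
        /* among processes that have arrived, pick the shortest burst */
        for (int i = 0; i < n; i++)
            if (!p[i].done && p[i].arrival <= clock &&
                (best < 0 || p[i].burst < p[best].burst))
                best = i;
        if (best < 0) { clock++; continue; }   /* CPU idle: nothing has arrived yet */
        clock += p[best].burst;                /* run to completion: non-preemptive */
        int tat = clock - p[best].arrival;
        int wt  = tat - p[best].burst;
        total_tat += tat; total_wt += wt;
        printf("%s: CT=%d TAT=%d WT=%d\n", p[best].name, clock, tat, wt);
        p[best].done = 1;
        completed++;
    }
    printf("Average TAT = %.2f, Average WT = %.2f\n", total_tat / n, total_wt / n);
    return 0;
}
```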

Preemptive SJF - Shortest Remaining Time First Scheduling Algorithm

  • In the SRTF scheduling algorithm, the process with the smallest amount of time remaining until completion is selected to execute.
  • Since the currently executing process is, by definition, the one with the shortest remaining time, and since that time only decreases as execution progresses, a process will always run until it completes or a new process arrives that requires a smaller amount of time.

To show the working of preemptive shortest job first, here is a minimal simulation sketch in C. It advances the clock one time unit at a step and re-selects the process with the smallest remaining time at every tick; the arrival and burst times are assumed for illustration.
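```c
#include <stdio.h>

/* Minimal preemptive SJF (SRTF) simulation, one time unit per step.
 * Arrival and burst times are assumed for illustration. */

typedef struct {
    const char *name;
    int arrival, burst, remaining;
} proc_t;

int main(void) {
    proc_t p[] = { {"P1", 0, 7, 7}, {"P2", 2, 4, 4}, {"P3", 4, 1, 1}, {"P4", 5, 4, 4} };
    int n = sizeof p / sizeof p[0];
    int completed = 0;
    double total_wt = 0, total_tat = 0;

    for (int clock = 0; completed < n; clock++) {
        int best = -1;
        /* among arrived, unfinished processes, pick the smallest remaining time */
        for (int i = 0; i < n; i++)
            if (p[i].remaining > 0 && p[i].arrival <= clock &&
                (best < 0 || p[i].remaining < p[best].remaining))
                best = i;
        if (best < 0) continue;               /* CPU idle this tick */
        if (--p[best].remaining == 0) {       /* run for one time unit */
            int tat = (clock + 1) - p[best].arrival;
            int wt  = tat - p[best].burst;
            total_tat += tat; total_wt += wt;
            printf("%s: CT=%d TAT=%d WT=%d\n", p[best].name, clock + 1, tat, wt);
            completed++;
        }
    }
    printf("Average TAT = %.2f, Average WT = %.2f\n", total_tat / n, total_wt / n);
    return 0;
}
```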

Round Robin Scheduling Algorithm

Characteristics of RR:

  • It is simple, easy to implement, and starvation-free, as all processes get a fair share of CPU time.
  • It is one of the most commonly used techniques in CPU scheduling.
  • It is preemptive, as processes are assigned the CPU only for a fixed slice of time at most.
  • Its main disadvantage is the extra overhead of context switching.

Advantages of RR:

  • There is fairness, since every process gets an equal share of the CPU.
  • A newly created process is added to the end of the ready queue.
  • A round-robin scheduler generally employs time-sharing, giving each job a time slot or quantum.
  • While performing round-robin scheduling, a particular time quantum is allotted to each job.
  • Each process gets rescheduled after its quantum expires.

Disadvantages of RR:

  • Waiting time and response time can be large.
  • Throughput can be low.
  • Frequent context switches add overhead.
  • The Gantt chart becomes very large if the time quantum is small.
  • Scheduling becomes time-consuming for a small quantum.

Consider four processes P1, P2, P3, and P4 with a given Time Quantum = 2. A minimal Round Robin simulation in C is sketched below; the arrival and burst times are assumed for illustration, and the ready queue is a simple FIFO.
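```c
#include <stdio.h>

/* Minimal Round Robin simulation with time quantum = 2.
 * Arrival and burst times for P1..P4 are assumed for illustration. */

#define QUANTUM 2

typedef struct {
    const char *name;
    int arrival, burst, remaining;
} proc_t;

int main(void) {
    proc_t p[] = { {"P1", 0, 7, 7}, {"P2", 2, 4, 4}, {"P3", 4, 1, 1}, {"P4", 5, 4, 4} };
    int n = sizeof p / sizeof p[0];
    int queue[64], head = 0, tail = 0;      /* FIFO ready queue of process indices */
    int in_queue[4] = {0};                  /* has process i entered the queue yet? */
    int clock = 0, completed = 0;
    double total_wt = 0, total_tat = 0;

    queue[tail++] = 0; in_queue[0] = 1;     /* P1 arrives at t = 0 */
    while (completed < n) {
        if (head == tail) {                 /* ready queue empty: CPU idles a tick */
            clock++;
            for (int i = 0; i < n; i++)
                if (!in_queue[i] && p[i].remaining > 0 && p[i].arrival <= clock) {
                    queue[tail++] = i; in_queue[i] = 1;
                }
            continue;
        }
        int cur = queue[head++];
        int run = p[cur].remaining < QUANTUM ? p[cur].remaining : QUANTUM;
        clock += run;
        p[cur].remaining -= run;
        /* new arrivals enter the queue before the preempted process re-enters */
        for (int i = 0; i < n; i++)
            if (!in_queue[i] && p[i].remaining > 0 && p[i].arrival <= clock) {
                queue[tail++] = i; in_queue[i] = 1;
            }
        if (p[cur].remaining > 0) {
            queue[tail++] = cur;            /* quantum expired: back of the queue */
        } else {
            int tat = clock - p[cur].arrival, wt = tat - p[cur].burst;
            total_tat += tat; total_wt += wt;
            printf("%s: CT=%d TAT=%d WT=%d\n", p[cur].name, clock, tat, wt);
            completed++;
        }
    }
    printf("Average TAT = %.2f, Average WT = %.2f\n", total_tat / n, total_wt / n);
    return 0;
}
```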

Multiple-Processor Scheduling

Approaches to Multiple-Processor Scheduling:

  • One approach is for all scheduling decisions and I/O processing to be handled by a single processor, called the master server, while the other processors execute only user code. This is simple and reduces the need for data sharing. This scenario is called asymmetric multiprocessing.
  • A second approach uses symmetric multiprocessing, where each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute.

Processor Affinity

There are two types of processor affinity:

  1. Soft Affinity: When an operating system has a policy of attempting to keep a process running on the same processor but not guaranteeing it will do so, this situation is called soft affinity.
  2. Hard Affinity: Hard affinity allows a process to specify a subset of processors on which it may run. Some systems, such as Linux, implement soft affinity by default but also provide system calls like sched_setaffinity() that support hard affinity.
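As a concrete illustration, here is a minimal Linux-specific sketch that pins the calling process to CPU 0 via sched_setaffinity() (error handling kept short):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling process to CPU 0 using Linux hard affinity. */
int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);            /* start with an empty CPU set */
    CPU_SET(0, &mask);          /* allow only CPU 0            */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("Process is now pinned to CPU 0\n");
    return 0;
}
```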

Load Balancing

There are two general approaches to load balancing:

  1. Push Migration: In push migration, a dedicated task routinely checks the load on each processor; if it finds an imbalance, it evens the load out by moving processes from overloaded processors to idle or less busy ones.
  2. Pull Migration: Pull Migration occurs when an idle processor pulls a waiting task from a busy processor for its execution.

Multicore Processors
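A multicore processor places multiple computing cores on a single chip. To the operating system, each core appears as a separate logical CPU, so a single chip can run several threads truly in parallel; the scheduler must then also consider factors such as cache sharing between cores and keeping cores busy during memory stalls.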

Process and Thread

Difference between Process and Thread:
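  • A process is an independent program in execution with its own address space; a thread is a lightweight unit of execution that runs within a process and shares its address space with the other threads of that process.
  • Process creation and context switching between processes are relatively expensive; threads are cheaper to create and faster to switch between.
  • Processes communicate through inter-process communication mechanisms such as pipes, sockets, or shared memory; threads of the same process can communicate directly through shared variables.
  • Processes are isolated from one another, so one crashing process does not affect the others; a misbehaving thread can bring down its entire process.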

Multithreading in OS

Applications

Threading is a technique used in computer programming to improve the performance and responsiveness of software applications. It involves dividing a program into multiple smaller threads or tasks that can be executed independently and concurrently.

  • Threading is used widely in almost every field.
  • It is most widely seen on the internet nowadays, in transaction processing of every type: recharges, online transfers, banking, and so on.
  • GUI Applications: Threading is commonly used in Graphical User Interface (GUI) applications to keep the interface responsive while performing computationally intensive tasks. For example, when you click a button in a GUI application, a separate thread can be created to perform the task while the main thread keeps the interface responsive.
  • Network Applications: Threading can be used in network applications to improve performance and scalability. For example, a server application can use multiple threads to handle incoming client connections simultaneously.
  • Multimedia Applications: Threading is commonly used in multimedia applications to play audio and video files while simultaneously performing other tasks. For example, a media player can use a separate thread to play the media file while the main thread handles user input and interface updates.
  • Parallel Processing: Threading can be used in parallel processing applications to speed up computations by dividing a large task into smaller independent tasks that can be executed simultaneously on different threads.
  • Games: Threading can be used in games to keep the game running smoothly while performing tasks such as physics calculations, artificial intelligence, and rendering.
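A minimal pthreads sketch of the worker-thread pattern described above: the main thread stays free while a worker thread does a long computation. The task name and simulated workload are illustrative; compile with -pthread.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* A long-running task handed off to a worker thread so the
 * main thread stays responsive (the workload is simulated). */
static void *worker(void *arg) {
    const char *task = arg;
    printf("worker: starting %s\n", task);
    sleep(2);                          /* stand-in for a heavy computation */
    printf("worker: finished %s\n", task);
    return NULL;
}

int main(void) {
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, (void *)"image-export") != 0) {
        perror("pthread_create");
        return 1;
    }
    /* the main thread remains free for other work, e.g. handling user input */
    printf("main: still responsive while the worker runs\n");
    pthread_join(tid, NULL);           /* wait for the worker to finish */
    return 0;
}
```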

Previous Year Questions

Q- What do you mean by PCB? Where is it used? What are its contents? Explain.

Q- Distinguish between the following:

Q- Consider four processes P1, P2, P3 and P4 with arrival times (0, 2, 4, 5) and burst times (7, 4, 1, 4) respectively. What are the average waiting time and turnaround time under the SJF and Round Robin scheduling algorithms (with a time quantum of 2 ns), respectively?

Q- Draw the process state diagram and explain functionality of each state in detail.

Q- Explain the following:

  1. Multithreading
  2. Context switching
  3. Schedulers

Assignment Questions and Answers

1. Some CPUs provide for more than two modes of operation. What are two possible uses of these multiple modes?

Ans. Multiple modes of operation in CPUs can be used for various purposes, including:

  1. Privilege separation: Some CPUs provide multiple modes of operation to separate the privilege levels of software running on the system. This enables the system to provide different levels of access to resources and hardware devices based on the privilege level of the software. For example, operating systems can use different modes of operation to protect system resources and prevent unauthorized access to them.
  2. Virtualization: Multiple modes of operation can also be used to enable virtualization, where multiple virtual machines (VMs) can run on a single physical machine. In this scenario, each VM can run in its own mode of operation, isolating it from other VMs and the host system. This provides increased security and flexibility, as each VM can run its own operating system and software stack, independently of other VMs on the same machine.

2. What are the three major activities of an operating system with regard to memory management?

Ans. The three major activities of an operating system with regard to memory management are:

  1. Allocation: The operating system is responsible for allocating memory to different processes and applications running on the system. When a process or application requests memory, the operating system determines whether there is enough free memory available and, if so, allocates a portion of memory to the process.
  2. Deallocation: The operating system is also responsible for deallocating memory that is no longer needed by a process. When a process terminates or no longer requires a portion of memory that it had previously allocated, the operating system deallocates that memory and makes it available for other processes to use.
  3. Protection: The operating system must ensure that each process can access only the memory that it has been allocated and prevent processes from accessing memory that belongs to other processes. The operating system uses techniques such as address translation and memory protection mechanisms to enforce memory protection and prevent processes from accessing unauthorized memory locations.

3. Provide two programming examples in which multi-threading does not provide better performance than a single-threaded solution

Ans. There are situations in which multithreading does not provide better performance than a single-threaded solution. Two programming examples are:

  1. Small tasks: If the tasks that need to be performed are very small, the overhead of creating and managing multiple threads can actually outweigh the benefits of parallelization. For example, consider a program that needs to sort an array of 10 elements. In this case, creating multiple threads to sort the array would likely result in slower performance than simply sorting the array in a single thread.
  2. Shared resources: If the threads in a multithreaded program need to access shared resources such as memory or files, managing access to these resources can become a bottleneck and limit the benefits of parallelization. For example, consider a program that reads data from a file, processes it, and writes the results to another file. If multiple threads are used to perform the processing, they may contend for access to the input and output files, leading to reduced performance compared to a single-threaded solution that processes the data sequentially.
In both of these examples, the overhead of creating and managing threads can outweigh the benefits of parallelization, leading to slower performance than a single-threaded solution.