Memory Management
- Memory is an essential part of the computer, used to store data. Its management is critical
to the computer system because the amount of main memory available in a computer system is very limited.
- At any time, many processes are competing for it. Moreover, to increase performance, several processes
are executed simultaneously. For this, we must keep several processes in the main memory, so it is even
more important to manage them effectively.
Role of memory management
- Allocation: Memory management is responsible for allocating memory to processes or programs when
they request it. It tracks the availability of memory blocks and assigns them to processes based on
their requirements. This allocation process ensures that each program receives the necessary memory
to run correctly.
- Deallocation: When a process no longer needs memory, memory management deallocates the memory and
makes it available for other processes. This deallocation prevents memory leakage and optimizes
memory usage.
- Memory Organization: Memory management organizes the available memory space efficiently to
accommodate multiple processes. It divides the memory into fixed-size blocks or pages, creating a
logical structure that simplifies the allocation and deallocation process.
- Memory Protection: Memory management enforces memory protection mechanisms to ensure that processes
do not access memory locations that they are not authorized to access. It prevents one process from
interfering with another process's memory, which enhances security and stability.
- Memory Sharing: In certain cases, multiple processes may need to share memory resources. Memory
management facilitates memory sharing by allowing multiple processes to access the same memory
region, enabling efficient communication and data exchange between processes.
- Fragmentation Management: Memory management handles fragmentation, which can occur due to the
allocation and deallocation of memory blocks. It manages both internal fragmentation (unused memory
within allocated blocks) and external fragmentation (unused memory between allocated blocks) to
minimize wastage of memory.
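To make the two kinds of fragmentation concrete, here is a small illustrative calculation (the block size, request sizes and hole sizes are made-up numbers, not taken from any real system):

```python
import math

BLOCK = 1024                  # assumed fixed allocation unit, in bytes

# Internal fragmentation: space wasted *inside* the blocks handed out,
# because each request is rounded up to whole blocks.
requests = [700, 300, 1500]   # hypothetical allocation requests
internal = sum(math.ceil(r / BLOCK) * BLOCK - r for r in requests)
print(internal)               # (1024-700) + (1024-300) + (2048-1500) = 1596

# External fragmentation: enough free memory exists in total, but it is
# split into holes that are each too small for the request.
holes = [400, 300, 350]       # sizes of scattered free holes
request = 900
print(sum(holes) >= request)            # True: 1050 bytes free in total
print(all(h < request for h in holes))  # True: yet no single hole fits it
```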
Memory Management Techniques:
The memory management techniques can be classified into the following main categories:
- Contiguous memory management schemes
- Non-contiguous memory management schemes
What is a logical address?
- A logical address is an address that is generated by the CPU during program execution. The logical
address is a virtual address as it does not exist physically, and therefore, it is also known as a
Virtual Address. The CPU uses this address as a reference to access the physical memory location.
What is a physical address?
- The physical address identifies the physical location of the required data in memory. The user never
directly deals with the physical address but accesses it through its corresponding logical address. The
user program generates logical addresses and believes it is running in that address space, but the program
needs physical memory for its execution. Therefore, each logical address must be mapped to a physical
address by the MMU before it is used. The Physical Address Space is the set of all physical addresses
corresponding to the logical addresses in a logical address space.
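As a minimal illustration of this mapping (the base and limit values below are invented; a real MMU uses hardware registers and, with paging, a page table), a simple relocation-register MMU can be modelled as:

```python
def mmu_translate(logical_address, base, limit):
    """Map a CPU-generated logical address to a physical address.

    base  -- relocation register: where the process starts in physical memory
    limit -- size of the process's logical address space
    """
    if logical_address >= limit:          # protection check
        raise MemoryError("trap: logical address out of bounds")
    return base + logical_address         # relocation

# Example with made-up values: process loaded at physical address 14000,
# logical address space of 1000 bytes.
print(mmu_translate(346, base=14000, limit=1000))   # -> 14346
```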
Difference between contiguous and Non-contiguous memory allocation:
Contiguous memory allocation:
- Contiguous memory allocation is a method in which a single contiguous section of
memory is allocated to a process or file needing it. Because each allocation occupies one
place, the free/unused memory partitions are not distributed in a random
fashion here and there across the whole memory space.
Non-contiguous memory allocation:
- Non-contiguous memory allocation is the opposite of the contiguous method: it
allocates memory space from different locations to the process as per its
requirements. As the allocated memory is distributed, the freely
available memory space is also scattered here and there. This technique of memory allocation
helps to reduce the wastage of memory that internal and external
fragmentation cause under contiguous allocation.
Memory allocation algorithms:
- Best fit: Selects the smallest free partition that is large enough for the process. It
minimizes internal fragmentation.
- Worst fit: Selects the largest available memory partition for the process, leaving the
biggest possible leftover hole.
- First fit: Allocates the process to the first available memory partition that is large enough.
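The three strategies can be simulated over a list of fixed-size partitions, here the six partitions used in the practice problem below (the `allocate` helper and its names are our own sketch; real memory managers track far more state):

```python
def allocate(partitions, processes, strategy):
    """Assign each process to a free partition using the given strategy.

    partitions -- list of partition sizes in KB (fixed partitioning)
    strategy   -- 'first', 'best' or 'worst'
    Returns the chosen partition index per process (None = must wait).
    """
    free = list(range(len(partitions)))           # indices of free partitions
    result = []
    for size in processes:
        fits = [i for i in free if partitions[i] >= size]
        if not fits:
            result.append(None)                   # no partition is big enough
            continue
        if strategy == "first":
            choice = fits[0]                      # first partition that fits
        elif strategy == "best":
            choice = min(fits, key=lambda i: partitions[i])  # tightest fit
        else:                                     # 'worst': largest hole
            choice = max(fits, key=lambda i: partitions[i])
        free.remove(choice)
        result.append(choice)
    return result

parts = [200, 400, 600, 500, 300, 250]
procs = [357, 210, 468, 491]
print(allocate(parts, procs, "first"))   # -> [1, 2, 3, None]
print(allocate(parts, procs, "best"))    # -> [1, 5, 3, 2]
print(allocate(parts, procs, "worst"))   # -> [2, 3, None, None]
```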
Practice problems based on memory allocation algorithms:
Consider six memory partitions of size 200 KB, 400 KB, 600 KB, 500 KB, 300 KB and 250 KB. These
partitions need to be allocated to four processes of sizes 357 KB, 210 KB, 468 KB and 491 KB in that
order.
Perform the allocation of processes using:
- First fit algorithm
- Best Fit algorithm
- Worst Fit algorithm
Solution:
According to the question, the main memory has been divided into fixed-size partitions of
200 KB, 400 KB, 600 KB, 500 KB, 300 KB and 250 KB.
Let us say the given processes are:
- Process P1 = 357 KB
- Process P2 = 210 KB
- Process P3 = 468 KB
- Process P4 = 491 KB
Allocation using First Fit Algorithm:
- In First Fit Algorithm:
- Algorithm starts scanning the partitions serially.
- When a partition big enough to store the process is found, it allocates that
partition to the process.
- The allocation of partitions to the given processes is shown below:
    - P1 (357 KB) → 400 KB partition
    - P2 (210 KB) → 600 KB partition
    - P3 (468 KB) → 500 KB partition
    - P4 (491 KB) → not allocated
P4 is not allocated because no free partition of size greater than or equal to the size of process P4
remains available.
Allocation using Best Fit Algorithm:
- In Best Fit Algorithm:
- Algorithm first scans all the partitions.
- It then allocates the partition of smallest size that can store the process.
- The allocation of partitions to the given processes is shown below:
    - P1 (357 KB) → 400 KB partition
    - P2 (210 KB) → 250 KB partition
    - P3 (468 KB) → 500 KB partition
    - P4 (491 KB) → 600 KB partition
All four processes are successfully allocated.
Allocation using Worst Fit Algorithm:
- In Worst Fit Algorithm,
- Algorithm first scans all the partitions.
- It then allocates the largest partition that can store the process.
- The allocation of partitions to the given processes is shown below:
    - P1 (357 KB) → 600 KB partition
    - P2 (210 KB) → 500 KB partition
    - P3 (468 KB) → not allocated
    - P4 (491 KB) → not allocated
P3 and P4 are not allocated because no remaining partition is large enough to hold them.
Paging
- In operating systems, paging is a storage mechanism used to retrieve processes from the secondary
storage into the main memory in the form of pages.
- The main idea behind the paging is to divide each process in the form of pages. The main memory will
also be divided in the form of frames.
- One page of the process is stored in one of the frames of the memory. The pages can be stored
at different locations of the memory; the frames holding a process's pages need not be
contiguous.
- Pages of the process are brought into the main memory only when they are required; otherwise they
reside in the secondary storage.
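The frame lookup that paging performs can be sketched in a few lines (the page size and page-table contents below are made-up illustration values):

```python
PAGE_SIZE = 4096                    # assume 4 KB pages

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    # Split the logical address into a page number and an offset,
    # then look the page up in the page table.
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page not in page_table:
        raise LookupError("page fault: page %d not in memory" % page)
    return page_table[page] * PAGE_SIZE + offset

# Logical address 8200 = page 2, offset 8; page 2 lives in frame 7.
print(translate(8200))              # -> 7*4096 + 8 = 28680
```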
What is virtual memory in OS?
- Virtual Memory is a storage scheme that gives the user an illusion of having a very big main memory.
This is done by treating a part of secondary memory as if it were main memory.
- In this scheme, the user can load processes bigger than the available main memory, under the
illusion that enough memory is available to load the whole process.
- Instead of loading one big process in the main memory, the Operating System loads the different
parts of more than one process in the main memory.
- By doing this, the degree of multiprogramming will be increased and therefore, the CPU utilization
will also be increased.
How does virtual memory work?
- Virtual memory has become quite common in modern systems. In this scheme, whenever some
pages need to be loaded into the main memory for execution and memory is not available for
that many pages, then instead of refusing to bring the pages into main memory,
the OS searches for the RAM areas that have been least recently used or that
are not referenced, and copies them into the secondary memory to make space for the new pages in
the main memory.
- Since all of this happens automatically, it makes the computer feel like it has
unlimited RAM.
Demand Paging
- Demand Paging is a popular method of virtual memory management. In demand paging, the pages of a
process which are least used, get stored in the secondary memory.
- A page is copied to the main memory when it is demanded, that is, when a page fault occurs. There are
various page replacement algorithms which are used to determine the pages that will be replaced. We will
discuss each one of them later in detail.
Advantages of virtual memory
- The degree of multiprogramming will be increased.
- The user can run large applications with less physical RAM.
- There is no need to buy additional RAM.
Disadvantages of Virtual memory
- The system becomes slower since swapping takes time.
- It takes more time to switch between applications.
- The user will have less hard disk space available for other use.
Page replacement algorithms
- For a process to be executed, the process should be in main memory.
- We know that the size of main memory is finite/limited.
- Sometimes the size of a process might be larger than main memory; then we cannot
load the complete process into main memory. In that case it is better to load only the portion of
the process which is currently required, and this technique is known as demand paging.
- By loading only the needed portion of each process, many processes can reside in main memory.
- To implement this, we divide a process into small equal-size portions called pages.
- Using the demand paging concept we can implement the virtual memory concept.
- If a needed page is not in main memory, this is known as a page fault.
- The operating system will load the needed page into main memory; if main memory is full, the OS
has to replace a page already existing in main memory so that the requested page can have a frame
in main memory.
- To decide which page should be replaced, we use page replacement algorithms.
- Following are the page replacement algorithms:
- FIFO
- LRU
- Optimal
FIFO page replacement algorithm
- Replace the page that has been in main memory the longest (first in, first out).
LRU page replacement algorithm
- Replace the page that has not been referenced for the longest time (least recently used).
Optimal page replacement algorithm
- Replace a page that will not be used in the near future.
- The Optimal algorithm gives the least number of page faults, but since it requires knowledge of
future references it is mainly used as a benchmark.
File Systems
- The file system is the part of the operating system which is responsible for file management. It
provides a mechanism to store data and to access file contents, including data and programs.
Some operating systems treat everything as a file, for example Linux-based systems such as Ubuntu.
- The file system takes care of the following issues:
    - File structure:
We have seen various data structures in which a file can be stored.
The task of the file system is to maintain an optimal file structure.
    - Recovering free space:
Whenever a file gets deleted from the hard disk, free space is created on the
disk. There can be many such spaces which need to be recovered in order to reallocate them to
other files.
    - Disk space assignment to the files:
The major concern about files is deciding where to store them on the hard disk.
    - Tracking data location:
A file may or may not be stored within only one block. It can be stored in
non-contiguous blocks on the disk. We need to keep track of all the blocks on which the
parts of the file reside.
File System structure
- The file system provides efficient access to the disk by allowing data to be stored, located and
retrieved in a convenient way. A file system must be able to store a file, locate it and retrieve it.
- Most of the Operating Systems use layering approach for every task including file systems. Every
layer of the file system is responsible for some activities.
- The file system is commonly pictured as a stack of layers: application programs call the logical
file system (which manages metadata), which uses the file-organization module (mapping logical
blocks to physical blocks), which in turn uses the basic file system (issuing generic block
commands) on top of the I/O control layer and the devices themselves.
Storage structure in Operating systems
- Basically we want the programs and data to reside in main memory permanently.
- This arrangement is usually not possible for the following two reasons:
- Main memory is usually too small to store all needed programs and data permanently.
- Main memory is a volatile storage device that loses its contents when power is turned
off or otherwise lost.
There are two types of storage devices
- Volatile storage device: It loses its contents when the power of the device is removed.
- Non-volatile storage device: It does not lose its contents when the power is removed; it
holds the data even when power is off.
- Secondary Storage is used as an extension of main memory. Secondary storage devices can hold the
data permanently.
- Storage devices consist of registers, cache, main memory, electronic disk, magnetic disk,
optical disk and magnetic tapes. Each storage system provides the basic facility of storing a datum
and of holding the datum until it is retrieved at a later time. The storage devices differ
in speed, cost, size and volatility. The most common secondary-storage device is the
magnetic disk, which provides storage for both programs and data.
Figure: Storage device hierarchy
- In this hierarchy all the storage devices are arranged according to speed and cost. The higher
levels are expensive, but they are fast. As we move down the hierarchy, the cost per bit
generally decreases, where as the access time generally increases.
- The storage systems above the electronic disk are volatile, whereas those below are
non-volatile.
File access methods
- When a file is used, its information is read into computer memory, and there are several
ways to access this information. Some systems provide only one access method for
files. Other systems, such as those of IBM, support many access methods, and choosing the right
one for a particular application is a major design problem.
- There are three ways to access a file in a computer system:
- Sequential-Access
- Direct Access
- Index sequential Method
Sequential-Access
- It is the simplest access method. Information in the file is processed in order, one record
after the other. This mode of access is by far the most common; for example, editors and
compilers usually access files in this fashion.
Advantages of Sequential Access Method:
- It is simple to implement this file access mechanism.
- It is suitable for applications that require access to all records in a file, in a
specific order.
- It is less prone to data corruption as the data is written sequentially and not
randomly.
- It is a more efficient method for reading large files, as it only reads the required
data and does not waste time reading unnecessary data.
- It is a reliable method for backup and restore operations, as the data is stored
sequentially and can be easily restored if required.
Disadvantages of Sequential Access Method:
- It does not allow for quick access to specific records in the file. The entire file must
be searched sequentially to find a specific record, which can be time-consuming.
- It is not well-suited for applications that require frequent updates or modifications to the
file. Updating or inserting a record in the middle of a large file can be a slow and
cumbersome process.
- Sequential access can also result in wasted storage space if records are of varying lengths.
The space between records cannot be used by other records, which can result in inefficient
use of storage.
Direct Access
- Another method is the direct access method, also known as the relative access method.
- A file is viewed as a numbered sequence of fixed-length logical records that allow the
program to read and write records rapidly, in no particular order.
- Direct access is based on the disk model of a file, since a disk allows random access to
any file block.
Advantages of Direct Access Method:
- Files can be accessed immediately, decreasing the average access time.
- In the direct access method, in order to access a block, there is no need to traverse
all the blocks present before it.
Index sequential method
- It is the other method of accessing a file that is built on the top of the sequential access
method.
- These methods construct an index for the file. The index, like an index in the back of a
book, contains the pointer to the various blocks. To find a record in the file, we first
search the index, and then by the help of pointer we access the file directly.
- Key points:
- It is built on top of Sequential access.
- It control the pointer by using index.
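The two-step lookup (search the index, then scan one block) can be sketched as a toy example; the block layout, keys and record contents below are invented for illustration:

```python
import bisect

# Hypothetical data file: records grouped into blocks, sorted by key.
blocks = [
    [(1, "rec-A"), (4, "rec-B"), (7, "rec-C")],
    [(9, "rec-D"), (12, "rec-E"), (15, "rec-F")],
    [(18, "rec-G"), (21, "rec-H")],
]
# Index: smallest key in each block, acting as a pointer to that block.
index_keys = [blk[0][0] for blk in blocks]          # [1, 9, 18]

def lookup(key):
    # 1. Binary-search the index to find the block that could hold the key.
    b = bisect.bisect_right(index_keys, key) - 1
    if b < 0:
        return None
    # 2. Scan only that one block sequentially.
    for k, record in blocks[b]:
        if k == key:
            return record
    return None

print(lookup(12))   # -> rec-E
print(lookup(5))    # -> None (no such record)
```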
Relative Record Access
- Relative record access is a file access method used in operating systems where records are
accessed relative to the current position of the file pointer.
- In this method, records are located based on their position relative to the current record,
rather than by a specific address or key value.
- Key Points of Relative Record Access:
- Relative record access is a random access method that allows records to be accessed
based on their position relative to the current record.
- This method is efficient for accessing individual records but may not be suitable
for files that require frequent updates or random access to specific records.
- Relative record access requires fixed-length records and may not be flexible enough
for some applications.
- This method is useful for processing records in a specific order or for files that
are accessed sequentially.
Advantages of Relative Record Access:
- Random Access: Relative record access allows random access to records in a file. The system
can access any record at a specific offset from the current position of the file pointer.
- Efficient Retrieval: Since the system only needs to read the current record and any records
that need to be skipped, relative record access is more efficient than sequential access for
accessing individual records.
- Useful for Sequential Processing: Relative record access is useful for processing records in
a specific order. For example, if the records are sorted in a specific order, the system can
access the next or previous record relative to the current position of the file pointer.
Disadvantages of Relative Record Access:
- Fixed Record Length: Relative record access requires fixed-length records. If the records
are of varying length, it may be necessary to use padding to ensure that each record is the
same length.
- Limited Flexibility: Relative record access is not very flexible. It is difficult to insert
or delete records in the middle of a file without disrupting the relative positions of other
records.
- Limited Application: Relative record access is best suited for files that are accessed
sequentially or with some regularity, but it may not be appropriate for files that are
frequently updated or require random access to specific records.
Disk Scheduling
- Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk. Disk
scheduling is also known as I/O scheduling.
- Disk scheduling is important because:
- Multiple I/O requests may arrive from different processes, and only one I/O request can be
served at a time by the disk controller. Thus other I/O requests need to wait in the waiting
queue and need to be scheduled.
- Two or more requests may be far from each other so it can result in greater disk arm
movement.
- Hard drives are one of the slowest parts of the computer system and thus need to be
accessed in an efficient manner.
Important terminologies
- Seek time : Seek time is the time taken to move the disk arm to the specified track where the
data is to be read or written. So the disk scheduling algorithm that gives minimum average
seek time is better.
- Rotational latency : Rotational latency is the time taken by the desired sector of the disk
to rotate into a position where it can be accessed by the read/write head. So the disk scheduling
algorithm that gives minimum rotational latency is better.
- Transfer time : Transfer time is the time to transfer the data. It depends on the
rotating speed of the disk and the number of bytes to be transferred.
- Disk Access Time = Seek time + Rotational latency + Transfer time
Total seek time = Total head movement × Seek time per track
- Disk Response Time : Response time is the time spent by a request waiting to
perform its I/O operation. Average response time is the mean response time of all requests.
Variance of response time is a measure of how individual requests are serviced with respect to the
average response time. So the disk scheduling algorithm that gives minimum variance of response
time is better.
Now we are going to discuss various Disk Scheduling Algorithms
1. FCFS
- FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are addressed
in the order they arrive in the disk queue.
- Let us understand this with the help of an example. Suppose the request queue is 170, 43, 140,
16, 190 and the initial head position is at track 50.
- So, total overhead movement (total distance covered by the disk arm) = (170 - 50) + (170 -
43) + (140 - 43) + (140 - 16) + (190 - 16) = 642
Advantages:
- Every request gets a fair chance
- No indefinite postponement
Disadvantages:
- Does not try to optimize seek time.
- May not provide the best possible service.
2. SSTF
- In SSTF (Shortest Seek Time First), requests having shortest seek time are
executed first. So, the seek time of every request is calculated in advance in the queue and
then they are scheduled according to their calculated seek time.
- As a result, the request near the disk arm will get executed first. SSTF is certainly an
improvement over FCFS as it decreases the average response time and increases the
throughput of the system. Let us understand this with the same example (request queue 170, 43,
140, 16, 190; initial head position 50).
- So, total overhead movement (total distance covered by the disk arm) = (50 - 16) + (190 - 16) =
208
Advantages:
- Average Response Time decreases
- Throughput increases
Disadvantages:
- Overhead to calculate seek time in advance
- Can cause Starvation for a request if it has a higher seek time as compared to
incoming requests
- High variance of response time as SSTF favors only some requests
3. SCAN
- In SCAN algorithm the disk arm moves in a particular direction and services the requests coming
in its path and after reaching the end of the disk, it reverses its direction and again services
the request arriving in its path.
- So, this algorithm works as an elevator and is hence also known as an elevator algorithm. As a
result, the requests at the midrange are serviced more and those arriving behind the disk arm
will have to wait.
- Therefore, the total overhead movement (total distance covered by the disk arm): (199 - 50) +
(199 - 16) = 332
Advantages:
- High throughput
- Low variance of response time
- Low average response time
Disadvantages:
- Long waiting time for requests for locations just visited by disk arm.
4. CSCAN
- In the SCAN algorithm, the disk arm rescans the path it has already scanned after reversing
its direction. So it may be possible that too many requests are waiting at the other end, or
there may be zero or few requests pending in the area just scanned.
- These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of reversing
its direction, goes to the other end of the disk and starts servicing the requests from there. So
the disk arm moves in a circular fashion; this algorithm is similar to SCAN and
hence is known as C-SCAN (Circular SCAN).
- So, the total overhead movement (total distance covered by the disk arm) is calculated as:
(199 - 50) + (199 - 0) + (43 - 0) = 391
5. LOOK
- It is similar to the SCAN disk scheduling algorithm except that the disk arm, instead of
going to the end of the disk, goes only up to the last request to be serviced in front of the
head and then reverses its direction from there. Thus it prevents the extra delay caused by
unnecessary traversal to the end of the disk.
- So, the total overhead movement (total distance covered by the disk arm) is calculated as:
(190 - 50) + (190 - 16) = 314
6. CLOOK
- As LOOK is similar to the SCAN algorithm, in a similar way CLOOK is similar to the CSCAN disk
scheduling algorithm.
- In CLOOK, the disk arm, instead of going to the end of the disk, goes only up to the last request
to be serviced in front of the head, and then jumps to the farthest request at the other end and
continues in the same direction. Thus, it also prevents the extra delay caused by unnecessary
traversal to the end of the disk.
- So, the total overhead movement (total distance covered by the disk arm) is calculated as:
(190 - 50) + (190 - 16) + (43 - 16) = 341
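The head-movement totals quoted in the examples above can be reproduced with a short simulation (the helper names are our own, and `DISK_END` assumes a 200-track disk numbered 0-199):

```python
DISK_END = 199          # last track; assumes a 200-track disk (0-199)

def total_movement(path):
    """Sum of head movements along a sequence of track positions."""
    return sum(abs(b - a) for a, b in zip(path, path[1:]))

def fcfs(head, queue):
    # Service requests strictly in arrival order.
    return total_movement([head] + queue)

def sstf(head, queue):
    # Always service the pending request closest to the current head.
    pending, path = list(queue), [head]
    while pending:
        nxt = min(pending, key=lambda t: abs(t - path[-1]))
        pending.remove(nxt)
        path.append(nxt)
    return total_movement(path)

def scan(head, queue):
    # Sweep toward larger tracks to the disk end, then reverse.
    up = sorted(t for t in queue if t >= head)
    down = sorted((t for t in queue if t < head), reverse=True)
    return total_movement([head] + up + [DISK_END] + down)

def look(head, queue):
    # Like SCAN, but reverse at the last request instead of the disk end.
    up = sorted(t for t in queue if t >= head)
    down = sorted((t for t in queue if t < head), reverse=True)
    return total_movement([head] + up + down)

queue, head = [170, 43, 140, 16, 190], 50
print(fcfs(head, queue), sstf(head, queue),
      scan(head, queue), look(head, queue))   # -> 642 208 332 314
```

C-SCAN and C-LOOK can be derived the same way by appending the jump back to the other end instead of the reversed sweep.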
The range of services and add-ons provided by modern operating systems is constantly expanding, but four
basic operating system management functions are implemented by all operating systems. These management
functions are briefly described below to give overall context. The four main operating
system management functions (each of which is dealt with in more detail elsewhere) are:
- Process Management
- Memory Management
- File and Disk Management
- I/O System Management
Most computer systems employ secondary storage devices (magnetic disks). They provide low-cost,
non-volatile storage for programs and data (tape, optical media, flash drives, etc.). Programs and the
user data they use are kept in named units called files. The operating system is
responsible for allocating space for files on secondary storage media as needed.
There is no guarantee that files will be stored in contiguous locations on physical disk drives,
especially large files. It depends greatly on the amount of space available. When the disk is full,
new files are more likely to be recorded in multiple locations. However, as far as the user is
concerned, the file abstraction provided by the operating system hides the fact that the file is
fragmented into multiple parts.
The operating system needs to track the disk location of every part of every file on the
disk. In some cases, this means tracking hundreds of thousands of files and file fragments on a single
physical disk. Additionally, the operating system must be able to locate each file and perform read and
write operations on it whenever needed. Therefore, the operating system is responsible for
configuring the file system, ensuring the safety and reliability of read and write operations to
secondary storage, and maintaining acceptable access times (the time required to write data to or read
data from secondary storage).
Disk management of the operating system includes:
- Disk formatting
- Booting from disk
- Bad block recovery
The low-level format or physical format:
- Divides the disk into sectors before storing data so that the disk controller can read and write
them.
- Each sector consists of a header, a data area (typically 512 bytes), and a trailer; the header
and trailer hold information used by the controller, such as a sector number and an
error-correcting code (ECC).
- Before a disk can hold files, the operating system records its own data structures on it. This
is conducted in two stages:
    - Divide the disk into one or more groups of cylinders (partitions). Each is treated as a
logical disk.
    - Logical format or "create the file system": the OS stores the initial file-system data
structures, including maps of free and allocated space, on the disk.
- For efficiency, most file systems group blocks into clusters. Disk I/O runs in blocks; file I/O
runs in clusters.
- For example, sector sizes can be 256, 512 and 1,024 bytes. If a disk is formatted with a larger
sector size, fewer sectors can fit on each track.
- As a result, fewer headers and trailers are written on each track, and more space is available
for user data. Some operating systems can handle only a sector size of 512 bytes.
- The operating system keeps its own data structures on the disk before it uses the disk to store
files. It performs this with the following two steps:
    - It partitions the disk into one or more groups of cylinders. Each partition is treated
by the OS as a separate disk.
    - Logical formatting: the creation of the file system.
- In order to increase efficiency, the file system groups blocks into chunks called clusters.
- Some operating systems give special programs the ability to use a disk partition as a large
sequential array of logical blocks, without any file-system data structures. This array is
sometimes called the raw disk, and I/O to this array is called raw I/O.
Boot block:
- When the computer is turned on or restarted, the program stored in the initial bootstrap ROM
finds the location of the OS kernel on the disk, loads the kernel into memory, and starts the OS.
- To change the bootstrap code, you would need to change the ROM hardware chips. Therefore, only
a small bootstrap loader program is stored in ROM.
- The full bootstrap code is stored in the “boot block” of the disk.
- A disk with a boot partition is called a boot disk or system disk.
- The bootstrap program is required for a computer to initiate the booting after it is
powered up or rebooted.
- It initializes all components of the system, from CPU registers to device controllers
and the contents of main memory, and then starts the operating system.
- The bootstrap program then locates the OS kernel on disk, loads that kernel into
memory, and jumps to an initial address to start the operating-system execution.
- Read-Only Memory (ROM) needs no initialization and sits at a fixed location from which the
processor can begin executing when powered up or reset. Therefore, the bootstrap is stored in ROM.
- Because of the read-only nature of ROM, it cannot be infected by a computer virus. The
difficulty is that modifying this bootstrap code requires changing the ROM hardware chips.
- Therefore, most systems store a small bootstrap loader program in the boot ROM which fetches
the full bootstrap program from disk into main memory.
- A modified version of the full bootstrap program can then simply be written onto the disk.
- The full bootstrap program is stored at a fixed location on the disk, in the "boot blocks".
Bad Blocks:
- Disks are error-prone because their moving parts have small tolerances.
- Most disks even come from the factory with bad blocks, and these are handled in a variety of
ways.
- The controller maintains a list of bad blocks.
- The controller can be instructed to replace each bad sector logically with one of the spare
sectors. This scheme is known as sector sparing or forwarding.
- A soft error triggers the data recovery process.
- However, unrecoverable hard errors may result in data loss and require manual intervention.
- Failure of the disk can be:
    - Complete, meaning there is no option other than replacing the disk; the contents must
be restored onto the new disk from backup.
    - One or more sectors becoming faulty.
    - Bad blocks may also appear after manufacturing. Depending on the disk and controller in
use, these blocks are handled in different ways.
Disk management in operating systems involves organizing and maintaining the data on a storage device,
such as a hard disk drive or solid-state drive. The main goal of disk management is to efficiently
utilize the available storage space and ensure data integrity and security.
Some common disk management techniques used in operating systems include:
- Partitioning: This involves dividing a single physical disk into multiple logical
partitions. Each partition can be treated as a separate storage device, allowing for
better organization and management of data.
- Formatting: This involves preparing a disk for use by creating a file system on it.
This process typically erases all existing data on the disk.
- File system management: This involves managing the file systems used by the
operating system to store and access data on the disk. Different file systems have
different features and performance characteristics.
- Disk space allocation: This involves allocating space on the disk for storing files
and directories. Some common methods of allocation include contiguous allocation,
linked allocation, and indexed allocation.
- Disk defragmentation: Over time, as files are created and deleted, the data on a disk
can become fragmented, meaning that it is scattered across the disk. Disk defragmentation involves
rearranging the data on the disk to improve performance.
Advantages of disk management include:
- Improved organization and management of data.
- Efficient use of available storage space.
- Improved data integrity and security.
- Improved performance through techniques such as defragmentation.
Disadvantages of disk management include:
- Increased system overhead due to disk management tasks.
- Increased complexity in managing multiple partitions and file systems.
- Increased risk of data loss due to errors during disk management tasks.
Overall, disk management is an essential aspect of operating system management
and can greatly improve system performance and data integrity when implemented properly.
Previous Years Questions
Q- With a diagram, discuss the steps involved in handling a page fault.
Q- Explain the contiguous and non-contiguous memory allocation.
Q- Memory partitions of 100 KB, 500 KB, 200 KB, 300 KB and 600 KB are available. How would the best
fit, worst fit and first fit algorithms place processes of 212 KB, 417 KB, 112 KB and 426 KB (in that
order)? Which is the best algorithm?
Solution:
And the given processes are:
- Process P1 = 212 KB
- Process P2 = 417 KB
- Process P3 = 112 KB
- Process P4 = 426 KB
Allocation using First Fit Algorithm:
- P1 (212 KB) → 500 KB partition
- P2 (417 KB) → 600 KB partition
- P3 (112 KB) → 200 KB partition
- P4 (426 KB) → not allocated (no remaining partition is large enough)
Allocation using Best Fit Algorithm:
- P1 (212 KB) → 300 KB partition
- P2 (417 KB) → 500 KB partition
- P3 (112 KB) → 200 KB partition
- P4 (426 KB) → 600 KB partition
Allocation using Worst Fit Algorithm:
- P1 (212 KB) → 600 KB partition
- P2 (417 KB) → 500 KB partition
- P3 (112 KB) → 300 KB partition
- P4 (426 KB) → not allocated
- Best Fit is the best algorithm here because all the processes get memory allocated and can be
executed.
Q- How many page faults occur for the FIFO, LRU and Optimal page replacement algorithms with the
following reference string for four page frames:
1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 9, 7, 8, 9, 5, 4, 5, 4, 2
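One way to check the answer is to simulate the three policies directly (the `count_faults` helper is our own sketch; ties in the Optimal policy are broken arbitrarily, which does not change the fault count):

```python
def count_faults(refs, frames, policy):
    """Count page faults for 'fifo', 'lru' or 'optimal' replacement."""
    mem, faults = [], 0
    for i, page in enumerate(refs):
        if page in mem:
            if policy == "lru":
                mem.remove(page)        # refresh recency on a hit
                mem.append(page)
            continue
        faults += 1
        if len(mem) == frames:          # memory full: pick a victim
            if policy in ("fifo", "lru"):
                victim = mem[0]         # oldest arrival / least recently used
            else:                       # optimal: page used farthest in future
                future = refs[i + 1:]
                victim = max(mem, key=lambda p: future.index(p)
                             if p in future else len(future) + 1)
            mem.remove(victim)
        mem.append(page)
    return faults

refs = [1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 9, 7, 8, 9, 5, 4, 5, 4, 2]
for policy in ("fifo", "lru", "optimal"):
    print(policy, count_faults(refs, 4, policy))
# -> fifo 13, lru 13, optimal 11
```

For FIFO, `mem[0]` is the oldest arrival because hits do not reorder the list; for LRU, hits move the page to the end, so `mem[0]` is the least recently used.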
Q- The queue of requests in FIFO is 86, 147, 91, 177, 94, 150, 102, 175, 130. What is the total head
movement needed to satisfy the requests for the following Scheduling algorithms:
FCFS, SSTF, SCAN, LOOK, C-SCAN.
Solution:
The queue of requests is 86, 147, 91, 177, 94, 150, 102, 175, 130.
Let's assume the initial position of the head is at track 100.
FCFS ↓
- So, total overhead movement (total distance covered by the disk arm) = (100 - 86) + (147 - 86) +
(147 - 91) + (177 - 91) + (177 - 94) + (150 - 94) + (150 - 102) + (175 - 102) + (175 - 130) =
522
SSTF ↓
- So, total overhead movement (total distance covered by the disk arm) = (102 - 100) + (102 - 86)
+ (177 - 86) = 109
SCAN ↓
- Suppose we are moving towards the larger value, and assume the disk's last track is 180.
- So, total overhead movement (total distance covered by the disk arm) = (180 - 100) + (180 - 86)
= 174
C-SCAN ↓
- Suppose we are moving towards the larger value, again assuming the disk's last track is 180.
- So, total overhead movement (total distance covered by the disk arm) = (180 - 100) + (180 - 0) +
(94 - 0) = 354
LOOK ↓
- Suppose we are moving towards the larger value.
- So, total overhead movement (total distance covered by the disk arm) = (177 - 100) + (177 - 86)
= 168, since in LOOK the head reverses at the last request (177) instead of at the end of the disk.