
Memory Management

Role of memory management

  1. Allocation: Memory management is responsible for allocating memory to processes or programs when they request it. It tracks the availability of memory blocks and assigns them to processes based on their requirements. This allocation process ensures that each program receives the necessary memory to run correctly.
  2. Deallocation: When a process no longer needs memory, memory management deallocates it and makes it available to other processes. This prevents memory leaks and optimizes memory usage.
  3. Memory Organization: Memory management organizes the available memory space efficiently to accommodate multiple processes. It divides the memory into fixed-size blocks or pages, creating a logical structure that simplifies the allocation and deallocation process.
  4. Memory Protection: Memory management enforces memory protection mechanisms to ensure that processes do not access memory locations that they are not authorized to access. It prevents one process from interfering with another process's memory, which enhances security and stability.
  5. Memory Sharing: In certain cases, multiple processes may need to share memory resources. Memory management facilitates memory sharing by allowing multiple processes to access the same memory region, enabling efficient communication and data exchange between processes.
  6. Fragmentation Management: Memory management handles fragmentation, which can occur due to the allocation and deallocation of memory blocks. It manages both internal fragmentation (unused memory within allocated blocks) and external fragmentation (unused memory between allocated blocks) to minimize wastage of memory.

Memory management Techniques:

The memory management techniques can be classified into the following main categories:

What is a logical address?

What is a physical address?

Difference between contiguous and non-contiguous memory allocation:

Contiguous memory allocation:

  • Contiguous memory allocation is a method in which a single contiguous section of memory is allocated to the process or file that needs it. All of the memory assigned to a process therefore resides in one place, and the free partitions are not scattered randomly across the memory space.

Non-contiguous memory allocation:

  • Non-contiguous memory allocation, by contrast, allocates memory to a process in separate blocks located at different places in memory, as required. Because the allocated space is distributed, the free space also ends up scattered. This technique reduces the memory wasted to external fragmentation, which contiguous allocation suffers from.

Memory allocation algorithms:

  1. Best fit: Allocates the process to the smallest memory partition that is large enough for it, minimizing the leftover (wasted) space.
  2. Worst fit: Allocates the process to the largest available memory partition, leaving the largest possible leftover hole; it tends to waste the most memory.
  3. First fit: Allocates the process to the first available memory partition that is large enough.
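Under fixed partitioning (each partition holds at most one process), the three algorithms can be sketched in Python as follows; the function names are illustrative, and the demo uses the partition and process sizes from the practice problem below:

```python
def first_fit(partitions, processes):
    """Give each process the first free partition that is large enough."""
    free = list(partitions)               # None marks an occupied partition
    placed = []
    for p in processes:
        chosen = next((i for i, s in enumerate(free)
                       if s is not None and s >= p), None)
        if chosen is None:
            placed.append(None)           # process could not be allocated
        else:
            placed.append(free[chosen])
            free[chosen] = None           # fixed partitioning: one process each
    return placed

def best_fit(partitions, processes):
    """Give each process the smallest free partition that is large enough."""
    free = list(partitions)
    placed = []
    for p in processes:
        fits = [(s, i) for i, s in enumerate(free) if s is not None and s >= p]
        if fits:
            s, i = min(fits)              # smallest fitting partition
            placed.append(s)
            free[i] = None
        else:
            placed.append(None)
    return placed

def worst_fit(partitions, processes):
    """Give each process the largest free partition that is large enough."""
    free = list(partitions)
    placed = []
    for p in processes:
        fits = [(s, i) for i, s in enumerate(free) if s is not None and s >= p]
        if fits:
            s, i = max(fits)              # largest fitting partition
            placed.append(s)
            free[i] = None
        else:
            placed.append(None)
    return placed

parts = [200, 400, 600, 500, 300, 250]    # KB
procs = [357, 210, 468, 491]              # KB
print(first_fit(parts, procs))   # [400, 600, 500, None]
print(best_fit(parts, procs))    # [400, 250, 500, 600]
print(worst_fit(parts, procs))   # [600, 500, None, None]
```

Note how best fit is the only strategy that places all four processes here, while worst fit exhausts the big partitions early.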

Practice problems based on memory allocation algorithms:

Consider six memory partitions of size 200 KB, 400 KB, 600 KB, 500 KB, 300 KB and 250 KB. These partitions need to be allocated to four processes of sizes 357 KB, 210 KB, 468 KB and 491 KB in that order.
Perform the allocation of processes using:

  1. First fit algorithm
  2. Best Fit algorithm
  3. Worst Fit algorithm

Solution:
According to the question, the main memory has been divided into fixed-size partitions as follows:

Let us say the given processes are:

  • Process P1 = 357 KB
  • Process P2 = 210 KB
  • Process P3 = 468 KB
  • Process P4 = 491 KB

Allocation using First Fit Algorithm:

  • In First Fit Algorithm:
    • The algorithm scans the partitions serially, starting from the first.
    • When a partition big enough to store the process is found, that partition is allocated to the process.
    • The allocation of partitions to the given processes is shown below:

P4 is not allocated because no partition size greater than or equal to the size of process P4 is available.

Allocation using Best Fit Algorithm:

  • In Best Fit Algorithm:
    • Algorithm first scans all the partitions.
    • It then allocates the partition of smallest size that can store the process.
  • The allocation of partitions to the given processes is shown below:

Allocation using Worst Fit Algorithm:

  • In Worst Fit Algorithm,
    • Algorithm first scans all the partitions.
    • It then allocates the partition of largest size to the process.
  • The allocation of partitions to the given processes is shown below:

Paging

What is virtual memory in OS?

How virtual memory works?

Demand Paging

Advantages of virtual memory

Disadvantages of Virtual memory

Page replacement algorithms

FIFO page replacement algorithm

LRU page replacement algorithm

  • LRU stands for Least Recently Used.
  • It replaces the page that has not been used for the longest time.

Optimal page replacement algorithm

  • Replaces the page that will not be used for the longest time in the future.
  • The optimal algorithm gives the fewest page faults, but it requires knowing the future reference string, so it mainly serves as a benchmark.
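The three policies can be sketched as fault counters in Python (a minimal sketch; real systems rely on hardware-maintained reference information rather than replaying the full reference string):

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """Count page faults under FIFO: evict the page loaded earliest."""
    mem, order, faults = set(), [], 0
    for page in refs:
        if page in mem:
            continue
        faults += 1
        if len(mem) == frames:
            mem.remove(order.pop(0))      # oldest resident page goes out
        mem.add(page)
        order.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU: evict the least recently used page."""
    mem, faults = OrderedDict(), 0        # keys ordered least- to most-recent
    for page in refs:
        if page in mem:
            mem.move_to_end(page)         # page was just used
            continue
        faults += 1
        if len(mem) == frames:
            mem.popitem(last=False)       # drop the least recently used
        mem[page] = True
    return faults

def optimal_faults(refs, frames):
    """Count page faults under OPT: evict the page used farthest in the future."""
    mem, faults = set(), 0
    for i, page in enumerate(refs):
        if page in mem:
            continue
        faults += 1
        if len(mem) == frames:
            future = refs[i + 1:]
            victim = max(mem, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
            mem.remove(victim)
        mem.add(page)
    return faults
```

For Belady's classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 3 frames, these give 9, 10, and 7 faults respectively, showing why OPT is the benchmark.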

File Systems

File System structure

Storage structure in Operating systems

  • Ideally, we would like programs and data to reside in main memory permanently.
  • This arrangement is usually not possible for the following two reasons:
    1. Main memory is usually too small to store all needed programs and data permanently.
    2. Main memory is a volatile storage device that loses its contents when power is turned off or otherwise lost.

There are two types of storage devices:

  1. Volatile storage device: It loses its contents when power is removed.
  2. Non-volatile storage device: It retains its contents even when power is removed.
  • Secondary Storage is used as an extension of main memory. Secondary storage devices can hold the data permanently.
  • Storage systems include registers, cache, main memory, electronic disks, magnetic disks, optical disks, and magnetic tapes. Each provides the basic function of storing a datum and holding it until it is retrieved at a later time. The storage devices differ in speed, cost, size, and volatility. The most common secondary-storage device is the magnetic disk, which provides storage for both programs and data.
[Figure: storage device hierarchy]

  • In this hierarchy, the storage devices are arranged according to speed and cost: the higher levels are expensive but fast. As we move down the hierarchy, the cost per bit generally decreases, whereas the access time generally increases.
  • The storage systems above the electronic disk are volatile, whereas those below it are non-volatile.

File access methods

  • When a file is used, its information is read into computer memory, and there are several ways to access that information. Some systems provide only one access method for files. Other systems, such as those of IBM, support many access methods, and choosing the right one for a particular application is a major design problem.
  • There are three ways to access a file into a computer system:
    1. Sequential-Access
    2. Direct Access
    3. Index sequential Method

Sequential-Access

  • It is the simplest access method. Information in the file is processed in order, one record after the other. This mode of access is by far the most common; for example, editors and compilers usually access files in this fashion.

Advantages of Sequential Access Method:

  • It is simple to implement this file access mechanism.
  • It is suitable for applications that require access to all records in a file, in a specific order.
  • It is less prone to data corruption as the data is written sequentially and not randomly.
  • It is efficient for reading large files from start to finish, since the data is traversed in order without extra seeks.
  • It is a reliable method for backup and restore operations, as the data is stored sequentially and can be easily restored if required.

Disadvantages of Sequential Access Method:

  • It does not allow for quick access to specific records in the file. The entire file must be searched sequentially to find a specific record, which can be time-consuming.
  • It is not well-suited for applications that require frequent updates or modifications to the file. Updating or inserting a record in the middle of a large file can be a slow and cumbersome process.
  • Sequential access can also result in wasted storage space if records are of varying lengths. The space between records cannot be used by other records, which can result in inefficient use of storage.

Direct Access

  • Another method is the direct access method, also known as the relative access method.
  • The file is viewed as a numbered sequence of fixed-length logical records, which a program can read and write rapidly in no particular order.
  • Direct access is based on the disk model of a file, since a disk allows random access to any file block.

Advantages of Direct Access Method:

  • The files can be immediately accessed decreasing the average access time.
  • In the direct access method, in order to access a block, there is no need of traversing all the blocks present before it.

Index sequential method

  • It is another method of accessing a file, built on top of the sequential access method.
  • This method constructs an index for the file. The index, like the index in the back of a book, contains pointers to the various blocks. To find a record in the file, we first search the index and then use the pointer to access the file directly.
  • Key points:
    • It is built on top of sequential access.
    • Records are located through the pointers stored in the index.
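A toy sketch of the idea (the keys, block layout, and `lookup` helper are illustrative, not from the text): the index maps the first key of each fixed-size block to the block's position, so a lookup searches the small index and then scans only one block.

```python
import bisect

# Hypothetical sorted file, split into fixed-size blocks of (key, data) records.
blocks = [
    [(1, "a"), (4, "b"), (7, "c")],
    [(10, "d"), (13, "e"), (16, "f")],
    [(20, "g"), (25, "h"), (30, "i")],
]

# The index holds the first key of each block, like the index of a book.
index_keys = [blk[0][0] for blk in blocks]

def lookup(key):
    """Search the small index first, then scan only the block it points to."""
    i = bisect.bisect_right(index_keys, key) - 1   # last block starting <= key
    if i < 0:
        return None                                # key before the first block
    for k, value in blocks[i]:                     # short sequential scan
        if k == key:
            return value
    return None

print(lookup(13))   # e
```

The point is that only the index is searched globally; the sequential scan is confined to a single block.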

Relative Record Access

  • Relative record access is a file access method used in operating systems where records are accessed relative to the current position of the file pointer.
  • In this method, records are located based on their position relative to the current record, rather than by a specific address or key value.
  • Key Points of Relative Record Access:
    • Relative record access is a random access method that allows records to be accessed based on their position relative to the current record.
    • This method is efficient for accessing individual records but may not be suitable for files that require frequent updates or random access to specific records.
    • Relative record access requires fixed-length records and may not be flexible enough for some applications.
    • This method is useful for processing records in a specific order or for files that are accessed sequentially.

Advantages of Relative Record Access:

  • Random Access: Relative record access allows random access to records in a file. The system can access any record at a specific offset from the current position of the file pointer.
  • Efficient Retrieval: Since the system only needs to read the current record and any records that need to be skipped, relative record access is more efficient than sequential access for accessing individual records.
  • Useful for Sequential Processing: Relative record access is useful for processing records in a specific order. For example, if the records are sorted in a specific order, the system can access the next or previous record relative to the current position of the file pointer.
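A minimal sketch of relative record access, assuming fixed-length 16-byte text records (the record size and helper names are made up): `seek` moves the file pointer a whole number of records relative to its current position.

```python
import os

RECORD_SIZE = 16   # fixed-length records, padded with spaces

def write_records(path, records):
    """Write each string as one fixed-length record."""
    with open(path, "wb") as f:
        for rec in records:
            f.write(rec.ljust(RECORD_SIZE).encode())

def read_relative(f, offset):
    """Read the record `offset` records away from the current file position."""
    f.seek(offset * RECORD_SIZE, os.SEEK_CUR)   # jump relative to current spot
    return f.read(RECORD_SIZE).decode().rstrip()
```

After `write_records(path, ["alpha", "beta", "gamma", "delta"])`, opening the file and calling `read_relative(f, 2)` returns "gamma", and a following `read_relative(f, 0)` returns "delta", since the pointer has already advanced past record 2. Fixed-length records are what make the offset arithmetic possible.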

Disadvantages of Relative Record Access:

  • Fixed Record Length: Relative record access requires fixed-length records. If the records are of varying length, it may be necessary to use padding to ensure that each record is the same length.
  • Limited Flexibility: Relative record access is not very flexible. It is difficult to insert or delete records in the middle of a file without disrupting the relative positions of other records.
  • Limited Application: Relative record access is best suited for files that are accessed sequentially or with some regularity, but it may not be appropriate for files that are frequently updated or require random access to specific records.

Disk Scheduling

Important terminologies

  • Seek time: Seek time is the time taken to move the disk arm to the track where the data is to be read or written. A disk scheduling algorithm that gives a smaller average seek time is better.
  • Rotational latency: Rotational latency is the time taken for the desired sector of the disk to rotate under the read/write head. A disk scheduling algorithm that gives a smaller rotational latency is better.
  • Transfer time: Transfer time is the time to transfer the data. It depends on the rotational speed of the disk and the number of bytes to be transferred.
  • Disk Access Time = Seek time + Rotational latency + Transfer time
    & Total seek time = Total head movement × seek time per track.
  • Disk response time: Response time is the average time a request spends waiting to perform its I/O operation. Average response time is the mean over all requests, and variance of response time measures how individual requests are serviced relative to that average. A disk scheduling algorithm that gives a smaller variance of response time is better.
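With made-up figures (5 ms seek, 4 ms rotational latency, 1 ms transfer, 2 ms seek per track), the two formulas evaluate as:

```python
def disk_access_time(seek_ms, rotational_latency_ms, transfer_ms):
    """Disk access time = seek time + rotational latency + transfer time."""
    return seek_ms + rotational_latency_ms + transfer_ms

def total_seek_time(total_head_movement, seek_time_per_track_ms):
    """Total seek time = total head movement (in tracks) * seek time per track."""
    return total_head_movement * seek_time_per_track_ms

print(disk_access_time(5, 4, 1))    # 10 (ms)
print(total_seek_time(642, 2))      # 1284 (ms)
```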

Now we are going to discuss the various disk scheduling algorithms.

1. FCFS

  • FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are addressed in the order they arrive in the disk queue.
  • Let us understand this with the help of an example.
  • So, total overhead movement (total distance covered by the disk arm) : = ( 170 - 50 ) + ( 170 - 43 ) + ( 140 - 43 ) + ( 140 - 16 ) + ( 190 - 16 )= 642

Advantages:

  • Every request gets a fair chance
  • No indefinite postponement

Disadvantages:

  • Does not try to optimize seek time.
  • May not provide the best possible service.

2. SSTF

  • In SSTF (Shortest Seek Time First), requests having shortest seek time are executed first. So, the seek time of every request is calculated in advance in the queue and then they are scheduled according to their calculated seek time.
  • As a result, the request nearest the disk arm is executed first. SSTF is certainly an improvement over FCFS, as it decreases the average response time and increases the throughput of the system. Let us understand this with the help of an example.
  • So, total overhead movement (total distance covered by the disk arm) = ( 50 - 16 ) + ( 190 - 16 ) = 208

Advantages:

  • Average Response Time decreases
  • Throughput increases

Disadvantages:

  • Overhead to calculate seek time in advance
  • Can cause Starvation for a request if it has a higher seek time as compared to incoming requests
  • High variance of response time as SSTF favors only some requests

3. SCAN

  • In the SCAN algorithm the disk arm moves in a particular direction and services the requests in its path; after reaching the end of the disk, it reverses direction and again services the requests in its path.
  • This algorithm works like an elevator and is hence also known as the elevator algorithm. As a result, requests in the middle of the disk are serviced more often, while those arriving behind the disk arm have to wait.
  • Therefore, the total overhead movement (total distance covered by the disk arm) is: (199 - 50) + (199 - 16) = 332

Advantages:

  • High throughput
  • Low variance of response time
  • Low average response time

Disadvantages:

  • Long waiting time for requests for locations just visited by disk arm.

4. CSCAN

  • In SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing its direction. So, it may be possible that too many requests are waiting at the other end or there may be zero or few requests pending at the scanned area.
  • These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of reversing direction, goes to the other end of the disk and starts servicing requests from there. The disk arm thus moves in a circular fashion; since the algorithm is otherwise similar to SCAN, it is known as C-SCAN (Circular SCAN).
  • So, the total overhead movement (total distance covered by the disk arm) is calculated as: (199 - 50) + (199 - 0) + (43 - 0) = 391

5. LOOK

  • It is similar to the SCAN disk scheduling algorithm except that the disk arm, instead of going all the way to the end of the disk, goes only as far as the last request to be serviced in front of the head and then reverses direction from there. This avoids the extra delay caused by unnecessary traversal to the end of the disk.
  • So, the total overhead movement (total distance covered by the disk arm) is calculated as : ( 190 - 50 ) + ( 190 - 16 ) = 314

6. CLOOK

  • Just as LOOK is a variant of SCAN, CLOOK is a variant of the CSCAN disk scheduling algorithm.
  • In CLOOK, the disk arm, instead of going to the end of the disk, goes only as far as the last request to be serviced in front of the head, and from there jumps to the last request at the other end. Thus, it also avoids the extra delay caused by unnecessary traversal to the end of the disk.
  • So, the total overhead movement (total distance covered by the disk arm) is calculated as : ( 190 - 50 ) + ( 190 - 16 ) + ( 43 - 16 ) = 341
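The worked figures above come from diagrams that are not reproduced here; the quoted totals are consistent with a head initially at track 50, the request queue 82, 170, 43, 140, 24, 16, 190, and a disk with tracks 0–199 (an assumption). A sketch of all six algorithms in Python, assuming the head initially moves toward larger track numbers with pending requests on both sides:

```python
def fcfs(head, requests):
    """Service requests in arrival order; return total head movement."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf(head, requests):
    """Always service the pending request nearest the current head position."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

def scan(head, requests, disk_end):
    """Elevator: sweep up to the end of the disk, then back down."""
    below = [r for r in requests if r < head]
    total = disk_end - head                  # up to the last track
    if below:
        total += disk_end - min(below)       # back down to the lowest request
    return total

def cscan(head, requests, disk_end):
    """Circular SCAN: sweep up, jump to track 0, sweep up again."""
    below = [r for r in requests if r < head]
    total = disk_end - head
    if below:
        total += disk_end                    # jump from the end back to track 0
        total += max(below)                  # then up to the highest low request
    return total

def look(head, requests):
    """Like SCAN, but reverse at the last request instead of the disk end."""
    above = [r for r in requests if r >= head]
    below = [r for r in requests if r < head]
    total = max(above) - head
    if below:
        total += max(above) - min(below)     # back down to the lowest request
    return total

def clook(head, requests):
    """Like C-SCAN, but jump only between the outermost pending requests."""
    above = [r for r in requests if r >= head]
    below = [r for r in requests if r < head]
    total = max(above) - head
    if below:
        total += max(above) - min(below)     # jump to the lowest request
        total += max(below) - min(below)     # sweep up through the low requests
    return total
```

For the assumed data, these give 642, 208, 332, 391, 314, and 341 respectively.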

The range of services and add-ons provided by modern operating systems is constantly expanding, but four basic management functions are implemented by all operating systems. These functions, each of which is dealt with in more detail elsewhere, are:

Most computer systems employ secondary storage devices (magnetic disks), which provide low-cost, non-volatile storage for programs and data; tape, optical media, and flash drives serve similar roles. Programs and the user data they work on are kept in named units called files. The operating system is responsible for allocating space for files on secondary storage media as needed.

There is no guarantee that files, especially large ones, will be stored in contiguous locations on a physical disk drive; it depends greatly on the amount of space available. When the disk is nearly full, new files are more likely to be recorded in multiple locations. As far as the user is concerned, however, the file abstraction provided by the operating system hides the fact that the file is fragmented into multiple parts.

The operating system needs to track the disk location of every part of every file. In some cases, this means tracking hundreds of thousands of files and file fragments on a single physical disk. Additionally, the operating system must be able to locate each file and perform read and write operations on it whenever needed. Therefore, the operating system is responsible for configuring the file system, ensuring the safety and reliability of read and write operations to secondary storage, and keeping access times (the time required to write data to or read data from secondary storage) low.

Disk management of the operating system includes:

The low-level format or physical format:

  • Low-level formatting divides the disk into sectors that the disk controller can read and write, before any data is stored.
  • Each sector consists of a header, a data area (typically 512 bytes), and a trailer. The header and trailer hold information used by the disk controller, such as the sector number and an error-correcting code (ECC).
  • Before it can use a disk to hold files, the operating system records its own data structures on the disk. It does this in two steps:
    1. It partitions the disk into one or more groups of cylinders. The OS treats each partition as a separate logical disk.
    2. Logical formatting, i.e. creating the file system: the OS stores the initial file-system data structures on the disk, including maps of free and allocated space.
  • For efficiency, most file systems group blocks together into larger chunks called clusters. Disk I/O runs in blocks; file I/O runs in clusters.
  • Sector sizes of, for example, 256, 512, or 1,024 bytes are possible. If a disk is formatted with a larger sector size, fewer sectors fit on each track, so fewer headers and trailers are written per track and more space is available for user data. Some operating systems can handle a sector size of 512 bytes.
  • Some operating systems give special programs the ability to use a disk partition as a large sequential array of logical blocks, without any file-system data structures. This array is sometimes called the raw disk, and I/O to this array is called raw I/O.

Boot block:

  • The bootstrap program is required for a computer to start running when it is powered up or rebooted.
  • It initializes all components of the system, from CPU registers to device controllers and the contents of main memory, then locates the OS kernel on disk, loads the kernel into memory, and jumps to an initial address to start operating-system execution.
  • The bootstrap is stored in read-only memory (ROM), because ROM needs no initialization and sits at a fixed location from which the processor can begin executing when powered up or reset; being read-only, it also cannot be infected by a computer virus.
  • The difficulty is that changing the bootstrap code requires changing the ROM hardware chips. Therefore, most systems store only a small bootstrap loader program in the boot ROM, whose job is to bring the full bootstrap program in from disk.
  • The full bootstrap program is stored at a fixed location on disk, the "boot blocks", and a modified version can simply be written onto the disk.
  • A disk that has a boot partition is called a boot disk or system disk.

Bad Blocks:

  • Disks are error-prone because their moving parts operate with very small tolerances.
  • Most disks even come from the factory with bad blocks, which are handled in a variety of ways.
  • The controller maintains a list of bad blocks.
  • The controller can be told to replace each bad sector logically with one of the spare sectors. This scheme is known as sector sparing or forwarding.
  • A soft error triggers the data-recovery process.
  • However, an unrecoverable hard error may result in data loss and require manual intervention.
  • Failure of a disk can be:
    1. Complete, in which case there is no option other than replacing the disk; its contents must be restored from backup onto the new disk.
    2. One or more sectors becoming faulty.
    3. Bad blocks that exist from manufacturing. Depending on the disk and controller in use, these blocks are handled in different ways.

Disk management in operating systems involves organizing and maintaining the data on a storage device, such as a hard disk drive or solid-state drive. The main goal of disk management is to efficiently utilize the available storage space and ensure data integrity and security.

Some common disk management techniques used in operating systems include:

  1. Partitioning: This involves dividing a single physical disk into multiple logical partitions. Each partition can be treated as a separate storage device, allowing for better organization and management of data.
  2. Formatting: This involves preparing a disk for use by creating a file system on it. This process typically erases all existing data on the disk.
  3. File system management: This involves managing the file systems used by the operating system to store and access data on the disk. Different file systems have different features and performance characteristics.
  4. Disk space allocation: This involves allocating space on the disk for storing files and directories. Some common methods of allocation include contiguous allocation, linked allocation, and indexed allocation.
  5. Disk defragmentation: Over time, as files are created and deleted, the data on a disk can become fragmented, meaning that it is scattered across the disk. Disk defragmentation involves rearranging the data on the disk to improve performance.

Advantages of disk management include:

Disadvantages of disk management include:

Previous Years Questions

Q- With a diagram, discuss the steps involved in handling a page fault.

Q- Explain the contiguous and non-contiguous memory allocation.

Q- Memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB are available. How would the best fit, worst fit, and first fit algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)? Which is the best algorithm?

Solution:

And the given processes are:

  • Process P1 = 212 KB
  • Process P2 = 417 KB
  • Process P3 = 112 KB
  • Process P4 = 426 KB

Allocation using First Fit Algorithm:

Allocation using Best Fit Algorithm:

Allocation using Worst Fit Algorithm:

  • Best Fit is the best algorithm here, because it is the only one under which every process receives memory and can execute.
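A compact check of the three placements, using the partition and process sizes from the question (`allocate` is a made-up helper):

```python
partitions = [100, 500, 200, 300, 600]   # KB, in the given order
processes  = [212, 417, 112, 426]        # KB, allocated in the given order

def allocate(strategy):
    """Return the partition chosen for each process (None = not allocated)."""
    free, placed = list(partitions), []
    for p in processes:
        fits = [(size, i) for i, size in enumerate(free)
                if size is not None and size >= p]
        if not fits:
            placed.append(None)
            continue
        if strategy == "first":
            size, i = min(fits, key=lambda t: t[1])   # lowest partition index
        elif strategy == "best":
            size, i = min(fits)                       # smallest partition that fits
        else:                                         # "worst"
            size, i = max(fits)                       # largest partition that fits
        free[i] = None                                # partition is now occupied
        placed.append(size)
    return placed

print(allocate("first"))   # [500, 600, 200, None] -- 426 KB cannot be placed
print(allocate("best"))    # [300, 500, 200, 600]  -- every process fits
print(allocate("worst"))   # [600, 500, 300, None]
```

First fit and worst fit both consume the 600 KB partition too early, leaving the 426 KB process unallocated; only best fit places all four processes.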

Q- How many page faults occur for the FIFO, LRU, and Optimal page replacement algorithms with the following reference string and four page frames:
1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 9, 7, 8, 9, 5, 4, 5, 4, 2
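A small simulator answers this question (a sketch; `page_faults` is a made-up helper). `mem` is kept ordered oldest/least-recently-used first for FIFO and LRU, while OPT evicts the resident page whose next use lies farthest in the future:

```python
def page_faults(refs, frames, policy):
    """Count faults; `mem` is ordered oldest / least-recently-used first."""
    mem, faults = [], 0
    for i, page in enumerate(refs):
        if page in mem:
            if policy == "LRU":              # a hit refreshes recency
                mem.remove(page)
                mem.append(page)
            continue
        faults += 1
        if len(mem) == frames:
            if policy in ("FIFO", "LRU"):
                mem.pop(0)                   # evict oldest / least recent
            else:                            # "OPT": farthest next use goes out
                future = refs[i + 1:]
                victim = max(mem, key=lambda p: future.index(p)
                             if p in future else len(future) + 1)
                mem.remove(victim)
        mem.append(page)
    return faults

refs = [1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 9, 7, 8, 9, 5, 4, 5, 4, 2]
for policy in ("FIFO", "LRU", "OPT"):
    print(policy, page_faults(refs, 4, policy))
```

For this reference string with four frames, the simulation gives 13 faults for FIFO, 13 for LRU, and 11 for the optimal algorithm.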

Q- The queue of requests in FIFO order is 86, 147, 91, 177, 94, 150, 102, 175, 130. What is the total head movement needed to satisfy the requests under the following scheduling algorithms:
FCFS, SSTF, SCAN, LOOK, C-SCAN.

Solution:
The queue of requests in FIFO order is 86, 147, 91, 177, 94, 150, 102, 175, 130.
Let us assume the initial position of the head is at track 100.

FCFS ↓

  • So, total overhead movement (total distance covered by the disk arm) = (100 - 86) + (147 - 86) + (147 - 91) + (177 - 91) + (177 - 94) + (150 - 94) + (150 - 102) + (175 - 102) + (175 - 130) = 522

SSTF ↓

  • So, total overhead movement (total distance covered by the disk arm) = (102 - 100) + (102 - 86) + (177 - 86) = 109

SCAN ↓

  • Suppose the head is moving toward larger track values and the last track of the disk is 180.
  • So, total overhead movement (total distance covered by the disk arm) = (180 - 100) + (180 - 86) = 174

C-SCAN ↓

  • Suppose the head is moving toward larger track values and the last track of the disk is 180.
  • So, total overhead movement (total distance covered by the disk arm) = (180 - 100) + (180 - 0) + (94 - 0) = 354

LOOK ↓

  • Suppose the head is moving toward larger track values.
  • LOOK reverses at the last request (track 177) rather than at the end of the disk, so total overhead movement (total distance covered by the disk arm) = (177 - 100) + (177 - 86) = 168
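As a check, the totals above can be recomputed directly (assuming, as the SCAN and C-SCAN figures imply, that the disk's last track is 180):

```python
head, requests = 100, [86, 147, 91, 177, 94, 150, 102, 175, 130]
disk_end = 180          # assumed last track, implied by the SCAN/C-SCAN figures

# FCFS: sum of the jumps taken in arrival order
fcfs = sum(abs(b - a) for a, b in zip([head] + requests, requests))

# SSTF: repeatedly service the nearest pending request
pos, pending, sstf = head, list(requests), 0
while pending:
    nxt = min(pending, key=lambda r: abs(r - pos))
    sstf += abs(nxt - pos)
    pos = nxt
    pending.remove(nxt)

below = [r for r in requests if r < head]             # requests behind the head
scan  = (disk_end - head) + (disk_end - min(below))   # to the end, back to 86
cscan = (disk_end - head) + disk_end + max(below)     # wrap to 0, then up to 94
look  = (max(requests) - head) + (max(requests) - min(below))  # reverse at 177

print(fcfs, sstf, scan, cscan, look)   # 522 109 174 354 168
```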