
Input Output Organization

Peripheral Devices

These are input/output devices that are connected to the computer. They are of three types:

  1. Input Peripheral
  2. Output Peripheral
  3. Input-output peripheral

Input Peripheral

Input peripherals play a crucial role in providing the computer with the necessary data to process. These devices serve as the bridge between human interaction and the digital realm, allowing users to input information effectively. Examples of input peripherals include keyboards, mice, and scanners.

  • Keyboard: When we use a keyboard, pressing a key produces a character code from the ASCII (American Standard Code for Information Interchange) set. ASCII uses 7 bits, enabling the representation of 128 characters. Of these, 95 are printable (including the space) and 33 are non-printable control characters. The printable characters comprise 52 upper- and lowercase letters, 10 digits (0-9), 32 punctuation marks and special symbols, and the space character.
  • Efficiency Considerations: Recognizing that input devices like keyboards operate at a slower pace compared to the speed of the CPU, computers implement the concept of multiprogramming. This technique ensures that the CPU remains busy and efficient, even when dealing with input devices that are inherently slower due to human operation.
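
The ASCII breakdown above can be checked directly in Python using the standard library's character tables; the classification logic here is only illustrative, not how a keyboard controller actually works:

```python
import string

# Collect the printable ASCII characters: string.printable also includes
# several whitespace control characters, so filter those out, keeping space.
printable = [chr(c) for c in range(128)
             if chr(c) in string.printable and chr(c) not in '\t\n\r\x0b\x0c']
letters = [ch for ch in printable if ch.isalpha()]
digits = [ch for ch in printable if ch.isdigit()]

print(len(printable))  # 95 printable characters (including the space)
print(len(letters))    # 52 letters (26 uppercase + 26 lowercase)
print(len(digits))     # 10 digits
print(ord('A'))        # 65, the 7-bit code sent when 'A' is pressed
```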

Output Peripheral Devices

Output peripheral devices play a crucial role in presenting information to users in a comprehensible format. These devices are primarily designed to display data processed by the computer, offering a tangible output. Among the most commonly used output peripherals are:

  • Monitors: Monitors serve as visual displays, presenting a wide range of information, from text and images to videos. They are the primary output interface for users to interact with the digital content produced by the computer.
  • Printers: Printers, on the other hand, provide a physical representation of digital data. They convert electronic documents into tangible, printed copies, making information accessible beyond the digital realm.

Input-Output Peripheral Devices

Input-Output (I/O) peripheral devices are the versatile components that facilitate both the input and output of data between the computer and its external environment. These devices serve as bidirectional bridges, enabling communication between the user and the computer system. Examples of Input-Output peripheral devices include:

  • Touchscreen Displays: Touchscreens are multifunctional devices that allow users to input data by touch and receive visual output simultaneously. They find applications in devices like smartphones, tablets, and interactive kiosks.
  • External Hard Drives: External hard drives not only store data but also provide a means for users to input information into the computer (by transferring files to the computer) and receive output (by accessing stored data).

Input-Output Interface

Differences between CPU and IO devices

  • Peripherals operate on electromechanical and electromagnetic principles, differing significantly from the functioning of the CPU and memory. Consequently, a signal conversion is necessary to facilitate communication.
  • While CPU processing speed is considerably faster than that of IO devices, the latter are relatively slower. To synchronize these disparate speeds, an interface is employed to ensure smooth and efficient data transfer.
  • The data format for IO devices is typically in bytes, whereas the CPU executes instructions in the form of words.
  • Input-Output operations transfer data in a serial manner, whereas the CPU executes instructions in parallel, handling multiple instructions simultaneously.
  • Each peripheral device has unique operational characteristics that must be controlled to prevent interference with the operations of other peripherals. Coordination and control are essential to ensure smooth system functionality.

To address and reconcile these differences, computer systems incorporate special hardware components known as interface units. These interface units serve as intermediaries between the CPU and peripherals, managing the communication and ensuring compatibility between the diverse elements of the computing system.

I/O Bus and Interface Modules

In the diagram below, you can see the communication link between the processor and peripherals. This link is facilitated by the I/O bus, which includes data lines, address lines, and control lines. Each peripheral device is equipped with its own interface unit, and these interfaces serve crucial functions:

The I/O bus connects to all peripheral interfaces. To communicate with a specific device, the processor sends the device's address through the address lines. Each interface checks the incoming address against its own, activating the pathway between the bus lines and the designated device. If a peripheral's address doesn't match, its interface becomes inactive. The processor then sends control signals through the control lines and data through the data lines.
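
The address-matching step above can be sketched as follows; the `Interface` class, device names, and addresses are hypothetical, chosen only to show how a non-matching interface stays inactive:

```python
# Each interface compares the address on the bus with its own; only the
# matching one activates the path between the bus lines and its device.
class Interface:
    def __init__(self, address, device_name):
        self.address = address
        self.device_name = device_name

    def respond(self, bus_address, control, data):
        if bus_address != self.address:
            return None          # address mismatch: interface stays inactive
        return f"{self.device_name}: {control} {data!r}"

interfaces = [Interface(0x10, "keyboard"), Interface(0x20, "printer")]

# Processor places address 0x20 on the address lines, WRITE on the control
# lines, and a byte on the data lines; only the printer's interface responds.
results = [i.respond(0x20, "WRITE", 0x41) for i in interfaces]
print([r for r in results if r is not None])  # only the printer replies
```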

I/O Bus versus Memory Bus

The processor employs the I/O bus to communicate with peripherals, while the memory bus facilitates communication between the processor and memory. Both the I/O bus and memory bus share similar components, including data lines, address lines, and read/write control lines. The utilization of these buses can take different forms, and there are three main ways in which the I/O bus and memory bus can be configured:

1. Separate Buses: In this configuration, two distinct buses are employed—one dedicated to memory and the other to I/O operations. This approach ensures independent communication pathways, minimizing potential conflicts between memory and I/O processes. See the illustration below for a visual representation.

2. Common Bus with Separate Control Lines: An alternative configuration involves using a single bus for both memory and I/O operations. However, separate control lines are designated for each, allowing for independent management of memory and I/O processes. This approach optimizes resource sharing while maintaining control over the distinct functions. Refer to the diagram below for a clearer understanding.

3. Common Bus with Shared Control Lines: The third approach integrates both memory and I/O operations onto a single bus, and the control lines are shared between the two. This configuration simplifies the overall design by reducing the number of dedicated control lines. The diagram below illustrates this consolidated setup.

Understanding the distinction between I/O and memory buses is crucial for designing efficient computer architectures. The choice of bus configuration depends on factors such as system complexity, performance requirements, and the need for resource optimization. By exploring these configurations, we gain insights into how computers manage data flow between the processor, memory, and peripherals, contributing to the overall functionality and performance of computing systems.

I/O Mapping (a.k.a. I/O Addressing or I/O Interfacing Techniques)

When the CPU shares a common bus system attached to both memory and I/O devices, it needs a way to differentiate between signals related to memory and those related to I/O. This is where Isolated I/O and Memory Mapped I/O come into play, providing distinct techniques for managing communication within the system.

Understanding these I/O mapping techniques is fundamental to designing efficient and cohesive computer architectures. The choice between Isolated I/O and Memory Mapped I/O depends on factors such as system complexity, performance requirements, and the desired level of integration between memory and I/O operations.
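
A minimal sketch of the two schemes, with all addresses hypothetical: in isolated I/O, dedicated port operations select a separate address space, while in memory-mapped I/O ordinary stores to a reserved address range reach device registers instead of memory:

```python
MEMORY = {0x0000: 0, 0x0001: 0}
IO_PORTS = {0x01: 0}            # isolated I/O: a separate port address space

# Isolated I/O: dedicated IN/OUT operations name the port space explicitly.
def port_out(port, value):
    IO_PORTS[port] = value

# Memory-mapped I/O: a region of the ordinary address space is reserved for
# device registers, so plain load/store instructions reach the device.
DEVICE_REGION_START = 0xFF00
DEVICE_REGS = {0xFF00: 0}

def store(address, value):
    if address >= DEVICE_REGION_START:
        DEVICE_REGS[address] = value  # this "store" actually talks to a device
    else:
        MEMORY[address] = value

port_out(0x01, 0x41)   # isolated I/O write, via a dedicated port instruction
store(0xFF00, 0x42)    # memory-mapped I/O write, via an ordinary store
store(0x0000, 0x43)    # ordinary memory write
```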

Data Transfer (Data Transmission)

Data transfer, also known as data transmission, refers to the process of sending data from one unit to another. This fundamental aspect of computing is categorized into two main types:

1. Parallel Transmission

  • Each bit in the data has its own dedicated path, allowing the entire message to be transmitted simultaneously.
  • n bits must be transmitted through n separate wires, providing a faster data transfer rate.
  • Although faster, parallel transmission requires a significant number of wires, making it suitable for short-distance applications where speed is critical, such as between the CPU and memory (bus) or from the CPU to a printer.

2. Serial Transmission

  • Each bit of the message is sent in sequence, one at a time, through a single wire.
  • n bits are transmitted using only one wire, resulting in a slower but more cost-effective data transfer process.
  • Serial transmission is commonly employed for longer distances, such as communication between the CPU and I/O devices or between different computers.
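
Serial transmission can be sketched by modeling the single wire as a sequence of bit values; least-significant-bit-first ordering is an assumption here, since real links fix the bit order by convention:

```python
# Send one byte serially: n bits travel over one "wire", one at a time.
def serialize(byte):
    return [(byte >> i) & 1 for i in range(8)]   # LSB first

# The receiver reassembles the byte from the arriving bit stream.
def deserialize(bits):
    value = 0
    for i, bit in enumerate(bits):
        value |= bit << i
    return value

wire = serialize(0x41)          # 'A' leaves the sender bit by bit
print(wire)                     # [1, 0, 0, 0, 0, 0, 1, 0]
print(hex(deserialize(wire)))   # 0x41, reassembled at the receiver
```

Parallel transmission would instead drive all eight bits on eight separate wires in the same instant, which is why it is faster but needs more wiring.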

Understanding the distinctions between parallel and serial transmission is crucial in designing efficient communication systems, and the choice between them depends on factors such as distance, cost considerations, and the speed requirements of the specific application.

Synchronous Data Transfer

  • Two units involved in data transfer share a common clock.
  • Data transfer between the sender and receiver is synchronized with the same clock pulse.
  • Synchronous data transfer is typically used between devices that operate at matching speeds.
  • Bits are continuously transmitted to maintain frequency synchronization between both units.
  • Synchronous, in this context, means occurring at the same time, thanks to the presence of a common clock.
  • This method is fast, ensuring efficient and timely data transmission.
  • However, it can be costly to implement due to the requirement for a synchronized clock system.

Asynchronous Data Transfer

  • Two units involved in data transfer operate independently and each has its private clock.
  • Data transfer between the sender and receiver is not synchronized with the same clock pulse.
  • Asynchronous data transfer is commonly used between devices that do not operate at the same speed.
  • Bits are sent only when available, and the communication lines remain idle when there is no information to be transmitted.
  • Asynchronous, in this context, means not occurring at the same time: because there is no common clock, transfers happen at irregular intervals, whenever data becomes available.
  • While asynchronous data transfer is slower compared to synchronous, it is more economical to implement.
  • Cost-effectiveness makes it a suitable choice for scenarios where synchronization is not critical.

Understanding the characteristics of synchronous and asynchronous data transfer is crucial for designing communication systems that align with the specific requirements of different devices and applications. The choice between these methods depends on factors such as device speed compatibility, cost considerations, and the need for precise timing.

Asynchronous Data Transfer & Its Types

Asynchronous data transfer, with its reliance on control signals and handshaking methods, offers a flexible and robust solution for scenarios where the speed and timing characteristics of communicating devices may vary. The choice between strobe control and handshaking methods depends on factors such as the reliability and confirmation requirements of the data transfer process.

Strobe Control

  • Strobe control employs a single control line to time each data transfer.
  • The strobe signal may be activated by either the source or the destination unit.

Source-Initiated Strobe for Data Transfer

  • In this scenario, there are a source unit and a destination unit, along with two lines: a data bus line for data transfer and a strobe line for signaling.
  • The timing diagram illustrates that the data is loaded first, followed by the transmission of the strobe signal, indicating the completion of data transfer.
  • Example: A memory write control signal from the CPU to the memory unit. In this case, the CPU initiates the transfer by loading data onto the bus and sending the strobe signal to inform the memory unit.
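
The source-initiated strobe sequence above might be sketched like this; the shared variables stand in for the data bus and strobe line, and the names are illustrative rather than real hardware signals:

```python
data_bus = None
strobe = 0

def source_transfer(data):
    global data_bus, strobe
    data_bus = data            # 1) data is placed on the bus first
    strobe = 1                 # 2) strobe is asserted to time the transfer
    received = destination_latch()
    strobe = 0                 # 3) strobe is removed; the bus may change again
    return received

def destination_latch():
    # The destination copies the bus only while the strobe is active; note
    # that it cannot tell the source whether the copy actually happened.
    return data_bus if strobe else None

print(source_transfer(0x5A))   # 90
```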

Destination-Initiated Strobe for Data Transfer

  • In this configuration, there is a source unit and a destination unit. However, the destination unit requests data, initiating the process by sending the strobe signal before the actual data transfer.
  • Example: A memory read control signal from the CPU to the memory unit. Here, the CPU (the destination) requests data by sending the read strobe to the memory unit, prompting the data transfer.

Strobe control, whether source-initiated or destination-initiated, provides a method for precisely timing data transfers between units. Understanding these mechanisms is crucial in designing reliable communication systems where data synchronization and integrity are paramount.

Disadvantages of Strobe Control

  • One significant drawback of strobe control is that the source unit initiating the transfer lacks confirmation of whether the destination unit has successfully received the data item placed on the bus.
  • Conversely, when the destination unit initiates the transfer, it has no means of knowing whether the source unit has indeed placed the data on the bus as intended.
  • This lack of acknowledgment introduces a potential challenge in ensuring the reliability and success of data transfer operations.

While strobe control provides a simple and efficient means of timing data transfers, the absence of acknowledgment poses a limitation in guaranteeing the integrity of the communication process. Overcoming this limitation often requires the adoption of more sophisticated techniques, such as handshaking methods, to ensure a two-way confirmation of successful data transmission between communicating units.

  • Note: In general, communication between the CPU and I/O Interface is typically achieved using strobe control. On the other hand, when there is communication between an I/O device and the I/O interface, it is commonly done using handshaking methods. This strategic choice in communication techniques ensures an efficient and reliable data transfer process tailored to the specific requirements and interactions between the central processing unit and the input/output components.

Handshaking Method

  • The handshaking method addresses the limitations of the strobe method by introducing a second control signal that provides a reply to the unit initiating the transfer.
  • Combining strobe control with an acknowledgment signal results in a two-wire control system, enhancing the reliability of data transfer.
  • In this method, three lines connect the source unit and the destination unit: the data bus, the data valid line (indicating data initiation or transfer), and the data accepted line (providing a reply to the initiation).

Source-Initiated Handshaking

  • This configuration involves two control signals: the data valid signal, indicating when the source unit initiates data transfer, and the data accepted signal, the reply from the destination unit.
  • Sequence of operation: The data is placed on the data bus first, followed by the source unit initiating the data valid signal. After successfully receiving the data, the destination unit responds with a signal through the data accepted line to the source unit. Subsequently, the data valid signal is disabled, and the destination unit disables the data accepted signal.
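
The sequence of operation above can be sketched as a simple event log; the event strings are illustrative labels, not real signal names:

```python
def source_initiated_handshake(data):
    events = []
    bus = data                             # 1) place data on the data bus
    events.append("data on bus")
    events.append("data valid = 1")        # 2) source asserts data valid
    received = bus                         # 3) destination latches the data
    events.append("data accepted = 1")     # 4) destination replies
    events.append("data valid = 0")        # 5) source drops data valid
    events.append("data accepted = 0")     # 6) destination drops its reply
    return received, events

value, log = source_initiated_handshake(0x7F)
print(value)   # 127
print(log)
```

Unlike the strobe sketch, the source here receives an explicit "data accepted" reply, which is exactly the confirmation the strobe method lacks.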

Destination-Initiated Handshaking

  • In this scenario, the destination unit initiates data transfer by sending the "ready for data" signal to the source unit, indicating its readiness to accept data.
  • Following this signal, the source unit places the data on the data bus and enables the data valid signal. After enabling, the destination unit accepts the data and disables the "ready for data" signal. Finally, the source unit disables the data valid signal, returning the system to its initial state.

The handshaking method, with its two-wire control system and bidirectional communication, ensures a more robust and synchronized data transfer process. Whether initiated by the source or the destination, this method provides reliable confirmation and control signals, enhancing the overall integrity of the communication between units.

Modes of Transfer

Programmed I/O

  • Programmed I/O is utilized when there is a need to transfer data between the CPU and I/O, and the data transfer is managed through program instructions.
  • In the context of computer programming languages such as C++, Programmed I/O operations result from I/O instructions embedded in the computer program. Examples of such instructions include those for input (e.g., cin) and output (e.g., cout).
  • Each data transfer in Programmed I/O is initiated by an I/O instruction within the program, typically to access registers or memory on a specific device.
  • Executing data transfers under program control demands continuous monitoring of I/O devices by the CPU.

The diagram below illustrates the process of Programmed I/O:

Programmed I/O Diagram

In Programmed I/O, the CPU initiates a request and then remains in a program loop (polling) until the I/O device signals its readiness for data transfer. The I/O device takes no independent action to interrupt the CPU.

  • Disadvantages:
    • Programmed I/O can be time-consuming as it keeps the CPU unnecessarily busy. To address this issue, interrupt facilities are often employed to enhance efficiency.
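
The polling loop at the heart of programmed I/O can be sketched as follows; the `Device` class and its countdown are artificial stand-ins for a slow peripheral:

```python
class Device:
    def __init__(self, data, delay):
        self.data = data
        self.delay = delay          # polls remaining before the device is ready

    def ready(self):
        self.delay -= 1
        return self.delay <= 0

def programmed_io_read(device):
    polls = 0
    while not device.ready():       # CPU is busy-waiting here, doing no work
        polls += 1
    return device.data, polls       # data finally read under program control

data, wasted_polls = programmed_io_read(Device(data=0x41, delay=1000))
print(data, wasted_polls)           # the CPU burned ~1000 loop iterations
```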

Interrupt Initiated I/O

  • Interrupt Initiated I/O was introduced to address the polling-related issue present in Programmed I/O.
  • An interrupt is a high-priority signal, generated either by an external device or some software, designed to immediately capture the CPU's attention. The use of interrupts aims to eliminate the waiting period inherent in Programmed I/O.
  • In Interrupt Initiated I/O, instead of the CPU continuously monitoring, the interface is informed to issue an interrupt request signal when data becomes available from the device.
  • Meanwhile, the CPU proceeds to execute other programs while the interface keeps monitoring the device.
  • When the device is ready for data transfer, it generates an interrupt request.
  • Upon detecting the external interrupt signal, the CPU interrupts its current task, processes the I/O data transfer, and then resumes the original task it was performing.
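
A rough sketch of the idea, with a Python callback standing in for the hardware interrupt mechanism; real interrupts arrive asynchronously, so this sequential model only illustrates the control flow:

```python
class InterruptInterface:
    def __init__(self):
        self.handler = None

    def on_interrupt(self, handler):
        self.handler = handler      # CPU installs its interrupt service routine

    def device_ready(self, data):
        self.handler(data)          # interrupt request: CPU is diverted here

received = []
iface = InterruptInterface()
iface.on_interrupt(lambda data: received.append(data))  # install the ISR

work_done = 0
for _ in range(5):
    work_done += 1                  # CPU runs an unrelated program...
iface.device_ready(0x41)            # ...until the device raises an interrupt

print(work_done, received)          # 5 [65]
```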

Direct Memory Access (DMA)

  • DMA is employed when large blocks of data need to be transferred between the CPU and I/O devices, rendering Programmed or Interrupt Initiated I/O less efficient.
  • For high-speed transfers of substantial data blocks between external devices and main memory, the DMA approach is often utilized.
  • In other transfer modes, memory is accessed indirectly through the CPU. However, when transferring significant data blocks and frequently utilizing the CPU before memory access, this process becomes time-consuming. DMA addresses this by allowing direct communication between I/O devices and memory, minimizing CPU intervention.
  • DMA permits data transfer directly between the I/O device and main memory with minimal CPU involvement.
  • In DMA, the CPU grants the I/O interface the authority to read from or write to memory without direct CPU intervention.
  • The DMA controller autonomously manages data transfer between main memory and the I/O device.
  • The CPU is only involved at the beginning and end of the transfer and is interrupted only after the entire block has been successfully transferred.
DMA Diagram
  • The process involves the CPU initiating the DMA controller to transfer data between a device and main memory, allowing the CPU to proceed with other tasks.
  • The DMA controller issues a request to the relevant I/O device, manages the data transfer between the device and main memory, and waits for its completion.
  • Upon the conclusion of the data transfer, the DMA controller interrupts the CPU.

DMA Controller

  • DMA enables I/O devices to transfer data directly to or from main memory without requiring CPU intervention, effectively bypassing the CPU.
  • Between the CPU and I/O devices there is an interface unit; between memory and I/O devices, the DMA controller (also called a DMA channel) acts as the intermediary, creating a direct channel between main memory and the devices. Peripherals such as magnetic disks, USB drives, network cards, graphics cards, and sound cards can achieve faster data transfer rates when connected via DMA.
  • DMA finds applications in systems utilizing multicore architectures, particularly in scenarios where intrachip data transfers are required.
  • The system bus, comprising address lines, data lines, and control lines, connects the CPU, memory, and the DMA controller. The CPU normally controls this bus, but during a DMA transfer the DMA controller temporarily borrows control of it from the CPU, whose bus drivers are disabled for the duration, to move data efficiently between I/O devices and memory.
DMA Controller Diagram

How does DMA make the CPU go idle?

  • To facilitate DMA's control over the bus system from the CPU, two signals are employed: Bus Request (BR) and Bus Grant (BG).
  • When DMA desires full control of the bus system, it initiates the process by sending a Bus Request (BR) signal through the bus request line.
  • Upon receiving the Bus Request (BR) signal, the CPU completes its current bus cycle, relinquishes control of the data lines, address lines, and control lines, and places its bus connections in a high-impedance state. In this state the CPU's side of the bus behaves like an open circuit, effectively disconnecting it from the bus.
  • To signal to the DMA that control has been transferred, the CPU sends a Bus Grant (BG) signal through the bus grant line, indicating that the DMA now has authority over the buses. This communication allows the DMA to use the buses to transfer data directly to memory.
  • Essentially, the two main signals used in this process are Bus Request (BR) and Bus Grant (BG).

DMA Working

When an I/O device wishes to transfer data with main memory, the following steps outline the DMA process:

In the diagram above, we have a detailed breakdown of the components involved in the Direct Memory Access (DMA) process:

  1. DS (DMA Select): The processor sets DS = 1 to activate DMA, initiating the DMA process.
  2. RS (Register Select): The CPU uses this signal to select DMA registers for storing values, such as the starting address and the number of words to be transferred.
  3. RD (Read) & WR (Write): These signals are used for read and write operations during the DMA transfer.
  4. BR (Bus Request): DMA employs this line to send a request to the processor to release the BUS system, indicating its need for control over the system bus.
  5. BG (Bus Grant): When the processor relinquishes control of the bus to DMA, it sets BG = 1, signifying that the bus is now granted to the DMA controller.
  6. Interrupt: DMA uses this line to send a signal when data transfer is completed. The processor can also use this line to check whether data transfer has been successfully completed.
  7. DMA Request: I/O devices utilize this line to send a request to the DMA controller, signaling the need for data transfer.
  8. DMA Acknowledgement: The DMA controller responds to I/O devices through this line, acknowledging the receipt of the request and preparing for data transfer.
  9. Address Register: The processor stores the starting address of data in this register, providing the necessary information for the DMA controller to locate the data.
  10. Word Count Register: The processor stores the total number of words to be transferred in this register, allowing the DMA controller to determine the extent of the data transfer operation.
  11. Control Register: The processor stores control signals in this register, dictating various aspects of the DMA operation, such as the transfer mode and direction.
  12. Data Bus Buffer: This component is employed to temporarily store data during the DMA transfer process, ensuring efficient and synchronized data movement.
  13. Data Bus: DMA utilizes this bus for the actual transfer of data between memory and peripheral devices, facilitating high-speed and direct communication.

Now, let's delve into the operational sequence of the Direct Memory Access (DMA) process:

  • Initiation of Data Transfer Request: When an I/O device wishes to transfer data, it sends a request to the DMA through the DMA request line.
  • Bus Request to Processor: DMA, in response to the request, sends a bus request (BR = 1) to the processor, requesting it to release control of the bus system (utilizing the BR line).
  • Processor's Response: Upon receiving the bus request, the processor saves its current work, transmits the essential information, such as the starting address and the number of words to be transferred, to the DMA registers, then relinquishes control of the bus system and notifies the DMA by setting the BG line to 1.
  • DMA Acknowledgement and Data Transfer: Upon receiving the BG signal, DMA activates the DMA acknowledgement line, informing the I/O device that it can commence data transfer. The DMA controller initiates the actual data transfer process.
  • Decrementing Word Count: With each data transfer, the value of the Word Count (WC) register is decremented by 1, keeping track of the progress of the data transfer operation.
  • Data Transfer Completion: When the WC register reaches 0, DMA sets BR = 0 and sends an interrupt signal to the CPU, signaling the completion of the data transfer.
  • CPU's Post-Transfer Actions: The CPU, upon receiving the interrupt, checks the WC register. Since it is now 0, the CPU sets BG = 0, reclaiming control of the bus system for its own operations.
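
The register bookkeeping in the sequence above can be sketched as follows; the bus-arbitration signals (BR, BG, the DMA acknowledgement) are reduced to comments, since they have no direct software analogue, and all values are hypothetical:

```python
def dma_transfer(memory, device_words, start_address):
    address_reg = start_address     # address register, loaded by the CPU
    word_count = len(device_words)  # word count (WC) register, also CPU-loaded
    # CPU sets BG = 1: the bus is now granted to the DMA controller.
    for word in device_words:
        memory[address_reg] = word  # word moved directly to memory, no CPU
        address_reg += 1
        word_count -= 1             # WC decremented on every word transferred
    # WC = 0: DMA drops BR and interrupts the CPU, which then sets BG = 0.
    return word_count

mem = {}
remaining = dma_transfer(mem, device_words=[10, 20, 30], start_address=0x100)
print(mem)        # {256: 10, 257: 20, 258: 30}
print(remaining)  # 0
```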

Input Output Processor (IOP)

Data Formatting and Transfer

  • Peripheral devices often have different data formats than memory and CPU. The IOP is responsible for structuring data words to match the required formats.
  • For example, it may receive 4 bytes from an input device and pack them into one 32-bit word before transferring it to memory.
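
The byte-packing step might look like this; big-endian ordering (most significant byte first) is an assumption here, as a real IOP follows the host's convention:

```python
# Pack four device bytes into one 32-bit word, MSB first.
def pack_word(b0, b1, b2, b3):
    return (b0 << 24) | (b1 << 16) | (b2 << 8) | b3

# Unpack a 32-bit word back into its four bytes, e.g. for output devices.
def unpack_word(word):
    return [(word >> shift) & 0xFF for shift in (24, 16, 8, 0)]

word = pack_word(0x12, 0x34, 0x56, 0x78)
print(hex(word))          # 0x12345678
print(unpack_word(word))  # [18, 52, 86, 120]
```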

CPU-IOP Communication:

  • In most systems, the CPU is the master, and the IOP is a slave processor.
  • The CPU initiates all operations, but I/O instructions are executed by the IOP.
  • Communication involves a sequence of operations:
    1. The CPU sends an instruction to test the IOP path.
    2. The IOP responds by sending its status.
    3. The CPU sends the instruction to start I/O transfer by specifying the memory address where the IOP should begin.
    4. The CPU can proceed with other tasks while the IOP handles the I/O program.
    5. After data transfer completion, the IOP sends an interrupt request to the CPU.
    6. The CPU reads the IOP status, with the IOP placing the status report into a designated memory location.

Serial Communication

Connecting Remote Terminals:

  • Remote terminals connect to a data communication processor via telephone lines or other communication facilities.
  • Conversion devices like data sets, acoustic couplers, or modems are used to convert digital signals to audio tones for transmission over telephone lines.
  • Different modulation schemes, communication media, and transmission speeds are employed.

Transmission Methods:

  • Communication lines may be connected to synchronous or asynchronous interfaces based on the remote terminal's transmission method.
  • Asynchronous transmission uses start and stop bits in each character, while synchronous transmission sends a continuous message without start-stop bits.
  • Synchronous transmission is more efficient but requires continuous messages to maintain synchronism.
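
The framing difference can be sketched for one character; LSB-first data ordering with one start bit and one stop bit is a common convention, but the exact format is an assumption here:

```python
# Asynchronous framing: wrap the 8 data bits in a start bit (0) and a
# stop bit (1). Synchronous transmission would send only the raw data bits.
def async_frame(byte):
    data_bits = [(byte >> i) & 1 for i in range(8)]   # LSB first
    return [0] + data_bits + [1]                      # start + data + stop

frame = async_frame(0x41)
print(frame)        # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(len(frame))   # 10 bits on the line for 8 bits of data
```

The 2-bit-per-character overhead is why synchronous transmission is the more efficient of the two.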

Error Detection

  • Data communication processors check for transmission errors using methods like parity checking in asynchronous transmission and techniques like longitudinal redundancy check (LRC) or cyclic redundancy check (CRC) in synchronous transmission.
  • An LRC is accumulated over an entire block and transmitted at its end; the receiving station computes its own LRC over the received block and compares it with the transmitted one.
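
These checks can be sketched in a few lines; the XOR-based LRC shown here is one common form of longitudinal redundancy check, used as an illustrative assumption:

```python
# Even parity for one character: the parity bit is chosen so that the
# total number of 1-bits (data + parity) is even.
def even_parity_bit(byte):
    return bin(byte).count("1") % 2   # 1 only if the data has an odd 1-count

# LRC: a check byte accumulated over the whole block (here, a running XOR).
def lrc(block):
    check = 0
    for byte in block:
        check ^= byte
    return check

block = [0x41, 0x42, 0x43]
print(even_parity_bit(0x41))          # 0x41 has two 1-bits -> parity bit 0
sent_lrc = lrc(block)                 # transmitted at the end of the block
print(lrc(block) == sent_lrc)         # receiver recomputes and compares: True
```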

Transmission Modes:

  • Data can be transmitted in three modes: simplex (one-way communication), half-duplex (two-way, but one direction at a time), and full-duplex (simultaneous two-way communication).
  • Simplex is rarely used in data communication. Half-duplex requires a turnaround time, and full-duplex can use either a four-wire link or frequency spectrum subdivision in a two-wire circuit.

Data Link and Protocols

  • The communication lines, modems, and equipment form a data link, and orderly data transfer is governed by a protocol.
  • Data link control protocols ensure the orderly transfer of information, establish and terminate connections, identify sender and receiver, handle error-free message passing, and manage control functions.
  • Protocols are categorized into character-oriented and bit-oriented protocols based on the framing technique used.
