Input Output Organization
- Welcome to the world of Input-Output Organization! This chapter is all about how your computer
communicates with the outside world, especially through devices like keyboards, mice, and printers—these
are known as peripheral devices. Think of them as the helpers that make your computer do things. Ever
wondered how your computer understands when you click your mouse or type on your keyboard? That's
exactly what we're going to explore. We'll break down the basics of Input-Output Organization,
demystifying the process of how your computer interacts with these peripheral devices. So, let's dive in
and uncover the secrets behind the scenes of how your digital world connects with the physical through
these devices!
Peripheral Devices
Peripheral devices are input/output devices connected to the computer. They fall into three types:
- Input Peripheral
- Output Peripheral
- Input-Output Peripheral
Input Peripheral
Input peripherals play a crucial role in providing the computer with the necessary data to
process. These devices serve as the bridge between human interaction and the digital realm,
allowing users to input information effectively. Examples of input peripherals include
keyboards, mice, and scanners.
- Keyboard: When we use a keyboard, we press keys, each of which corresponds
to a character in the ASCII (American Standard Code for Information Interchange)
set. ASCII uses 7 bits, enabling the representation of 128 characters. Among
these, 94 are printable (graphic) characters and 34 are non-printable (control)
characters. The printable set consists of 26 uppercase letters, 26 lowercase
letters, 10 digits (0-9), and 32 special symbols.
- Efficiency Considerations: Recognizing that input devices like keyboards
operate at a slower pace compared to the speed of the CPU, computers implement the concept
of multiprogramming. This technique ensures that the CPU remains busy and efficient, even
when dealing with input devices that are inherently slower due to human operation.
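The ASCII breakdown above can be checked directly. A quick sketch (Python is used here purely for illustration; the 94/34 split follows the convention in the text, which counts space and DEL among the non-printable characters):

```python
# Count the printable and control characters in 7-bit ASCII (codes 0-127).
# Convention used here: space and DEL count as non-printable, giving
# 94 printable ("graphic") characters and 34 control characters.
graphic = [chr(c) for c in range(128) if 33 <= c <= 126]   # visible characters
control = [chr(c) for c in range(128) if c < 33 or c == 127]  # incl. space, DEL

upper   = [ch for ch in graphic if ch.isupper()]   # 26 letters A-Z
lower   = [ch for ch in graphic if ch.islower()]   # 26 letters a-z
digits  = [ch for ch in graphic if ch.isdigit()]   # 10 digits 0-9
symbols = len(graphic) - len(upper) - len(lower) - len(digits)  # 32 specials

print(len(graphic), len(control))                    # 94 34
print(len(upper), len(lower), len(digits), symbols)  # 26 26 10 32
print(ord("A"), ord("a"), ord("0"))                  # 65 97 48
```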
Output Peripheral Devices
Output peripheral devices play a crucial role in presenting information to users in a
comprehensible format. These devices are primarily designed to display data processed by the
computer, offering a tangible output. Among the most commonly used output peripherals are:
- Monitors: Monitors serve as visual displays, presenting a wide range of
information, from text and images to videos. They are the primary output interface for users
to interact with the digital content produced by the computer.
- Printers: Printers, on the other hand, provide a physical representation of
digital data. They convert electronic documents into tangible, printed copies, making
information accessible beyond the digital realm.
Input-Output Peripheral Devices
Input-Output (I/O) peripheral devices are the versatile components that facilitate both the input
and output of data between the computer and its external environment. These devices serve as
bidirectional bridges, enabling communication between the user and the computer system. Examples
of Input-Output peripheral devices include:
- Touchscreen Displays: Touchscreens are multifunctional devices that allow
users to input data by touch and receive visual output simultaneously. They find
applications in devices like smartphones, tablets, and interactive kiosks.
- External Hard Drives: External hard drives not only store data but also
provide a means for users to input information into the computer (by transferring files to
the computer) and receive output (by accessing stored data).
Input-Output Interface
- The Input-Output (I/O) interface serves as the crucial link for transferring data between the
internal storage of a computer and external I/O devices. Establishing a communication link between
peripherals and the CPU is essential for seamless data exchange.
- In most cases, the CPU cannot directly access I/O devices due to fundamental differences between the
CPU and these devices. To overcome these disparities, an I/O interface is employed.
- By utilizing input and output interfaces, the CPU can effectively communicate with I/O devices,
bridging the gap and facilitating the exchange of information.
Differences between CPU and IO devices
- Peripherals operate on electromechanical and electromagnetic principles, differing significantly
from the functioning of the CPU and memory. Consequently, a signal conversion is necessary to
facilitate communication.
- The CPU operates at a much higher speed than I/O devices. To synchronize
these disparate speeds, an interface is employed to ensure smooth and
efficient data transfer.
- The data format for IO devices is typically in bytes, whereas the CPU executes instructions in
the form of words.
- Input-Output operations transfer data in a serial manner, whereas the CPU executes instructions
in parallel, handling multiple instructions simultaneously.
- Each peripheral device has unique operational characteristics that must be controlled to prevent
interference with the operations of other peripherals. Coordination and control are essential to
ensure smooth system functionality.
To address and reconcile these differences, computer systems incorporate special hardware components
known as interface units. These interface units serve as intermediaries between the CPU and
peripherals, managing the communication and ensuring compatibility between the diverse elements of
the computing system.
I/O Bus and Interface Modules
In the diagram below, you can see the communication link between the processor and peripherals. This
link is facilitated by the I/O bus, which includes data lines, address lines, and control lines.
Each peripheral device is equipped with its own interface unit, and these interfaces serve crucial
functions:
- Decoding: The interface unit decodes the address and control signals received
from the CPU, understanding the instructions for the peripheral.
- Management: It oversees and manages the operation of the peripheral device,
ensuring it functions as intended.
- Synchronization: Using buffers, the interface unit synchronizes the speed of
the CPU with that of the peripheral, facilitating smooth data transfer.
The I/O bus connects to all peripheral interfaces. To communicate with a specific device, the
processor sends the device's address through the address lines. Each interface checks the incoming
address against its own, activating the pathway between the bus lines and the designated device. If
a peripheral's address doesn't match, its interface becomes inactive. The processor then sends
control signals through the control lines and data through the data lines.
- Processor Commands: The processor can generate four types of commands for
peripherals:
- Control Command: Instructs the device on what action to perform and how
to execute it.
- Status Command: Checks and reports the current condition or status of
the device.
- Data Output Command: Causes the interface to take data from the data bus
and transfer it to the device.
- Data Input Command: Causes the interface to receive data from the device
and place it in its buffer, where the processor can then read it over the
data bus.
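The four command types can be pictured as a small dispatch inside the interface unit. The sketch below is purely hypothetical: the 2-bit encodings, class, and method names are invented for illustration.

```python
# Hypothetical sketch of an interface unit decoding the four command types.
# The encodings and names are illustrative, not taken from any real bus.
CONTROL, STATUS, DATA_OUTPUT, DATA_INPUT = 0b00, 0b01, 0b10, 0b11

class InterfaceUnit:
    def __init__(self):
        self.buffer = None      # holds data in transit
        self.status = "idle"

    def execute(self, command, bus_data=None):
        if command == CONTROL:       # tell the device what to do
            self.status = "busy"
            return None
        if command == STATUS:        # report current condition
            return self.status
        if command == DATA_OUTPUT:   # processor -> device: take data from bus
            self.buffer = bus_data
            return None
        if command == DATA_INPUT:    # device -> processor: put data on bus
            return self.buffer

iface = InterfaceUnit()
iface.execute(DATA_OUTPUT, bus_data=0x41)  # CPU sends 0x41 toward the device
print(iface.execute(DATA_INPUT))           # CPU reads it back: 65
```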
I/O Bus versus Memory Bus
The processor employs the I/O bus to communicate with peripherals, while the memory bus facilitates
communication between the processor and memory. Both the I/O bus and memory bus share similar
components, including data lines, address lines, and read/write control lines. The utilization of
these buses can take different forms, and there are three main ways in which the I/O bus and memory
bus can be configured:
1. Separate Buses: In this configuration, two distinct buses are employed—one
dedicated to memory and the other to I/O operations. This approach ensures independent communication
pathways, minimizing potential conflicts between memory and I/O processes. See the illustration
below for a visual representation.
2. Common Bus with Separate Control Lines: An alternative configuration involves
using a single bus for both memory and I/O operations. However, separate control lines are
designated for each, allowing for independent management of memory and I/O processes. This approach
optimizes resource sharing while maintaining control over the distinct functions. Refer to the
diagram below for a clearer understanding.
3. Common Bus with Shared Control Lines: The third approach integrates both memory
and I/O operations onto a single bus, and the control lines are shared between the two. This
configuration simplifies the overall design by reducing the number of dedicated control lines. The
diagram below illustrates this consolidated setup.
Understanding the distinction between I/O and memory buses is crucial for designing efficient
computer architectures. The choice of bus configuration depends on factors such as system
complexity, performance requirements, and the need for resource optimization. By exploring these
configurations, we gain insights into how computers manage data flow between the processor, memory,
and peripherals, contributing to the overall functionality and performance of computing systems.
I/O Mapping (a.k.a. I/O Addressing or I/O Interfacing Technique)
When the CPU shares a common bus system attached to both memory and I/O devices, it needs a way to
differentiate between signals related to memory and those related to I/O. This is where Isolated I/O
and Memory Mapped I/O come into play, providing distinct techniques for managing communication
within the system.
- Isolated I/O: In this method, separate control lines (and separate I/O
instructions) are designated for memory and I/O operations. The CPU uses these
control lines to manage the flow of information, ensuring that data reaches
the intended destination without interference.
- Memory Mapped I/O: Unlike Isolated I/O, this technique uses the same
control lines and address space for both I/O and memory. The CPU relies on the
specific address to determine whether an operation involves regular memory or
an I/O device. This simplifies the system's design and programming, treating
I/O devices as integral parts of the memory space.
Understanding these I/O mapping techniques is fundamental to designing efficient and cohesive
computer architectures. The choice between Isolated I/O and Memory Mapped I/O depends on factors
such as system complexity, performance requirements, and the desired level of integration between
memory and I/O operations.
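The memory-mapped idea is easiest to see in code. Below is a minimal, hypothetical sketch in Python: one address space, with the bus logic routing each access to RAM or to a device register purely by address (all addresses here are made up). Under isolated I/O, the device would instead be reached through separate I/O operations with their own control lines.

```python
# Minimal sketch of memory-mapped I/O: a single address space, where the
# bus logic routes each access to RAM or a device register by address alone.
RAM_SIZE = 0x100
DEVICE_REG = 0x1F0            # hypothetical I/O register address above RAM

ram = [0] * RAM_SIZE
device_register = {"value": 0}

def bus_read(addr):
    if addr < RAM_SIZE:                 # ordinary memory access
        return ram[addr]
    if addr == DEVICE_REG:              # the same load reaches the device
        return device_register["value"]
    raise ValueError("unmapped address")

def bus_write(addr, value):
    if addr < RAM_SIZE:
        ram[addr] = value
    elif addr == DEVICE_REG:
        device_register["value"] = value
    else:
        raise ValueError("unmapped address")

bus_write(0x10, 42)          # looks like an ordinary memory store
bus_write(DEVICE_REG, 7)     # same operation, but it programs the device
print(bus_read(0x10), bus_read(DEVICE_REG))  # 42 7
```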
Data Transfer (Data Transmission)
Data transfer, also known as data transmission, refers to the process of sending data from one unit to
another. This fundamental aspect of computing is categorized into two main types:
1. Parallel Transmission
- Each bit in the data has its own dedicated path, allowing the entire message to be transmitted
simultaneously.
- n bits must be transmitted through n separate wires, providing a faster data transfer rate.
- Although faster, parallel transmission requires a significant number of wires, making it
suitable for short-distance applications where speed is critical, such as between the CPU and
memory (bus) or from the CPU to a printer.
2. Serial Transmission
- Each bit of the message is sent in sequence, one at a time, through a single wire.
- n bits are transmitted using only one wire, resulting in a slower but more cost-effective data
transfer process.
- Serial transmission is commonly employed for longer distances, such as communication between the
CPU and I/O devices or between different computers.
Understanding the distinctions between parallel and serial transmission is crucial in designing efficient
communication systems, and the choice between them depends on factors such as distance, cost
considerations, and the speed requirements of the specific application.
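The bit-by-bit nature of serial transmission can be sketched as follows (Python purely for illustration; the LSB-first shift order is an assumption, though it is a common choice):

```python
# Serial transmission sketch: n bits travel one at a time over a single line,
# whereas parallel transmission would present all n bits at once on n wires.
def to_serial(byte):
    """Shift a byte out LSB-first as a list of bits (one per clock tick)."""
    return [(byte >> i) & 1 for i in range(8)]

def from_serial(bits):
    """Reassemble the byte at the receiving end."""
    value = 0
    for i, bit in enumerate(bits):
        value |= bit << i
    return value

wire = to_serial(0b1011_0010)            # 8 ticks on one wire
print(wire)                              # [0, 1, 0, 0, 1, 1, 0, 1]
print(from_serial(wire) == 0b1011_0010)  # True
```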
Synchronous Data Transfer
- Two units involved in data transfer share a common clock.
- Data transfer between the sender and receiver is synchronized with the same clock pulse.
- Synchronous data transfer is typically used between devices that operate at matching speeds.
- Bits are continuously transmitted to maintain frequency synchronization between both units.
- Synchronous, in this context, means occurring at the same time, thanks to the presence of a
common clock.
- This method is fast, ensuring efficient and timely data transmission.
- However, it can be costly to implement due to the requirement for a synchronized clock system.
Asynchronous Data Transfer
- Two units involved in data transfer operate independently and each has its private clock.
- Data transfer between the sender and receiver is not synchronized with the same clock pulse.
- Asynchronous data transfer is commonly used between devices that do not operate at the same
speed.
- Bits are sent only when available, and the communication lines remain idle when there is no
information to be transmitted.
- Asynchronous, in this context, means not occurring at the same time:
transfers happen at irregular intervals because there is no common clock.
- While asynchronous data transfer is slower compared to synchronous, it is more economical to
implement.
- Cost-effectiveness makes it a suitable choice for scenarios where synchronization is not
critical.
Understanding the characteristics of synchronous and asynchronous data transfer is crucial for designing
communication systems that align with the specific requirements of different devices and applications.
The choice between these methods depends on factors such as device speed compatibility, cost
considerations, and the need for precise timing.
Asynchronous Data Transfer & Its Types
- Asynchronous data transfer is employed when the speed of I/O devices does not align with the
processor, and the timing characteristics of the I/O device are unpredictable.
- Definition: Asynchronous data transfer between two independent communicating units
requires the transmission of control signals. These signals indicate the time at which data is being
transmitted and ensure proper coordination between the communicating units.
- Asynchronous data transfer is implemented using two distinct methods:
- Strobe Control: In this method, the sender informs the receiver of an impending
data transfer by sending a strobe pulse. However, a challenge with this approach is that it
merely indicates that data is on the way, without confirming whether the receiver has
successfully received it. To address this limitation, a newer method called the handshaking
method was introduced.
- Handshaking Method: In this method, when the sender intends to send data to the
receiver, it first notifies the receiver of the upcoming data transfer. In response, the
receiver sends an acknowledgment signal, confirming its readiness to receive the data. This
two-way confirmation ensures a more reliable and secure data transfer process compared to the
strobe control method.
Asynchronous data transfer, with its reliance on control signals and handshaking methods, offers a
flexible and robust solution for scenarios where the speed and timing characteristics of communicating
devices may vary. The choice between strobe control and handshaking methods depends on factors such as
the reliability and confirmation requirements of the data transfer process.
Strobe Control
- Strobe control employs a single control line to time each data transfer.
- The strobe signal may be activated by either the source or the destination unit.
Source-Initiated Strobe for Data Transfer
- In this scenario, there are a source unit and a destination unit, along with two lines: a
data bus line for data transfer and a strobe line for signaling.
- The timing diagram illustrates that the data is loaded first, followed by the
transmission of the strobe signal, indicating the completion of data transfer.
- Example: A memory write control signal from the CPU to the memory unit. In
this case, the CPU initiates the transfer by loading data onto the bus and sending the
strobe signal to inform the memory unit.
Destination-Initiated Strobe for Data Transfer
- In this configuration, there is a source unit and a destination unit. However, the
destination unit requests data, initiating the process by sending the strobe signal before
the actual data transfer.
- Example: A memory read control signal from the CPU to the memory unit.
Here, the CPU (the destination) requests data by sending the strobe signal
to the memory unit, prompting it to place the requested data on the bus.
Strobe control, whether source-initiated or destination-initiated, provides a method for precisely
timing data transfers between units. Understanding these mechanisms is crucial in designing reliable
communication systems where data synchronization and integrity are paramount.
Disadvantages of Strobe Control
- One significant drawback of strobe control is that the source unit initiating the transfer
lacks confirmation of whether the destination unit has successfully received the data item
placed on the bus.
- Conversely, when the destination unit initiates the transfer, it has no means of knowing
whether the source unit has indeed placed the data on the bus as intended.
- This lack of acknowledgment introduces a potential challenge in ensuring the reliability and
success of data transfer operations.
While strobe control provides a simple and efficient means of timing data transfers, the absence of
acknowledgment poses a limitation in guaranteeing the integrity of the communication process.
Overcoming this limitation often requires the adoption of more sophisticated techniques, such as
handshaking methods, to ensure a two-way confirmation of successful data transmission between
communicating units.
- Note: In general, communication between the CPU and I/O Interface is typically achieved using
strobe control. On the other hand, when there is communication between an I/O device and the I/O
interface, it is commonly done using handshaking methods. This strategic choice in communication
techniques ensures an efficient and reliable data transfer process tailored to the specific
requirements and interactions between the central processing unit and the input/output
components.
Handshaking Method
- The handshaking method addresses the limitations of the strobe method by introducing a second
control signal that provides a reply to the unit initiating the transfer.
- Combining strobe control with an acknowledgment signal results in a two-wire control system,
enhancing the reliability of data transfer.
- In this method, three lines connect the source unit and the destination unit: the data bus, the
data valid line (indicating data initiation or transfer), and the data accepted line (providing
a reply to the initiation).
Source-Initiated Handshaking
- This configuration involves two control signals: the data valid signal, indicating when the
source unit initiates data transfer, and the data accepted signal, the reply from the
destination unit.
- Sequence of operation: The data is placed on the data bus first, followed by the source unit
initiating the data valid signal. After successfully receiving the data, the destination
unit responds with a signal through the data accepted line to the source unit. Subsequently,
the data valid signal is disabled, and the destination unit disables the data accepted
signal.
Destination-Initiated Handshaking
- In this scenario, the destination unit initiates data transfer by sending the "ready for
data" signal to the source unit, indicating its readiness to accept data.
- Following this signal, the source unit places the data on the data bus and enables the data
valid signal. After enabling, the destination unit accepts the data and disables the "ready
for data" signal. Finally, the source unit disables the data valid signal, returning the
system to its initial state.
The handshaking method, with its two-wire control system and bidirectional communication, ensures a
more robust and synchronized data transfer process. Whether initiated by the source or the
destination, this method provides reliable confirmation and control signals, enhancing the overall
integrity of the communication between units.
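The four-step source-initiated sequence can be sketched as a toy simulation (Python purely for illustration; the two control lines are modeled as booleans and the steps are logged in order):

```python
# Toy simulation of source-initiated handshaking. The event log records the
# four-step sequence of the data valid and data accepted lines.
def source_initiated_transfer(data):
    bus = {"data": None, "data_valid": False, "data_accepted": False}
    log = []

    # 1. Source places data on the bus, then raises data valid.
    bus["data"] = data
    bus["data_valid"] = True
    log.append("data_valid=1")

    # 2. Destination latches the data and raises data accepted.
    received = bus["data"]
    bus["data_accepted"] = True
    log.append("data_accepted=1")

    # 3. Source sees the acknowledgment and drops data valid.
    bus["data_valid"] = False
    log.append("data_valid=0")

    # 4. Destination drops data accepted; both lines are back to idle.
    bus["data_accepted"] = False
    log.append("data_accepted=0")

    return received, log

received, log = source_initiated_transfer(0x5A)
print(received, log)
```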
Modes of Transfer
- Before delving into the modes of transfer, it's essential to understand that
when data moves between I/O devices and memory, it often passes through the
CPU: the CPU executes the transfer instructions and temporarily holds data in
its registers before storing it in memory. Memory is usually the final source
or destination of the data, while the CPU acts as an intermediary in the first
two modes described below.
- Various modes exist for transferring data between the central computer and I/O devices, and
understanding these modes is integral to comprehending the intricacies of data transfer.
- There are three primary modes through which data is transferred from peripheral devices to the CPU:
- Programmed I/O: In this mode, the CPU serves as an intermediate link in the
data transfer process.
- Interrupt-Initiated I/O: Similar to Programmed I/O, this mode involves the CPU
as an intermediary in the data transfer.
- Direct Memory Access (DMA): In DMA mode, the CPU is not an intermediate
participant, allowing for more efficient data transfer between peripheral devices and memory.
- While Programmed I/O and Interrupt-Initiated I/O involve the CPU as an intermediate step, Direct
Memory Access (DMA) bypasses the CPU, streamlining the data transfer process.
Programmed I/O
- Programmed I/O is utilized when there is a need to transfer data between the CPU and I/O, and
the data transfer is managed through program instructions.
- In the context of computer programming languages such as C++, Programmed I/O operations result
from I/O instructions embedded in the computer program. Examples of such instructions include
those for input (e.g., cin) and output (e.g., cout).
- Each data transfer in Programmed I/O is initiated by an I/O instruction within the program,
typically to access registers or memory on a specific device.
- Executing data transfers under program control demands continuous monitoring of I/O devices by
the CPU.
The diagram below illustrates the process of Programmed I/O:
In Programmed I/O, the CPU issues a request and then remains in a program loop
(polling) until the I/O device signals its readiness for data transfer. The I/O
device never interrupts the CPU on its own.
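The polling loop at the heart of programmed I/O can be sketched as follows; a hypothetical slow device is faked with a countdown so the busy-waiting is visible:

```python
# Sketch of the polling loop in programmed I/O: the CPU reads a status flag
# in a loop and only then moves the data. A counter stands in for a slow
# peripheral that becomes ready after a few status polls.
class SlowDevice:
    def __init__(self, data, delay):
        self.data = data
        self.delay = delay          # status polls needed before ready

    def status_ready(self):
        self.delay -= 1
        return self.delay <= 0

def programmed_io_read(device):
    polls = 0
    while not device.status_ready():   # CPU is busy-waiting the whole time
        polls += 1
    return device.data, polls

value, wasted_polls = programmed_io_read(SlowDevice(data=0x42, delay=5))
print(value, wasted_polls)   # 66 (0x42) after 4 wasted polls
```

The wasted polls are exactly the inefficiency that interrupt-initiated I/O removes.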
- Disadvantages:
- Programmed I/O can be time-consuming as it keeps the CPU unnecessarily busy. To address
this issue, interrupt facilities are often employed to enhance efficiency.
Interrupt Initiated I/O
- Interrupt Initiated I/O was introduced to address the polling-related issue present in
Programmed I/O.
- An interrupt is a high-priority signal, generated either by an external device or some software,
designed to immediately capture the CPU's attention. The use of interrupts aims to eliminate the
waiting period inherent in Programmed I/O.
- In Interrupt Initiated I/O, instead of the CPU continuously monitoring, the interface is
informed to issue an interrupt request signal when data becomes available from the device.
- Meanwhile, the CPU proceeds to execute other programs while the interface keeps monitoring the
device.
- When the device is ready for data transfer, it generates an interrupt request.
- Upon detecting the external interrupt signal, the CPU interrupts its current task, processes the
I/O data transfer, and then resumes the original task it was performing.
Direct Memory Access (DMA)
- DMA is employed when large blocks of data need to be transferred between the CPU and I/O
devices, rendering Programmed or Interrupt Initiated I/O less efficient.
- For high-speed transfers of substantial data blocks between external devices and main memory,
the DMA approach is often utilized.
- In other transfer modes, memory is accessed indirectly through the CPU. However, when
transferring significant data blocks and frequently utilizing the CPU before memory access, this
process becomes time-consuming. DMA addresses this by allowing direct communication between I/O
devices and memory, minimizing CPU intervention.
- DMA permits data transfer directly between the I/O device and main memory with minimal CPU
involvement.
- In DMA, the CPU grants the I/O interface the authority to read from or write to memory without
direct CPU intervention.
- The DMA controller autonomously manages data transfer between main memory and the I/O device.
- The CPU is only involved at the beginning and end of the transfer and is interrupted only after
the entire block has been successfully transferred.
- The process involves the CPU initiating the DMA controller to transfer data between a device and
main memory, allowing the CPU to proceed with other tasks.
- The DMA controller issues a request to the relevant I/O device, manages the data transfer
between the device and main memory, and waits for its completion.
- Upon the conclusion of the data transfer, the DMA controller interrupts the CPU.
DMA Controller
- DMA enables I/O devices to transfer data directly to or from main memory without requiring CPU
intervention, effectively bypassing the CPU.
- Between I/O devices and the CPU, there exists an interface, but between memory and I/O devices,
the DMA controller (or DMA channel) acts as the intermediary, creating a channel between main
memory and I/O devices. Devices such as magnetic disks, USB drives, network cards, graphics
cards, and sound cards, when connected via DMA, can achieve faster data transfer rates.
- DMA finds applications in systems utilizing multicore architectures, particularly in scenarios
where intrachip data transfers are required.
- Within the system bus, comprising address lines, data lines, and control lines connected to the
CPU, memory, and DMA controller, during DMA transfers, the CPU is temporarily disabled. While
the CPU typically controls the system bus, in DMA transfers, the DMA controller temporarily
borrows control of the system bus from the CPU to facilitate efficient data transfer between I/O
devices and memory.
How does DMA make the CPU go idle?
- To facilitate DMA's control over the bus system from the CPU, two signals are employed: Bus
Request (BR) and Bus Grant (BG).
- When DMA desires full control of the bus system, it initiates the process by sending a Bus
Request (BR) signal through the bus request line.
- Upon receiving the Bus Request (BR) signal, the CPU interrupts its ongoing tasks, relinquishes
control of all three components—data lines, address lines, and control lines—and enters a
high-impedance state. In this state, the bus behaves like an open circuit, disabling all signals
and buses.
- To signal to the DMA that control has been transferred, the CPU sends a Bus Grant (BG) signal
through the bus grant line, indicating that the DMA now has authority over the buses. This
communication allows the DMA to use the buses to transfer data directly to memory.
- Essentially, the two main signals used in this process are Bus Request (BR) and Bus Grant (BG).
DMA Working
When an I/O device wishes to exchange data with main memory, the following
steps outline the DMA process.
In the diagram above, we have a detailed breakdown of the components involved in the Direct Memory
Access (DMA) process:
- DS (DMA Select): The processor sets DS = 1 to activate DMA, initiating the DMA
process.
- RS (Register Select): The CPU uses this signal to select DMA registers for
storing values, such as the starting address and the number of words to be transferred.
- RD (Read) & WR (Write): These signals are used for reading from and writing
to the DMA registers during the DMA operation.
- BR (Bus Request): DMA employs this line to send a request to the processor to
release the BUS system, indicating its need for control over the system bus.
- BG (Bus Grant): When the processor relinquishes control of the bus to DMA, it
sets BG = 1, signifying that the bus is now granted to the DMA controller.
- Interrupt: DMA uses this line to send a signal when data transfer is completed.
The processor can also use this line to check whether data transfer has been successfully
completed.
- DMA Request: I/O devices utilize this line to send a request to the DMA
controller, signaling the need for data transfer.
- DMA Acknowledgement: The DMA controller responds to I/O devices through this
line, acknowledging the receipt of the request and preparing for data transfer.
- Address Register: The processor stores the starting address of data in this
register, providing the necessary information for the DMA controller to locate the data.
- Word Count Register: The processor stores the total number of words to be
transferred in this register, allowing the DMA controller to determine the extent of the data
transfer operation.
- Control Register: The processor stores control signals in this register,
dictating various aspects of the DMA operation, such as the transfer mode and direction.
- Data Bus Buffer: This component is employed to temporarily store data during
the DMA transfer process, ensuring efficient and synchronized data movement.
- Data Bus: DMA utilizes this bus for the actual transfer of data between memory
and peripheral devices, facilitating high-speed and direct communication.
Now, let's delve into the operational sequence of the Direct Memory Access (DMA) process:
- Initiation of Data Transfer Request: When an I/O device wishes to transfer
data, it sends a request to the DMA through the DMA request line.
- Bus Request to Processor: DMA, in response to the request, sends a bus request
(BR = 1) to the processor, requesting it to release control of the bus system (utilizing the BR
line).
- Processor's Response: Upon receiving the bus request, the processor stores its
current work, and then transmits essential information, such as the starting address and the
number of words to be transferred, to DMA. Subsequently, the processor relinquishes control of
the bus system and notifies DMA by setting the BG line to 1.
- DMA Acknowledgement and Data Transfer: Upon receiving the BG signal, DMA
activates the DMA acknowledgement line, informing the I/O device that it can commence data
transfer. The DMA controller initiates the actual data transfer process.
- Decrementing Word Count: With each data transfer, the value of the Word Count
(WC) register is decremented by 1, keeping track of the progress of the data transfer operation.
- Data Transfer Completion: When the WC register reaches 0, DMA sets BR = 0 and
sends an interrupt signal to the CPU, signaling the completion of the data transfer.
- CPU's Post-Transfer Actions: The CPU, upon receiving the interrupt, checks the
WC register. Since it is now 0, the CPU sets BG = 0, reclaiming control of the bus system for
its own operations.
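The sequence above can be sketched as a toy DMA controller (Python purely for illustration; the register names follow the text, but everything else is invented):

```python
# Toy walk-through of the DMA sequence: an address register, a word count
# register, and an interrupt raised when the count reaches zero.
class DMAController:
    def __init__(self, memory):
        self.memory = memory
        self.addr = 0           # address register (set by the CPU)
        self.wc = 0             # word count register (set by the CPU)
        self.interrupt = False

    def program(self, start_addr, word_count):
        # CPU stores the starting address and word count before granting the bus.
        self.addr, self.wc = start_addr, word_count

    def transfer(self, words_from_device):
        # CPU has granted the bus (BG = 1); DMA moves words on its own.
        for word in words_from_device:
            self.memory[self.addr] = word
            self.addr += 1
            self.wc -= 1                # decrement word count per transfer
            if self.wc == 0:
                self.interrupt = True   # signal the CPU: transfer complete
                break

memory = [0] * 8
dma = DMAController(memory)
dma.program(start_addr=2, word_count=3)
dma.transfer([10, 20, 30])
print(memory, dma.interrupt)   # [0, 0, 10, 20, 30, 0, 0, 0] True
```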
Input Output Processor (IOP)
- The IOP is an external processor designed to handle input and output tasks, communicating directly
with all I/O devices.
- It operates independently of the CPU and relieves the CPU of input-output responsibilities.
- Similar to a CPU, the IOP can fetch and execute its own instructions, perform arithmetic, logic,
branching, and code translation.
- The IOP provides a data transfer path between peripheral devices and the memory unit.
- Unlike DMA controllers, the IOP can fetch and execute instructions, making it versatile in handling
various processing tasks.
Data Formatting and Transfer
- Peripheral devices often have different data formats than memory and CPU. The IOP is
responsible for structuring data words to match the required formats.
- For example, it may receive 4 bytes from an input device and pack them into one 32-bit word
before transferring to memory.
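That packing step might be sketched with Python's standard struct module; the big-endian byte order chosen here is an assumption, since the text does not specify one:

```python
# Sketch of the IOP packing step: four incoming bytes are assembled into one
# 32-bit word before being handed to memory. Byte order is an assumption.
import struct

incoming = bytes([0x12, 0x34, 0x56, 0x78])   # 4 bytes from the input device
(word,) = struct.unpack(">I", incoming)      # one 32-bit word, big-endian
print(hex(word))                             # 0x12345678
```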
CPU-IOP Communication:
- In most systems, the CPU is the master, and the IOP is a slave processor.
- The CPU initiates all operations, but I/O instructions are executed by the IOP.
- Communication involves a sequence of operations:
- The CPU sends an instruction to test the IOP path.
- The IOP responds by sending its status.
- The CPU sends the instruction to start I/O transfer by specifying the memory address
where the IOP should begin.
- The CPU can proceed with other tasks while the IOP handles the I/O program.
- After data transfer completion, the IOP sends an interrupt request to the CPU.
- The CPU reads the IOP status, with the IOP placing the status report into a designated
memory location.
Serial Communication
- A data communication processor is an I/O processor designed for communication with remote terminals
through data communication networks.
- It handles tasks like distributing and collecting data from various devices connected through
communication lines.
- The computer appears to serve many users simultaneously in a time-sharing environment by
interspersing fragments of each network demand efficiently.
- Unlike I/O processors communicating through a common bus, data communication processors communicate
with each terminal through a single pair of wires.
- Data and control information are transferred serially, resulting in a slower transfer rate compared
to common bus communication.
- The data communication processor communicates with the CPU and memory in the same way as any other I/O processor.
Connecting Remote Terminals:
- Remote terminals connect to a data communication processor via telephone lines or other
communication facilities.
- Conversion devices like data sets, acoustic couplers, or modems are used to convert digital
signals to audio tones for transmission over telephone lines.
- Different modulation schemes, communication media, and transmission speeds are employed.
Transmission Methods:
- Communication lines may be connected to synchronous or asynchronous interfaces based on the
remote terminal's transmission method.
- Asynchronous transmission uses start and stop bits in each character, while synchronous
transmission sends a continuous message without start-stop bits.
- Synchronous transmission is more efficient but requires continuous messages to maintain
synchronism.
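The start/stop framing of asynchronous transmission can be sketched as follows (7-bit character, LSB-first data bits, and a single stop bit are common choices, but they are assumptions here):

```python
# Asynchronous framing sketch: each character is wrapped in a start bit (0)
# and a stop bit (1), so a 7-bit character costs 9 bits on the line.
def frame_char(code):
    data_bits = [(code >> i) & 1 for i in range(7)]  # 7-bit ASCII, LSB first
    return [0] + data_bits + [1]                     # start + data + stop

def unframe(bits):
    assert bits[0] == 0 and bits[-1] == 1, "framing error"
    return sum(bit << i for i, bit in enumerate(bits[1:-1]))

frame = frame_char(ord("A"))   # 'A' = 65 = 0b1000001
print(frame)                   # [0, 1, 0, 0, 0, 0, 0, 1, 1]
print(chr(unframe(frame)))     # A
```

The two extra bits per character are the overhead that synchronous transmission avoids.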
Error Detection
- Data communication processors check for transmission errors using methods like parity checking
in asynchronous transmission and techniques like longitudinal redundancy check (LRC) or cyclic
redundancy check (CRC) in synchronous transmission.
- The LRC is calculated over a block and appended at its end; the receiving
station recomputes it and compares the result with the transmitted LRC.
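Both checks can be sketched in a few lines (Python purely for illustration; real line protocols differ in details such as parity sense and block framing):

```python
# Sketch of the two checks above: even parity per character, and a
# longitudinal redundancy check (LRC) computed as the XOR of a block's bytes.
def even_parity_bit(byte):
    """Parity bit that makes the total number of 1s even."""
    return bin(byte).count("1") % 2

def lrc(block):
    """XOR of every byte in the block, appended by the sender."""
    check = 0
    for byte in block:
        check ^= byte
    return check

block = [0x41, 0x42, 0x43]                   # "ABC"
print([even_parity_bit(b) for b in block])   # [0, 0, 1]
print(hex(lrc(block)))                       # 0x40

# Receiver recomputes the LRC; a match with the transmitted value means
# no (detectable) error occurred.
print(lrc(block) == lrc(block))              # True
```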
Transmission Modes:
- Data can be transmitted in three modes: simplex (one-way communication), half-duplex (two-way,
but one direction at a time), and full-duplex (simultaneous two-way communication).
- Simplex is rarely used in data communication. Half-duplex requires a turnaround time, and
full-duplex can use either a four-wire link or frequency spectrum subdivision in a two-wire
circuit.
Data Link and Protocols
- The communication lines, modems, and equipment form a data link, and orderly data transfer is
governed by a protocol.
- Data link control protocols ensure the orderly transfer of information, establish and terminate
connections, identify sender and receiver, handle error-free message passing, and manage control
functions.
- Protocols are categorized into character-oriented and bit-oriented protocols based on the
framing technique used.