In the coding unit of the Software Engineering course, we're diving into the world of programming! The
main goal is to help you become proficient at writing code. We'll start by learning how to organize our
code from the big picture (Top-Down) and from the small details (Bottom-Up), which helps us write code
that's easy to understand. We'll also cover Structured Programming, a set of rules for keeping your code
neat and tidy, and Information Hiding, which keeps a module's internal details private so the rest of the
system doesn't depend on them. Then we'll move on to Testing: making sure our code works properly, the
different types of testing, and how to find and fix any mistakes (bugs). Lastly, we'll talk about Software
Maintenance: taking care of our code and improving it over time. The motivation behind learning all this
is to help you become a great coder and be ready for the real-world challenges of building software!
Top-Down and Bottom-Up Programming
Top-Down Programming: This approach involves starting with the big picture and
breaking it down into smaller, manageable parts. Think of it like planning a trip: you decide on the
overall destination first and then plan the details like transportation and accommodation. In
coding, you first create a high-level design and then break it down into smaller functions or
modules.
Bottom-Up Programming: In contrast, Bottom-Up Programming begins with small,
individual components and gradually builds them up into a complete system. It's like building a
puzzle from the pieces. In coding, you start by creating small, functional units and then combine
them to form a more complex system.
Both approaches have their advantages. Top-Down helps in understanding the overall structure first,
while Bottom-Up allows for focusing on individual parts and ensuring they work correctly before
integrating them into the larger system.
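To make the contrast concrete, here is a minimal Python sketch along the lines of the trip example (the
function names plan_trip, book_transport, and book_accommodation are invented for illustration). Top-down
work writes plan_trip first and fills in the helpers later; bottom-up work writes and verifies the small
helpers first and composes them afterwards:

# Top-down: the high-level structure comes first; details are filled in later.
def plan_trip(destination):
    transport = book_transport(destination)
    hotel = book_accommodation(destination)
    return f"Trip to {destination}: {transport}, {hotel}"

# Bottom-up: these small, independently testable units are written (and
# verified) first, then combined by a higher-level function such as plan_trip.
def book_transport(destination):
    return f"flight to {destination}"

def book_accommodation(destination):
    return f"hotel in {destination}"

print(plan_trip("Paris"))  # Trip to Paris: flight to Paris, hotel in Paris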
Structured Programming
Structured programming emphasizes writing programs in a way that is easy to understand.
A program's structure consists of both static and dynamic elements. The static structure refers to the
linear organization of statements in the code, while the dynamic structure is the sequence of statements
executed during runtime.
Program correctness is about ensuring that the program, when executed, behaves as intended. To verify
correctness, we analyze the static structure (the code) to understand the dynamic behavior of the program.
The main objective of structured programming is to align the static and dynamic structures, ensuring that
the executed sequence matches the code sequence. This promotes a linear flow of control in programs.
Structured constructs, like selection and iteration, are employed to maintain linear flow even in the
presence of branching or repetition. Structured statements have a single entry and a single exit,
contributing to clear program logic. Commonly used constructs include:
Selection:
if B then S1 else S2
if B then S1
Iteration:
while B do S
repeat S until B
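To see what these constructs look like in a real language, here is a minimal Python sketch (the function
names are invented for illustration); note that Python has no repeat-until, so the usual idiom is shown
in its place:

# Selection: one entry (the test), one exit (after the construct).
def classify(n):
    if n < 0:            # if B then S1 else S2
        sign = "negative"
    else:
        sign = "non-negative"
    return sign

# Iteration: while B do S -- the loop is entered and left in one place.
def count_down(n):
    while n > 0:
        print(n)
        n -= 1

# repeat S until B has no direct Python equivalent; the conventional
# idiom is a while True loop with a single terminating test at the end.
def read_positive():
    while True:
        value = int(input("Enter a positive number: "))
        if value > 0:    # "until B": leave the loop once B holds
            return value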
The primary goal is to simplify the program logic for better understanding. While no universal rule fits
all scenarios, structured programming provides valuable guidelines.
Structured programming generally results in programs that are easier to comprehend compared to
unstructured ones.
However, it's essential to note that structured programming is not an end in itself. The ultimate
objective is to create programs that are easy to understand. Some programming practices may still use
unstructured constructs (e.g., break statements, continue statements) when appropriate.
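As a small example of that trade-off (a hypothetical search loop in Python), a break can be clearer than
the fully structured alternative of threading a "found" flag through the loop condition:

# A break statement is an unstructured exit, but here it keeps the
# logic simpler than maintaining an extra 'found' flag in the loop test.
def find_index(items, target):
    index = -1
    for i, item in enumerate(items):
        if item == target:
            index = i
            break            # leave the loop as soon as the target is found
    return index

print(find_index(["a", "b", "c"], "b"))  # 1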
Information Hiding
Information hiding plays a crucial role in reducing the coupling between modules, leading to a more
maintainable system. Coupling refers to the degree of dependence between different modules or
components in a software system.
By employing information hiding, developers can encapsulate the internal details of a module,
allowing it to interact with other modules through well-defined interfaces while keeping the
internal workings hidden. This abstraction of details enhances modularity and reduces the impact of
changes in one module on the rest of the system.
Information hiding is a powerful technique for managing the complexity of software development. In
many older programming languages like Pascal, C, and Fortran, there may be a lack of built-in
mechanisms to support data abstraction. In such languages, information hiding relies heavily on the
disciplined use of the language by the programmer. Access restrictions must be imposed manually
since the language itself may not provide them.
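As a small illustration, here is a minimal Python sketch (the Counter class and its methods are invented
for this example) of a module that exposes a narrow interface while keeping its representation hidden:

class Counter:
    """Clients interact only through increment() and value();
    the internal representation (_count) is an implementation detail."""

    def __init__(self):
        self._count = 0   # leading underscore: private by convention

    def increment(self):
        self._count += 1

    def value(self):
        return self._count

# Client code depends only on the interface. If the implementation later
# changes (say, to log every increment), no client code breaks.
c = Counter()
c.increment()
print(c.value())  # 1

Note that Python, like the older languages mentioned above, enforces privacy largely by convention, so
the hiding depends on disciplined use of the interface rather than on compiler-enforced access
restrictions.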
Benefits of Information Hiding:
Reduced Coupling: Information hiding minimizes the interdependence between modules,
making it easier to modify one module without affecting others. This enhances system flexibility and
maintainability.
Enhanced Abstraction: By concealing internal details, information hiding promotes a
clear distinction between a module's interface and its implementation. This abstraction simplifies
the understanding and usage of the module.
Improved Security: Hiding implementation details adds a layer of security by
restricting direct access to internal workings, preventing unintended interference or misuse.
Facilitates Change: Modules with well-defined interfaces are easier to modify or
replace, as long as the external contract remains unchanged. This facilitates system evolution and
adaptation to new requirements.
Programming Style
When it comes to programming style, adhering to certain rules not only enhances the readability of your
code but also helps in avoiding common errors. Let's delve into some essential principles:
Control Constructs: It is advisable to use single-entry, single-exit constructs as
much as possible. Opt for a limited set of standard control constructs to maintain a consistent and
easily understandable code structure.
Gotos: The use of gotos should be approached with caution, employing them only when
the alternative becomes more complex. Ideally, it's best to minimize their usage in favor of more
structured control flow.
Information Hiding: Encouraging information hiding where applicable allows for
better organization of code, ensuring that the internal details are shielded for a clearer
understanding of the overall structure.
User-Defined Types: Providing users the ability to define types, such as enumerated
types, proves beneficial, especially in scenarios with deeply nested if-then-else constructs,
contributing to improved code comprehension.
Module Size: While there's no strict rule regarding module sizes, the focus should
be on achieving cohesion and managing coupling effectively to promote modular and maintainable code.
Module Interface: Modules with interfaces having more than five parameters warrant
careful examination, and simplification should be considered where possible to maintain a concise
and comprehensible codebase.
Side Effects: Minimizing side effects when invoking modules is crucial. If a module
does have side effects, proper documentation is essential to communicate these effects clearly.
Robustness: A robust program not only anticipates but gracefully handles
exceptional conditions, ensuring that meaningful messages are produced instead of crashing abruptly
in the face of unexpected situations.
Switch Case with Default: It's good practice to include a default case in a
"switch" statement to avoid unpredictable behavior that might lead to bugs, such as NULL dereference
or memory leaks (see the sketch after this list).
Empty Catch Block: When catching exceptions, taking some default action is
advisable to prevent scenarios where critical operations are inadvertently omitted.
Trusted Data Sources: Performing validation checks before accessing input data,
particularly if it's sourced from users or obtained over the network, adds an extra layer of
reliability to your code.
Give Importance to Exceptions: Exceptional cases, often overlooked, can be a
significant source of system failures. Prioritizing suitable exception handlers for all
possibilities contributes to building more reliable software systems.
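To make two of these rules concrete, here is a minimal Python sketch (the function names and values are
invented for illustration): a match statement with a default arm, and an except block that takes a
deliberate action instead of being left empty:

# Rule: always provide a default case (Python 3.10+ match/case).
def http_category(status):
    match status // 100:
        case 2:
            return "success"
        case 4:
            return "client error"
        case 5:
            return "server error"
        case _:                      # default arm: no status falls through silently
            return "other"

# Rule: never leave a catch block empty -- handle, or at least report.
def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError as e:
        print(f"warning: {e}; falling back to defaults")  # deliberate action
        return ""

print(http_category(404))          # client error
print(read_config("missing.cfg"))  # warning printed, empty string returned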
Internal Documentation
In the realm of software engineering, documentation extends to all written materials pertaining to the
development and usage of a software product. The primary objective of effective documentation is to
foster alignment among developers and stakeholders, ensuring a shared path toward achieving project
objectives. Software documentation neatly falls into two main categories:
Product Documentation
System Documentation: Serving as a window into the system, this documentation provides an overview,
aiding engineers and stakeholders in comprehending the underlying technology. It comprises essentials
like the requirements document, architecture design, source code, validation docs, verification and
testing info, along with a maintenance/help guide.
(i) Requirements Document: This cornerstone document unfolds the intricacies of system functionality,
encapsulating business rules, user stories, and use cases, providing a blueprint for development.
(ii) Design & Architecture Document: Delving into the architectural decisions, it encompasses a design
document template, architecture & design principles, user story description, solution details, and
diagrammatic representation of the solution, facilitating a comprehensive understanding.
(iii) Source Code Document: Housing the actual source code, this component caters primarily to software
engineers, providing the raw material that shapes the software product.
(iv) Quality Assurance Documentation: This segment incorporates various testing documents, including the
strategic approach (test strategy), a snapshot of what's to be tested at a given time (test plan),
detailed actions for verifying features (test case specifications), and a checklist to keep track of
completed tests.
(v) Maintenance and Help Guide: This crucial document goes beyond problem-solving, offering insights
into known issues and their solutions. It also outlines dependencies within different parts of the
system, providing a guide for maintenance.
User Documentation: Tailored for end-users and system administrators, this branch elucidates how the
software addresses their specific needs. It encompasses user-oriented resources like tutorials, FAQs,
video tutorials, embedded assistance, and support portals. Additionally, it provides technical guides
for system administrators, covering installation, updates, functional descriptions, and system admin
guides.
Process Documentation
Process documentation takes us behind the scenes, covering all activities related to product development.
It involves some upfront planning and ongoing paperwork. Common types include:
Plans, Estimates, and Schedules: Crafted before the project's initiation, these documents lay the
groundwork for the entire development journey.
Reports and Metrics: Generated on a regular basis, these reports provide insights into how time and
human resources are utilized during development, offering a snapshot of progress and potential areas
for improvement.
Working Papers: Essential for recording engineers' ideas and thoughts during project implementation,
these documents serve as a dynamic record of the development process.
Standards: This section outlines all coding and user experience (UX) standards adhered to throughout
the project's progression, ensuring consistency and quality in the development process.
Testing
Software testing is a crucial process with the following key aspects:
Identification of Correctness: The primary goal is to verify the correctness of
software by assessing various attributes such as reliability, scalability, portability,
re-usability, and usability.
Evaluation of Software Execution: Testing involves a comprehensive evaluation of
the software's execution to identify and address any errors or bugs that may impact its
functionality.
Attributes Considered in Testing: The testing process takes into account multiple
attributes, including reliability, scalability, portability, re-usability, and usability, ensuring a
holistic assessment of the software.
Identification of Errors: In a software development project, errors can be
introduced at various stages. Testing acts as a crucial checkpoint to detect and rectify any errors
that may persist from previous phases of development.
Testing Principles
Testing principles, as suggested by Davis, along with additional insights from Everett and Meyer,
guide effective testing practices:
Principle 1: All tests should be traceable to customer requirements, ensuring
that testing uncovers errors from the customer's perspective.
Principle 2: Tests should be meticulously planned well in advance, initiating
the planning process as soon as the requirements model is complete. Detailed test case
definitions can commence post-design completion, allowing for comprehensive planning and design
before any code generation begins.
Principle 3: Applying the Pareto principle to software testing, roughly 80% of
consequences come from 20% of causes. Testers should identify and thoroughly test these critical
components.
Principle 4: Testing should initiate "in the small" by assessing individual
components before progressing to "in the large," focusing on uncovering errors in integrated
components and, ultimately, the entire system.
Principle 5: Acknowledging the impossibility of exhaustive testing, the
approach should prioritize critical paths, recognizing that testing every possible combination
is not feasible.
Principle 6: Testing effort for each module should be allocated in proportion to
the number of errors expected there, ensuring a targeted and effective testing approach.
Principle 7: Recognizing the significance of static testing techniques: over
85% of software defects originate in software documentation such as requirements, specifications,
and user manuals, making reviews and code walk-throughs especially valuable.
Principle 8: Tracking defects and identifying patterns in defects uncovered
during testing allows for proactive problem-solving and continuous improvement.
Principle 9: Inclusion of test cases that demonstrate correct software behavior
is essential, ensuring that the software aligns with expected functionality.
Levels of Testing
In the realm of software testing, four distinct levels of testing provide a structured approach to
assess software quality:
Level 1: Unit Testing
A unit represents an individual function within the application, serving as the smallest testable
part of the software. Unit testing involves analyzing each unit or component independently. As
the initial level of functional testing, its primary goal is to validate individual unit
components.
Level 2: Integration Testing
Integration testing focuses on testing the data flow between different modules. It comes into
play after each component or module has been independently validated (Level 1). This level aims
to check the data flow between dependent modules, ensuring a smooth integration. Integration
testing commences once unit testing has been successfully completed.
Level 3: System Testing
System testing evaluates the entire working of the software against specified requirements. It
encompasses testing both the functional and non-functional aspects of the software to ensure its
overall compliance with requirements.
Level 4: Acceptance Testing
Acceptance testing, also known as User Acceptance Testing (UAT), is the final level, aimed at
assessing whether the software meets its specifications and requirements. Conducted by the
customer before accepting the final product, UAT is typically performed by domain experts to
ensure the application aligns with business and real-time scenarios, providing customer
satisfaction.
Test Plan
A test plan is a comprehensive document detailing the areas and activities of software testing. It
provides an overview of the test strategy, objectives, schedule, required resources (human, software,
and hardware), estimation, and deliverables. The testing manager fully monitors and controls the test
plan, which is prepared collaboratively by the Test Lead (60%), Test Manager (20%), and the test
engineer (20%).
Types of Test Plan
Master Test Plan: This plan encompasses multiple testing levels and includes a complete test strategy.
Phase Test Plan: Focused on a specific testing phase, addressing aspects like tools and test cases.
Specific Test Plans: Tailored for major types of testing such as security, load, and performance
testing, emphasizing non-functional testing.
Test Plan Components or Attributes
Objectives: Describes the aim of the application, including modules, features, and test data.
Scope: Outlines what needs rigorous testing (in scope) and what doesn't (out of scope).
Test Methodology: Defines the different testing types to be used, like functional and integration
testing.
Approach: Describes the flow of the application during testing for current and future reference.
Assumption: States conditions assumed to hold during testing; if an assumption fails, the plan may
need revision.
Risk: Identifies challenges and potential risks during the testing process.
Mitigation Plan or Contingency Plan: Outlines backup plans to overcome risks or issues.
Role & Responsibility: Defines the roles and responsibilities of the testing team members.
Schedule: Specifies timing and deadlines for each testing activity.
Defect Tracking: Discusses how defects are tracked and communicated, and how their priorities are set.
Test Environments: Details the software and hardware configurations used for testing.
Entry and Exit Criteria: Specifies conditions for starting and stopping the testing process.
Test Automation: Decides which features to automate, and the automation tool and framework to be used.
Effort Estimation: Plans the effort required from each team member.
Test Deliverables: Lists the documents handed over to the customer, including the test plan, test
cases, scripts, etc.
Templates: Provides templates for consistent document use during testing, such as test cases and bug
reports.
Importance of Test Plan
The test plan serves as a rulebook, guiding thinking and ensuring adherence to a predefined strategy.
It helps determine the effort necessary for validating the quality of the software application under
test.
Key stakeholders outside the testing team, such as business managers and customers, can understand
test details through the plan.
Important aspects like test schedule, strategy, and scope documented in the test plan are valuable
for review and reuse in similar projects.
Test Case Specification
A test case is a set of conditions used by a tester to determine whether software is functioning
according to customer requirements. The design of a test case includes preconditions, case name,
input conditions, and expected results. Test cases are detailed documents containing all possible
inputs and navigation steps, usually written while developers are busy writing code.
The Test Case Specification document, the final publication by the testing team, follows a specific
format:
a) Objectives
The purpose of testing is detailed here, including relevant and crucial information.
b) Preconditions
This section lists the items and documents required before executing a particular test case. It
describes features and conditions necessary for testing.
c) Input Specifications
Once preconditions are defined, the team collaborates to identify all inputs required for executing
the test cases.
d) Output Specification
This includes all outputs necessary to verify the test case.
e) Post Conditions
Defines various environmental requirements, identifies any special requirements and constraints on
the test cases, and consists of details like:
Hardware: Configuration and limitations.
Software: System, operating system, tools, etc.
Procedural Requirements: Special setup, output location & identification, operations interventions,
etc.
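To connect the format to practice, here is a minimal sketch of how such a test case might be written as
an automated unit test in Python (the withdraw function and its behavior are hypothetical, chosen only
to show preconditions, inputs, and expected outputs):

import unittest

def withdraw(balance, amount):
    """Hypothetical function under test."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawTestCase(unittest.TestCase):
    def test_withdraw_within_balance(self):
        # Precondition: an account with balance 100.
        # Input specification: withdraw 30.
        # Output specification (expected result): new balance of 70.
        self.assertEqual(withdraw(100, 30), 70)

    def test_withdraw_over_balance_is_rejected(self):
        # An input beyond the precondition's balance must raise an error.
        with self.assertRaises(ValueError):
            withdraw(100, 130)

if __name__ == "__main__":
    unittest.main()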
Reliability Assessment
One of the essential features of software development is reliability, ensuring that a software product
performs consistently under various environmental conditions. Reliability testing aims to verify whether
the software can achieve failure-free operation for a specific period in a given technological
environment.
Types of Reliability Testing
In software testing, reliability testing is categorized into three types:
a. Feature Testing
The main goal of feature testing is to assess the attributes and functionality of the software
product, ensuring system correctness. Characteristics checked in feature testing include:
All functions need to be executed at least once by the team.
Each function must be implemented completely.
The team should check the proper implementation of each operation.
Communication between two or more functions has to be validated.
b. Regression Testing
Regression testing involves re-testing parts of the application that remain unchanged. It ensures
that the code still functions correctly even with changes implemented during bug fixing. This type
of testing helps identify new errors that may occur due to changes.
c. Load Testing
Load testing assesses the functionality of the software under conditions of maximum workload. It is
performed by applying a load less than or equal to the desired maximum. In load testing, "load"
refers to the number of users using the application simultaneously or sending requests to the server
at a given time.
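As a rough illustration of what "load" means here, a minimal Python sketch (handle_request is a
stand-in for the real system; the numbers are arbitrary) that simulates many simultaneous users:

import concurrent.futures
import time

def handle_request(user_id):
    """Stand-in for one user's request to the system under test."""
    time.sleep(0.01)          # simulate server-side work
    return f"user {user_id}: ok"

# Simulate 100 simultaneous users -- the "load" described above.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(handle_request, range(100)))
elapsed = time.perf_counter() - start
print(f"{len(results)} requests served in {elapsed:.2f}s")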
Software Testing Strategies
Software testing involves various strategies in software engineering to ensure the reliability and
effectiveness of a software product. Here are some important testing strategies:
Unit Testing
This basic software testing approach is followed by the programmer to test individual units of the
program. It helps developers determine whether each unit of code is working properly or not.
Integration Testing
Integration testing focuses on the construction and design of the software. The goal is to ensure
that integrated units work seamlessly without errors. This testing strategy verifies the interaction
and collaboration between different units of the software.
System Testing
In system testing, the software is compiled as a whole and then tested comprehensively. This testing
strategy checks the overall functionality, security, portability, and other aspects of the software.
It evaluates how the entire system behaves and performs when all components are integrated.
Verification & Validation
Software testing plays a pivotal role in the broader context of verification and validation (V&V).
Verification ensures that the software correctly implements specific functions, while validation ensures
alignment with customer requirements.
Boehm [Boe81] succinctly describes V&V:
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
V&V covers various Software Quality Assurance (SQA) activities, such as technical reviews, audits,
performance monitoring, simulation, feasibility studies, documentation reviews, and testing (e.g.,
development, usability, acceptance).
Verification Testing
Verification testing involves static activities like business and system requirement checks, design
reviews, and code walkthroughs. It ensures that the development process is creating the right
product and meets specified client requirements.
Validation Testing
Validation testing assesses both functional (Unit Testing, Integration Testing, System Testing) and
non-functional (User Acceptance Testing) aspects of the software. It is a dynamic process ensuring
that the product has been developed correctly and meets the client's business needs.
Difference between Verification and Validation Testing
Execution:
Verification testing does not involve the execution of code; it focuses on ensuring the
development process aligns with requirements.
In validation testing, the code is executed to ensure the software meets specified
business requirements.
Bug Identification:
Verification testing is effective in identifying bugs early in the development phase.
Validation testing is crucial for catching bugs that may not be discovered in the
verification process.
Responsible Team:
Verification testing is typically executed by the Quality Assurance team, ensuring
adherence to customer requirements during development.
Validation testing is conducted by the testing team, focusing on the end product's
functionality and user acceptance.
Sequence:
Verification is performed before validation testing in the software development life cycle.
After verification testing, the validation testing phase takes place.
Focus:
Verification verifies that the inputs lead to the expected outputs.
Validation ensures that the user accepts and approves the final product.
Unit Testing
A software product undergoes testing in three stages:
Unit Testing
Integration Testing
System Testing
During unit testing, individual functions or units of a program are tested. Once all units are individually
tested, they are incrementally integrated and tested at each integration step. Finally, the fully integrated
system undergoes system testing.
Unit testing occurs after coding a module is complete, and syntax errors are resolved. Typically, the module
coder conducts this activity, preparing test cases and the testing environment.
Driver and Stub Modules:
To test a single module, a complete testing environment is required for module execution, including:
Procedures from other modules that the tested module calls.
Non-local data structures.
A procedure to call module functions with appropriate parameters.
Stubs and drivers are designed to provide the necessary environment for module testing; a small
sketch of both follows the definitions below.
Stub: A stub module consists of several stub procedures called by the module under
test.
Driver: A driver module contains non-local (global) data structures accessed by the
module under test. It also includes code to call different functions of the unit under test with
appropriate parameter values for testing.
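Here is a minimal Python sketch of this setup (all names and values are invented for illustration):
compute_discount is the unit under test, get_base_price is a stub standing in for a module it calls,
and driver sets up the non-local data and invokes the unit with test parameters:

# Stub: a dummy stand-in for a procedure the unit under test calls.
def get_base_price(item_id):
    return 100.0   # canned value; the real pricing module isn't needed yet

# Non-local data structure, normally owned by another module.
DISCOUNT_RATES = {}

# Unit under test: calls the stubbed procedure and reads the non-local data.
def compute_discount(item_id, customer_class):
    price = get_base_price(item_id)
    rate = DISCOUNT_RATES.get(customer_class, 0.0)
    return price * (1 - rate)

# Driver: sets up the non-local data and calls the unit with test inputs.
def driver():
    DISCOUNT_RATES["gold"] = 0.2
    result = compute_discount("book-42", "gold")
    assert result == 80.0, f"expected 80.0, got {result}"
    print("unit test passed")

driver()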
Integration Testing
Integration testing is the second level of the software testing process that follows unit testing.
This testing phase involves testing units or individual components of the software in a group. The
primary focus is on exposing defects during the interaction between integrated components or units.
When all components or modules work independently, the data flow between dependent modules needs to
be verified, known as integration testing. The main objective is to test module interfaces, ensuring
error-free parameter passing when one module invokes another module's functionality.
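For instance, a minimal Python sketch (the two record-handling functions are invented for illustration)
of a test that exercises the interface between two already unit-tested modules rather than either one
alone:

# Module A: formats a record for storage.
def format_record(name, age):
    return f"{name},{age}"

# Module B: parses a stored record back into fields.
def parse_record(record):
    name, age = record.split(",")
    return name, int(age)

# Integration test: checks the interface between the two modules --
# i.e., that data passes across the boundary without loss.
def test_round_trip():
    record = format_record("Ada", 36)
    assert parse_record(record) == ("Ada", 36)
    print("integration test passed")

test_round_trip()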
During integration testing, different system modules are integrated systematically using an
integration plan that specifies steps and the order of module combinations. After each integration
step, the partially integrated system undergoes testing. Various approaches can be used for
integration testing:
Big-bang approach to integration testing: Integrates all modules in a single
step, suitable for small systems, but challenging for error localization in large systems.
Bottom-up approach to integration testing: Integrates and tests modules for
each subsystem first, then tests the subsystem, allowing testing of disjoint subsystems
simultaneously.
Top-down approach to integration testing: Starts with testing the root module,
gradually integrating and testing modules at lower layers, requiring only stubs, but may face
challenges in exercising top-level routines without lower-level routines.
Mixed approach to integration testing: A combination of top-down and bottom-up
approaches, addressing the shortcomings of each. Testing can start as modules become available
after unit testing, making it a commonly used integration testing approach. Both stubs and
drivers are required in this approach.
System Testing and Debugging
There are two widely used methods for software testing: white box testing, which uses
internal coding to design test cases, and black box testing, which uses the GUI or the
user's perspective to develop test cases.
System testing falls under Black box testing as it involves testing the external
workings of the software, following the user's perspective to identify minor defects.
After all the units of a program have been integrated and tested, system testing begins. The
procedures for system testing are the same for both object-oriented and procedural programs. System
test cases are designed based on the Software Requirements Specification (SRS) document.
There are three main types of system testing:
Alpha Testing: Conducted by the test team within the developing organization.
Beta Testing: Performed by a select group of friendly customers.
Acceptance Testing: Carried out by the customer to determine whether to accept
the delivery of the system.
In different types of system tests, the test cases may be the same, but the difference lies in who
designs and carries out the testing.
System test cases can be classified into functionality and performance test cases.
Smoke Testing
Before system testing, smoke testing is performed to check whether the main
functionalities of the software are working properly. For example, in a library automation system,
smoke tests may verify if books can be created and deleted, if member records can be created and
deleted, and if books can be loaned and returned.
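A minimal sketch of such a smoke test in Python (the Library class is a hypothetical stand-in for the
library automation system described above):

class Library:
    """Hypothetical system under test."""
    def __init__(self):
        self.books = set()

    def add_book(self, title):
        self.books.add(title)

    def delete_book(self, title):
        self.books.discard(title)

def smoke_test():
    # Only the main functionality is exercised -- enough to decide
    # whether the build is stable enough for full system testing.
    lib = Library()
    lib.add_book("SE Notes")
    assert "SE Notes" in lib.books
    lib.delete_book("SE Notes")
    assert "SE Notes" not in lib.books
    print("smoke test passed")

smoke_test()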
Performance Testing
Performance testing checks the non-functional requirements of the system. Various types of
performance testing are considered as black-box tests.
[1] Stress Testing:
Also known as endurance (strength) testing, it imposes abnormal and even illegal input conditions,
pushing factors like input data volume, input data rate, processing time, and memory utilization
beyond their designed capacity.
[2] Volume Testing:
Checks whether data structures (buffers, arrays, queues, stacks, etc.) can handle extraordinary
situations.
[3] Configuration Testing:
Tests system behavior in various hardware and software configurations specified in the
requirements.
[4] Compatibility Testing:
Required when the system interfaces with external systems, checking if interfaces perform as
required, testing speed and accuracy of data retrieval.
[5] Regression Testing:
Required when software is maintained to fix bugs or enhance functionality and performance.
[6] Recovery Testing:
Tests the system's response to faults, loss of power, devices, services, data, etc., checking if
the system recovers satisfactorily.
[7] Maintenance Testing:
Addresses testing diagnostic programs and other procedures required to help maintain the system.
[8] Documentation Testing:
Checks whether the required user manual, maintenance manuals, and technical manuals exist and
are consistent.
[9] Usability Testing:
Concerns checking the user interface to see if it meets all user requirements, testing display
screens, messages, report formats, and other aspects related to user interface requirements.
[10] Security Testing:
Tests whether the system is foolproof against security attacks such as intrusion by hackers.
Software Maintenance
Definition: Software maintenance involves making changes to a software product
after it has been delivered to the customer.
Necessity: Maintenance is expected for various reasons such as correcting errors,
enhancing features, and adapting to new platforms.
Comparison to Physical Products: Unlike physical products that may need maintenance
due to wear and tear, software products require maintenance for different reasons.
Characteristics of Software Maintenance
Importance: Software maintenance is crucial for organizations due to reasons like hardware aging,
software product immortality, and the need to adapt to newer platforms.
Platform Changes: When there are changes in the hardware platform, maintenance
becomes necessary, requiring rework on code.
Types of Software Maintenance
Corrective Maintenance: Addresses failures observed during system use.
Adaptive Maintenance: Needed when the software product must run on new platforms, operating systems,
or interface with new hardware or software.
Perfective Maintenance: Required to support new features, meet customer demands, or enhance system
performance.
Characteristics of Software Evolution
Lehman’s First Law: "A software product must change continually or become progressively less useful."
Lehman’s Second Law: "The structure of a program tends to degrade as more and more maintenance is
carried out on it."
Lehman’s Third Law: "The rate at which code is written or modified is approximately the same during
development and maintenance."
Special Problems Associated with Software Maintenance
Ad Hoc Techniques: Maintenance work is often carried out using ad hoc techniques due to neglect of
software engineering practices.
Poor Image: Software maintenance has a poor image in the industry, and organizations may not
prioritize hiring bright engineers for maintenance work.
Challenges: Despite its poor image, maintenance work can be more challenging than development,
involving understanding and modifying someone else's work.