
Coding

In the coding unit of the Software Engineering course, we dive into the practice of programming. The goal is to make you a confident, disciplined coder. We start with two ways of organizing code: from the big picture down (top-down) and from the small details up (bottom-up), both of which help us write code that is easy to understand. We then cover structured programming, a small set of control constructs that keep code neat and readable, and information hiding, the practice of concealing a module's internal details behind a clean interface so that changes stay local. From there we move on to testing: the different levels and types of testing, and how to find and fix the mistakes (bugs) they uncover. Finally, we look at software maintenance, the ongoing work of keeping code correct and improving it over time. Together, these topics prepare you for the real-world challenges of building software.

Top-Down and Bottom-Up Programming

In top-down programming, we start from the overall problem, break it into smaller subproblems, and refine each one until the pieces are simple enough to code directly. In bottom-up programming, we work the other way around: we first build and test the low-level routines and data structures, then combine them into progressively larger units. In practice, most projects mix the two approaches.

Structured Programming

A structured program is built from a small set of single-entry, single-exit control constructs: sequence (statements executed one after another), selection, and iteration. Restricting control flow to these constructs keeps a program easy to read, verify, and modify. In pseudocode, the selection and iteration constructs are:

    Selection:
        if B then S1 else S2
        if B then S1

    Iteration:
        while B do S
        repeat S until B
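
To make these constructs concrete, here is a minimal sketch in Python (the function names and values are purely illustrative): grade() uses selection, countdown() uses while-do, and double_until() emulates repeat-until, whose body always executes at least once before the exit test:

    def grade(score):
        # Selection: "if B then S1 else S2"
        if score >= 50:
            return "pass"
        return "fail"

    def countdown(n):
        # Iteration: "while B do S" -- the condition is tested before each pass
        values = []
        while n > 0:
            values.append(n)
            n -= 1
        return values

    def double_until(n, limit):
        # Iteration: "repeat S until B" -- Python has no repeat-until,
        # so we loop and test the exit condition after the body
        while True:
            n *= 2
            if n >= limit:
                return n

    print(grade(72))            # -> pass
    print(countdown(3))         # -> [3, 2, 1]
    print(double_until(1, 10))  # -> 16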

Information Hiding

Information hiding means designing each module so that its internal details, such as data structures and algorithms, are concealed from other modules and can only be reached through a small, well-defined interface. Callers depend on what a module does, not on how it does it.

Benefits of Information Hiding:

  • Changes to a module's internals stay local, making maintenance cheaper and safer.
  • Modules can be developed, tested, and understood independently.
  • Coupling between modules is reduced, since callers cannot depend on hidden details.
  • Errors are easier to localize, because hidden data can only be modified through the module's interface.
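
As a small illustration, here is a hedged sketch in Python (the Account class is invented for this example): callers use deposit() and balance and never touch the internal ledger, so its representation could change, say to a database table, without breaking any client code:

    class Account:
        def __init__(self):
            self._ledger = []          # internal detail, hidden by convention

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("amount must be positive")
            self._ledger.append(amount)

        @property
        def balance(self):
            # Derived on demand; callers never see the ledger itself
            return sum(self._ledger)

    acct = Account()
    acct.deposit(100)
    acct.deposit(50)
    print(acct.balance)   # -> 150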

Programming Style

When it comes to programming style, adhering to certain rules not only enhances the readability of your code but also helps in avoiding common errors. Typical style rules cover choosing meaningful names, keeping functions short and focused on one task, avoiding deeply nested control flow, using consistent indentation, and commenting where the intent of the code is not obvious.
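
As a hedged before-and-after sketch (the function is invented for illustration), compare a routine that ignores these rules with one that follows them:

    # Poor style: cryptic names, a magic number, no documentation
    def f(x):
        r = 0
        for i in x:
            if i > 18:
                r = r + 1
        return r

    # Better style: descriptive names, a named constant, a docstring
    ADULT_AGE = 18

    def count_adults(ages):
        """Return how many of the given ages are above the adult threshold."""
        return sum(1 for age in ages if age > ADULT_AGE)

    print(count_adults([12, 25, 40]))  # -> 2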

Internal Documentation

In the realm of software engineering, documentation extends to all written materials pertaining to the development and usage of a software product. The primary objective of effective documentation is to foster alignment among developers and stakeholders, ensuring a shared path toward achieving project objectives. Software documentation neatly falls into two main categories:

Product Documentation

  • System Documentation: Serving as a window into the system, this documentation provides an overview, aiding engineers and stakeholders in comprehending the underlying technology. It comprises essentials like the requirements document, architecture design, source code, validation docs, verification and testing info, along with a maintenance/help guide.
    • (i) Requirements Document: This cornerstone document unfolds the intricacies of system functionality, encapsulating business rules, user stories, and use cases, providing a blueprint for development.
    • (ii) Design & Architecture Document: Delving into the architectural decisions, it encompasses a design document template, architecture & design principles, user story description, solution details, and diagrammatic representation of the solution, facilitating a comprehensive understanding.
    • (iii) Source Code Document: Housing the actual source code, this component caters primarily to software engineers, providing the raw material that shapes the software product.
    • (iv) Quality Assurance Documentation: This segment incorporates various testing documents, including the strategic approach (test strategy), a snapshot of what's to be tested at a given time (test plan), detailed actions for verifying features (test case specifications), and a checklist to keep track of completed tests.
    • (v) Maintenance and Help Guide: This crucial document goes beyond problem-solving, offering insights into known issues and their solutions. It also outlines dependencies within different parts of the system, providing a guide for maintenance.
  • User Documentation: Tailored for end-users and system administrators, this branch elucidates how the software addresses their specific needs. It encompasses user-oriented resources like tutorials, FAQs, video tutorials, embedded assistance, and support portals. Additionally, it provides technical guides for system administrators, covering installation, updates, functional descriptions, and system admin guides.

Process Documentation

Process documentation takes us behind the scenes, covering all activities related to product development. It involves some upfront planning and ongoing paperwork. Common types include:

  • Plans, Estimates, and Schedules: Crafted before the project's initiation, these documents lay the groundwork for the entire development journey.
  • Reports and Metrics: Generated on a regular basis, these reports provide insights into how time and human resources are utilized during development, offering a snapshot of progress and potential areas for improvement.
  • Working Papers: Essential for recording engineers' ideas and thoughts during project implementation, these documents serve as a dynamic record of the development process.
  • Standards: This section outlines all coding and user experience (UX) standards adhered to throughout the project's progression, ensuring consistency and quality in the development process.

Testing

Software testing is a crucial process. The subsections below cover its key aspects: guiding principles, the levels of testing, test planning and test case specification, reliability assessment, testing strategies, and verification and validation.

Testing Principles

Testing principles, as suggested by Davis, along with additional insights from Everett and Meyer, guide effective testing practices:

  • Principle 1: All tests should be traceable to customer requirements, ensuring that testing uncovers errors from the customer's perspective.
  • Principle 2: Tests should be meticulously planned well in advance, initiating the planning process as soon as the requirements model is complete. Detailed test case definitions can commence post-design completion, allowing for comprehensive planning and design before any code generation begins.
  • Principle 3: The Pareto principle applies to software testing: roughly 80% of the errors uncovered during testing are likely to be traceable to 20% of the program's components. Testers should identify these suspect components and test them thoroughly.
  • Principle 4: Testing should initiate "in the small" by assessing individual components before progressing to "in the large," focusing on uncovering errors in integrated components and, ultimately, the entire system.
  • Principle 5: Acknowledging the impossibility of exhaustive testing, the approach should prioritize critical paths, recognizing that testing every possible combination is not feasible.
  • Principle 6: The testing effort devoted to a module should be proportional to the number of errors expected in it, keeping the testing approach targeted and effective.
  • Principle 7: Static testing techniques are highly significant: a large share of software defects (figures over 85% are often cited) originate in documents such as requirements, specifications, and user manuals, so reviews and code walk-throughs can remove them before any code is executed.
  • Principle 8: Tracking defects and identifying patterns in defects uncovered during testing allows for proactive problem-solving and continuous improvement.
  • Principle 9: Inclusion of test cases that demonstrate correct software behavior is essential, ensuring that the software aligns with expected functionality.

Levels of Testing

In the realm of software testing, four distinct levels of testing provide a structured approach to assess software quality:

  • Level 1: Unit Testing
    A unit is an individual function within the application, the smallest testable part of the software. Unit testing analyzes each unit or component independently. As the initial level of functional testing, its primary goal is to validate that each unit works correctly on its own.

  • Level 2: Integration Testing
    Integration testing focuses on the data flow between different modules. It comes into play after each component or module has been independently validated (Level 1), and it checks the data flow between dependent modules to ensure they integrate smoothly. Integration testing commences once unit testing has been successfully completed.

  • Level 3: System Testing
    System testing evaluates the working of the software as a whole against the specified requirements. It covers both the functional and non-functional aspects of the software to ensure overall compliance.

  • Level 4: Acceptance Testing
    Acceptance testing, also known as User Acceptance Testing (UAT), is the final level; it assesses whether the software meets its specifications and requirements. Conducted by the customer before accepting the final product, UAT is typically performed by domain experts to confirm that the application handles real business scenarios to the customer's satisfaction.

Test Plan

A test plan is a comprehensive document detailing the areas and activities of software testing. It provides an overview of the test strategy, objectives, schedule, required resources (human, software, and hardware), estimation, and deliverables. The test manager monitors and controls the test plan, which is prepared collaboratively by the test lead (roughly 60% of the effort), the test manager (20%), and the test engineers (20%).

Types of Test Plan

  • Master Test Plan: This plan encompasses multiple testing levels and includes a complete test strategy.
  • Phase Test Plan: Focused on a specific testing phase, addressing aspects like tools and test cases.
  • Specific Test Plans: Tailored for major types of testing such as security, load, and performance testing, emphasizing non-functional testing.

Test Plan Components or Attributes

  • Objectives: Describes the aim of the application, including modules, features, and test data.
  • Scope: Outlines what needs rigorous testing (in scope) and what does not (out of scope).
  • Test Methodology: Defines different testing types, like functional and integration testing.
  • Approach: Describes the flow of the application during testing for current and future reference.
  • Assumptions: Lists the conditions assumed to hold true during testing.
  • Risk: Identifies challenges and potential risks during the testing process.
  • Mitigation Plan or Contingency Plan: Outlines backup plans to overcome risks or issues.
  • Role & Responsibility: Defines the roles and responsibilities of the testing team members.
  • Schedule: Specifies timing and deadlines for each testing activity.
  • Defect Tracking: Discusses how defects are tracked, communicated, and their priorities.
  • Test Environments: Details the software and hardware configurations used for testing.
  • Entry and Exit Criteria: Specifies conditions for starting and stopping the testing process.
  • Test Automation: Decides which features to automate, the automation tool, and framework to be used.
  • Effort Estimation: Plans the effort required from each team member.
  • Test Deliverable: Lists the documents handed over to the customer, including test plan, test cases, scripts, etc.
  • Template: Provides templates for consistent document use during testing, such as test cases and bug reports.

Importance of Test Plan

  • The test plan serves as a rulebook, guiding thinking and ensuring adherence to a predefined strategy.
  • It helps determine the necessary efforts for validating the quality of the software application under test.
  • Key stakeholders outside the testing team, such as business managers and customers, can understand test details through the plan.
  • Important aspects like test schedule, strategy, and scope documented in the test plan are valuable for review and reuse in similar projects.

Test Case Specification

A test case is a set of conditions used by a tester to determine whether software is functioning according to customer requirements. The design of a test case includes preconditions, case name, input conditions, and expected results. Test cases are detailed documents containing all possible inputs and navigation steps, usually written while developers are busy writing code.
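
For instance, here is a hedged sketch of such a test case written with Python's unittest module (withdraw() is a hypothetical unit under test), showing the elements just named: a case name, a precondition, an input condition, and an expected result:

    import unittest

    def withdraw(balance, amount):
        # Hypothetical unit under test
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    class TestWithdraw(unittest.TestCase):
        def test_withdraw_within_balance(self):       # case name
            balance = 100                             # precondition
            new_balance = withdraw(balance, 30)       # input condition
            self.assertEqual(new_balance, 70)         # expected result

        def test_withdraw_over_balance_is_rejected(self):
            with self.assertRaises(ValueError):       # expected result
                withdraw(50, 80)

    if __name__ == "__main__":
        unittest.main()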

The Test Case Specification document, the final publication by the testing team, follows a specific format:

a) Objectives

The purpose of testing is detailed here, including relevant and crucial information.

b) Preconditions

This section lists the items and documents required before executing a particular test case. It describes features and conditions necessary for testing.

c) Input Specifications

Once preconditions are defined, the team collaborates to identify all inputs required for executing the test cases.

d) Output Specification

This includes all outputs necessary to verify the test case.

e) Post Conditions

This section defines the environmental requirements for running the test cases and identifies any special requirements and constraints, covering details such as:

  • Hardware: Configuration and limitations.
  • Software: System, operating system, tools, etc.
  • Procedural Requirements: Special setup, output location & identification, operations interventions, etc.

Reliability Assessment

Reliability is an essential attribute of a software product: a reliable product performs consistently under varying environmental conditions. Reliability testing verifies whether the software can achieve failure-free operation for a specified period of time in a given technological environment.

Types of Reliability Testing

In software testing, reliability testing is categorized into three types:

a. Feature Testing

The main goal of feature testing is to assess the attributes and functionality of the software product, ensuring system correctness. Characteristics checked in feature testing include:

  • All functions need to be executed at least once by the team.
  • Each function must be implemented completely.
  • The team should check the proper implementation of each operation.
  • Communication between two or more functions has to be validated.

b. Regression Testing

Regression testing re-tests the parts of the application that should be unaffected by a change. It ensures that the code still functions correctly after modifications made during bug fixing, and it helps identify new errors that those changes may have introduced.
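
As a hedged illustration (average() and its bug are invented for this example), a regression suite keeps a test that pins down a previously fixed defect alongside tests for behavior that should not have changed:

    def average(prices):
        # Fixed version: an empty cart used to raise ZeroDivisionError here
        if not prices:
            return 0.0
        return sum(prices) / len(prices)

    def test_average_empty_cart_regression():
        assert average([]) == 0.0           # guards the earlier bug fix

    def test_average_unchanged_behavior():
        assert average([2.0, 4.0]) == 3.0   # unchanged path still works

    test_average_empty_cart_regression()
    test_average_unchanged_behavior()
    print("regression tests passed")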

c. Load Testing

Load testing assesses how the software functions under its maximum designed workload. It is performed by applying loads up to and including that desired level. In load testing, "load" refers to the number of users using the application simultaneously, or sending requests to the server, at a given time.
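
The sketch below shows the idea in plain Python; the URL is a placeholder for a hypothetical application under test, and in practice dedicated tools such as JMeter or Locust are normally used instead:

    import concurrent.futures
    import time
    import urllib.request

    URL = "http://localhost:8000/"   # placeholder: hypothetical app under test
    USERS = 50                       # simulated concurrent users

    def one_request(_):
        # One simulated user: time a single request to the server
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start

    # Apply the load: all simulated users send requests at the same time
    with concurrent.futures.ThreadPoolExecutor(max_workers=USERS) as pool:
        timings = list(pool.map(one_request, range(USERS)))

    print(f"mean response: {sum(timings) / len(timings):.3f}s, "
          f"worst: {max(timings):.3f}s")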

Software Testing Strategies

Software testing involves various strategies in software engineering to ensure the reliability and effectiveness of a software product. Here are some important testing strategies:

Unit Testing

This basic software testing approach is followed by the programmer to test individual units of the program. It helps developers determine whether each unit of code works properly on its own.

Integration Testing

Integration testing focuses on the construction and design of the software. The goal is to ensure that integrated units work seamlessly without errors. This testing strategy verifies the interaction and collaboration between different units of the software.

System Testing

In system testing, the software is compiled as a whole and then tested comprehensively. This testing strategy checks the overall functionality, security, portability, and other aspects of the software. It evaluates how the entire system behaves and performs when all components are integrated.

Verification & Validation

Software testing plays a pivotal role in the broader context of verification and validation (V&V). Verification ensures that the software correctly implements specific functions, while validation ensures alignment with customer requirements.

Boehm [Boe81] succinctly describes V&V:

Verification: "Are we building the product right?"
Validation: "Are we building the right product?"

V&V covers various Software Quality Assurance (SQA) activities, such as technical reviews, audits, performance monitoring, simulation, feasibility studies, documentation reviews, and testing (e.g., development, usability, acceptance).

Verification Testing

Verification testing involves static activities such as reviews of business and system requirements, design reviews, and code walkthroughs. It ensures that the product is being built right, that is, that each work product conforms to its specified requirements.

Validation Testing

Validation testing assesses both functional (Unit Testing, Integration Testing, System Testing) and non-functional (User Acceptance Testing) aspects of the software. It is a dynamic process ensuring that the right product has been built, one that meets the client's business needs.

Difference between Verification and Validation Testing

  • Execution:
    • Verification testing does not involve the execution of code; it focuses on ensuring the development process aligns with requirements.
    • In validation testing, the code is executed to ensure the software meets specified business requirements.
  • Bug Identification:
    • Verification testing is effective in identifying bugs early in the development phase.
    • Validation testing is crucial for catching bugs that may not be discovered in the verification process.
  • Responsible Team:
    • Verification testing is typically executed by the Quality Assurance team, ensuring adherence to customer requirements during development.
    • Validation testing is conducted by the testing team, focusing on the end product's functionality and user acceptance.
  • Sequence:
    • Verification is performed before validation testing in the software development life cycle.
    • After verification testing, the validation testing phase takes place.
  • Focus:
    • Verification checks that the work products of each phase conform to their specified requirements.
    • Validation ensures that the user accepts and approves the final product.

Unit Testing

A software product undergoes testing in three stages: unit testing, integration testing, and system testing.

During unit testing, individual functions or units of a program are tested. Once all units have been tested individually, they are integrated incrementally and tested at each integration step. Finally, the fully integrated system undergoes system testing. Unit testing takes place after the coding of a module is complete and its syntax errors have been resolved. Typically, the coder of the module carries out this activity, preparing both the test cases and the testing environment.

Driver and Stub Modules:

To test a single module in isolation, a complete environment is needed for it to execute, including the procedures it calls (which may not have been written yet) and the code and data needed to invoke it with test inputs.

Stubs and drivers are designed to provide this environment. Their roles are illustrated in the figure below.

Figure: Unit testing with the help of driver and stub modules.

Stub: A stub module consists of several stub procedures called by the module under test.

Driver: A driver module contains non-local (global) data structures accessed by the module under test. It also includes code to call different functions of the unit under test with appropriate parameter values for testing.
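
The sketch below shows the idea in Python; summarize(), fetch_sales_stub, and all other names are hypothetical. The stub stands in for a procedure the unit calls that is not yet available, and the driver supplies test parameters and checks the result:

    def fetch_sales_stub(region):
        # Stub: a dummy version of a procedure the unit under test calls,
        # returning canned data instead of querying a real data source
        return [10, 20, 30]

    def summarize(region, fetch=fetch_sales_stub):
        # Module under test: totals the sales figures for a region
        return sum(fetch(region))

    def driver():
        # Driver: supplies test parameters, invokes the unit under test,
        # and checks the result against the expected value
        result = summarize("north")
        assert result == 60, f"expected 60, got {result}"
        print("unit test passed")

    driver()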

Integration Testing

Integration testing is the second level of the software testing process that follows unit testing. This testing phase involves testing units or individual components of the software in a group. The primary focus is on exposing defects during the interaction between integrated components or units.

When all components or modules work independently, the data flow between dependent modules needs to be verified, known as integration testing. The main objective is to test module interfaces, ensuring error-free parameter passing when one module invokes another module's functionality.

During integration testing, different system modules are integrated systematically using an integration plan that specifies steps and the order of module combinations. After each integration step, the partially integrated system undergoes testing. Various approaches can be used for integration testing:

  • Big-bang approach to integration testing: Integrates all modules in a single step, suitable for small systems, but challenging for error localization in large systems.
  • Bottom-up approach to integration testing: Integrates and tests modules for each subsystem first, then tests the subsystem, allowing testing of disjoint subsystems simultaneously.
  • Top-down approach to integration testing: Starts with testing the root module, gradually integrating and testing modules at lower layers, requiring only stubs, but may face challenges in exercising top-level routines without lower-level routines.
  • Mixed approach to integration testing: A combination of top-down and bottom-up approaches, addressing the shortcomings of each. Testing can start as modules become available after unit testing, making it a commonly used integration testing approach. Both stubs and drivers are required in this approach.
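
As a hedged illustration of what an integration test checks, the sketch below verifies the interface between two hypothetical modules: parse() (module A) produces a record that compute_total() (module B) consumes, and the test confirms that parameters pass across that boundary correctly:

    def parse(line):
        # Module A: turn "item,qty,price" into a record
        item, qty, price = line.split(",")
        return {"item": item, "qty": int(qty), "price": float(price)}

    def compute_total(record):
        # Module B: consume module A's record
        return record["qty"] * record["price"]

    # Integration step: check that data flows across the A -> B interface
    record = parse("widget,3,2.50")
    assert compute_total(record) == 7.5
    print("integration test passed: interface parameters agree")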

System Testing and Debugging

There are two widely used methods for designing tests: white-box testing, which uses the internal code structure to design test cases, and black-box testing, which designs test cases from the GUI or the user's perspective, without reference to the code.

System testing falls under black-box testing, since it exercises the external behavior of the software from the user's perspective.

After all the units of a program have been integrated and tested, system testing begins. The procedures for system testing are the same for both object-oriented and procedural programs. System test cases are designed based on the Software Requirements Specification (SRS) document.

There are three main types of system testing:

  1. Alpha Testing: Conducted by the test team within the developing organization.
  2. Beta Testing: Performed by a select group of friendly customers.
  3. Acceptance Testing: Carried out by the customer to determine whether to accept the delivery of the system.

In different types of system tests, the test cases may be the same, but the difference lies in who designs and carries out the testing.

System test cases can be classified into functionality and performance test cases.

Smoke Testing

Before system testing, smoke testing is performed to check whether the main functionalities of the software are working properly. For example, in a library automation system, smoke tests may verify if books can be created and deleted, if member records can be created and deleted, and if books can be loaned and returned.
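
A hedged sketch of such a smoke suite is shown below; the tiny in-memory Library class is a hypothetical stand-in for the real system, since the point of a smoke test is only to confirm that the main functions run at all:

    class Library:
        # Hypothetical stand-in for the library automation system
        def __init__(self):
            self.books = {}
            self.members = {}
            self.loans = set()

        def create_book(self, title):
            self.books[title] = title
            return title

        def delete_book(self, book):
            del self.books[book]

        def create_member(self, name):
            self.members[name] = name
            return name

        def delete_member(self, member):
            del self.members[member]

        def issue_book(self, book, member):
            self.loans.add((book, member))

        def return_book(self, book, member):
            self.loans.remove((book, member))

    lib = Library()
    book = lib.create_book("Clean Code")   # can books be created?
    member = lib.create_member("Asha")     # can member records be created?
    lib.issue_book(book, member)           # can books be loaned?
    lib.return_book(book, member)          # ...and returned?
    lib.delete_member(member)              # can records be deleted?
    lib.delete_book(book)
    print("smoke test passed: core library operations work")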

Performance Testing

Performance testing checks the non-functional requirements of the system. The various types of performance testing below are all regarded as black-box tests.

  1. Stress Testing: Also known as endurance (strength) testing, it imposes abnormal and even illegal input conditions, stressing factors such as input data volume, input data rate, processing time, and memory utilization beyond the designed capacity.
  2. Volume Testing: Checks whether data structures (buffers, arrays, queues, stacks, etc.) can handle extraordinary situations (see the sketch after this list).
  3. Configuration Testing: Tests system behavior in the various hardware and software configurations specified in the requirements.
  4. Compatibility Testing: Required when the system interfaces with external systems; checks whether those interfaces perform as required, including the speed and accuracy of data retrieval.
  5. Regression Testing: Required whenever software is maintained to fix bugs or enhance functionality or performance.
  6. Recovery Testing: Subjects the system to faults such as loss of power, devices, services, or data, and checks whether the system recovers satisfactorily.
  7. Maintenance Testing: Addresses the diagnostic programs and other procedures required to help maintain the system.
  8. Documentation Testing: Checks whether the required user manuals, maintenance manuals, and technical manuals exist and are consistent.
  9. Usability Testing: Checks whether the user interface meets all user requirements, covering display screens, messages, report formats, and other user-interface aspects.
  10. Security Testing: Tests whether the system withstands security attacks, such as intrusion by hackers.
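
As one concrete illustration, here is a hedged volume-test sketch in Python: it fills a bounded queue (a stand-in for the buffers mentioned under volume testing) to its designed capacity and checks that the element beyond capacity is rejected cleanly rather than corrupting data:

    import queue

    CAPACITY = 1000
    buf = queue.Queue(maxsize=CAPACITY)

    for i in range(CAPACITY):          # load up to the designed limit
        buf.put_nowait(i)
    assert buf.full()

    try:
        buf.put_nowait(CAPACITY)       # one element past the limit
        raise AssertionError("overflow was silently accepted")
    except queue.Full:
        print("volume test passed: overflow rejected cleanly")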

Software Maintenance

Characteristics of Software Maintenance

  • Importance: Software maintenance is crucial for organizations: hardware platforms age and get replaced, yet successful software products live on for decades and must continually be adapted to newer platforms and changing needs.
  • Platform Changes: When there are changes in the hardware platform, maintenance becomes necessary, requiring rework on code.

Types of Software Maintenance

  • Corrective Maintenance: Addresses failures observed during system use.
  • Adaptive Maintenance: Needed when the software product must run on new platforms, operating systems, or interface with new hardware or software.
  • Perfective Maintenance: Required to support new features, meet customer demands, or enhance system performance.

Characteristics of Software Evolution

  • Lehman’s First Law: "A software product must change continually or become progressively less useful."
  • Lehman’s Second Law: "The structure of a program tends to degrade as more and more maintenance is carried out on it."
  • Lehman’s Third Law: "The rate at which code is written or modified is approximately the same during development and maintenance."

Special Problems Associated with Software Maintenance

  • Ad Hoc Techniques: Maintenance work is often carried out using ad hoc techniques due to neglect in software engineering practices.
  • Poor Image: Software maintenance has a poor image in the industry, and organizations may not prioritize hiring bright engineers for maintenance work.
  • Challenges: Despite its poor image, maintenance work can be more challenging than development, involving understanding and modifying someone else's work.