Software Requirement Engineering
Software Requirement Engineering (SRE) is a critical phase in the software development
lifecycle that focuses on identifying, documenting, and managing the requirements of a software project. It
is an essential discipline that lays the foundation for the entire software development process.
Importance of Software Requirement Engineering:
SRE is of paramount importance because it serves as the bridge between the needs and expectations of
stakeholders (clients, users, and developers) and the final software product. It helps ensure that the
resulting software system meets the intended purpose and satisfies user requirements.
Key Objectives of Software Requirement Engineering:
SRE has several key objectives, including:
- Understanding and documenting user needs, system functionality, and constraints.
- Creating clear and unambiguous software requirements specifications.
- Managing and prioritizing requirements effectively throughout the development lifecycle.
- Ensuring that requirements remain consistent and traceable as the project progresses.
The Software Requirement Engineering Process:
The SRE process typically encompasses the following stages:
- Requirement Elicitation: Gathering requirements through interviews, surveys, workshops,
and other techniques.
- Requirement Analysis: Analyzing and refining gathered requirements for clarity,
feasibility, and completeness.
- Requirement Specification: Documenting requirements using structured documents,
diagrams, and models.
- Requirement Validation: Verifying that documented requirements align with stakeholder
needs and expectations.
- Requirement Management: Tracking changes, managing dependencies, and ensuring
requirements traceability.
Successful Software Requirement Engineering is a collaborative effort involving project stakeholders,
business analysts, and development teams. It helps mitigate project risks, prevent scope creep, and ensure
that the resulting software system aligns with business goals and user requirements.
Requirement Elicitation
Requirement Elicitation is a vital phase in software development that focuses on
identifying, gathering, and documenting requirements from clients and stakeholders. To understand this
process fully, it's essential to first explore Requirement Engineering, a foundational
step in the journey of software development.
1) Requirement Engineering
Requirement Engineering is a crucial phase in software development that involves
collecting, analyzing, and documenting requirements from clients. It's a four-step process:
1.1 Feasibility Study
In the Feasibility Study phase:
- Analysts assess the initial feasibility of the desired software functions.
- This involves a detailed study to determine if the system's functionality is feasible to
develop.
1.2 Requirement Gathering
If the feasibility report is positive:
- The next phase is Requirement Gathering.
- Analysts and engineers communicate with clients and end-users to understand the features
they want in the software.
1.3 Software Requirement Specification (SRS)
After collecting requirements:
- A Software Requirement Specification (SRS) document is created.
- It defines how the software interacts with hardware, interfaces, speed of operation,
response time, portability, security, quality, and limitations.
- This transforms client requirements into technical language.
1.4 Software Requirement Validation
Once requirement specifications are developed:
- Software Requirement Validation takes place.
- It ensures that requirements are valid and free from illegal or impractical solutions.
- This step helps prevent cost overruns and misinterpretations.
2) Requirement Elicitation Process
The Requirement Elicitation Process involves several key steps for effectively gathering and managing requirements:
Requirements Gathering
Requirements Gathering is where developers:
- Discuss expectations with clients and end-users to collect detailed requirements.
Organizing Requirements
Developers prioritize and arrange requirements:
- Based on their importance, urgency, and convenience to maintain project focus.
Negotiation & Discussion
If there are ambiguities or conflicts in requirements:
- They are negotiated and discussed with stakeholders to ensure clarity.
Documentation
Documentation involves:
- Recording all formal and informal, functional and non-functional requirements for future
reference and processing in the development lifecycle.
Requirement Elicitation Techniques
Requirement elicitation is the vital process of gathering and discovering the needs and
expectations of stakeholders for a software system. It's like uncovering the puzzle pieces that will
help build the right software. There are several techniques to accomplish this, each tailored to
different situations and preferences:
Interviews
Interviews are like friendly conversations with key people involved in the project.
They are a powerful way to collect requirements because they allow you to directly interact with
stakeholders. There are different types of interviews to choose from:
- Structured (closed) interviews: These interviews follow a predetermined set of
questions. They are like a checklist and ensure that specific information is collected.
- Non-structured (open) interviews: In these interviews, you don't have a fixed
script. You let the conversation flow, which can lead to unexpected insights and ideas.
- Oral interviews: These are spoken conversations where you discuss requirements
in person or over the phone.
- Written interviews: You provide a list of questions in writing, and
stakeholders respond in writing as well.
- One-to-one interviews: These are one-on-one conversations with individual
stakeholders, ensuring their unique needs are addressed.
- Group interviews: Multiple stakeholders are brought together for discussions.
It can be an efficient way to gather input from several people at once.
Surveys
Surveys are like questionnaires sent out to a larger group of people. They are useful when you want to gather information from many stakeholders efficiently. You send out a set of questions, and people fill them out and return them.
Questionnaires
Questionnaires are a bit like surveys, but they typically involve multiple-choice
questions. Respondents select answers from predefined options, making it easier to analyze their
responses.
Domain Analysis
Domain Analysis is about seeking help from experts who understand the specific field related to the software. These experts have deep knowledge of the domain, and their insights can be invaluable in identifying requirements that may not be obvious to others.
Brainstorming
Brainstorming is a creative session where various stakeholders come together for a free-flowing discussion. It's like a brainstorming party for ideas! All inputs and ideas are recorded, and it's a great way to encourage creativity.
Prototyping
Prototyping involves creating a basic model of the software and showing it to users and analysts. Think of it as a sneak peek. By seeing and interacting with the prototype, stakeholders can provide feedback on what they like or need changed.
Observation
Observation is like being a detective. A team of experts visits the client's workplace to observe how their current systems work. They watch the flow of tasks, how problems are solved, and take notes. These observations help understand what the new software needs to address.
Problem Analysis
Problem analysis is a crucial step in software development that aims to understand the needs of clients and users. Analysts take on the role of consultants, helping clients identify their requirements. This process involves breaking down complex problems into manageable subproblems and understanding their relationships.
Methods for Problem Analysis
- Informal Approach
- The Structured Analysis Method
- Data dictionary
3.1 Informal Approach
- The informal approach to problem analysis is flexible and doesn't rely on predefined
methodologies.
- It involves direct interactions with clients, end-users, questionnaires, studying existing
documents, and brainstorming.
- Analysts build a mental model of the problem and system, translating their understanding
into the Software Requirements Specification (SRS).
- This often includes a series of meetings where clients explain their work, environment, and
needs, with the analyst acting as a listener and later as an explainer.
- An initial draft of the SRS may be created in the final meetings.
3.2 The Structured Analysis Method
- The structured analysis method views the system as a transformation function that takes
inputs from the environment and produces outputs.
- For complex systems, this function is divided into sub-functions to improve
understanding.
- Sub-functions can be further partitioned until each becomes easily understandable.
- Data Flow Modeling and Data Flow Diagrams (DFDs) play a key role in this approach.
Data Flow Modeling
Data Flow Modeling is a structured analysis technique that focuses on function-based decomposition of a problem. It helps in understanding how a system functions by showing the flow of data. In a DFD, a system is seen as a function that transforms inputs into outputs.
Data Flow Diagram (DFD)
- DFD is a graphical representation of the "flow" of data through an information system,
modelling its process aspects.
Data Flow Diagrams (DFDs), also called data flow graphs, are used during problem analysis. DFDs are very useful in understanding a system. They visually represent the flow of data in a system and view it as a function that transforms inputs into outputs.
Components of DFD
A DFD consists of four symbols representing data flows, data stores, processes, and sources/sinks (external entities).
- Data Flow: Represents the movement of data in the system, shown with arrows. The arrow's tail indicates the source, and the head indicates the destination.
- Data Store: Denotes data storage, often a database, shown as a rectangle with both smaller sides missing (two parallel lines).
- Process: Represents tasks performed on data and is depicted as a circle (sometimes as a rectangle).
- Source/Sink (Entities): The origin or destination of data, sometimes referred to as external entities. Anything that provides data to your system/software is an external entity; it could be a person, a system, or an organization.
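The four symbols can also be sketched as a small data model, which is handy when reasoning about a diagram or checking it by script. This is an illustrative sketch only; the class names (`Node`, `DataFlow`) are our own, not part of any DFD notation standard.

```python
from dataclasses import dataclass

# Processes, data stores, and external entities are nodes;
# data flows are the labelled arrows connecting them.
@dataclass(frozen=True)
class Node:
    name: str
    kind: str  # "process", "store", or "entity"

@dataclass(frozen=True)
class DataFlow:
    label: str    # what data moves along the arrow
    source: Node  # tail of the arrow
    target: Node  # head of the arrow

customer = Node("Customer", "entity")
place_order = Node("Place Order", "process")
orders = Node("Orders", "store")

diagram = [
    DataFlow("order details", customer, place_order),
    DataFlow("order record", place_order, orders),
]

for f in diagram:
    print(f"{f.source.name} --[{f.label}]--> {f.target.name}")
```

Each arrow of the diagram then becomes one `DataFlow` value, which makes properties like "every process has inputs and outputs" easy to inspect.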
Levels of DFD
- In software engineering, DFDs can be drawn to represent the system at different levels
of abstraction.
- Higher-level DFDs are partitioned into lower levels, revealing more information and
functional elements.
- Levels in DFDs are numbered as 0, 1, 2, or beyond.
0-Level DFD (Context Diagram)
- A 0-level DFD, also known as a context diagram, provides an overview of the entire
system in a simplified manner.
- In this diagram, the system is represented as a single process or "bubble." This
bubble symbolizes the entire system, and it is the center of attention.
- External entities, which interact with the system, are depicted as separate entities
outside the central bubble.
- Input data from external entities is illustrated by incoming arrows, while output
data from the system is represented by outgoing arrows.
- The 0-level DFD serves as a high-level abstraction that shows how the system
interacts with its external environment without delving into detailed internal
processes.
- It is a crucial starting point for understanding the system's boundaries and the
flow of data between the system and external entities.
- As an example, consider a Level 0 Data Flow Diagram (DFD) for a railway reservation system, which illustrates the core processes and data interactions in that system.
- Three primary entities are involved in this system:
- Passenger: Represented as a source entity, passengers initiate
actions like making reservations and cancellations.
- Railway Reservation: Represented as a central process, this
entity handles the core functionalities of the system, including reservations
and cancellations.
- Admin: Also depicted as a source entity, admins manage the
system, oversee operations, and gather information.
- Data flows from the Passenger entity to the Railway
Reservation process, indicating actions such as making reservations,
cancellations, and providing ticket information.
- Conversely, data flows from the Railway Reservation process to the
Admin entity, encompassing information like train schedules,
reservation/cancellation status, and passenger details.
- The processes in the Level 0 DFD include:
- Cancellation: Represents the process of canceling reservations
initiated by passengers.
- Reservation: Signifies the process of making reservations as
requested by passengers.
- Ticket Info: Involves the management and provision of
ticket-related information to passengers.
- Up/Down Train Info: Covers the dissemination of train schedule
information to admin for efficient system management.
- Reserve/Cancel Info: Informs the admin about reservation and
cancellation status for operational oversight.
- Passenger Info: Provides essential passenger data to the admin
for passenger management purposes.
- This Level 0 DFD serves as an overview of the railway reservation system, showcasing
the primary entities, their interactions, and key processes without diving into
detailed subprocesses.
1-level DFD
- The 1-level Data Flow Diagram (DFD) is the next step after the context diagram
(0-level DFD).
- In this level, we decompose the high-level process from the context diagram into
multiple subprocesses or bubbles.
- While the 0-level DFD provides an overview of the entire system, the 1-level DFD
focuses on breaking down the main functions of the system into more detailed
processes.
- Each bubble or process in the 1-level DFD represents a specific function or
operation within the system.
- The 1-level DFD acts as an intermediate step between the context diagram and
lower-level DFDs, allowing us to further refine and detail the system's processes.
- It provides a more granular view of how data flows between processes, entities, and
data stores within the system.
- Subprocesses identified in the 1-level DFD are typically expanded further in
subsequent levels (2-level, 3-level, etc.) to achieve a deeper understanding of the
system's operations.
2-level DFD
- The 2-level Data Flow Diagram (DFD) takes us a step further into the system's
details compared to the 1-level DFD.
- In this level, we delve deeper into the subprocesses that were identified in the
1-level DFD.
- While the 1-level DFD provides an overview of the system's main functions, the
2-level DFD offers more specific and detailed information about how those functions
work.
- It allows us to plan, document, or analyze the inner workings of the system with
greater precision.
- The 2-level DFD is particularly useful for capturing specific details of data flow,
processes, entities, and data stores within the system.
- Subprocesses identified in the 1-level DFD are further decomposed into smaller, more
manageable processes in the 2-level DFD.
- By using the 2-level DFD, we can identify the steps involved in each subprocess and
gain a clearer understanding of how data is transformed and processed.
- This level of detail is essential for designing, implementing, or troubleshooting
complex systems.
Rules for Creating DFD
- The data can't flow directly from one external entity (source) to another. There must be
a process in between. Data can flow between processes or from processes to external
entities.
- Each process should have both incoming and outgoing data flows; a process cannot have only inputs or only outputs.
- Don't display files (storage) in the 0 level of the DFD.
- Entity names should be clear and easily understandable without requiring additional
explanations.
- Processes should be numbered or listed in an ordered manner for easy reference and
understanding.
- DFDs should maintain consistency across all levels and diagrams.
- A single DFD can have a maximum of 9 processes and a minimum of 3 processes.
- The analyst should be vigilant for common errors, including:
- Unlabeled data flows.
- Missing data flows, where information required by a process is not available.
- Extraneous data flows, where some information is not being used in the process.
- Inconsistency in maintaining the diagram during refinement.
- Missing processes that should be included in the DFD.
- Including control information that a DFD should not contain.
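Several of these rules are mechanical enough to check automatically. The sketch below validates two of them (no direct entity-to-entity flow; every process has both incoming and outgoing flows) over a toy diagram. The tuple representation is our own assumption, not a standard.

```python
# Each flow is a tuple (source_name, source_kind, dest_name, dest_kind),
# where kind is one of "process", "entity", or "store".
def check_rules(flows):
    errors = []
    # Rule: data must not flow directly between two external entities.
    for src, src_kind, dst, dst_kind in flows:
        if src_kind == "entity" and dst_kind == "entity":
            errors.append(f"direct entity-to-entity flow: {src} -> {dst}")
    # Rule: every process needs both incoming and outgoing flows.
    processes = {name for f in flows
                 for name, kind in ((f[0], f[1]), (f[2], f[3]))
                 if kind == "process"}
    for p in processes:
        has_in = any(dst == p for _, _, dst, _ in flows)
        has_out = any(src == p for src, _, _, _ in flows)
        if not (has_in and has_out):
            errors.append(f"process '{p}' lacks an incoming or outgoing flow")
    return errors

good = [("Customer", "entity", "Validate Order", "process"),
        ("Validate Order", "process", "Orders", "store")]
bad = [("Customer", "entity", "Supplier", "entity")]

print(check_rules(good))  # []
print(check_rules(bad))   # ['direct entity-to-entity flow: Customer -> Supplier']
```

Checks for the remaining rules (unlabeled flows, process numbering, consistency across levels) could be added to the same function in the same style.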
Advantages of Data Flow Diagram (DFD)
- Understanding System Functionality: DFDs help us grasp how a system
functions and its operational boundaries.
- Visual Clarity: DFDs provide a clear and visual representation, making
it easy to understand and visualize system components.
- Detailed Representation: DFDs offer a detailed and well-explained
diagram of various components within a system.
- Documentation: They are commonly used as part of system documentation
files, aiding in system analysis and design.
- Accessible to All: DFDs are comprehensible by both technical and
non-technical individuals due to their simplicity and clarity.
3.3 Data Dictionary
- A Data Dictionary is a collection of names, definitions, and attributes of the data elements that are being used or captured in a database, information system, or research project.
Definition:
- A data dictionary is a centralized repository that stores metadata about data elements,
including their names, descriptions, data types, constraints, and relationships.
Purpose:
- Data Clarity: It ensures that everyone involved in the project understands the data being
used.
- Consistency: It promotes uniformity in data naming and usage.
- Data Quality: It helps in maintaining data integrity by specifying constraints.
- Documentation: It serves as documentation for data-related decisions and definitions.
Components:
- Data Element: A specific piece of data with a unique name, e.g., "CustomerID."
- Data Type: The type of data a data element can hold (e.g., string, integer, date).
- Description: A brief explanation of what the data element represents.
- Constraints: Rules or limitations on the data (e.g., maximum length, allowed values).
- Relationships: How data elements relate to each other (e.g., foreign keys in a database).
Example:
- Let's say you're designing a database for a library management system. Here's an example of
data dictionary entries for two data elements:
- Data Element: BookID
- Data Type: Integer
- Description: A unique identifier for each book in the library.
- Constraints: Must be unique and not null.
- Data Element: Author
- Data Type: String
- Description: The name of the book's author.
- Constraints: Maximum length of 100 characters.
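The two library entries above can be captured directly in code. A minimal sketch using a Python dataclass (the field names are our own choice, not a standard schema):

```python
from dataclasses import dataclass, field

@dataclass
class DataElement:
    """One data-dictionary entry: name, type, meaning, and constraints."""
    name: str
    data_type: str
    description: str
    constraints: list = field(default_factory=list)

data_dictionary = {
    "BookID": DataElement(
        "BookID", "Integer",
        "A unique identifier for each book in the library.",
        ["unique", "not null"],
    ),
    "Author": DataElement(
        "Author", "String",
        "The name of the book's author.",
        ["max length 100"],
    ),
}

# Look up an element during requirements analysis or schema design.
print(data_dictionary["Author"].constraints)  # ['max length 100']
```

Storing entries in one shared structure like this is what gives the dictionary its consistency and communication benefits.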
Benefits:
- Consistency: Ensures that data is used consistently throughout the software.
- Communication: Facilitates communication between developers, analysts, and stakeholders.
- Maintenance: Simplifies maintenance and updates to the data model.
- Data Governance: Supports data governance by defining ownership and access rights.
Drawbacks:
- Initial Effort: Creating and maintaining a data dictionary can be time-consuming.
- Complexity: For large systems, the data dictionary can become complex.
- Overhead: It adds some overhead to the development process.
Usage:
- During Requirements Analysis: It helps in understanding and specifying data requirements.
- Database Design: It aids in designing the database schema.
- Documentation: It serves as a reference for developers and analysts.
- Data Governance: It supports data governance and compliance efforts.
Object-Oriented Modeling (OOM)
- Object-Oriented Modeling (OOM) represents a paradigm shift in problem-solving. It revolves around
the idea of visualizing problems using models that are organized around real-world concepts. In OOM,
a problem is approached by identifying and representing the fundamental entities, their attributes,
and how they interact with each other in the context of the problem domain. This shift towards a
more intuitive and real-world-centric perspective makes OOM a powerful approach in software
engineering, as it allows for the creation of software systems that closely mirror the structures
and behaviors of the real world.
- Object-Oriented Models are graphical representations used in software engineering to visualize and
design complex systems.
- A well-crafted model is a powerful tool for facilitating communication and collaboration among
project teams, as it provides a common visual language.
- OOM (Object-Oriented Modeling) is particularly suitable for handling complex systems where various
components interact.
- During Object-Oriented Modeling, the focus is on the identification and organization of application
components with respect to their domain, rather than their final representation in any specific
programming language.
- Once the modeling phase is completed for an application, it can be implemented in any suitable
programming language, leveraging the design insights gained during modeling.
- OOM encourages software developers to think in terms of the application domain throughout most of
the software engineering life cycle, fostering a more holistic and domain-driven approach.
OOM Processes
- System Analysis: In this initial phase, the problem statement is formulated. An analysis model is constructed by the analyst, highlighting the essential properties associated with the situation. The analysis model serves as a concise and precise abstraction, outlining how the desired system should be developed.
- System Design: At the system design stage, the complete architecture of the system is designed. This phase involves dividing the entire system into subsystems, based on the insights gained from the system analysis model and the proposed overall system architecture.
- Object Design: In the object design phase, a detailed design model is developed based on the analysis model created earlier. Object design decisions involve specifying the data structures and algorithms required to implement each of the classes identified in the system.
- Final Implementation: The final implementation phase involves translating the design into actual code. This stage includes developing classes, relationships, and other components using a specific programming language, database, or hardware implementation, as needed.
Object-Oriented Modeling Models
- Object Model: The object model describes the objects within the system and their relationships with each other. It provides a structural view of the system, highlighting the various entities and how they interact.
- Dynamic Model: The dynamic model focuses on illustrating the interactions among objects and the flow of information within the system. It emphasizes the behavior and temporal aspects of the system's operation.
- Functional Model: The functional model is concerned with defining data transformations within the system. It describes how data is processed and transformed as it moves through the system's components, emphasizing the functional aspects of the system's behavior.
Features of Object-Oriented System
- Encapsulation: Encapsulation is a fundamental concept in object-oriented programming. It involves combining both data and the functions (or methods) that operate on that data into a single unit called an "object." This encapsulation hides the internal details of an object from the rest of the system and exposes only the necessary functionality through the class's methods. It helps in maintaining data integrity and reducing complexity.
- Abstraction: Abstraction is the process of simplifying complex reality by modeling classes based on the essential characteristics of objects from the user's perspective. It involves selecting the necessary attributes and methods that define an object while ignoring irrelevant details. Abstraction allows developers to create models that are easier to understand and work with.
- Relationships: In an object-oriented system, classes are interconnected, and objects don't exist in isolation. There are three primary types of object relationships:
    - Aggregation: This relationship indicates a whole-part relationship between objects. For example, a "Car" object can be composed of "Engine," "Wheels," and other components.
    - Association: Association represents a connection between two classes, where one class interacts with or is somehow connected to another class. For instance, one class may collaborate with another class to perform a specific task.
    - Generalization: Generalization implies that one class is based on another class. It signifies that two classes share common characteristics but may also have differences. Generalization represents an "is-a-kind-of" relationship. For example, "Saving Account" is a kind of "Account."
- Class and Objects: In object-oriented programming, a class is a blueprint that defines the attributes (properties) and methods (functions) that objects created from that class will have. Objects are instances of classes and represent specific instances of the concepts defined by the class. Each object has its own set of attribute values.
- Message Passing: Objects communicate with each other by sending messages. When one object wants another object to perform a specific method, it sends a message to the target object, initiating the desired action. This mechanism allows objects to interact and collaborate in an object-oriented system.
- Links and Association: Links and associations are used to depict relationships among objects and classes:
    - Links: Links represent physical or conceptual connections between objects. For example, a link might represent that "Student Ravi studies at GEHU."
    - Association: An association is a collection of links with a common structure and meaning. It represents a type of relationship shared by multiple objects or classes. For instance, "Students study at GEHU" represents an association where all links connect students to the university.
- Multiplicity: Multiplicity in an association specifies how many objects participate in a particular relationship. It defines whether the relationship is one-to-one, one-to-many, or many-to-many, indicating the number of objects involved.
- Aggregation: Aggregation is a specialized form of association used to model "part-whole" or "a-part-of" relationships. It represents an aggregate (the whole) that is composed of individual parts. Aggregation helps in modeling complex structures.
- Generalization and Inheritance: Generalization and inheritance are powerful abstractions that enable the sharing of attributes and methods between classes:
    - Generalization: Generalization represents an "is-a-kind-of" relationship between classes. It allows one class to inherit common characteristics and behaviors from another class. For example, "Saving Account" is a kind of "Account."
    - Inheritance: Inheritance is the mechanism by which a class inherits attributes and methods from a parent class through the generalization relationship. It promotes code reusability and hierarchical structuring of classes.
Understanding Requirement Specifications
In the realm of software development, Requirement Specifications serve as the fundamental building
blocks of any project. These specifications are the bedrock upon which the entire development process is
constructed. They provide the vision, define the scope, and outline the goals of the software project.
To create effective software, it's imperative to grasp the nuances of Requirement Specifications. In
this comprehensive overview, we explore various aspects of Requirement Specifications, from their
characteristics to the categories they fall into.
Requirement Specifications
Requirement specifications are the cornerstone of software development. They represent a detailed
breakdown of what the software system needs to accomplish. To ensure the success of a software
project,
Requirement Specifications must exhibit specific characteristics, including:
- Clear: Requirements should be expressed in a way that leaves no room for ambiguity or confusion. They must be easily understood by all stakeholders, including developers and end-users.
- Correct: Requirements must accurately reflect the needs and expectations of the stakeholders. Any inaccuracies or errors can lead to costly misunderstandings later in the development process.
- Consistent: Requirements should not conflict with each other. A consistent set of requirements ensures that the software development process proceeds smoothly.
- Coherent: Requirements should form a logically connected and cohesive whole. They should align with the overarching goals of the software project and make sense when viewed as a collective entity.
- Comprehensible: Requirements should be written in a manner that is understandable to all parties involved. They should avoid technical jargon or complex language that might alienate non-technical stakeholders.
- Modifiable: As the project progresses, requirements may need to be adjusted or expanded. Requirements should be designed in a way that allows for easy modification without causing disruption to the development process.
- Verifiable: It should be possible to verify whether the requirements have been successfully implemented. This verification process helps ensure that the software aligns with the initial goals.
- Prioritized: Requirements should be ranked in order of importance. Prioritization helps focus development efforts on the most critical aspects of the software.
- Unambiguous: There should be no room for interpretation or misunderstanding when it comes to requirements. Ambiguity can lead to costly delays and errors.
- Traceable: Each requirement should be traceable to its source, allowing for a clear understanding of its origin and purpose.
- Credible source: Requirements should originate from credible and reliable sources. The credibility of the source is vital in ensuring that the requirements are valid and meaningful.
Non-Functional Requirements
In addition to functional requirements, which specify what the software must do, non-functional
requirements play a crucial role in shaping the overall quality and performance of the software.
Non-functional requirements encompass aspects that are not directly related to the functionality but
are
equally important. These include:
- Security: Requirements related to the security of the software, such as data protection and access control.
- Logging: Specifications for logging and monitoring activities within the software for auditing and troubleshooting purposes.
- Storage: Requirements pertaining to data storage, retrieval, and management within the software.
- Configuration: Specifications for configuring and customizing the software to meet specific needs.
- Performance: Requirements related to the performance and responsiveness of the software, including speed and efficiency.
- Cost: Considerations related to the cost of developing, maintaining, and operating the software.
- Interoperability: Requirements regarding the software's ability to work seamlessly with other systems and technologies.
- Flexibility: Specifications for how adaptable and flexible the software should be to accommodate changes and updates.
- Disaster recovery: Requirements for ensuring that the software can recover from unexpected failures or disasters.
- Accessibility: Specifications for making the software accessible to users with disabilities, complying with accessibility standards.
Requirements are logically categorized based on their criticality and importance to the software
project:
- Must Have: These are requirements that are absolutely essential for the software to be
considered operational. Without these, the software cannot fulfill its primary purpose.
- Should Have: These requirements enhance the functionality of the software and contribute to
its overall effectiveness. They are important but not critical.
- Could Have: While these requirements are desirable, the software can still function correctly
even if they are not implemented. They provide additional value but are not essential.
- Wish List: These requirements represent desires or goals that are not directly tied to the
core objectives of the software. They may be considered for future development but are not
currently a priority.
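These priority categories can be made concrete in a small requirements tracker. The following Python sketch is illustrative only; the class and field names are invented, not taken from any standard tool:

```python
from enum import Enum
from dataclasses import dataclass

class Priority(Enum):
    MUST_HAVE = 1      # essential: the software is not operational without it
    SHOULD_HAVE = 2    # important, but not critical
    COULD_HAVE = 3     # desirable; the system still works without it
    WISH_LIST = 4      # future candidate, not a current priority

@dataclass
class Requirement:
    rid: str
    description: str
    priority: Priority

def release_scope(reqs):
    """Select requirements for a first release: Must Have and Should Have."""
    return [r for r in reqs if r.priority in (Priority.MUST_HAVE, Priority.SHOULD_HAVE)]

reqs = [
    Requirement("R1", "User login", Priority.MUST_HAVE),
    Requirement("R2", "Audit logging", Priority.SHOULD_HAVE),
    Requirement("R3", "Dark mode", Priority.WISH_LIST),
]
print([r.rid for r in release_scope(reqs)])  # ['R1', 'R2']
```

Ranking requirements this way lets later scope decisions (what to cut under schedule pressure) be made mechanically rather than by argument.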
User Interface Requirements
The user interface (UI) of software plays a pivotal role in user satisfaction and acceptance. A
well-designed UI contributes significantly to the overall user experience. An effective UI is one
that is:
- Easy to operate, allowing users to perform tasks intuitively and with minimal effort.
- Quick to respond, ensuring that users do not experience frustrating delays when interacting with
the software.
- Effective in handling operational errors, providing clear guidance to users when issues arise.
- Simple yet consistent, maintaining a uniform look and feel throughout the software for a
seamless user experience.
Software Requirements Specification (SRS) Plan and Documentation
- To produce a good SRS, the document should be structured around the following sections:
Introduction
- Purpose of this Document - States why the document is necessary and what purpose it
serves.
- Scope of this Document - Describes the overall working and main objective of the
document and the value it will provide to the customer. It also includes the development cost and
the time required.
- Overview - Gives a brief description of the product; essentially a summary or
overall review of the product.
General Description
- General Description - This section focuses on the general functions of the product,
including the objectives for users, characteristics, features, benefits, and overall importance. It
also sheds light on the user community.
Functional Requirements
- Functional Requirements - Within this section, the document delves into the
potential outcomes of the software system, encompassing the effects resulting from software
operations. Functional requirements, which may involve calculations, data processing, and more, are
presented in a ranked order.
Interface Requirements
- Interface Requirements - This section elaborates on software interfaces,
elucidating how software programs interact with each other or users, whether through language, code,
or messaging systems.
Performance Requirements
- Performance Requirements - In this portion, the document discusses how the software
system performs desired functions under specific conditions. It also articulates the required time,
memory, maximum error rate, and related parameters.
Design Constraints
- Design Constraints - Within this section, constraints are specified and explained
for the design team, providing clarity on limitations and considerations for the project's design
phase.
Non-Functional Attributes
- Non-Functional Attributes - This part elucidates non-functional attributes
essential for the software system. Examples include security, portability, reliability, reusability,
application compatibility, data integrity, and scalability capacity.
Preliminary Schedule and Budget
- Preliminary Schedule and Budget - Here, the document outlines the initial version
and budget for the project plan, encompassing the estimated time duration and cost required for
project development.
Appendices
- Appendices - In the appendices section, additional information is provided. This
may include references from which information was gathered, definitions of specific terms, acronyms,
abbreviations, and other relevant details that contribute to a comprehensive understanding of the
SRS document.
Characteristics of a Good SRS Document
- Correctness:
An SRS is correct when it includes all the requirements needed for the system, verified by user
review.
- Completeness:
An SRS is complete when it covers all requirements, and all of its figures, tables, and pages are
labeled and numbered.
- Consistency:
Consistency ensures there are no conflicts or differences in terms used in the SRS.
- Unambiguousness:
An SRS is unambiguous when each requirement has only one clear interpretation. Techniques like
diagrams and reviews help.
- Ranking for Importance and Stability:
Requirements should be ranked to prioritize them. Use identifiers to indicate their importance or
stability.
- Modifiability:
An SRS should be easy to modify when needed, with proper indexing and cross-referencing for
changes.
- Verifiability:
Verifiable SRS means there's a way to measure how well each requirement is met. Avoid
non-verifiable requirements.
- Traceability:
It should be possible to trace requirements to design and code, as well as to corresponding test
cases.
- Design Independence:
An SRS should allow choosing from different design options and should not include implementation
details.
- Testability:
The SRS should make it easy to create test cases and test plans.
- Understandable by the Customer:
Keep the language simple and avoid complex notations, as customers may not be computer experts.
- Right Level of Abstraction:
The level of detail in the SRS varies with its purpose, from detailed for requirements to less
detailed for feasibility studies.
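The traceability characteristic above can be supported with even a very simple data structure. A minimal Python sketch, with made-up requirement, module, and test-case identifiers:

```python
# A toy traceability matrix: requirement ID -> linked design module and test cases.
# All identifiers here are invented for illustration.
trace = {
    "REQ-01": {"design": "auth_module", "tests": ["TC-01", "TC-02"]},
    "REQ-02": {"design": "report_module", "tests": ["TC-03"]},
}

def untested(matrix):
    """Requirements with no linked test case, i.e. a traceability gap."""
    return [rid for rid, links in matrix.items() if not links["tests"]]

trace["REQ-03"] = {"design": "export_module", "tests": []}
print(untested(trace))  # ['REQ-03']
```

Real projects use dedicated tools for this, but the idea is the same: every requirement must be reachable from design elements and test cases, and gaps should be detectable automatically.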
Software Design Principles
A design methodology is a systematic approach to creating a design by applying a set of techniques and
guidelines. The design process starts after the requirements specification is ready.
Software developers determine modules within the system, with each module having a defined behavior and
interacting with others in a predefined way. The design process has two levels:
- System Design (Top-Level Design): At this level, specifications of modules and
their interconnections are decided.
- Detailed Design (Logic Design): In the second level, the internal design of the
modules is decided.
A correct system design satisfies the requirements specified in the Software Requirements Specification
(SRS).
The quality of software design is often subjective, but some properties and criteria define design
quality:
- Verifiability: Design should be verifiable, complete, and traceable.
- Efficiency: Efficiency specifies the proper use of resources, affecting development
costs. An efficient system consumes fewer resources like processor time and memory.
- Simplicity: Simplicity is crucial, focusing on how modules are interconnected and
how changes in one module affect others. The goal is to create designs that are easy to understand.
Now, let's explore some basic guiding principles for software system design:
Problem Partitioning
For solving large problems, "divide and conquer" is a good approach. Software design divides the problem
into manageable pieces that can be solved separately, but it's important to note that these pieces
cannot be entirely independent because they form the system. Proper partitioning minimizes maintenance
costs and aids design verification.
Abstraction
Abstraction allows designers to consider a component at an abstract level, describing external behavior
without concerning internal details. Two common abstraction mechanisms are:
- Functional Abstraction: Modules are specified by the functions they perform.
- Data Abstraction: Details of data elements are not visible to data users, forming
the basis for Object-Oriented design approaches.
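Data abstraction is easiest to see in code. The Stack below is a standard illustrative example (not from the text): callers use its methods and never touch the hidden list directly:

```python
class Stack:
    """Data abstraction: callers use push/pop/is_empty; the internal
    list is an implementation detail they never touch directly."""
    def __init__(self):
        self._items = []          # hidden representation

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())   # 2
```

Because users depend only on the external behavior, the internal representation could later be changed (for example, to a linked list) without affecting any caller.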
Top-Down and Bottom-Up Design
Introduction
In software design, choosing the right approach is crucial. Two commonly used design methodologies are
top-down and bottom-up design. These approaches dictate how a system is conceptualized and constructed.
Let's explore the key principles of each approach.
Top-Down Design
A top-down design approach begins by identifying the major components of the system. It then breaks these
components down into their lower-level counterparts, iteratively refining the design until the desired
level of detail is reached. This method offers stepwise refinement, where each step further refines the
design to a more concrete level.
Bottom-Up Design
Conversely, a bottom-up design approach starts by designing the most basic or primitive components. It
then proceeds to higher-level components, utilizing the operations of lower layers to implement more
powerful operations in higher layers.
Choosing the Right Approach
The choice between top-down and bottom-up design depends on the project's specific circumstances:
- Top-Down Approach: Suitable when system specifications are clear and the system
development starts from scratch.
- Bottom-Up Approach: Preferred when building a system based on an existing one, as
it leverages existing components.
Functional versus Object-Oriented Approach
Software design involves choosing an approach to create a structured and efficient system. Two main
approaches are commonly used: function-oriented design and object-oriented design. Each has its unique
characteristics and applications. Let's explore these approaches in more detail.
Function-Oriented Design
In function-oriented design:
- Top-Down Decomposition: The system is seen as a black box that provides specific
services, often referred to as high-level functions.
- Example: Consider a function that creates a new member record, assigns a unique
membership number, and prints a membership bill. This high-level function can be divided into
sub-functions like assigning a membership number, creating a member record, and printing a bill.
- Centralized System State: System state data, which determines how the system
responds to user actions or events, usually has a global scope and is shared among many modules.
- Example: In a library management system, functions like creating a new member,
deleting a member, and updating member records share data such as member records for reference and
updating.
Object-Oriented Design
In object-oriented design (OOD):
- System as Objects: The system is seen as a collection of objects, each with its
own data and a set of functions (methods) responsible for managing that data.
- Data Encapsulation: Data within an object is not directly accessible by other
objects. It can only be accessed through the object's methods, promoting data security and
encapsulation.
- Decentralized System State: Unlike function-oriented design, there is no
globally shared data. Each object contains its own data, and system state is decentralized.
- Example: In a library automation software, each library member can be a
separate object with its data and functions. These objects handle their own data and
interactions.
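The contrast between the two approaches can be sketched with the library example above. All names in this Python snippet are illustrative:

```python
# Function-oriented style: system state (the member records) is global and
# shared by free functions.
members = {}

def create_member(member_id, name):
    members[member_id] = {"name": name, "books": []}

def issue_book(member_id, title):
    members[member_id]["books"].append(title)

# Object-oriented style: each member object owns its data, and other code
# can reach that data only through the object's methods.
class LibraryMember:
    def __init__(self, member_id, name):
        self._member_id = member_id
        self._name = name
        self._books = []          # encapsulated, not globally shared

    def issue_book(self, title):
        self._books.append(title)

    def books_issued(self):
        return list(self._books)

create_member("M1", "Asha")
issue_book("M1", "SRE Notes")
m = LibraryMember("M1", "Asha")
m.issue_book("SRE Notes")
print(members["M1"]["books"], m.books_issued())  # ['SRE Notes'] ['SRE Notes']
```

In the first style, any function can modify `members`, so the system state is centralized and shared; in the second, each `LibraryMember` holds its own state, which is the decentralization described above.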
Choosing the Right Approach
The choice between function-oriented and object-oriented design depends on the project's requirements
and complexity. Function-oriented design is a mature technology, while object-oriented design is
favored for developing large programs and promoting modularity.
Design Specifications and Verification
Why Design Specifications and Verification are Important:
Design Specifications and Verification are fundamental aspects of the software development process,
serving several crucial purposes:
- Ensuring Accuracy: They help ensure that the software system is designed and
implemented accurately, aligning with the initial requirements and goals.
- Quality Assurance: Verification serves as a quality assurance step to identify
and rectify any discrepancies or errors in the software.
- Compliance: They ensure that the software complies with regulatory
requirements, industry standards, and security protocols.
- Effective Communication: Design Specifications provide a clear communication
channel between developers, designers, and stakeholders, ensuring everyone understands the
system's intended functionality.
Design Specifications:
Design Specifications serve as a detailed blueprint for a software system's development. They
describe how the system performs the requirements outlined in the Functional Requirements. Depending
on the system, these specifications can encompass various aspects:
- Specific Inputs: Clearly define the inputs the system will accept, including data
types, formats, and any constraints.
- Calculations and Code: Detail the calculations or code algorithms that will be used
to achieve the defined requirements.
- Outputs: Specify the outputs that the system will generate as a result of
processing the inputs.
- System Security Measures: Explain the technical measures and protocols in place to
ensure the security of the system's data and functionality.
- Regulatory Compliance: Identify how the system meets any applicable regulatory
requirements, such as industry standards or legal mandates.
Design Verification:
Design Verification is a critical quality assurance process that evaluates whether a software product
aligns with the input requirements and design specifications. Key points about Design Verification
include:
- Verification serves as a quality assurance step to confirm that the software functions correctly
and meets its intended purpose.
- It involves rigorous testing and examination to identify any discrepancies between the design and
the actual implementation.
- The primary purpose is to validate that the designed software product matches the specifications
laid out in the Design Specifications.
- Verification helps catch and rectify any deviations or errors that may have occurred during the
development process.
- It checks whether the software product achieves its goals without any defects or bugs.
Monitoring and Control in Project Management
Why Monitoring and Control are Important:
Monitoring and Control are integral aspects of effective project management, serving several critical
purposes:
- Progress Tracking: They allow project managers to track the project's progress
continuously, ensuring it adheres to the established plan and schedule.
- Milestone Identification: Key events, known as milestones, are designated to
mark significant project achievements. These milestones help measure and celebrate progress.
- Early Issue Detection: Monitoring enables the early detection of potential
delays or issues, providing the opportunity for timely corrective actions.
- Schedule Adjustments: When delays or deviations from the plan are predicted,
project managers can make necessary adjustments to schedules and plans to keep the project on
track.
- Effective Project Control: It empowers project managers to exercise control
over the project's direction and outcomes, ensuring it aligns with the project objectives.
- Use of Tools: Tools like PERT charts aid in project monitoring and control,
providing visual representations of project activities and dependencies.
Project Monitoring:
Once a project starts, the project manager continuously monitors it to ensure it progresses according
to the plan. Key points about project monitoring include:
- The project manager designates milestones, such as the completion of important activities, to
mark measurable progress.
- Milestones can include events like the preparation and review of the Software Requirements
Specification (SRS) document or the completion of coding and unit testing.
- If a delay in reaching a milestone is predicted, corrective actions may be required, including
schedule adjustments and producing updated schedules.
PERT Chart:
The Program Evaluation and Review Technique (PERT) chart is especially useful in project monitoring and
control. It provides a visual representation of project activities, dependencies, and timelines, aiding
project managers in tracking progress and making informed decisions.
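The forward pass that a PERT chart supports (computing each activity's earliest finish time from its duration and predecessors) can be sketched in a few lines of Python. The activities and durations below are invented for illustration:

```python
# A tiny PERT-style forward pass. Activities are listed in dependency order,
# and Python dicts preserve insertion order, so a single loop suffices.
activities = {            # name: (duration in weeks, [predecessors])
    "spec":   (3, []),
    "design": (4, ["spec"]),
    "code":   (6, ["design"]),
    "test":   (3, ["code"]),
    "docs":   (2, ["design"]),
}

def earliest_finish(acts):
    """Earliest finish time of each activity: start after the latest
    predecessor finishes, then add the activity's own duration."""
    ef = {}
    for name, (dur, preds) in acts.items():
        start = max((ef[p] for p in preds), default=0)
        ef[name] = start + dur
    return ef

ef = earliest_finish(activities)
print(ef["test"])   # 16 -> project length along spec-design-code-test
```

A full PERT analysis would also run a backward pass to find latest start times and slack, which identifies the critical path; this sketch shows only the forward pass.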
Cohesiveness in Software Design
Why Cohesiveness is Important:
Cohesiveness is a fundamental concept in software design that influences the quality and
maintainability of software systems. Understanding the importance of cohesiveness is crucial
because:
- Effective Modularization: Cohesiveness plays a vital role in decomposing
complex problems into manageable modules, making software development more structured and
manageable.
- Quality of Design: High cohesion indicates that the elements within a module
belong together logically, leading to more robust and maintainable code.
- Coupling Reduction: High cohesion often correlates with low coupling, reducing
interdependence between modules and minimizing unintended side effects when making changes.
- Classification: Understanding the different types of cohesion and coupling
helps software designers make informed decisions during the design phase.
Cohesion:
A module's cohesion measures the strength of the relationship between its elements. It assesses how
well elements within a module logically belong together. Various types of cohesion exist, including:
- Coincidental Cohesion: Occurs when a module's tasks have a loose or
coincidental relationship.
- Logical Cohesion: Elements within a module perform similar operations,
indicating logical cohesion.
- Temporal Cohesion: Exists when functions in a module execute within the same
time span.
- Procedural Cohesion: Modules with functions that are part of a single procedure
or algorithm exhibit procedural cohesion.
- Communicational Cohesion: Modules where functions refer to or update the same
data structure have communicational cohesion.
- Sequential Cohesion: Modules with elements forming a sequence, where the output
of one is the input of the next, display sequential cohesion.
- Functional Cohesion: A module is functionally cohesive when its elements
cooperate to achieve a single function or purpose.
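The difference between strong and weak cohesion shows up clearly in code. In this hypothetical Python sketch, the payroll functions are functionally cohesive (they all serve one purpose), while `misc_utils` bundles unrelated tasks, which is coincidental cohesion:

```python
# Functional cohesion: every function serves one purpose, computing net pay.
def gross_pay(hours, rate):
    return hours * rate

def tax(gross):
    return gross * 0.2            # illustrative flat tax rate

def net_pay(hours, rate):
    gross = gross_pay(hours, rate)
    return gross - tax(gross)

# Coincidental cohesion: unrelated tasks thrown together in one "utils" function.
def misc_utils(task, value):
    if task == "reverse":
        return value[::-1]        # string manipulation
    if task == "square":
        return value * value      # arithmetic, unrelated to the above

print(net_pay(10, 15.0))  # 120.0
```

The payroll module can be understood, tested, and reused as a unit; the `misc_utils` grab-bag cannot, because its pieces share nothing but a file.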
Coupling:
Coupling refers to the degree of interdependence between software modules. Understanding coupling is
crucial for designing modular and maintainable software. Different types of coupling include:
- Data Coupling: Occurs when two modules communicate through parameters.
- Stamp Coupling: Involves communication using composite data items, such as
structures in languages like C.
- Control Coupling: Exists when one module's data is used to direct the execution
order of instructions in another.
- Common Coupling: Two modules are common coupled when they share data through
global data items.
- Content Coupling: Occurs when two modules share code, such as a branch from one
module into another.
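Data coupling and common coupling can be contrasted in a short Python sketch; all names here are invented:

```python
# Data coupling: modules communicate only through simple parameters.
def area(width, height):
    return width * height

# Common coupling: modules share a global data item, so changing it in one
# place silently changes the behavior of every module that reads it.
config = {"unit": "cm"}           # globally shared state

def describe_area(width, height):
    return f"{area(width, height)} {config['unit']}^2"

def switch_to_metres():
    config["unit"] = "m"          # side effect felt by describe_area

print(describe_area(2, 3))        # 6 cm^2
switch_to_metres()
print(describe_area(2, 3))        # 6 m^2
```

The data-coupled `area` can be tested and reused in isolation; the common-coupled pair cannot be reasoned about without knowing the current state of `config`, which is why common coupling is considered the weaker design.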
Fourth Generation Techniques (4GT) in Software Engineering
Why Fourth Generation Techniques (4GT) are Important:
Fourth Generation Techniques (4GT) are a significant aspect of software engineering that offers
unique benefits and challenges. Understanding their importance is crucial because:
- High-Level Abstraction: 4GT enables developers to specify software at a high
level using specialized languages or graphical notations, making it more accessible to
customers.
- Automated Code Generation: These techniques can automatically generate source
code based on developer specifications, saving time and reducing the chances of coding errors.
- Improved Productivity: 4GT tools can streamline the software development
process, increasing productivity and reducing development time.
- Operational Prototyping: They allow for the rapid creation of operational
prototypes, helping customers visualize and refine their requirements.
- Challenges: Proper use of 4GT requires a well-defined requirements gathering
process and design strategy to avoid issues like poor quality and maintainability.
Fourth Generation Techniques (4GT) Overview:
Fourth Generation Techniques encompass a wide range of software tools that focus on specifying
software characteristics at a high level and automatically generating source code based on these
specifications. Key points about 4GT include:
- 4GT tools use specialized language forms or graphical notations that describe problems in terms
understandable to customers.
- A typical software design environment that supports 4GT includes tools for database query,
report generation, code generation, data manipulation, high-level graphics, spreadsheet
capabilities, and automated generation of HTML.
- The 4GT process typically begins with requirements gathering, where customer requirements are
translated into an operational prototype.
- For complex projects, a design strategy is necessary even if 4GT is used to ensure quality,
maintainability, and customer acceptance.
- 4GT tools facilitate automated code generation but require a well-defined data structure and
accessibility for successful implementation.
- Transforming a 4GT implementation into a product involves testing, documentation, and solution
integration activities, similar to other software engineering paradigms.
Functional Independence in Software Design
Why Functional Independence is Important:
Functional Independence is a fundamental concept in software design that plays a crucial role in creating
well-structured and maintainable software systems. Understanding its importance is essential because:
- Error Isolation: Functional independence minimizes the likelihood of errors
propagating from one module to another. Isolated errors are easier to identify and fix.
- Scope of Reuse: Modules that are functionally independent perform well-defined
tasks with simple interfaces to other modules. This makes them highly reusable in different
programs, promoting code efficiency.
- Enhanced Understandability: Functional independence reduces design complexity,
making it easier for developers to understand and work with modules, ultimately leading to more
maintainable software.
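A minimal illustration in Python (the function is invented for this example): `word_count` performs one well-defined task through a simple interface and depends only on its argument, so it can be reused unchanged anywhere and any error in it stays isolated:

```python
def word_count(text):
    """Count the words in a string; no shared state, no side effects."""
    return len(text.split())

# Reused unchanged in two different contexts:
print(word_count("requirements drive design"))        # 3
print(word_count("high cohesion and low coupling"))   # 5
```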
Previous Year Questions
Attempt any two parts of choice from (a), (b) and (c).
- What are the crucial process steps of requirement engineering? Discuss with the help of a diagram.
- Define coupling and cohesion and their use in determining software design strength.
- List out requirements elicitation techniques. Which one is the most popular, and why?
Memorize the meaning and importance of requirements gathering. Name and explain the different
requirements gathering techniques that are normally developed by an analyst.
The basic goal of the requirements activity is to get an SRS that has some desirable properties. What is
the role of modeling in developing such SRS? List three major benefits that modeling provides, along
with justifications, for achieving the basic goal.
Differentiate between function-oriented design, and object-oriented design in relation to software
system design. Identify various symbols used in DFD. Discuss the various rules for designing a DFD.
Explain the software requirement and analysis. Define the need of the SRS document in software development.
Define requirement process.
Clarify the importance of Data Modelling. Design 1-level DFD for a restaurant system.
Clarify the various components of a SRS. Differentiate between Functional vs. Object-oriented approach.
Classify the types of Cohesion and coupling. Write down the characteristics of a good SRS.