Friday, January 6, 2012

Software Engineering



1.    Q:  Define the term Software Engineering. How is it different from Computer Systems Engineering? [B.E. 2007C, 2008]
OR
Define the term “Software Engineering” and distinguish it from Computer Science [B. E. 2008C]

      Answer:
Software engineering is an engineering approach to software development. We can alternatively view it as a systematic collection of past experience, arranged in the form of methodologies and guidelines. Software engineering provides systematic, cost-effective, and efficient techniques for software development. Alternatively, we can define software engineering as “a discipline whose aim is the production of quality software: software that is delivered on time, within budget, and that satisfies its requirements.”
In general, we assume that the software being developed will run on some general-purpose hardware platform such as a desktop computer. In several situations, however, it may be necessary to develop special hardware on which the software will run. Computer systems engineering addresses the development of such systems, which require the development of both the software and the specific hardware on which the software will run. Thus, computer systems engineering encompasses software engineering.

2.    Q: Identify the two important techniques that software engineering uses to tackle the problem of exponential growth of problem complexity with its size.
Answer:
Software engineering principles use two important techniques to reduce problem complexity: abstraction and decomposition. Abstraction simplifies a problem by omitting details that are irrelevant at the current level of analysis, while decomposition breaks a large problem into smaller parts that can be solved independently and then combined into a complete solution. In other words, a good decomposition, as shown in fig. 1.5, should minimize interactions among the various components.

3.   Q: Identify at least two advantages of using high-level languages over assembly languages.
     Answer:
Assembly language programs are typically limited to a few hundred lines of code, i.e. they are very small in size, and every programmer develops programs in his own individual style based on intuition; this type of programming is called exploratory programming. The use of a high-level programming language offers at least two advantages: it reduces the development effort significantly, and it reduces the development time significantly. Languages like FORTRAN, ALGOL, and COBOL are examples of high-level programming languages.

4.        Q: State at least two basic differences between control flow-oriented and data flow-oriented design techniques.
     Answer:
Control flow-oriented design deals with carefully designing the program’s control structure. A program's control structure refers to the sequence in which the program's instructions are executed, i.e. the control flow of the program. The data flow-oriented design technique, in contrast, identifies:
• The different processing stations (functions) in a system
• The data items that flow between the processing stations


5.    Q: State at least five advantages of object-oriented design techniques.
       Answer:
Object-oriented techniques have gained wide acceptance because they offer:
·         Simplicity (due to abstraction)
·         Code and design reuse
·         Improved productivity
·         Better understandability
·         Better problem decomposition
·         Easy maintenance

6.    Q: Differentiate between program and Software Product. [2009C]
       Answer:
A program differs from a software product in the following ways:
·         Programs are developed by individuals for their personal use; they are therefore small in size and have limited functionality. Software products, by contrast, are extremely large.
·         In the case of a program, the programmer himself is the sole user; in the case of a software product, most users are not involved in its development.
·         A program is developed by a single developer, whereas a large number of developers are involved in a software product.
·         For a program, the user interface may not be very important, because the programmer is the sole user; for a software product, the user interface must be carefully designed and implemented, because the developers and the users of the product are entirely different people.
·         Very little documentation is expected for a program, but a software product must be well documented.
·         A program can be developed according to the programmer’s individual style of development, but a software product must be developed using accepted software engineering principles.

7.    Q: What is software crisis? Give the problems of Software Crisis. [2008]
       Answer:
The software crisis has been with us since 1970. Since then, the computer industry has progressed at break-neck speed through the computer revolution and, more recently, the network revolution triggered and accelerated by the explosive spread of the Internet and the web. Although the computer industry has been delivering exponential improvements in price-performance, the problems with software have not been decreasing. During this period, the software industry unsuccessfully attempted to build larger and larger software products using simply the existing development techniques. Many factors have contributed to the present software crisis: larger problem sizes, lack of adequate training in software engineering, an increasing skill shortage, and low productivity improvements. The problems of the software crisis can be summarized as follows:
·         Poor-quality software is produced
·         The development team exceeds the budget
·         Software is delivered late
·         User requirements are not completely supported by the software
·         The software is unreliable
·         Maintenance costs are high


8.    Q: Illustrate the terms Structured programming and Unstructured Programming. [2008]
OR
     What is structured programming? What are its Advantages? [2005]
     Answer:
A structured program has two distinct properties. First, it uses only three types of program constructs: sequence, selection, and iteration. Structured programs avoid unstructured control flows by restricting the use of GOTO statements. Second, a structured program consists of a well-partitioned set of modules. Structured programming uses single-entry, single-exit program constructs such as if-then-else and do-while. Thus, the structured programming principle emphasizes designing neat control structures for programs.

Unstructured programming is the programming style in which the control flow is unstructured because it makes free use of GOTO statements (the contrast is sketched in the example after the list of advantages below).

The advantages of structured programs are:
·         Structured programs are easier to read and understand.
·         Structured programs are easier to maintain.
·         They require less effort and time for development.
·         They are amenable to easier debugging and usually fewer errors are made in the course of writing such programs.
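
A minimal C sketch of the contrast (the loop bound and variable names are illustrative assumptions, not taken from any particular textbook program):

#include <stdio.h>

int main(void)
{
    int i, sum;

    /* Unstructured style: the loop is hand-built from a label and a goto,
       so control jumps backwards and the flow is hard to follow. */
    sum = 0;
    i = 1;
again:
    sum = sum + i;
    i = i + 1;
    if (i <= 10)
        goto again;
    printf("unstructured sum = %d\n", sum);

    /* Structured style: the same computation using only sequence,
       selection (if), and a single-entry, single-exit iteration (for). */
    sum = 0;
    for (i = 1; i <= 10; i++)
        sum = sum + i;
    if (sum > 0)
        printf("structured sum = %d\n", sum);

    return 0;
}

Both halves compute the same sum; the structured version is easier to read, maintain, and debug, which is exactly the advantage claimed above.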


9.    Q: What are the phase entry and exit criteria of the software development process?
       Answer:
A software development life cycle has distinct development phases: Feasibility Study, Requirements Analysis and Specification, Design, Coding and Unit Testing, Integration and System Testing, and Maintenance. The entry and exit criteria of a phase are the strict conditions that must be satisfied before work on that phase may begin and before the phase may be declared complete; no one is allowed to enter or exit a phase without meeting them. For example:
·         At the start of the feasibility study, project managers or team leaders try to understand the actual problem by visiting the client site. At the end of the phase, they pick the best solution and determine whether it is financially and technically feasible.
·         At the start of the requirements analysis and specification phase, the required data is collected; requirements specification is then carried out. The phase ends when the SRS document has been produced.
The same kind of entry and exit criteria are defined for the other phases as well.


10. Q: What is phase containment of errors?     [2005, 2010]
     Answer:
Phase containment of errors means detecting and correcting errors as early as possible; it is an important software engineering principle. A software development life cycle has distinct development phases, and phase containment of errors means detecting and correcting each error within the phase in which it is introduced. That is, a design error should be detected and corrected within the design phase itself rather than being discovered during the coding phase. To achieve phase containment of errors, periodic reviews must be conducted.

11. Q: What do you mean by Exploratory Style of Programming?    
     Answer:
The exploratory style of programming is a very informal program development approach with no set rules or recommendations: every programmer evolves his own software development techniques, guided solely by intuition, experience, whims, and fancies. The exploratory style is feasible only for small-sized software, where the problem domain may initially be unclear.

12. Q: What are the notable changes done by Software Engineering over Exploratory Style of Programming?    
     Answer:
The Notable changes are:
·         An important difference is that the exploratory software development style is based on error correction, while the software engineering principles are primarily based on error prevention.
·         In the exploratory style, coding was considered synonymous with software development. For instance, exploratory programming style believed in developing a working system as quickly as possible and then successively modifying it until it performed satisfactorily. In the modern software development style, coding is regarded as only a small part of the overall software development activities. There are several development activities such as design and testing which typically require much more effort than coding.
·         A lot of attention is being paid to requirements specification. Significant effort is now being devoted to develop a clear specification of the problem before any development activity is started.
·         Now there is a distinct design phase where standard design techniques are employed.
·         Periodic reviews are being carried out during all stages of the development process.
·         There is better visibility of design and code. By visibility we mean production of good quality, consistent and standard documents during every phase.
·         Now, projects are first thoroughly planned. Project planning normally includes preparation of various types of estimates, resource scheduling, and development of project tracking plans.
·         Several metrics are being used to help in software project management and software quality assurance

13. Q: Differentiate between Software Process and Software Development Life Cycle. [2006, 2008]
OR
What do you understand by Software Process? Is it similar to the Software Development Life Cycle? [2005]
       Answer:
The software process and the software development life cycle are not the same, though the differences are slight. A software life cycle model is a descriptive and diagrammatic representation of the software life cycle: it represents all the activities required to make a software product transit through its life cycle phases and captures the order in which these activities are to be undertaken. In other words, a life cycle model maps the different activities performed on a software product from its inception to its retirement. A software process, on the other hand, is also termed a software process model; it is the methodology and process followed within the software life cycle, and it covers only a single or, at best, a few of the individual activities involved in development, for example a testing methodology or a design methodology. In a nutshell, the software life cycle model is therefore a superset of the software process model.

14. Q: Explain the problems that might be faced by an organization if it does not follow any software life cycle model.
       Answer:
The development team must identify a suitable life cycle model for the particular project and then adhere to it. Without a particular life cycle model, the development of a software product would not proceed in a systematic and disciplined manner. When a software product is being developed by a team, there must be a clear understanding among the team members about when to do what; otherwise it would lead to chaos and project failure. This problem can be illustrated with an example. Suppose a software development problem is divided into several parts and the parts are assigned to the team members. From then on, suppose the team members are allowed the freedom to develop the parts assigned to them in whatever way they like. It is possible that one member might start writing the code for his part, another might decide to prepare the test documents first, and some other engineer might begin with the design phase of the parts assigned to him. This would be a perfect recipe for project failure.

15. Q: What is a Software Process? Why and how does a software process fail to improve? [2003]
OR
What is a Software Process? What elements can prevent software from improving?
       Answer:
A software process is also termed a software process model. The software process model is the methodology and process followed in the software life cycle; it covers only a single or, at best, a few of the individual activities involved in development, for example a testing methodology or a design methodology.
The following can prevent software from improving:
·         Imperfect/unclear requirements analysis and specification
·         Improper planning
·         Wrong estimation of size, cost, and effort
·         Incorrect/partial design of the problem domain
·         Manpower turnover problems
·         Wrong scheduling decisions
·         Immature project staffing
·         Lack of knowledge of the development team in the technical area
·         Use of non-standard testing methodologies
·         Missing or incomplete documentation
 
16.  Q: What do you understand by the expression “Life Cycle model of Software development”? Why is it important to adhere to a life cycle model during the development of a large software product? [2008C]
       Answer:
A software life cycle is a series of identifiable stages that a software product undergoes during its lifetime. A software life cycle model (also called Software process model) is a descriptive and diagrammatic representation of the software life cycle. A life cycle model represents all the activities required to make a software product transit through its life cycle phases. It also captures the order in which these activities are to be undertaken.

Why it is important to adhere to life cycle model during the development of a large software product:
Software engineering is an engineering approach for software development. We can alternatively view it as a systematic collection of past experience. The experience is arranged in the form of methodologies and guidelines. A small program can be written without using software engineering principles. But if one wants to develop a large software product, then software engineering principles are indispensable to achieve a good quality software cost effectively. Software engineering principles use two important techniques to reduce problem complexity: abstraction and decomposition.

The principle of abstraction (fig. 1.4) implies that a problem can be simplified by omitting irrelevant details. Once the simpler problem is solved, the omitted details can be taken into consideration to solve the next lower level of abstraction. Decomposition works differently: any random decomposition of a problem into smaller parts will not help. The problem has to be decomposed such that each component of the decomposed problem can be solved in isolation, and the solutions of the different components can then be combined to obtain the full solution.

In other words, a good decomposition as shown in fig.1.5 should minimize interactions among various components.
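
As a minimal illustration of decomposition (a hypothetical payroll-style example; the function names and the flat 10% tax are assumptions made up for this sketch), each sub-problem below is solved in isolation and the parts interact only through narrow interfaces:

#include <stdio.h>

/* Each function solves one sub-problem in isolation; the functions
   interact only through parameters and return values. */
static double gross_pay(double hours, double rate)
{
    return hours * rate;
}

static double tax(double gross)
{
    return gross * 0.10;   /* assumed flat 10% tax, for illustration only */
}

static double net_pay(double gross, double deduction)
{
    return gross - deduction;
}

int main(void)
{
    double gross = gross_pay(40.0, 25.0);
    printf("net pay = %.2f\n", net_pay(gross, tax(gross)));
    return 0;
}

Because each function can be written and tested on its own, the interactions among the components stay minimal, which is exactly what a good decomposition demands.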

17.  Q: Describe various types of software and their application domains, together with their special significance? [2008C]
     Answer:
Software has become an integral part of most fields of human life. For convenience, software applications are grouped into the following areas, as shown in the figure:
·         System Software: infrastructure software such as compilers, operating systems, editors, and device drivers comes under this category. Basically, system software is a collection of programs that provide services to other programs.
·         Real-time Software: such software is used to monitor, control, and analyze real-world events as they occur. An example is the software required for weather forecasting, which gathers and processes temperature, humidity, and other environmental parameters to forecast the weather.
·         Embedded Software: this type of software is placed in the ROM of a product and controls the product's various functions. The product could be an aircraft, an automobile, a security system, a signaling system, etc.
·         Personal Computer Software: the software used on personal computers is covered in this category. Examples are word processors, database management systems, accounting packages, etc.
·         Artificial Intelligence Software: examples are expert systems, artificial neural networks, etc.
·         Web-based Software: examples are applications built with CGI, HTML, Java, and Perl.
·         Engineering and Scientific Software: examples are MATLAB, CAD/CAM packages, etc.

18.  Q: What do you mean by Software Process? What problems will a software development house face if it does not follow any systematic process in its software development efforts? [2009C]
       Answer:
A software life cycle is a series of identifiable stages that a software product undergoes during its lifetime. A software life cycle model (also called Software process model) is a descriptive and diagrammatic representation of the software life cycle. A life cycle model represents all the activities required to make a software product transit through its life cycle phases. It also captures the order in which these activities are to be undertaken.
The problems that a software development house faces if it does not follow any systematic process in its software development efforts are as follows:
·         Poor-quality software is produced
·         The required software goals often fail to be achieved
·         The development team exceeds the budget
·         The manpower turnover problem cannot be handled
·         Software is delivered late
·         User requirements are not completely supported by the software
·         The software produced is unreliable
·         Maintenance costs are high

19.  Q: What do you mean by Software Life Cycle? Describe Waterfall Model. Give its advantages and disadvantages.
OR
How does the software life cycle provide information about the software? Explain the waterfall life cycle model. [2003]
OR
What are the limitations of the waterfall model? When is this model useful? [2008]

       Answer:
A software life cycle is a series of identifiable stages that a software product undergoes during its lifetime. A software life cycle model (also called Software process model) is a descriptive and diagrammatic representation of the software life cycle. A life cycle model represents all the activities required to make a software product transit through its life cycle phases. It also captures the order in which these activities are to be undertaken.

Waterfall life cycle model is divided into two classes:
·         Classical Waterfall model
·         Iterative waterfall model

The classical waterfall model is intuitively the most obvious way to develop software. Classical waterfall model divides the life cycle into the following phases as shown in fig.2.1:
·         Feasibility Study
·         Requirements Analysis and Specification
·         Design
·         Coding and Unit Testing
·         Integration and System Testing
·         Maintenance

The Iterative Waterfall model follows the same phases, but feedback paths to the preceding phases are provided.

Activities in each phase of the life cycle
Activities undertaken during feasibility study: -
The main aim of feasibility study is to determine whether it would be financially and technically feasible to develop the product.
·         At first, project managers or team leaders try to gain a rough understanding of what is required to be done by visiting the client site. They study the different input data to the system and the output data to be produced by the system.
·         After they have an overall understanding of the problem they investigate the different solutions that are possible.
·         Then they pick the best solution and determine whether the solution is feasible financially and technically.
Activities undertaken during requirements analysis and specification: - The aim of the requirements analysis and specification phase is to understand the exact requirements of the customer and to document them properly. This phase consists of two distinct activities, namely
·         Requirements gathering and analysis, and
·         Requirements specification
Activities undertaken during design: - The goal of the design phase is to transform the requirements specified in the SRS document into a structure that is suitable for implementation in some programming language. In technical terms, during the design phase the software architecture is derived from the SRS document. Two distinctly different approaches are available: the traditional design approach and the object-oriented design approach.
·         Traditional design approach
Traditional design consists of two different activities; first a structured analysis of the requirements specification is carried out where the detailed structure of the problem is examined. This is followed by a structured design activity. During structured design, the results of structured analysis are transformed into the software design.
·         Object-oriented design approach
In this technique, various objects that occur in the problem domain and the solution domain are first identified, and the different relationships that exist among these objects are identified. The object structure is further refined to obtain the detailed design.
Activities undertaken during coding and unit testing:- The purpose of the coding and unit testing phase (sometimes called the implementation phase) of software development is to translate the software design into source code. Each component of the design is implemented as a program module. The end-product of this phase is a set of program modules that have been individually tested.
During this phase, each module is unit tested to determine the correct working of all the individual modules. It involves testing each module in isolation as this is the most efficient way to debug the errors identified at this stage.

Activities undertaken during integration and system testing: - Integration of different modules is undertaken once they have been coded and unit tested. During the integration and system testing phase, the modules are integrated in a planned manner. The different modules making up a software product are almost never integrated in one shot. Integration is normally carried out incrementally over a number of steps. During each integration step, the partially integrated system is tested and a set of previously planned modules are added to it. Finally, when all the modules have been successfully integrated and tested, system testing is carried out. The goal of system testing is to ensure that the developed system conforms to its requirements laid out in the SRS document.

System testing usually consists of three different kinds of testing activities:
§  α – testing: It is the system testing performed by the development team.
§  β – testing: It is the system testing performed by a friendly set of customers.
§  acceptance testing: It is the system testing performed by the customer himself after the product delivery to determine whether to accept or reject the delivered product.

Activities undertaken during maintenance: -
Maintenance of a typical software product requires much more than the effort necessary to develop the product itself. Many studies carried out in the past confirm this and indicate that the relative effort of development of a typical software product to its maintenance effort is roughly in the 40:60 ratio. Maintenance involves performing any one or more of the following three kinds of activities:
·         Correcting errors that were not discovered during the product development phase. This is called corrective maintenance.
·         Improving the implementation of the system and enhancing the functionalities of the system according to the customer’s requirements. This is called perfective maintenance.
·         Porting the software to work in a new environment. For example, porting may be required to get the software to work on a new computer platform or with a new operating system. This is called adaptive maintenance.

The advantages of Classical Waterfall model are:
·         It follows a rigid, disciplined structure.
·         If we follow it faithfully, we can develop an error-free software product.

The disadvantages of Classical Waterfall model are:
·         It is difficult to define all requirements at the beginning of a project
·         A working version of the system is not seen until late in the project
·         It does not scale up well to large projects
·         Real projects are rarely sequential

The advantages of Iterative Waterfall model are:
·         It does not follow a rigid structure; feedback paths allow an error detected in a later phase to be corrected in the phase where it was committed.
The disadvantages of the Iterative Waterfall model are:
·         It is difficult to define all requirements at the beginning of a project
·         A working version of the system is not seen until late in the project
·         It does not scale up well to large projects
·         Real projects are rarely sequential

20.  Q: Describe a prototype Life Cycle Model? Give its advantages and disadvantages.  
OR
What is a prototype? Is it always beneficial to construct a prototype model? Does the construction of a prototype model always increase the overall cost of software development? Justify your answer. [2006, 2008]
OR
What is a prototype? When do we need to develop a prototype? [2008]


Answer:
A prototype is a toy implementation of the system. A prototype usually exhibits limited functional capabilities, low reliability, and inefficient performance compared with the actual software. A prototype is usually built using several shortcuts, which might involve using inefficient, inaccurate, or dummy functions. For example, the shortcut implementation of a function may produce the desired results by using a table look-up instead of performing the actual computations. A prototype usually turns out to be a very crude version of the actual system. This model divides the life cycle of the software development process into the phases shown in the figure below.

There are several uses of a prototype. An important purpose is to illustrate the input data formats, messages, reports, and the interactive dialogues to the customer. This is a valuable mechanism for gaining better understanding of the customer’s needs:
• What the screens might look like
• How the user interface would behave
• How the system would produce outputs

Advantages:
  • A partial product is built in the initial stages. So customers get a chance to see the product early in the life cycle and thus give necessary feedback.
  • Requirements become clearer, resulting in a more accurate product.
  • New requirements can be easily accommodated.
  • Flexibility in design and development is also supported by the model.
Disadvantages:
  • Developers in a hurry may build prototypes carelessly and end up with sub-optimal solutions.
  • After seeing an early prototype, users may demand that the actual system be delivered soon.
  • If the end user is not satisfied with the initial prototype, he may lose interest in the project.
  • Poor documentation.


No, it is not always beneficial to construct a prototype. If the technical solution is already clear, building a prototype only consumes additional time. Prototyping is also not useful for very large projects.

Constructing a prototype does require additional effort, time, and money up front. However, for systems whose requirements are unclear, this cost is usually recovered later, because a clearer understanding of the requirements reduces expensive rework; so the construction of a prototype does not necessarily increase the overall cost of software development.

21.  Q: What is Evolutionary Model? Describe its merits and demerits.
OR
       What do you mean by Software Life Cycle? Describe Incremental Model. [2003]
Answer:
A software life cycle is a series of identifiable stages that a software product undergoes during its lifetime. A software life cycle model (also called Software process model) is a descriptive and diagrammatic representation of the software life cycle. A life cycle model represents all the activities required to make a software product transit through its life cycle phases. It also captures the order in which these activities are to be undertaken.
This model is also known as the successive versions model and is sometimes termed the Incremental model. In this model, the software is first broken down into several modules that can be incrementally constructed and delivered. The development team first develops the core module of the system, and this initial product skeleton is refined into increasing levels of capability by adding new functionality in successive versions. Each evolutionary version may be developed using an iterative waterfall model of development.
This model divides the life cycle of a software development process into the phases as shown below:-

Here A, B, and C are modules of a software product that are incrementally developed and delivered.
Advantages:
  • Early delivery of portions of the system even though some of the requirements are not yet decided.
  • The core modules get tested thoroughly, thereby reducing chances of errors in the final product

Disadvantages:
  • For most practical problems, it is difficult to subdivide the problem into several functional units that can be incrementally implemented and delivered.
  • The model can be used only for very large problems, where it is easier to identify modules for incremental implementation.




22.  Q: What is a Meta Model? List the merits and demerits of the Meta Model. [2006]
OR
       Why is the spiral life cycle model considered to be a Meta model? [08C, 09C]
OR
Explain clearly and briefly why the spiral model is called a Meta model. [2010]

Answer:

The diagrammatic representation of this model looks like a spiral, and the exact number of loops is not fixed. Each loop of the spiral represents a phase and is divided into four sectors (quadrants). The first quadrant identifies the objectives of the phase and the alternative solutions possible for the phase under consideration. In the second quadrant, the alternative solutions are evaluated to select the best possible one; for the chosen solution, the potential risks are identified and dealt with by developing an appropriate prototype. A risk is essentially any adverse circumstance that might hamper the successful completion of the software. The third quadrant consists of developing and verifying the next level of the product, and the fourth quadrant consists of reviewing the results and planning the next phase. This life cycle model is called a Meta model because it encompasses all the other life cycle models. However, it is much more complex than the other models.


23.  Q: Compare the Different Life Cycle Models.
OR
           Distinguish between Waterfall model and Spiral Model [2008C]
       Answer:
Comparison of Different Life Cycle Model
The Classical Waterfall model can be considered the basic model, and all the other life cycle models can be seen as embellishments of it. However, the Classical Waterfall model cannot be used in practical development projects. This problem is overcome in the Iterative Waterfall model, which is the most widely used model; it is simple to understand and use, but it is suitable only for well-understood problems, not for very large projects or for projects that are subject to many risks. The Prototyping model is suitable for projects in which either the user requirements or the underlying technical aspects are not well understood; it is especially popular for developing the user-interface parts of projects. The Evolutionary approach is suitable for large problems that can be decomposed into a set of modules for incremental development and delivery; it is also widely used for object-oriented development projects. The Spiral model is called a Meta model because it encompasses all the other life cycle models, and risk handling is inherently built into it.

24.  Q: What is the software development life cycle? Explain the different software development life cycle models with their relative merits and demerits. [2007C]
       Answer:
A software life cycle is a series of identifiable stages that a software product undergoes during its lifetime. A software life cycle model (also called Software process model) is a descriptive and diagrammatic representation of the software life cycle. A life cycle model represents all the activities required to make a software product transit through its life cycle phases. It also captures the order in which these activities are to be undertaken.

Many life cycle models have been proposed so far. Each of them has some advantages as well as some disadvantages. A few important and commonly used life cycle models are as follows:
·         Classical Waterfall Model
·         Iterative Waterfall Model
·         Prototyping Model
·         Evolutionary Model
·         Spiral Model

WRITE DOWN BRIEFLY THE ARCHITECTURE, DESCRIPTION, MERITS & DEMERITS OF ALL THE LIFE CYCLE MODELS LISTED ABOVE.


25.  Q: List the major responsibilities of a software project manager. [2007C]
       Answer:
The major responsibilities of a Software Project Manager:
Software project managers take the overall responsibility of steering a project to success. It is very difficult to objectively describe the job responsibilities of a project manager. The job responsibility of a project manager ranges from invisible activities like building up team morale to highly visible customer presentations.
Most managers take responsibility for
  • Project proposal writing
  • project cost estimation
  • Project Scheduling
  • Project staffing
  • Software process tailoring
  • Project monitoring and control
  • Software configuration management
  • Project risk management
  • Interfacing with clients
  • Managerial report writing and presentations, etc.
These activities are certainly numerous, varied, and difficult to enumerate, but they can be broadly classified into project planning activities and project monitoring and control activities. The project planning activity is undertaken before development starts, to plan the activities to be undertaken during development. The project monitoring and control activities are undertaken once the development activities start, with the aim of ensuring that development proceeds as per plan, and of changing the plan whenever required to cope with the situation.


26.  Q: List the skills necessary for Software Project Management.
       Answer:
  • A theoretical knowledge of different project management techniques is certainly necessary to become a successful project manager.
  • However, effective software project management frequently calls for good qualitative judgment and decision taking capabilities.
  • In addition to having a good grasp of the latest software project management techniques such as cost estimation, risk management, and configuration management, project managers need good communication skills and the ability to get work done.
  • However, some skills such as tracking and controlling the progress of the project, customer interaction, managerial presentations, and team building are largely acquired through experience.
  • Nonetheless, the importance of a sound knowledge of the prevalent project management techniques cannot be overemphasized.

27.  Q: What are the different project planning Activities? Briefly describe.
       Answer:
Once a project is found to be feasible, software project managers undertake project planning. Project planning is undertaken and completed even before any development activity starts. Project planning consists of the following essential activities:
•    Estimating the following attributes of the project:
Project size: What will be the problem complexity in terms of the effort and time required to develop the product?
Cost: How much is it going to cost to develop the project?
Duration: How long is it going to take to complete development?
Effort: How much effort would be required?
The effectiveness of the subsequent planning activities depends on the accuracy of these estimates.
•    Scheduling manpower and other resources: after the estimates are made, the schedules for manpower and other resources have to be developed.
•    Staff organization and staffing plans: staff organization and staffing plans have to be made.
•    Risk identification, analysis, and abatement planning: risks have to be identified and analyzed, and plans made to abate them.
•    Miscellaneous plans, such as the quality assurance plan, the configuration management plan, etc.

28. Q: What are the contents of Software Project Management Plan (SPMP) document? Briefly describe.
       Answer:
Once project planning is complete, project managers document their plans in a Software Project Management Plan (SPMP) document. The SPMP document should cover the items listed below; this list can be used as a possible organization of the SPMP document.

Organization of the Software Project Management Plan (SPMP) Document
1. Introduction
(a) Objectives
(b) Major Functions
(c) Performance Issues
(d) Management and Technical Constraints
2. Project Estimates
(a) Historical Data Used
(b) Estimation Techniques Used
(c) Effort, Resource, Cost, and Project Duration Estimates
3. Schedule
(a) Work Breakdown Structure
(b) Task Network Representation
(c) Gantt Chart Representation
(d) PERT Chart Representation
4. Project Resources
(a) People
(b) Hardware and Software
(c) Special Resources
5. Staff Organization
(a) Team Structure
(b) Management Reporting
6. Risk Management Plan
(a) Risk Analysis
(b) Risk Identification
(c) Risk Estimation
(d) Risk Abatement Procedures
7. Project Tracking and Control Plan
8. Miscellaneous Plans
(a) Process Tailoring
(b) Quality Assurance Plan
(c) Configuration Management Plan
(d) Validation and Verification
(e) System Testing Plan
(f) Delivery, Installation, and Maintenance Plan

29.  Q: What is Sliding Window Planning?
     Answer:
Especially for large projects, it is difficult to make accurate plans. A part of this difficulty is due to the fact that the project parameters, scope of the project, project staff, etc. may change during the span of the project. In order to overcome this problem, sometimes project managers undertake project planning in stages. Planning a project over a number of stages protects managers from making big commitments too early. This technique of staggered planning is known as Sliding Window Planning. In Sliding Window Planning, starting with an initial plan, the project is planned more accurately in successive development stages. At the start of a project, project managers have incomplete knowledge about the details of the project. Their information base gradually improves as the project progresses through different phases. After the completion of every phase, the project managers can plan each subsequent phase more accurately and with increasing levels of confidence.

30.  Q: What are the different Metrics available for Project size estimation?
     Answer:
Accurate estimation of the problem size is fundamental to satisfactory estimation of the effort, time duration, and cost of a software project. In order to estimate the project size accurately, some important metrics must be defined in terms of which the project size can be expressed. The size of a problem is obviously not the number of bytes that the source code occupies, nor is it the byte size of the executable code; rather, the project size is a measure of the problem complexity in terms of the effort and time required to develop the product.
Currently, two metrics are widely used to estimate size:
·         Lines of code (LOC)
·         Function point (FP).
The usage of each of these metrics in project size estimation has its own advantages and disadvantages.

31.  Q: What is LOC? What are its advantages and disadvantages? [B.E. 2005]
       Answer:
LOC (Lines of Code) is the simplest of all the metrics available for estimating project size, and it is very popular precisely because of this simplicity. Using this metric, the project size is estimated by counting the number of source instructions in the developed program. Obviously, while counting the number of source instructions, lines used for commenting the code and header lines are ignored.
Determining the LOC count at the end of a project is a very simple job. However, accurately estimating the LOC count at the beginning of a project is very difficult. To estimate the LOC count at the beginning of a project, project managers usually divide the problem into modules, and each module into sub-modules, and so on, until the sizes of the different leaf-level modules can be approximately predicted; past experience in developing similar products is helpful here. By adding up the estimates of the leaf-level modules, project managers arrive at the total size estimate.
LOC as a measure of problem size has several shortcomings:
  • LOC gives a numerical value of problem size that can vary widely with individual coding style – different programmers lay out their code in different ways. For example, one programmer might write several source instructions on a single line whereas another might split a single instruction across several lines. Of course, this problem can be easily overcome by counting the language tokens in the program rather than the lines of code. However, a more intricate problem arises because the length of a program depends on the choice of instructions used in writing the program. Therefore, even for the same problem, different programmers might come up with programs having different LOC counts. This situation does not improve even if language tokens are counted instead of lines of code.

  • A good problem size measure should consider the overall complexity of the problem and the effort needed to solve it. That is, it should consider the total effort needed to specify, design, code, test, etc., and not just the coding effort. LOC, however, focuses on the coding activity alone; it merely computes the number of source lines in the final program. We have already seen that coding is only a small part of the overall software development activities. It is also wrong to argue that the overall product development effort is proportional to the effort required in writing the program code, because even though the design might be very complex, the code might be straightforward, and vice versa. In such cases, code size is a grossly improper indicator of problem size.

  • LOC measure correlates poorly with the quality and efficiency of the code. Larger code size does not necessarily imply better quality or higher efficiency. Some programmers produce lengthy and complicated code as they do not make effective use of the available instruction set. In fact, it is very likely that a poor and sloppily written piece of code might have larger number of source instructions than a piece that is neat and efficient.

  • The LOC metric penalizes the use of higher-level programming languages, code reuse, etc. The paradox is that if a programmer consciously uses several library routines, then the LOC count will be lower, which would show up as a smaller program size. Thus, if managers use the LOC count as a measure of the effort put in by different engineers (that is, of productivity), they would effectively be discouraging code reuse by engineers.

  • The LOC metric measures the lexical complexity of a program and does not address the more important, but subtle, issues of logical or structural complexity. Between two programs with equal LOC counts, a program having complex logic would require much more effort to develop than a program with very simple logic. To see why, compare the effort required to develop a program having multiple nested loop and decision constructs with that required for a program having only sequential control flow.

  • It is very difficult to accurately estimate the LOC of the final product from the problem specification. The LOC count can be accurately computed only after the code has been fully developed. Therefore, the LOC metric is of little use to project managers during project planning, since project planning is carried out even before any development activity starts. This is possibly the biggest shortcoming of the LOC metric from the project manager’s perspective.


32.  Q: What is Function point? What are its advantages and disadvantages?
       Answer:
Function point (FP)
The function point metric was proposed by Albrecht [1983]. This metric overcomes many of the shortcomings of the LOC metric. Since its inception in the late 1970s, the function point metric has been slowly gaining popularity. One important advantage of the function point metric is that it can be used to estimate the size of a software product directly from the problem specification. This is in contrast to the LOC metric, where the size can be accurately determined only after the product has been fully developed.
The conceptual idea behind the function point metric is that the size of a software product is directly dependent on the number of different functions or features it supports.
Fig. 3.2: System function as a map of input data to output data

A software product supporting many features would certainly be larger than a product with a smaller number of features. Each function, when invoked, reads some input data and transforms it into the corresponding output data. For example, the issue-book feature (as shown in fig. 3.2) of a Library Automation Software takes the name of the book as input and displays its location and the number of copies available. Thus, a count of the number of input and output data values of a system gives some indication of the number of functions the system supports. Albrecht postulated that in addition to the number of basic functions that software performs, its size also depends on the number of files and the number of interfaces.

Besides the number of input and output data values, the function point metric computes the size of a software product (in units of function points, or FPs) using three other characteristics of the product, as shown in the expression below. The size of a product in function points (FP) can be expressed as the weighted sum of these five problem characteristics. The weights associated with the five characteristics were proposed empirically and validated by observations over many projects. The function point is computed in two steps; the first step is to compute the unadjusted function point (UFP).
UFP = (Number of inputs)*4 + (Number of outputs)*5 + (Number of inquiries)*4 + (Number of files)*10 + (Number of interfaces)*7

Number of inputs: Each data item input by the user is counted. Data inputs should be distinguished from user inquiries; inquiries are user commands, such as print-account-balance, and are counted separately. It must be noted that individual data items input by the user are not each counted separately; rather, a group of related inputs is considered a single input.

For example, while entering data concerning an employee into employee payroll software, the data items name, age, sex, address, phone number, etc. are together considered a single input, since all these data items are related: they pertain to a single employee.

Number of outputs: The outputs considered are reports printed, screen outputs, error messages produced, etc. While counting the number of outputs, the individual data items within a report are not considered separately; a set of related data items is counted as one output.

Number of inquiries: Number of inquiries is the number of distinct interactive queries which can be made by the users. These inquiries are the user commands which require specific action by the system.

Number of files: Each logical file is counted. A logical file is a group of logically related data; thus, logical files can be data structures or physical files.

Number of interfaces: Here the interfaces considered are the interfaces used to exchange information with other systems. Examples of such interfaces are data files on tapes, disks, communication links with other systems etc.

Once the unadjusted function point (UFP) has been computed, the technical complexity factor (TCF) is computed next. The TCF refines the UFP measure by considering fourteen other factors, such as high transaction rates, throughput, and response time requirements. Each of these 14 factors is assigned a value from 0 (not present/no influence) to 5 (strong influence/essential). The resulting numbers are summed, yielding the total degree of influence (DI). The TCF is then computed as (0.65 + 0.01 × DI). As DI can vary from 0 to 70, TCF can vary from 0.65 to 1.35. Finally, FP = UFP × TCF.
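
The two-step computation can be written down directly; the following C sketch is illustrative (the function name and the sample counts are assumptions, and the weights are the average weights from the UFP expression above):

#include <stdio.h>

/* Step 1: UFP from the five counts (average weights).
   Step 2: TCF from the total degree of influence DI (0..70). */
static double function_points(int inputs, int outputs, int inquiries,
                              int files, int interfaces, int di)
{
    double ufp = inputs * 4 + outputs * 5 + inquiries * 4
               + files * 10 + interfaces * 7;
    double tcf = 0.65 + 0.01 * di;
    return ufp * tcf;
}

int main(void)
{
    /* Illustrative counts; DI = 30 means the 14 ratings sum to 30. */
    printf("FP = %.2f\n", function_points(30, 60, 20, 5, 2, 30));
    return 0;
}

With these sample counts, UFP = 564, TCF = 0.95, and FP = 535.80.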

Advantages:
·         This approach is independent of the language, tools, or methodologies used for implementation.
·         Function points can be estimated from requirement specification or design specification, thus making it possible to estimate development effort in early phases of development.
·         Function points are directly linked to the statement of requirements.

Disadvantages:
A major shortcoming of the function point measure is that it does not take into account the algorithmic complexity of the software. That is, the function point metric implicitly assumes that the effort required to design and develop any two functionalities of the system is the same. But we know that this is normally not true: the effort required to develop any two functionalities may vary widely. The metric only takes into consideration the number of functions that the system supports, without distinguishing the difficulty level of developing the various functionalities. To overcome this problem, an extension of the function point metric called the feature point metric has been proposed.

33. Q: What is Feature point metric? What are its advantages and disadvantages?
       Answer:
Feature point metric
A major shortcoming of the function point measure is that it does not take into account the algorithmic complexity of the software: it implicitly assumes that the effort required to develop any two functionalities is the same, which is normally not true, since the effort required may vary widely. The metric only considers the number of functions that the system supports, without distinguishing the difficulty level of developing them. To overcome this problem, an extension of the function point metric called the feature point metric has been proposed.
The feature point metric incorporates an extra parameter: algorithm complexity. This parameter ensures that the size computed using the feature point metric reflects the fact that the greater the complexity of a function, the greater the effort required to develop it, and therefore the larger its size compared with simpler functions.

34.  Q: What is the LOC for the given program?
       Answer:
The given program contains 18 lines of code, one of which is a comment line. So the LOC for the given program is 18 − 1 = 17, i.e. 17 LOC.
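
Since the original listing is not reproduced here, the following hypothetical C program of the same shape (18 physical lines, exactly one of them a comment) illustrates the counting rule, treating every non-comment source line as code:

/* hypothetical stand-in: checks the sum of the integers 1..10 */
int main(void)
{
    int i;
    int sum;
    int ok;
    sum = 0;
    for (i = 1; i <= 10; i++)
    {
        sum = sum + i;
    }
    ok = 0;
    if (sum == 55)
    {
        ok = 1;
    }
    return ok ? 0 : 1;
}

Counting the 18 lines and ignoring the single comment line gives 18 − 1 = 17 LOC.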

35.  Q: Consider a project with the following functional units:
                 Number of user inputs                =50
                 Number of user outputs              =40
                 Number of user enquires             =35
                 Number of user files                   =06
                 Number of external interfaces     =04

Assume all complexity adjustment factors and weighting factors are average. Compute the function point for the project.
    
     Answer:
 We know:
            UFP = 50 × 4 + 40 × 5 + 35 × 4 + 06 × 10 + 04 × 7 = 628
            TCF = 0.65 + 0.01 × (14 × 3) = 1.07
            FP = UFP × TCF = 628 × 1.07 = 671.96 ≈ 672
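
The same arithmetic as a runnable C check (a minimal sketch; the variable names are illustrative):

#include <stdio.h>

int main(void)
{
    double ufp = 50 * 4 + 40 * 5 + 35 * 4 + 6 * 10 + 4 * 7; /* 628 */
    double di  = 14 * 3;              /* all 14 factors rated average (3) */
    double tcf = 0.65 + 0.01 * di;    /* 1.07 */
    printf("FP = %.2f\n", ufp * tcf); /* prints FP = 671.96, i.e. about 672 */
    return 0;
}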


36. Q: What are the different functional units used in Function Point Estimation?
Answer:
The functional units used in FP estimation are classified as low, average, or high based on the complexity of the software product. The weighting factors are:


Functional Units                 Weighting Factors
                                  Low     Average     High
Inputs (I)                         3         4          6
Outputs (O)                        4         5          7
Inquiries (E)                      3         4          6
Number of files (F)                7        10         15
Number of interfaces (IF)          5         7         10

The 14 TCF factors used in FP estimation are each rated on a scale from 0 to 5:
·         0 (No influence)
·         1 (Incidental)
·         2 (Moderate)
·         3 (Average)
·         4 (Significant)
·         5 (Essential)

37. Q: What are the different project estimation techniques?
Answer:
Estimation of various project parameters is a basic project planning activity. The important project parameters that are estimated include: project size, effort required to develop the software, project duration, and cost. These estimates not only help in quoting the project cost to the customer, but are also useful in resource planning and scheduling. There are three broad categories of estimation techniques:
• Empirical estimation techniques
• Heuristic techniques
• Analytical estimation techniques



38. Q: What is an empirical estimation technique? What are different empirical estimation techniques?
Answer:
Empirical estimation techniques are based on making an educated guess of the project parameters. While using this technique, prior experience with development of similar products is helpful. Although empirical estimation techniques are based on common sense, different activities involved in estimation have been formalized over the years. Two popular empirical estimation techniques are: Expert judgment technique and Delphi cost estimation.

Expert Judgment Technique
Expert judgment is one of the most widely used estimation techniques. In this approach, an expert makes an educated guess of the problem size after analyzing the problem thoroughly. Usually, the expert estimates the costs of the different components (i.e. modules or subsystems) of the system and then combines these estimates to arrive at the overall estimate. However, this technique is subject to human error and individual bias, and it is possible that the expert may inadvertently overlook some factors. Further, an expert making an estimate may not have experience and knowledge of all aspects of a project; for example, he may be conversant with the database and user interface parts but not very knowledgeable about the computer communication part. A more refined form of expert judgment is estimation by a group of experts, which minimizes factors such as individual oversight, lack of familiarity with a particular aspect of the project, personal bias, and the desire to win the contract through overly optimistic estimates. However, an estimate made by a group of experts may still exhibit bias on issues where the entire group is biased, for example due to political considerations; the group's decision may also be dominated by overly assertive members.

Delphi cost estimation
The Delphi cost estimation approach tries to overcome some of the shortcomings of the expert judgment approach. Delphi estimation is carried out by a team comprising a group of experts and a coordinator. In this approach, the coordinator provides each estimator with a copy of the software requirements specification (SRS) document and a form for recording his cost estimate. Estimators complete their individual estimates anonymously and submit them to the coordinator, mentioning any unusual characteristics of the product that influenced their estimates. The coordinator prepares and distributes a summary of all the estimators' responses, including any unusual rationale noted by any of the estimators, and based on this summary the estimators re-estimate. This process is iterated for several rounds; however, no discussion among the estimators is allowed during the entire estimation process, the idea being that open discussion would let many estimators be swayed by the rationale of a more experienced or senior estimator. After the completion of several iterations of estimation, the coordinator compiles the results and prepares the final estimate.

39. Q: What is a Heuristic estimation technique? What are different Heuristic estimation techniques?
Answer:
Heuristic techniques assume that the relationships among the different project parameters can be modeled using suitable mathematical expressions. Once the basic (independent) parameters are known, the other (dependent) parameters can be easily determined by substituting the values of the basic parameters into the mathematical expression. Different heuristic estimation models can be divided into two classes: single variable models and multivariable models.
Single variable estimation models provide a means to estimate the desired characteristics of a problem, using some previously estimated basic (independent) characteristic of the software product such as its size. A single variable estimation model takes the following form:
Estimated Parameter = c1 * e^d1
In the above expression, e is the characteristic of the software which has already been estimated (independent variable). Estimated Parameter is the dependent parameter to be estimated. The dependent parameter to be estimated could be effort, project duration, staff size, etc. c1 and d1 are constants. The values of the constants c1 and d1 are usually determined using data collected from past projects (historical data). The basic COCOMO model is an example of single variable cost estimation model. A multivariable cost estimation model takes the following form:
Estimated Resource = c1*e1^d1 + c2*e2^d2 + ...
Where e1, e2, … are the basic (independent) characteristics of the software already estimated, and c1, c2, d1, d2, … are constants. Multivariable estimation models are expected to give more accurate estimates compared to the single variable models, since a project parameter is typically influenced by several independent parameters. The independent parameters influence the dependent parameter to different extents. This is modeled by the constants c1, c2, d1, d2, … . Values of these constants are usually determined from historical data. The intermediate COCOMO model can be considered to be an example of a multivariable estimation model.
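
For illustration, the Python sketch below evaluates both forms directly; the constant values used in the sample call are only examples, borrowed from the basic COCOMO organic-mode constants discussed later.

# Single-variable model: Estimated Parameter = c1 * e^d1.
def single_variable_estimate(e, c1, d1):
    return c1 * (e ** d1)

# Multivariable model: Estimated Resource = c1*e1^d1 + c2*e2^d2 + ...
def multivariable_estimate(terms):
    # terms is a list of (e_i, c_i, d_i) tuples
    return sum(c * (e ** d) for e, c, d in terms)

# Sample use: effort for a 30 KLOC product, with the basic COCOMO
# organic-mode constants as example values for c1 and d1.
print(single_variable_estimate(30, 2.4, 1.05))   # ~85 person-months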

40. Q: What is Halstead's Software Science? What different quantities does it estimate?
Answer:
Halstead’s Software Science – An Analytical Technique  
Halstead’s software science is an analytical technique to measure the size, development effort, and development cost of software products. Halstead used a few primitive program parameters to develop expressions for overall program length, potential minimum volume, actual volume, effort, and development time.
For a given program, let:
• η1 be the number of unique operators used in the program,
• η2 be the number of unique operands used in the program,
• N1 be the total number of operators used in the program,
• N2 be the total number of operands used in the program.

Length and Vocabulary
The length of a program as defined by Halstead, quantifies total usage of all operators and operands in the program. Thus, program length N = N1 +N2. The program vocabulary is the number of unique operators and operands used in the program. Thus, program vocabulary η = η1 + η2.
Program Volume
The length of a program (i.e. the total number of operators and operands used in the code) depends on the choice of the operators and operands used. Thus, while expressing program size, the programming language used must be taken into consideration:
V = N * log2(η)
Here the program volume V is the minimum number of bits needed to encode the program. In fact, to represent η different identifiers uniquely, at least log2(η) bits (where η is the program vocabulary) will be needed. In this scheme, N*log2(η) bits will be needed to store a program of length N. Therefore, the volume V represents the size of the program by approximately compensating for the effect of the programming language used.

Potential Minimum Volume
The potential minimum volume V* is defined as the volume of the most succinct program in which a problem can be coded: V* = (2 + η2) * log2(2 + η2).
The program level L is given by L = V*/V. The concept of program level L is introduced in an attempt to measure the level of abstraction provided by the programming language. Using this definition, languages can be ranked into levels that also appear intuitively correct.

Effort and Time
The effort required to develop a program can be obtained by dividing the program volume by the level of the programming language used to develop the code. Thus, effort E = V/L, where E is the number of mental discriminations required to implement the program and also the effort required to read and understand the program. Since L = V*/V, the programming effort E = V²/V* varies as the square of the volume. Experience shows that E is well correlated with the effort needed for maintenance of an existing program. The programmer’s time T = E/S, where S is the speed of mental discriminations. The value of S has been empirically derived from psychological reasoning, and its recommended value for programming applications is 18.

41. Q: Let us consider the following C program.
main( )
{
int a, b, c, avg;
scanf(“%d %d %d”, &a, &b, &c);
avg = (a+b+c)/3;
printf(“avg = %d”, avg);
                                       }

Find out the estimated length, and program volume of the above given program.
Answer:
The unique operators are:
main, (), {}, int, scanf, &, ",", ";", =, +, /, printf

The unique operands are:
a, b, c, &a, &b, &c, a+b+c, avg, 3,
“%d %d %d”, “avg = %d”

Therefore, η1 = 12, η2 = 11
Estimated Length = (12*log2(12) + 11*log2(11))
= (12*3.58 + 11*3.45)
= (43 + 38) = 81
Volume = Length * log2(η) = 81 * log2(23) = 81 * 4.52 = 366
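
These figures can be checked mechanically. The following minimal Python sketch recomputes the estimated length and volume from the counts above (math.log2 is the base-2 logarithm used throughout Halstead's formulas):

import math

eta1, eta2 = 12, 11    # unique operators and operands counted above

# Halstead's estimated length: N = eta1*log2(eta1) + eta2*log2(eta2)
est_length = eta1 * math.log2(eta1) + eta2 * math.log2(eta2)

# Vocabulary eta = eta1 + eta2, and volume V = N * log2(eta)
eta = eta1 + eta2
volume = round(est_length) * math.log2(eta)

print(round(est_length))   # 81
print(round(volume))       # 366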


42. Q: What are the different classifications of Software development?
Answer:
Boehm postulated that any software development project can be classified into one of the following three categories based on the development complexity: organic, semidetached, and embedded. Boehm not only considered the characteristics of the product but also those of the development team and development environment. Boehm’s [1981] definitions of organic, semidetached, and embedded systems are elaborated below.
Organic: A development project can be considered of organic type, if the project deals with developing a well understood application program, the size of the development team is reasonably small, and the team members are experienced in developing similar types of projects.
Semidetached: A development project can be considered of semidetached type, if the development consists of a mixture of experienced and inexperienced staff. Team members may have limited experience on related systems but may be unfamiliar with some aspects of the system being developed.
Embedded: A development project is considered to be of embedded type, if the software being developed is strongly coupled to complex hardware, or if stringent regulations on the operational procedures exist.

43. Q: What is COCOMO? What are different types of COCOMO model? State the Basic COCOMO model.
Answer:
COCOMO (COnstructive COst MOdel) was proposed by Boehm.
According to Boehm, software cost estimation should be done through three stages: Basic COCOMO, Intermediate COCOMO, and Complete COCOMO.

Basic COCOMO Model
The basic COCOMO model gives an approximate estimate of the project parameters. The basic COCOMO estimation model is given by the following expressions:
Effort = a1 × (KLOC)^a2 PM
Tdev = b1 × (Effort)^b2 Months
Where
• KLOC is the estimated size of the software product expressed in Kilo Lines of Code,

• a1, a2, b1, b2 are constants for each category of software products,

• Tdev is the estimated time to develop the software, expressed in months,

• Effort is the total effort required to develop the software product, expressed in person months (PMs).


The effort estimation is expressed in units of person-months (PM). It is the area under the person-month plot (as shown in fig. 11.3). It should be carefully noted that an effort of 100 PM does not imply that 100 persons should work for 1 month nor does it imply that 1 person should be employed for 100 months, but it denotes the area under the person-month curve (as shown in fig. 11.3).
According to Boehm, every line of source text should be counted as one LOC irrespective of the actual number of instructions on that line. Thus, if a single instruction spans several lines (say n lines), it is considered to be n LOC. The values of a1, a2, b1, b2 for the different categories of products (i.e. organic, semidetached, and embedded) as given by Boehm [1981] are summarized below. He derived the above expressions by examining historical data collected from a large number of actual projects.

Estimation of development effort : For the three classes of software products, the formulas for estimating the effort based on the code size are shown below:
Organic:                     Effort = 2.4 × (KLOC)^1.05 PM
Semi-detached:          Effort = 3.0 × (KLOC)^1.12 PM
Embedded:                Effort = 3.6 × (KLOC)^1.20 PM
Estimation of development time: For the three classes of software products, the formulas for estimating the development time based on the effort are given below:
Organic:                     Tdev = 2.5 × (Effort)^0.38 Months
Semi-detached:          Tdev = 2.5 × (Effort)^0.35 Months
Embedded:                Tdev = 2.5 × (Effort)^0.32 Months
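
The basic COCOMO expressions are straightforward to code. The sketch below (illustrative, not part of Boehm's formulation) tabulates the constants given above and reproduces, for example, the organic-mode figures worked out in question 45 below:

# Basic COCOMO constants (a1, a2, b1, b2) per product category, Boehm [1981].
COCOMO = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a1, a2, b1, b2 = COCOMO[mode]
    effort = a1 * (kloc ** a2)    # person-months
    tdev = b1 * (effort ** b2)    # months
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")
print(round(effort), "PM,", round(tdev), "months")   # 91 PM, 14 months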

44. Q: What are different problems associated with Basic COCOMO model?
Answer:
Some insight into the basic COCOMO model can be obtained by plotting the estimated characteristics for different software sizes. Fig. 11.4 shows a plot of estimated effort versus product size. From fig. 11.4, we can observe that the effort is somewhat superlinear in the size of the software product. Thus, the effort required to develop a product increases very rapidly with project size.

The development time versus the product size in KLOC is plotted in fig. 11.5. From fig. 11.5, it can be observed that the development time is a sublinear function of the size of the product, i.e. when the size of the product increases by two times, the time to develop the product does not double but rises moderately. This can be explained by the fact that for larger products, a larger number of activities which can be carried out concurrently can be identified. The parallel activities can be carried out simultaneously by the engineers. This reduces the time to complete the project. Further, from fig. 11.5, it can be observed that the development time is roughly the same for all the three categories of products. For example, a 60 KLOC program can be developed in approximately 18 months, regardless of whether it is of organic, semidetached, or embedded type.

From the effort estimation, the project cost can be obtained by multiplying the required effort by the manpower cost per month. But, implicit in this project cost computation is the assumption that the entire project cost is incurred on account of the manpower cost alone. In addition to manpower cost, a project would incur costs due to hardware and software required for the project and the company overheads for administration, office space, etc. It is important to note that the effort and duration estimates obtained using the COCOMO model are called the nominal effort estimate and nominal duration estimate. The term nominal implies that if anyone tries to complete the project in a time shorter than the estimated duration, then the cost will increase drastically. But, if anyone completes the project over a longer period of time than estimated, then there is almost no decrease in the estimated cost value.

45. Q: Assume that the size of an organic type software product has been estimated to be 32,000 lines of source code. Assume that the average salary of software engineers be Rs. 15,000/- per month. Determine the effort required to develop the software product and the nominal development time.
Answer:
We know,
Effort = 2.4 × (32)^1.05 = 91 PM
Nominal development time = 2.5 × (91)^0.38 = 14 months
Cost required to develop the product = Effort × manpower cost per month = 91 × 15,000
= Rs. 1,365,000/-

46. Q: A project of size 200 KLOC is to be developed. The software development team has average experience on similar types of projects. The project schedule is not very tight. Calculate the effort, development time, average staff size, and productivity of the project.
Answer:
The semi-detached mode is the most appropriate, keeping in view the size, schedule, and experience of the development team.
Hence,
Effort = 3.0 × (200)^1.12 = 1133.12 PM = E
Nominal development time = 2.5 × (1133.12)^0.35 = 29.3 months = D
Average staff size (SS) = E/D = 1133.12/29.3 = 38.67 persons
Productivity (P) = 200/1133.12 = 0.1765 KLOC/PM = 176 LOC/PM
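
As a quick check, the same figures can be recomputed with a few lines of Python (an illustrative sketch using the semi-detached constants):

effort = 3.0 * (200 ** 1.12)     # ~1133 PM
tdev = 2.5 * (effort ** 0.35)    # ~29.3 months
staff = effort / tdev            # ~38.7 persons
productivity = 200 / effort      # ~0.1765 KLOC/PM, i.e. ~176 LOC/PM
print(round(effort, 2), round(tdev, 1), round(staff, 2), int(productivity * 1000))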

47.    Q: Suppose that a project was estimated to be 400 KLOC. Calculate the effort and development time for each of the three modes, i.e. Organic, Semi-detached and Embedded.
Answer:
The basic COCOMO equations take the form:
                           Effort (E) = a × (KLOC)^b
                           Development time (D) = c × (E)^d
Estimated size of the project = 400 KLOC
      (I).    Organic mode
                           Effort (E) = 2.4 × (400)^1.05 = 1295.31 PM
                           Development time (D) = 2.5 × (1295.31)^0.38 = 38.07 M

     (II).    Semidetached mode
                           Effort (E) = 3.0 × (400)^1.12 = 2462.79 PM
                           Development time (D) = 2.5 × (2462.79)^0.35 = 38.45 M
    (III).    Embedded mode
                           Effort (E) = 3.6 × (400)^1.20 = 4772.81 PM
                           Development time (D) = 2.5 × (4772.81)^0.32 = 37.6 M
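
The three computations can be reproduced with a short loop over the basic COCOMO constants tabulated earlier (an illustrative sketch):

for mode, (a, b, c, d) in {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}.items():
    e = a * (400 ** b)       # effort in PM
    t = c * (e ** d)         # development time in months
    print(mode, round(e, 2), "PM,", round(t, 2), "months")
# organic ~1295 PM / 38.1 M; semidetached ~2463 PM / 38.5 M; embedded ~4773 PM / 37.6 M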

48. Q: How do the Intermediate and Complete COCOMO models work?
Answer:
Intermediate COCOMO model
The basic COCOMO model assumes that effort and development time are functions of the product size alone. However, a host of other project parameters besides the product size affect the effort required to develop the product as well as the development time. Therefore, in order to obtain an accurate estimation of the effort and project duration, the effect of all relevant parameters must be taken into account. The intermediate COCOMO model recognizes this fact and refines the initial estimate obtained using the basic COCOMO expressions by using a set of 15 cost drivers (multipliers) based on various attributes of software development. For example, if modern programming practices are used, the initial estimates are scaled downward by multiplication with a cost driver having a value less than 1. If there are stringent reliability requirements on the software product, this initial estimate is scaled upward. Boehm requires the project manager to rate these 15 different parameters for a particular project on a scale of one to three. Then, depending on these ratings, he suggests appropriate cost driver values which should be multiplied with the initial estimate obtained using the basic COCOMO. In general, the cost drivers can be classified as being attributes of the following items:
  • Product: The characteristics of the product that are considered include the inherent complexity of the product, reliability requirements of the product, etc.
  • Computer: Characteristics of the computer that are considered include the execution speed required, storage space required etc.
  • Personnel: The attributes of development personnel that are considered include the experience level of personnel, programming capability, analysis capability, etc.
  • Development Environment: Development environment attributes capture the development facilities available to the developers. An important parameter that is considered is the sophistication of the automation (CASE) tools used for software development.

Complete COCOMO model
A major shortcoming of both the basic and intermediate COCOMO models is that they consider a software product as a single homogeneous entity. However, most large systems are made up of several smaller sub-systems. These sub-systems may have widely different characteristics. For example, some sub-systems may be considered as organic type, some semidetached, and some embedded. Not only may the inherent development complexity of the subsystems differ, but for some subsystems the reliability requirements may be high, for some the development team might have no previous experience of similar development, and so on. The complete COCOMO model considers these differences in characteristics of the subsystems and estimates the effort and development time as the sum of the estimates for the individual subsystems. The cost of each subsystem is estimated separately. This approach reduces the margin of error in the final estimate.
The following development project can be considered as an example application of the complete COCOMO model. A distributed Management Information System (MIS) product for an organization having offices at several places across the country can have the following sub-components:
• Database part
• Graphical User Interface (GUI) part
• Communication part
Of these, the communication part can be considered as embedded software. The database part could be semi-detached software, and the GUI part organic software. The costs for these three components can be estimated separately, and summed up to give the overall cost of the system.
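
As an illustrative sketch of the complete COCOMO idea, suppose the three components are estimated at 40 KLOC, 30 KLOC, and 20 KLOC respectively; these sizes are invented for the example, and the basic COCOMO expressions are used per subsystem for brevity (the full model would further scale each estimate by its cost drivers):

# Hypothetical subsystem sizes (KLOC) with their Boehm categories; the
# basic COCOMO effort expression is applied per subsystem for brevity.
subsystems = [
    ("database (semi-detached)", 40, 3.0, 1.12),
    ("GUI (organic)",            30, 2.4, 1.05),
    ("communication (embedded)", 20, 3.6, 1.20),
]

total_effort = 0.0
for name, kloc, a, b in subsystems:
    effort = a * (kloc ** b)
    total_effort += effort
    print(name, round(effort), "PM")      # ~187, ~85, ~131 PM respectively
print("total:", round(total_effort), "PM")   # ~403 PM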

49. Q: What do you mean by Staffing level estimation? Describe the Putnam’s work for staffing level estimation.
Answer:
Staffing level estimation
Once the effort required to develop the software has been determined, it is necessary to determine the staffing requirement for the project. Putnam was the first to study the problem of what should be a proper staffing pattern for software projects. He extended the work of Norden, who had earlier investigated the staffing pattern of research and development (R&D) type projects. In order to appreciate the staffing pattern of software projects, Norden’s and Putnam’s results must be understood.

Putnam’s Work
Putnam studied the problem of staffing of software projects and found that the software development has characteristics very similar to other R & D projects studied by Norden and that the Rayleigh-Norden curve can be used to relate the number of delivered lines of code to the effort and the time required to develop the project. By analyzing a large number of army projects, Putnam derived the following expression:
L = Ck × K^(1/3) × td^(4/3)
The various terms of this expression are as follows:
• K is the total effort expended (in PM) in the product development and L is the product size in KLOC.
• td corresponds to the time of system and integration testing. Therefore, td can be approximately considered as the time required to develop the software.
  • Ck is the state of technology constant and reflects constraints that impede the progress of the programmer. Typical values are Ck = 2 for a poor development environment (no methodology, poor documentation and reviews, etc.), Ck = 8 for a good software development environment (software engineering principles are adhered to), and Ck = 11 for an excellent environment (in addition to following software engineering principles, automated tools and techniques are used). The exact value of Ck for a specific project can be computed from the historical data of the organization developing it.

Putnam suggested that optimal staff build-up on a project should follow the Rayleigh curve. Only a small number of engineers are needed at the beginning of a project to carry out planning and specification tasks. As the project progresses and more detailed work is required, the number of engineers reaches a peak. After implementation and unit testing, the number of project staff falls. However, the staff build-up should not be carried out in large installments. The team size should be increased or decreased slowly whenever required to match the Rayleigh-Norden curve. Experience shows that a very rapid build-up of project staff any time during project development correlates with schedule slippage. It should be clear that a constant level of manpower throughout the project duration would lead to wastage of effort and increase the time and effort required to develop the product. If a constant number of engineers is used over all the phases of a project, some phases would be overstaffed and others understaffed, causing inefficient use of manpower and leading to schedule slippage and increase in cost.

50. Q: Describe the Norden’s work for staffing level estimation. Give the drawback of Putnam’s work.
Answer:
Norden studied the staffing patterns of several R & D projects. He found that the staffing pattern can be approximated by the Rayleigh distribution curve (as shown in fig. 11.6). Norden represented the Rayleigh curve by the following equation:
E = (K/td^2) * t * e^(-t^2 / (2*td^2))
Where E is the effort required at time t. E is an indication of the number of engineers (or the staffing level) at any particular time during the duration of the project, K is the area under the curve, and td is the time at which the curve attains its maximum value.
It must be remembered that the results of Norden are applicable to general R & D projects and were not meant to model the staffing pattern of software development projects.

Drawback of Putnam’s Works:
By analyzing a large number of army projects, Putnam derived the following expression:
L = Ck × K^(1/3) × td^(4/3)
Where K is the total effort expended (in PM) in the product development, L is the product size in KLOC, td corresponds to the time of system and integration testing, and Ck is the state of technology constant that reflects constraints impeding the progress of the programmer.
Now, rearranging the above expression, we obtain
K = L^3/(Ck^3 × td^4)
Or,
K = C/td^4
where, for the same product size, C = L^3/Ck^3 is a constant.
Hence, K1/K2 = (td2)^4/(td1)^4
or, K ∝ 1/td^4
or, cost ∝ 1/td^4
(as project development effort is directly proportional to project development cost)
From the above expression, it can be easily observed that when the schedule of a project is compressed, the required development effort as well as project development cost increases in proportion to the fourth power of the degree of compression. It means that a relatively small compression in delivery schedule can result in substantial penalty of human effort as well as development cost. For example, if the estimated development time is 1 year, then in order to develop the product in 6 months, the total effort required to develop the product (and hence the project cost) increases 16 times.
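
The fourth-power penalty is easy to demonstrate numerically (a minimal sketch; the 2× compression case reproduces the 16 times figure above):

# Effort varies as 1/td^4, so K_new / K_old = (td_old / td_new)^4.
def compression_penalty(td_old, td_new):
    return (td_old / td_new) ** 4

print(compression_penalty(12, 6))    # 16.0: halving a 12-month schedule
print(compression_penalty(12, 10))   # ~2.07: even a 2-month squeeze doubles effort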

51. Q: A software project is planned to cost 95 PY in a period of 1 year and 9 months. Calculate the peak manning and average rate of software team build up.
Answer:
Software project cost, K = 95 PY
Peak development time, td = 1.75 years
Peak manning, m0 = K/(td × (e)^(1/2)) = 95/(1.75 × 1.648) = 33 persons
Average rate of software team build-up = m0/td = 33/1.75 = 18.8 persons/year
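
A quick arithmetic check of this answer (a sketch; K is in person-years and td in years, so the build-up rate comes out in persons per year):

import math

K, td = 95, 1.75                    # effort (PY) and peak time (years)
m0 = K / (td * math.sqrt(math.e))   # peak manning on the Rayleigh curve
print(round(m0))                    # 33 persons
print(round(m0 / td, 1))            # 18.8 persons/year build-up rate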

52. Q: What are the different steps to perform project scheduling?
Answer:
Project-task scheduling is an important project planning activity. It involves deciding which tasks would be taken up when. In order to schedule the project activities, a software project manager needs to do the following:
1. Identify all the tasks needed to complete the project.
2. Break down large tasks into small activities.
3. Determine the dependency among different activities.
4. Establish the most likely estimates for the time durations necessary to complete the activities.
5. Allocate resources to activities.
6. Plan the starting and ending dates for various activities.
7. Determine the critical path. A critical path is the chain of activities that determines the duration of the project.

The first step in scheduling a software project involves identifying all the tasks necessary to complete the project. Next, the large tasks are broken down into a logical set of small activities which can be assigned to different engineers. The work breakdown structure formalism helps the manager to break down the tasks systematically. After the project manager has broken down the tasks and created the work breakdown structure, he has to find the dependencies among the activities. The dependencies among the activities are represented in the form of an activity network. Once the activity network representation has been worked out, resources are allocated to each activity. Resource allocation is typically done using a Gantt chart. After resource allocation is done, a PERT chart representation is developed. The PERT chart representation is suitable for program monitoring and control. For task scheduling, the project manager needs to decompose the project tasks into a set of activities. The time frame when each activity is to be performed is then determined. The end of each activity is called a milestone. The project manager tracks the progress of a project by monitoring the timely completion of the milestones. If he observes that the milestones start getting delayed, then he has to carefully control the activities so that the overall deadline can still be met.

53. Q: What is Work breakdown structure?
Answer:
Work Breakdown Structure (WBS) is used to decompose a given task set recursively into small activities. WBS provides a notation for representing the major tasks that need to be carried out in order to solve a problem. The root of the tree is labeled with the problem name. Each node of the tree is broken down into smaller activities that are made the children of the node. Each activity is recursively decomposed into smaller sub-activities until, at the leaf level, each activity requires approximately two weeks of development effort. Fig. 3.7 represents the WBS of MIS (Management Information System) software. While breaking down a task into smaller tasks, the manager has to make some hard decisions. If a task is broken down into a large number of very small activities, these can be carried out independently. Thus, it becomes possible to develop the product faster (with the help of additional manpower). Therefore, to be able to complete a project in the least amount of time, the manager needs to break large tasks into smaller ones, expecting to find more parallelism. However, it is not useful to subdivide tasks into units which take less than a week or two to execute. Very fine subdivision means that a disproportionate amount of time must be spent on preparing and revising various charts.


Fig. 3.7: Work breakdown structure of an MIS problem
54. Q: How is an activity network constructed? What is the critical path method?
Answer:
The WBS representation of a project is transformed into an activity network by representing the activities identified in the WBS along with their interdependencies. An activity network shows the different activities making up a project, their estimated durations, and their interdependencies (as shown in fig. 3.8). Each activity is represented by a rectangular node, and the duration of the activity is shown alongside each task.
Fig. 3.8: Activity network representation of the MIS problem

Managers can estimate the time durations for the different tasks in several ways. One possibility is that they can empirically assign durations to different tasks. This, however, is not a good idea, because software engineers often resent such unilateral decisions. A possible alternative is to let each engineer himself estimate the time for the activities assigned to him. However, some managers prefer to estimate the time for various activities themselves. Many managers believe that an aggressive schedule motivates the engineers to do a better and faster job. However, careful experiments have shown that unrealistically aggressive schedules not only cause engineers to compromise on intangible quality aspects, but are also a cause of schedule delays. A good way to achieve accurate estimation of task durations without creating undue schedule pressure is to have people set their own schedules.
A critical task is one with a zero slack time. A path from the start node to the finish node containing only critical tasks is called a critical path. A critical path is the chain of activities that determines the duration of the project.

55. Q: For the activity network shown in figure 3.8, use CPM to find the critical path.
Answer:
From the activity network representation, the following analysis can be made. The minimum time (MT) to complete the project is the maximum of all paths from start to finish. The earliest start (ES) time of a task is the maximum of all paths from the start to the task. The latest start time (LS) is the difference between MT and the maximum of all paths from this task to the finish. The earliest finish time (EF) of a task is the sum of the earliest start time of the task and the duration of the task. The latest finish (LF) time of a task can be obtained by subtracting the maximum of all paths from this task to the finish from MT. The slack time (ST) is (LF – EF), and can equivalently be written as (LS – ES). The slack time (or float time) is the total time that a task may be delayed before it will affect the end time of the project. The slack time indicates the “flexibility” in starting and completing tasks. A critical task is one with zero slack time. A path from the start node to the finish node containing only critical tasks is called a critical path. These parameters for the different tasks of the MIS problem are shown in the following table.

Task                  ES    EF    LS    LF    ST
Specification          0    15     0    15     0
Design database       15    60    15    60     0
Design GUI part       15    45    90   120    75
Code database         60   165    60   165     0
Code GUI part         45    90   120   165    75
Integrate and test   165   285   165   285     0
Write user manual     15    75   225   285   210

So, the critical path (the chain of zero-slack tasks) is Specification → Design database → Code database → Integrate and test, represented by the dark line in the figure.
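
The table can be reproduced mechanically with a forward pass (ES/EF) and a backward pass (LS/LF) over the activity network. In the sketch below, the durations and dependency lists are inferred from the ES/EF values in the table and fig. 3.8:

# Activity network: name -> (duration, [predecessors]); durations and
# dependencies inferred from fig. 3.8 and the ES/EF values in the table.
tasks = {
    "Specification":      (15,  []),
    "Design database":    (45,  ["Specification"]),
    "Design GUI part":    (30,  ["Specification"]),
    "Code database":      (105, ["Design database"]),
    "Code GUI part":      (45,  ["Design GUI part"]),
    "Write user manual":  (60,  ["Specification"]),
    "Integrate and test": (120, ["Code database", "Code GUI part"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
# Insertion order above is topological, so predecessors are computed first.
ES, EF = {}, {}
for t, (dur, preds) in tasks.items():
    ES[t] = max((EF[p] for p in preds), default=0)
    EF[t] = ES[t] + dur

MT = max(EF.values())    # minimum time to complete the project: 285

# Backward pass: latest finish (LF) and latest start (LS).
LF = {t: MT for t in tasks}
for t in reversed(list(tasks)):
    dur, preds = tasks[t]
    for p in preds:
        LF[p] = min(LF[p], LF[t] - dur)
LS = {t: LF[t] - tasks[t][0] for t in tasks}

# Slack ST = LS - ES; the zero-slack tasks form the critical path.
for t in tasks:
    print(t, ES[t], EF[t], LS[t], LF[t], LS[t] - ES[t])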

56. Q: What is a Gantt chart? Draw the Gantt chart for the activity network diagram shown in figure 3.8.
Answer:
Gantt charts are mainly used to allocate resources to activities. The resources allocated to activities include staff, hardware, and software. A Gantt chart is a special type of bar chart where each bar represents an activity. The bars are drawn along a time line. The length of each bar is proportional to the duration of time planned for the corresponding activity.
The Gantt charts used in software project management are actually an enhanced version of the standard Gantt chart. In the Gantt charts used for software project management, each bar consists of a white part and a shaded part. The shaded part of the bar shows the length of time each task is estimated to take. The white part shows the slack time, that is, the latest time by which a task must be finished.
The Gantt chart for the activity network diagram in fig. 3.8 is as below:

57. Q: What is a PERT chart? Why is it used?
Answer:
PERT (Program Evaluation and Review Technique) charts consist of a network of boxes and arrows. The boxes represent activities and the arrows represent task dependencies. A PERT chart represents the statistical variations in the project estimates assuming a normal distribution. Thus, in a PERT chart, instead of making a single estimate for each task, pessimistic, likely, and optimistic estimates are made. The boxes of PERT charts are usually annotated with the pessimistic, likely, and optimistic estimates for every task. Since all possible completion times between the minimum and maximum duration for every task have to be considered, there is not one but many critical paths, depending on the permutations of the estimates for each task. This makes critical path analysis in PERT charts very complex. A critical path in a PERT chart is shown by using thicker arrows. The PERT chart representation of the MIS problem of fig. 11.8 is shown in fig. 11.10. PERT charts are a more sophisticated form of activity chart. In activity diagrams, only the estimated task durations are represented. Since the actual durations might vary from the estimated durations, the utility of activity diagrams is limited.
Gantt chart representation of a project schedule is helpful in planning the utilization of resources, while PERT chart is useful for monitoring the timely progress of activities. Also, it is easier to identify parallel activities in a project using a PERT chart. Project managers need to identify the parallel activities in a project for assignment to different engineers.
Fig. 11.10:



58. Q: What do you mean by Organization Structure? What are Different Organizational Formats? Differentiate them.
                                                            Or
What do you mean by Functional Format and Project Format?? Differentiate them.
Answer:
Usually every software development organization handles several projects at any time. Software organizations assign different teams of engineers to handle different software projects. Each type of organization structure has its own advantages and disadvantages, so the issue “how is the organization as a whole structured?” must be taken into consideration so that each software project can be finished before its deadline.

Functional format vs. project format
There are essentially two broad ways in which a software development organization can be structured: functional format and project format. In the project format, the project development staff are divided based on the project for which they work (as shown in fig. 12.1). In the functional format, the development staff are divided based on the functional group to which they belong. The different projects borrow engineers from the required functional groups for specific phases to be undertaken in the project and return them to the functional group upon the completion of the phase.

In the functional format, different teams of programmers perform different phases of a project. For example, one team might do the requirements specification, another do the design, and so on. The partially completed product passes from one team to another as the project evolves. Therefore, the functional format requires considerable communication among the different teams because the work of one team must be clearly understood by the subsequent teams working on the project. This requires good quality documentation to be produced after every activity.

In the project format, a set of engineers is assigned to the project at the start of the project and they remain with the project till the completion of the project. Thus, the same team carries out all the life cycle activities. Obviously, the functional format requires more communication among teams than the project format, because one team must understand the work done by the previous teams.

Advantages of functional organization over project organization
Even though greater communication among the team members may appear as an avoidable overhead, the functional format has many advantages. The main advantages of a functional organization are:
• Ease of staffing
• Production of good quality documents
• Job specialization
• Efficient handling of the problems associated with manpower turnover.


59. Q: What do you mean by Team Structure? What are Different Team Formats? Differentiate them.
                                                            Or
What do you mean by Democratic and Mixed Team Structure? Differentiate them.
Answer:
Team structure addresses the issue of organization of the individual project teams. There are several possible ways in which the individual project teams can be organized. There are mainly three formal team structures: chief programmer, democratic, and mixed team organizations, although several other variations to these structures are possible. Problems of different complexities and sizes often require different team structures for their effective solution.

Chief Programmer Team
In this team organization, a senior engineer provides the technical leadership and is designated as the chief programmer. The chief programmer partitions the task into small activities and assigns them to the team members. He also verifies and integrates the products developed by different team members. The structure of the chief programmer team is shown in fig. 12.2. The chief programmer provides an authority, and this structure is arguably more efficient than the democratic team for well-understood problems. However, the chief programmer team leads to lower team morale, since team members work under the constant supervision of the chief programmer. This also inhibits their original thinking. The chief programmer team is subject to single-point failure since too much responsibility and authority is assigned to the chief programmer.
The chief programmer team is probably the most efficient way of completing simple and small projects since the chief programmer can work out a satisfactory design and ask the programmers to code different modules of his design solution. For example, suppose an organization has successfully completed many simple MIS projects. Then, for a similar MIS project, the chief programmer team structure can be adopted. The chief programmer team structure works well when the task is within the intellectual grasp of a single individual. However, even for simple and well-understood problems, an organization must be selective in adopting the chief programmer structure. The chief programmer team structure should not be used unless the importance of early project completion outweighs other factors such as team morale, personal development, life-cycle cost, etc.

Democratic Team
The democratic team structure, as the name implies, does not enforce any formal team hierarchy (as shown in fig. 12.3). Typically, a manager provides the administrative leadership. At different times, different members of the group provide technical leadership. The democratic organization leads to higher morale and job satisfaction. Consequently, it suffers from less manpower turnover. Also, the democratic team structure is appropriate for less understood problems, since a group of engineers can invent better solutions than a single individual, as in a chief programmer team. A democratic team structure is suitable for projects requiring less than five or six engineers and for research-oriented projects. For large-sized projects, a pure democratic organization tends to become chaotic. The democratic team organization encourages egoless programming as programmers can share and review one another’s work.


Mixed Control Team Organization
The mixed team organization, as the name implies, draws upon the ideas from both the democratic organization and the chief-programmer organization. The mixed control team organization is shown pictorially in fig. 12.4. This team organization incorporates both hierarchical reporting and democratic set up. In fig. 12.4, the democratic connections are shown as dashed lines and the reporting structure is shown using solid arrows. The mixed control team organization is suitable for large team sizes. The democratic arrangement at the senior engineers level is used to decompose the problem into small parts. Each democratic setup at the programmer level attempts solution to a single part. Thus, this team organization is eminently suited to handle large and complex programs. This team structure is extremely popular and is being used in many software development companies.




60. What are the characteristics of a Good Software Engineer? Mention briefly.
Answer:
Characteristics of a good software engineer
The attributes that good software engineers should possess are as follows:
  • Exposure to systematic techniques, i.e. familiarity with software engineering principles.
  • Good technical knowledge of the project areas (Domain knowledge).
  • Good programming abilities.
  • Good communication skills. These comprise oral, written, and interpersonal skills.
  • High motivation.
  • Sound knowledge of fundamentals of computer science.
  • Intelligence.
  • Ability to work in a team.
  • Discipline, etc.

61. What do you mean by a risk? How does the risk management technique work?
Answer:
A risk is any anticipated unfavorable event or circumstance that can occur while a project is underway. If a risk becomes real, it can adversely affect the project and hamper its successful and timely completion.

Risk management
A software project can be affected by a large variety of risks. In order to be able to systematically identify the important risks which might affect a software project, it is necessary to categorize risks into different classes. The project manager can then examine which risks from each class are relevant to the project. There are three main categories of risks which can affect a software project:
  • Project risks. Project risks concern various forms of budgetary, schedule, personnel, resource, and customer-related problems. An important project risk is schedule slippage. Since software is intangible, it is very difficult to monitor and control a software project. It is very difficult to control something which cannot be seen. For any manufacturing project, such as the manufacture of cars, the project manager can see the product taking shape. He can, for instance, see that the engine is fitted, after that the doors are fitted, the car is getting painted, etc. Thus he can easily assess the progress of the work and control it. The invisibility of the product being developed is an important reason why many software projects suffer from the risk of schedule slippage.
  • Technical risks. Technical risks concern potential design, implementation, interfacing, testing, and maintenance problems. Technical risks also include ambiguous specification, incomplete specification, changing specification, technical uncertainty, and technical obsolescence. Most technical risks occur due to the development team’s insufficient knowledge about the project.
  • Business risks. This type of risk includes the risk of building an excellent product that no one wants, losing budgetary or personnel commitments, etc.

Risk assessment
The objective of risk assessment is to rank the risks in terms of their damage causing potential. For risk assessment, first each risk should be rated in two ways:
• The likelihood of a risk coming true (denoted as r).
• The consequence of the problems associated with that risk (denoted as s).

Based on these two factors, the priority of each risk can be computed:
p = r * s
Where, p is the priority with which the risk must be handled, r is the probability of the risk becoming true, and s is the severity of damage caused due to the risk becoming true. If all identified risks are prioritized, then the most likely and damaging risks can be handled first and more comprehensive risk abatement procedures can be designed for these risks.

Risk containment
After all the identified risks of a project are assessed, plans must be made to contain the most damaging and the most likely risks. Different risks require different containment procedures. In fact, most risks require ingenuity on the part of the project manager in tackling the risk.
There are three main strategies to plan for risk containment:
  • Avoid the risk: This may take several forms such as discussing with the customer to change the requirements to reduce the scope of the work, giving incentives to the engineers to avoid the risk of manpower turnover, etc.
  • Transfer the risk: This strategy involves getting the risky component developed by a third party, buying insurance cover, etc.
  • Risk reduction: This involves planning ways to contain the damage due to a risk. For example, if there is risk that some key personnel might leave, new recruitment may be planned.

Risk leverage
To choose between the different strategies of handling a risk, the project manager must consider the cost of handling the risk and the corresponding reduction of risk. For this the risk leverage of the different risks can be computed.
Risk leverage is the difference in risk exposure divided by the cost of reducing the risk. More formally,
risk leverage = (risk exposure before reduction – risk exposure after reduction) / (cost of reduction)
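
Both computations are simple enough to sketch in a few lines; the risk names and all the numbers below are invented purely for illustration:

# Hypothetical risks: (name, likelihood r, severity s on a scale of 1-10).
risks = [
    ("schedule slippage",   0.8, 9),
    ("key personnel leave", 0.3, 7),
    ("requirements change", 0.6, 5),
]

# Priority p = r * s; handle the highest-priority risks first.
for name, r, s in sorted(risks, key=lambda x: -(x[1] * x[2])):
    print(name, round(r * s, 2))

# Risk leverage of one candidate containment step (numbers invented):
exposure_before, exposure_after, cost_of_reduction = 7.2, 2.1, 1.5
print("leverage:", (exposure_before - exposure_after) / cost_of_reduction)  # 3.4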

62. What do you mean by Software Configuration Management? Why is it necessary? [B.E. 2010]
Answer:
The results (also called the deliverables) of a large software development effort typically consist of a large number of objects, e.g. source code, design document, SRS document, test document, user’s manual, etc. These objects are usually referred to and modified by a number of software engineers throughout the life cycle of the software. The state of all these objects at any point of time is called the configuration of the software product. The state of each deliverable object changes as development progresses and also as bugs are detected and fixed.

Necessity of software configuration management
There are several reasons for putting an object under configuration management. But possibly the most important reason for configuration management is to control access to the different deliverable objects. Unless strict discipline is enforced regarding the updating and storage of different objects, several problems appear. The following are some of the important problems that appear if configuration management is not used.
  • Inconsistency problem when the objects are replicated.
  • Problems associated with concurrent access.
  • Providing a stable development environment.
  • System accounting and maintaining status information. System accounting keeps track of who made a particular change and when the change was made.
  • Handling variants.
63.  What do you mean by version, release and revision of a software product?
Answer:
A new version of software is created when there is a significant change in functionality, technology, or the hardware it runs on, etc. On the other hand, a new revision of software refers to a minor bug fix in that software. A new release is created if there is only a bug fix, or minor enhancements to the functionality, usability, etc. For example, one version of a mathematical computation package might run on Unix-based machines, another on Microsoft Windows, and so on. As software is released and used by the customer, errors are discovered that need correction. Enhancements to the functionalities of the software may also be needed. A new release of software is an improved system intended to replace an old one. Often systems are described as version m, release n, or simply m.n. Formally, a history relation 'is version of' can be defined between objects. This relation can be split into two sub-relations: 'is revision of' and 'is variant of'.

64.  How is Configuration Control carried out? What are the different activities of Configuration Control?
Answer:
Configuration management is carried out through two principal activities:
Configuration identification involves deciding which parts of the system should be kept track of.
Configuration control ensures that changes to a system happen smoothly.

Configuration identification
Typical controllable objects include:
  • Requirements specification document
  • Design documents
  • Tools used to build the system, such as compilers, linkers, lexical analyzers, parsers, etc.
  • Source code for each module
  • Test cases
  • Problem reports

Configuration control
Configuration control is the process of managing changes to controlled objects. Configuration control is the part of a configuration management system that most directly affects the day-to-day operations of developers. The configuration control system prevents unauthorized changes to any controlled object. In order to change a controlled object such as a module, a developer can get a private copy of the module through a reserve operation, as shown in fig. 3.15. Configuration management tools allow only one person to reserve a module at a time. Once an object is reserved, the tool does not allow anyone else to reserve the module until the reserved module is restored, as shown in fig. 3.15. Thus, by preventing more than one engineer from simultaneously reserving a module, the problems associated with concurrent access are solved.

It can be shown how the changes to any object that is under configuration control can be achieved. The engineer needing to change a module first obtains a private copy of the module through a reserve operation. Then, he carries out all necessary changes on this private copy. However, restoring the changed module to the system configuration requires the permission of a change control board (CCB). The CCB is usually constituted from among the development team members. For every change that needs to be carried out, the CCB reviews the changes made to the controlled object and certifies several things about the change:
1. Change is well-motivated.
2. Developer has considered and documented the effects of the change.
3. Changes interact well with the changes made by other developers.
4. Appropriate people (CCB) have validated the change, e.g. someone has tested the changed code, and has verified that the change is consistent with the requirement.

Fig. 3.15: Reserve and restore operation in configuration control

The change control board (CCB) sounds like a group of people. However, except for very large projects, the functions of the change control board are normally discharged by the project manager himself or some senior member of the development team. Once the CCB reviews the changes to the module, the project manager updates the old base line through a restore operation (as shown in fig. 12.5). A configuration control tool does not allow a developer to replace an object he has reserved with his local copy unless he gets an authorization from the CCB. By constraining the developers’ ability to replace reserved objects, a stable environment is achieved. Since a configuration management tool allows only one engineer to work on one module at any one time, problem of accidental overwriting is eliminated. Also, since only the manager can update the baseline after the CCB approval, unintentional changes are eliminated.
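
The reserve/restore discipline described above amounts to an exclusive lock per controlled object, with restores gated on CCB approval. The toy sketch below models just that protocol (illustrative only; real configuration management tools implement far richer semantics):

class ControlledObject:
    """Toy model of the reserve/restore protocol for one controlled object."""

    def __init__(self, content):
        self.baseline = content      # the current base line copy
        self.reserved_by = None      # at most one engineer may reserve it

    def reserve(self, engineer):
        # A second reserve is refused until the module is restored.
        if self.reserved_by is not None:
            raise RuntimeError("already reserved by " + self.reserved_by)
        self.reserved_by = engineer
        return self.baseline         # hand out a private working copy

    def restore(self, engineer, new_content, ccb_approved):
        # The changed copy replaces the base line only with CCB approval.
        if self.reserved_by != engineer:
            raise RuntimeError("not reserved by " + engineer)
        if not ccb_approved:
            raise RuntimeError("CCB approval required to update the base line")
        self.baseline = new_content
        self.reserved_by = None

mod = ControlledObject("module v1")
working = mod.reserve("engineer A")
mod.restore("engineer A", working + " + bug fix", ccb_approved=True)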

65.  What do you mean by Requirement Analysis and Specification?
Answer:
The goal of the requirement analysis and specification phase is to study the customer requirements and to systematically organize the requirements into a specification document. The requirement analysis and specification phase starts after the feasibility study is complete.

66.  What do you mean by SRS? What are its different components?
Answer:
The SRS document is the final outcome of the requirements analysis and specification phase.
The important parts of SRS document are:
·         Functional requirements of the system
·         Non-functional requirements of the system, and
·         Goals of implementation

Functional requirements:-
  • The functional requirements part discusses the functionalities required from the system. The system is considered to perform a set of high-level functions {fi}. The functional view of the system is shown in fig. 3.1. Each function fi of the system can be considered as a transformation of a set of input data (ii) to the corresponding set of output data (oi). The user can get some meaningful piece of work done using a high-level function.
Fig. 3.1: View of a system performing a set of functions

Nonfunctional requirements:-
  • Nonfunctional requirements deal with the characteristics of the system which cannot be expressed as functions - such as the maintainability of the system, portability of the system, usability of the system, etc.
  • Nonfunctional requirements may include:
# reliability issues,
# accuracy of results,
# human-computer interface issues,
# constraints on the system implementation, etc.

Goals of implementation:-
The goals of implementation part documents some general suggestions regarding development. These suggestions guide trade-offs among design goals. The goals of implementation section might document issues such as revisions to the system functionalities that may be required in the future, new devices to be supported in the future, reusability issues, etc. These are items which the developers might keep in mind during development so that the developed system may meet some aspects that are not required immediately.





67.  What are the key properties of a good SRS?
Answer:

The important properties of a good SRS document are the following:
  • Concise. The SRS document should be concise and at the same time unambiguous, consistent, and complete. Verbose and irrelevant descriptions reduce readability and also increase error possibilities.
  • Structured. It should be well-structured. A well-structured document is easy to understand and modify. In practice, the SRS document undergoes several revisions to cope with the customer requirements. Often, the customer requirements evolve over a period of time. Therefore, in order to make the modifications to the SRS document easy, it is important to make the document well-structured.
  • Black-box view. It should only specify what the system should do and refrain from stating how to do these. This means that the SRS document should specify the external behavior of the system and not discuss the implementation issues. The SRS document should view the system to be developed as black box, and should specify the externally visible behavior of the system. For this reason, the SRS document is also called the black-box specification of a system.
  • Conceptual integrity. It should show conceptual integrity so that the reader can easily understand it.
  • Response to undesired events. It should characterize acceptable responses to undesired events. These are called system response to exceptional conditions.
  • Verifiable. All requirements of the system as documented in the SRS document should be verifiable. This means that it should be possible to determine whether or not requirements have been met in an implementation.

68.  What are the key properties of a bad SRS?
Answer:

The important properties of a bad SRS document are the following:
  • Over specification. It restricts the freedom of the designer in arriving at the design solution.
  • Forward references. We should not refer to aspects that are discussed much later in the SRS document. It reduces reliability of the specification.
  • Wishful thinking. This type of problem concerns descriptions of aspects which would be difficult to implement.




69.  What is a decision tree? Give an example.
Answer:
A decision tree gives a graphic view of the processing logic involved in decision making and the corresponding actions taken. The edges of a decision tree represent conditions and the leaf nodes represent the actions to be performed depending on the outcome of testing the condition.
Example: -
Consider Library Membership Automation Software (LMS) where it should support the following three options:
• New member
• Renewal
• Cancel membership
New member option-
Decision: When the 'new member' option is selected, the software asks details about the member like the member's name, address, phone number etc.
Action: If proper information is entered then a membership record for the member is created and a bill is printed for the annual membership charge plus the security deposit payable.
Renewal option-
Decision: If the 'renewal' option is chosen, the LMS asks for the member's name and his membership number to check whether he is a valid member or not.
Action: If the membership is valid then membership expiry date is updated and the annual membership bill is printed, otherwise an error message is displayed.
Cancel membership option-
Decision: If the 'cancel membership' option is selected, then the software asks for member's name and his membership number.
Action: The membership is cancelled, a cheque for the balance amount due to the member is printed and finally the membership record is deleted from the database.
Decision tree representation of the above example - The following tree (fig. 3.4) shows the graphical representation of the above example. After getting information from the user, the system makes a decision and then performs the corresponding actions.

Fig. 3.4: Decision tree for LMS
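
The same decision logic can be rendered directly as branching code. The sketch below mirrors the three options described above; the function name and message strings are invented for illustration:

def lms_action(option, details_ok=True, member_valid=True):
    # Each branch corresponds to one path of the decision tree in fig. 3.4.
    if option == "new member":
        if details_ok:
            return "create membership record; print bill (annual fee + deposit)"
        return "ask the user to re-enter proper details"
    if option == "renewal":
        if member_valid:
            return "update expiry date; print annual membership bill"
        return "display error message"
    if option == "cancel membership":
        return "print cheque for balance due; delete membership record"
    return "display error message"    # invalid option selected

print(lms_action("renewal", member_valid=False))   # display error message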

70.  What is a decision table? Give an example.
Answer:
A decision table is used to represent the complex processing logic in a tabular or a matrix form. The upper rows of the table specify the variables or conditions to be evaluated. The lower rows of the table specify the actions to be taken when the corresponding conditions are satisfied. A column in a table is called a rule. A rule implies that if a condition is true, then the corresponding action is to be executed.
Example: -
Consider the previously discussed LMS example. The following decision table (fig. 3.5) shows how to represent the LMS problem in a tabular form. Here the table is divided into two parts, the upper part shows the conditions and the lower part shows what actions are taken. Each column of the table is a rule.

From the above table you can easily understand that if the valid-selection condition is false, the action taken is 'display error message'. Similarly, the actions taken for the other conditions can be inferred from the table.
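Because a decision table maps combinations of condition outcomes to actions, it translates naturally into a lookup structure. The following Python sketch is one rough rendering of the LMS rules (the condition and action names are assumptions derived from the description above):

# Each rule maps (valid selection, new member, renewal, cancellation) to actions.
DECISION_TABLE = {
    (True,  True,  False, False): ["ask member details", "create membership record",
                                   "print bill with security deposit"],
    (True,  False, True,  False): ["ask name and membership number",
                                   "update expiry date", "print annual bill"],
    (True,  False, False, True):  ["ask name and membership number",
                                   "print cheque for balance", "delete membership record"],
}

def actions_for(valid, new, renew, cancel):
    # Any combination not covered by a rule falls through to the error action.
    return DECISION_TABLE.get((valid, new, renew, cancel), ["display error message"])

print(actions_for(True, False, True, False))   # actions for the 'renewal' rule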

71.  What is a formal specification language? Give an example.
Answer:
Formal technique
A formal technique is a mathematical method to specify a hardware and/or software system, verify whether a specification is realizable, verify that an implementation satisfies its specification, prove properties of a system without necessarily running the system, etc. The mathematical basis of a formal method is provided by the specification language.
Formal specification language
A formal specification language consists of two sets syn and sem, and a relation sat between them. The set syn is called the syntactic domain, the set sem is called the semantic domain, and the relation sat is called the satisfaction relation. For a given specification syn, and model of the system sem, if sat (syn, sem), as shown in fig. 3.6, then syn is said to be the specification of sem, and sem is said to be the specificand of syn.
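As a toy illustration of these three ingredients, the Python fragment below treats a specification (syn) as a predicate over input/output pairs, a model (sem) as an ordinary function, and sat as a check that the model meets the predicate on some sample inputs. This is only an informal analogy, not a real formal-methods tool:

# syn: a specification, expressed here as a predicate over (input, output) pairs
spec = lambda x, y: (x <= 100 and y == x / 2) or (x > 100 and y == 2 * x)

# sem: a candidate model of the system
model = lambda x: x / 2 if x <= 100 else 2 * x

# sat: the satisfaction relation, checked here only on sample inputs
def sat(spec, model, samples):
    return all(spec(x, model(x)) for x in samples)

print(sat(spec, model, [0, 50, 100, 101, 500]))   # True: model is a specificand of spec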

Model-oriented vs. property-oriented approaches
Formal methods are usually classified into two broad categories – model-oriented and property-oriented approaches. In a model-oriented style, one defines a system’s behavior directly by constructing a model of the system in terms of mathematical structures such as tuples, relations, functions, sets, sequences, etc.
In the property-oriented style, the system's behavior is defined indirectly by stating its properties, usually in the form of a set of axioms that the system must satisfy.
Example:-
Let us consider a simple producer/consumer example. In a property-oriented style, one would probably start by listing the properties of the system, such as: the consumer can start consuming only after the producer has produced an item; the producer starts to produce an item only after the consumer has consumed the last item; etc. A good example of a producer-consumer problem is CPU-printer coordination. After processing data, the CPU outputs characters to a buffer for printing. The printer, on the other hand, reads characters from the buffer and prints them. The CPU is constrained by the capacity of the buffer, whereas the printer is constrained by an empty buffer. Examples of property-oriented specification styles are axiomatic specification and algebraic specification.
In a model-oriented approach, we start by defining the basic operations, p (produce) and c (consume), and the states of the system. Then we can state that S1 + p → S and S + c → S1. Thus the model-oriented approaches essentially specify a program by writing another, presumably simpler program. Examples of popular model-oriented specification techniques are Z, CSP, CCS, etc.
Model-oriented approaches are more suited for use in the later phases of the life cycle, because even minor changes to a specification may lead to drastic changes to the entire specification. They do not support logical conjunctions (AND) and disjunctions (OR).
Property-oriented approaches are suitable for requirements specification because they can be easily changed. They specify a system as a conjunction of axioms and you can easily replace one axiom with another one.
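A minimal sketch of the model-oriented transitions above, assuming S1 denotes the state in which the buffer is empty and S the state in which it holds an item (this reading of the two rules is an assumption):

# Model-oriented sketch: states and operations of the producer/consumer example.
EMPTY, FULL = "S1", "S"

def p(state):                       # S1 + p -> S
    assert state == EMPTY, "producer must wait until the last item is consumed"
    return FULL

def c(state):                       # S + c -> S1
    assert state == FULL, "consumer must wait until an item is produced"
    return EMPTY

state = EMPTY
state = p(state)    # produce an item
state = c(state)    # consume it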

72.  What are the merits of formal requirements specification?
Answer:
Merits of formal requirements specification
Formal methods possess several positive features, some of which are discussed below.
• Formal specifications encourage rigour. Often, the very process of construction of a rigorous specification is more important than the formal specification itself. The construction of a rigorous specification clarifies several aspects of system behavior that are not obvious in an informal specification.
• Formal methods usually have a well-founded mathematical basis. Thus, formal specifications are not only more precise, but also mathematically sound and can be used to reason about the properties of a specification and to rigorously prove that an implementation satisfies its specifications.
• Formal methods have well-defined semantics. Therefore, ambiguity in specifications is automatically avoided when one formally specifies a system.
• The mathematical basis of the formal methods facilitates automating the analysis of specifications. For example, a tableau-based technique has been used to automatically check the consistency of specifications. Also, automatic theorem proving techniques can be used to verify that an implementation satisfies its specifications. The possibility of automatic verification is one of the most important advantages of formal methods.

73.  What is axiomatic specification? Give an example.
Answer:
Axiomatic specification
In axiomatic specification of a system, first-order logic is used to write the pre and post-conditions to specify the operations of the system in the form of axioms. The pre-conditions basically capture the conditions that must be satisfied before an operation can successfully be invoked. In essence, the pre-conditions capture the requirements on the input parameters of a function. The post-conditions are the conditions that must be satisfied when a function completes execution for the function to be considered to have executed successfully. Thus, the post-conditions are essentially constraints on the results produced for the function execution to be considered successful.
The following sequence of steps can be followed to systematically develop the axiomatic specification of a function:
• Establish the range of input values over which the function should behave correctly. Also find out other constraints on the input parameters and write it in the form of a predicate.
• Specify a predicate defining the conditions which must hold on the output of the function if it behaved properly.
• Establish the changes made to the function’s input parameters after execution of the function. Pure mathematical functions do not change their input and therefore this type of assertion is not necessary for pure functions.
• Combine all of the above into pre and post conditions of the function.
Example: -
Specify the pre- and post-conditions of a function that takes a real number as argument and returns half the input value if the input is less than or equal to 100, or else returns double the value.
f (x : real) : real
pre : x ∈ R
post : {(x ≤ 100) ∧ (f(x) = x/2)} ∨ {(x > 100) ∧ (f(x) = 2x)}
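These pre- and post-conditions translate directly into run-time assertions. A minimal Python sketch follows (the executable wrapper is illustrative; the specification itself is the pair of conditions above):

def f(x):
    # pre: x is a real number
    assert isinstance(x, (int, float)), "pre-condition violated: x must be real"
    result = x / 2 if x <= 100 else 2 * x
    # post: {(x <= 100) and (f(x) = x/2)} or {(x > 100) and (f(x) = 2x)}
    assert (x <= 100 and result == x / 2) or (x > 100 and result == 2 * x)
    return result

print(f(50))    # 25.0
print(f(200))   # 400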

74.  Identify the requirements of algebraic specifications in order to define a system.
Answer:
In the algebraic specification technique an object class or type is specified in terms of relationships existing between the operations defined on that type. Various notations of algebraic specifications have evolved, including those based on the OBJ and Larch languages. Essentially, algebraic specifications define a system as a heterogeneous algebra. A heterogeneous algebra is a collection of different sets on which several operations are defined. Traditional algebras are homogeneous: a homogeneous algebra consists of a single set and several operations, e.g. {I, +, -, *, /}. In contrast, alphabetic strings together with the operations of concatenation and length, {A, I, con, len}, do not form a homogeneous algebra, since the range of the length operation is the set of integers. To define a heterogeneous algebra, we first need to specify its signature: the involved operations and their domains and ranges. Using algebraic specification, the meaning of a set of interface procedures can easily be defined by using equations. An algebraic specification is usually presented in four sections.
Types section: - In this section, the sorts (or the data types) being used are specified.
Exceptions section: - This section gives the names of the exceptional conditions that might occur when different operations are carried out. These exception conditions are used in the later sections of an algebraic specification.
Syntax section: - This section defines the signatures of the interface procedures. The collection of sets that form the input domain of an operator, together with the sort where the output is produced, is called the signature of the operator. For example, PUSH takes a stack and an element and returns a new stack:
push : stack × element → stack
Equations section: - This section gives a set of rewrite rules (or equations) defining the meaning of the interface procedures in terms of each other. In general, this section is allowed to contain conditional expressions. By convention, each equation is implicitly universally quantified over all possible values of the variables. Names not mentioned in the syntax section, such as ‘r’ or ‘e’, are variables.
The first step in defining an algebraic specification is to identify the set of required operations. After having identified the required operators, it is helpful to classify them as basic construction operators, extra construction operators, basic inspection operators, or extra inspection operators. These categories of operators are defined as follows:
  • Basic construction operators. These operators are used to create or modify entities of a type. The basic construction operators are essential to generate all possible elements of the type being specified. For example, ‘create’ and ‘append’ are basic construction operators for a FIFO queue.
  • Extra construction operators. These are the construction operators other than the basic construction operators. For example, the operator ‘remove’ is an extra construction operator in a FIFO queue because even without using ‘remove’, it is possible to generate all values of the type being specified.
  • Basic inspection operators. These operators evaluate attributes of a type without modifying them, e.g. eval, get, etc. Let S be the set of operators whose range is not the data type being specified. The set of basic inspection operators S1 is a subset of S such that each operator in S − S1 can be expressed in terms of the operators in S1.
  • Extra inspection operators. These are the inspection operators that are not basic inspectors.

Example:-
Let us specify a data type point supporting the operations create, xcoord, ycoord, isequal; where the operations have their usual meaning.
Types:
defines point
uses boolean, integer
Syntax:
1. create : integer × integer → point
2. xcoord : point → integer
3. ycoord : point → integer
4. isequal : point × point → boolean

Equations:
1. xcoord(create(x, y)) = x
2. ycoord(create(x, y)) = y
3. isequal(create(x1, y1), create(x2, y2)) = ((x1 = x2) and (y1 = y2))

In this example, there is only one basic constructor (create), and three basic inspectors (xcoord, ycoord, and isequal). Therefore, there are only 3 equations.
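A direct Python rendering of this specification is easy to write; in the sketch below the syntax section becomes four function definitions and the equations become executable checks (the tuple representation of a point is an implementation choice, not part of the specification):

def create(x, y):                    # create : integer × integer → point
    return (x, y)

def xcoord(p):                       # xcoord : point → integer
    return p[0]

def ycoord(p):                       # ycoord : point → integer
    return p[1]

def isequal(p1, p2):                 # isequal : point × point → boolean
    return xcoord(p1) == xcoord(p2) and ycoord(p1) == ycoord(p2)

# The three equations, checked on sample values:
assert xcoord(create(3, 4)) == 3
assert ycoord(create(3, 4)) == 4
assert isequal(create(3, 4), create(3, 4)) and not isequal(create(3, 4), create(4, 3))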
75.  What are the characteristics of good software design?
Answer:
The characteristics are listed below:
  • Correctness: A good design should correctly implement all the functionalities identified in the SRS document.
  • Understandability: A good design is easily understandable.
  • Efficiency: It should be efficient.
  • Maintainability: It should be easily amenable to change.
Possibly the most important goodness criterion is design correctness. A design has to be correct to be acceptable. Given that a design solution is correct, understandability of a design is possibly the most important issue to be considered while judging the goodness of a design. A design that is easy to understand is also easy to develop, maintain and change. Thus, unless a design is easily understandable, it would require tremendous effort to implement and maintain it.
76.  What is cohesion? What are the different types of cohesion?
Answer:
Most researchers and engineers agree that a good software design implies a clean decomposition of the problem into modules and a neat arrangement of these modules in a hierarchy. The primary characteristics of a neat module decomposition are high cohesion and low coupling. Cohesion is a measure of the functional strength of a module. A module having high cohesion and low coupling is said to be functionally independent of other modules. By the term functional independence, we mean that a cohesive module performs a single task or function. A functionally independent module has minimal interaction with other modules.

Classification of cohesion
The different classes of cohesion that a module may possess are depicted in fig. 4.1.
  • Coincidental cohesion: A module is said to have coincidental cohesion, if it performs a set of tasks that relate to each other very loosely, if at all. In this case, the module contains a random collection of functions. It is likely that the functions have been put in the module out of pure coincidence without any thought or design. For example, in a transaction processing system (TPS), the get-input, print-error, and summarize-members functions are grouped into one module. The grouping does not have any relevance to the structure of the problem.
  • Logical cohesion: A module is said to be logically cohesive, if all elements of the module perform similar operations, e.g. error handling, data input, data output, etc. An example of logical cohesion is the case where a set of print functions generating different output reports are arranged into a single module.
  • Temporal cohesion: When a module contains functions that are related by the fact that all the functions must be executed in the same time span, the module is said to exhibit temporal cohesion. The set of functions responsible for initialization, start-up, shutdown of some process, etc. exhibit temporal cohesion.
  • Procedural cohesion: A module is said to possess procedural cohesion, if the set of functions of the module are all part of a procedure (algorithm) in which certain sequence of steps have to be carried out for achieving an objective, e.g. the algorithm for decoding a message.
  • Communicational cohesion: A module is said to have communicational cohesion, if all functions of the module refer to or update the same data structure, e.g. the set of functions defined on an array or a stack.
  • Sequential cohesion: A module is said to possess sequential cohesion, if the elements of a module form the parts of sequence, where the output from one element of the sequence is input to the next. For example, in a TPS, the get-input, validate-input, sort-input functions are grouped into one module.
  • Functional cohesion: Functional cohesion is said to exist, if different elements of a module cooperate to achieve a single function. For example, a module containing all the functions required to manage employees’ pay-roll exhibits functional cohesion. Suppose a module exhibits functional cohesion and we are asked to describe what the module does; then we would be able to describe it using a single sentence (see the sketch after this list).
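As a quick code-level illustration of the two extremes, the Python sketch below contrasts a coincidentally cohesive module with a functionally cohesive one (the module and function names are invented for the example):

# Coincidental cohesion: unrelated functions lumped into one module by accident.
class MiscUtils:
    def get_input(self): ...
    def print_error(self, message): ...
    def summarize_members(self): ...

# Functional cohesion: every function serves the single task of payroll management.
class Payroll:
    def compute_gross_pay(self, employee): ...
    def compute_deductions(self, employee): ...
    def print_payslip(self, employee): ...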

77.  What is coupling? What are the different types of coupling?
Answer:
Coupling between two modules is a measure of the degree of interdependence or interaction between the two modules. A module having high cohesion and low coupling is said to be functionally independent of other modules. If two modules interchange large amounts of data, then they are highly interdependent. The degree of coupling between two modules depends on their interface complexity.
The interface complexity is basically determined by the number and types of parameters that are interchanged while invoking the functions of the module.

Classification of Coupling
Even though there are no techniques to precisely and quantitatively estimate the coupling between two modules, classifying the different types of coupling helps to approximately estimate the degree of coupling between two modules. Five types of coupling can occur between any two modules. This is shown in fig. 4.2.
  • Data coupling: Two modules are data coupled, if they communicate through a parameter. An example is an elementary data item passed as a parameter between two modules, e.g. an integer, a float, a character, etc. This data item should be problem related and not used for the control purpose.
  • Stamp coupling: Two modules are stamp coupled, if they communicate using a composite data item such as a record in PASCAL or a structure in C.
  • Control coupling: Control coupling exists between two modules, if data from one module is used to direct the order of instruction execution in the other. An example of control coupling is a flag set in one module and tested in another module (see the sketch after this list).
  • Common coupling: Two modules are common coupled, if they share data through some global data items.
  • Content coupling: Content coupling exists between two modules, if they share code, e.g. a branch from one module into another module.
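The difference between data coupling and control coupling is easy to see in code. In the Python sketch below (illustrative names), the first function receives only a problem-related data item, while the second receives a flag that directs the order of execution in the callee:

# Data coupling: only an elementary, problem-related data item crosses the interface.
def compute_tax(gross_pay):
    return gross_pay * 0.2

# Control coupling: a flag set by the caller directs the callee's flow of control.
def print_report(data, summary_only):
    if summary_only:                 # the flag decides which branch executes
        print("summary:", sum(data))
    else:
        print("details:", data)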


78.  What do you mean by functional independence? Why is it needed?
Answer:
A module having high cohesion and low coupling is said to be functionally independent of other modules. By the term functional independence, we mean that a cohesive module performs a single task or function. A functionally independent module has minimal interaction with other modules.

Functional independence is a key to any good design due to the following reasons:
Error isolation: Functional independence reduces error propagation. The reason behind this is that if a module is functionally independent, its degree of interaction with the other modules is less. Therefore, any error existing in a module would not directly affect the other modules.
Scope of reuse: Reuse of a module becomes possible, because each module performs some well-defined and precise function and its interaction with other modules is simple and minimal. Therefore, a cohesive module can easily be taken out and reused in a different program.
Understandability: Complexity of the design is reduced, because different modules can be understood in isolation as modules are more or less independent of each other.

79.  Differentiate between function oriented and object oriented design.
Answer:
The following are some of the important differences between function-oriented and object-oriented design:
  • Unlike function-oriented design methods, in OOD the basic abstractions are not real-world functions such as sort, display, track, etc., but real-world entities such as employee, picture, machine, radar system, etc.
  • In OOD, state information is not represented in a centralized shared memory but is distributed among the objects of the system.
  • Function-oriented techniques such as SA/SD group functions together if, as a group, they constitute a higher-level function. On the other hand, object-oriented techniques group functions together on the basis of the data they operate on.

80.  Identify at least three salient features of an object-oriented design approach.
Answer:
In the object-oriented design approach, the system is viewed as a collection of objects (i.e. entities). The state is decentralized among the objects and each object manages its own state information. For example, in a Library Automation Software, each library member may be a separate object with its own data and functions to operate on these data. In fact, the functions defined for one object cannot refer to or change the data of other objects. Objects have their own internal data which define their state. Similar objects constitute a class. In other words, each object is a member of some class. Classes may inherit features from a super class. Conceptually, objects communicate by message passing.

81.  What is a DFD? What are the different elements of a DFD?
Answer:
The DFD (also known as a bubble chart) is a hierarchical graphical model of a system that shows the different processing activities or functions that the system performs and the data interchange among these functions. Each function is considered as a processing station (or process) that consumes some input data and produces some output data. The system is represented in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system. A DFD model uses a very limited number of primitive symbols [as shown in fig. 5.1(a)] to represent the functions performed by a system and the data flow among these functions.

The different elements are: the function (process) symbol, the external entity symbol, the data flow symbol, the data store symbol, and the output symbol.

82.  When is a DFD said to be synchronous?
Answer:
When two bubbles are directly connected by a data flow arrow, they are synchronized (they operate at the same speed), and the DFD is termed a synchronous DFD.

83.  When is a DFD said to be balanced?
Answer:
The data that flow into or out of a bubble must match the data flow at the next level of DFD. This is known as balancing a DFD.

84.  What do you mean by the data dictionary of a DFD?
Answer:
A data dictionary lists all data items appearing in the DFD model of a system. The data items listed include all data flows and the contents of all data stores appearing on the DFDs in the DFD model of a system. A data dictionary lists the purpose of all data items and the definition of all composite data items in terms of their component data items. For example, a data dictionary entry may represent that the data grossPay consists of the components regularPay and overtimePay.
grossPay = regularPay + overtimePay
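A data dictionary can be sketched as a simple mapping from each composite data item to its component items. The Python fragment below is only an illustrative rendering, not a standard notation:

# Minimal data dictionary sketch: composite items map to their components.
data_dictionary = {
    "grossPay": ["regularPay", "overtimePay"],
}

def components(item):
    # Return the component data items of a composite item, or the item itself.
    return data_dictionary.get(item, [item])

print(components("grossPay"))   # ['regularPay', 'overtimePay']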

85.  A software system called RMS calculating software would read three integral numbers from the user in the range of -1000 to +1000, and then determine the root mean square (rms) of the three input numbers and display it. Draw the DFD for this software.
Answer:
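Although the DFD is drawn as a diagram, the processing it models can be sketched in code. Assuming the usual decomposition into read, validate, compute-rms and display bubbles (this decomposition is an assumption), the following Python fragment shows the data transformation performed at each stage:

import math

def read_numbers():                  # bubble: read three integers from the user
    return [int(input("number: ")) for _ in range(3)]

def validate(numbers):               # bubble: check the range -1000 to +1000
    if all(-1000 <= n <= 1000 for n in numbers):
        return numbers
    raise ValueError("each number must lie between -1000 and +1000")

def compute_rms(numbers):            # bubble: root mean square of the inputs
    return math.sqrt(sum(n * n for n in numbers) / len(numbers))

def display(rms):                    # bubble: display the result
    print("rms =", rms)

display(compute_rms(validate(read_numbers())))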

