Master of Computer Application (MCA) – Semester 5
MC0084 – Software Project Management & Quality Assurance – 4 Credits
(Book ID: B0958 & B0959)
Assignment Set – 1 (60 Marks)
______________________________________________________________________________
1. What is project management? Explain various activities involved in project management.
Ans:
Project management is a systematic method of defining and achieving targets with optimized use of resources such as time, money, manpower, material, energy, and space. It is an application of knowledge, skills, resources, and techniques to meet project requirements. Project management involves various activities, which are as follows:
- Work planning
- Resource estimation
- Organizing the work
- Acquiring resources such as manpower, material, energy, and space
- Risk assessment
- Task assigning
- Controlling the project execution
- Reporting the progress
- Directing the activities
- Analyzing the results
2. Describe the following with respect to Estimation and Budgeting of Projects:
a. Software Cost Estimation and Methods
b. COCOMO model and its variations
Ans:
a) Software Cost Estimation and Methods
A number of methods have been used to estimate software cost.
Algorithmic Models
These methods provide one or more algorithms which produce a software cost estimate as a function of a number of variables that relate to some software metric (usually its size) and cost drivers.
Expert Judgment
This method involves consulting one or more experts, perhaps with the aid of an expert-consensus mechanism such as the Delphi technique.
Analogy Estimation
This method involves reasoning by analogy with one or more completed projects, relating their actual costs to an estimate of the cost of a similar new project.
Top-Down Estimation
An overall cost estimate for the project is derived from global properties of the software product. The total cost is then split up among the various components.
Bottom-Up Estimation
Each component of the software job is separately estimated, and the results aggregated to produce an estimate for the overall job.
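As a tiny illustration of bottom-up aggregation, component-level estimates can be kept in a table and summed; the component names and effort figures below are invented for the example.

    # Bottom-up estimation: sum separately estimated components.
    # Component names and effort figures are invented for illustration.
    component_effort = {          # estimated effort in man-months
        "user interface": 4.0,
        "business logic": 6.5,
        "database layer": 3.0,
        "reports": 2.5,
    }
    total = sum(component_effort.values())
    print(f"Overall job estimate: {total} man-months")  # 16.0 man-months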
Parkinson's Principle
Parkinson's principle ("Work expands to fill the available volume") is invoked to equate the cost estimate to the available resources.
Price to Win
The cost estimate developed by this method is equated to the price believed necessary to win the job. The estimated effort depends on the customer's budget and not on the software functionality.
Cost Estimation Guidelines
- Assign the initial estimating task to the final developers.
- Delay finalizing the initial estimate until the end of a thorough study.
- Anticipate and control user changes.
- Monitor the progress of the proposed project.
- Evaluate proposed project progress by using independent auditors.
- Use the estimate to evaluate project personnel.
- Computing management should carefully approve the cost estimate.
- Rely on documented facts, standards, and simple arithmetic formulas rather than guessing, intuition, personal memory, and complex formulas.
- Don't rely on cost estimating software for an accurate estimate.
b) COCOMO model and its variations
The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm. The model uses a basic regression formula, with parameters that are derived from historical project data and current project characteristics.
COCOMO was first published in 1981 in Barry W. Boehm's book Software Engineering Economics [1] as a model for estimating effort, cost, and schedule for software projects. It drew on a study of 63 projects at TRW Aerospace, where Barry Boehm was Director of Software Research and Technology. The study examined projects ranging in size from 2,000 to 100,000 lines of code, and programming languages ranging from assembly to PL/I. These projects were based on the waterfall model of software development, which was the prevalent software development process in 1981.
References to this model typically call it COCOMO 81. In 1997 COCOMO II was developed, and it was finally published in 2000 in the book Software Cost Estimation with COCOMO II [2]. COCOMO II is the successor of COCOMO 81 and is better suited for estimating modern software development projects. It provides more support for modern software development processes and an updated project database. The need for the new model arose as software development technology moved from mainframe and overnight batch processing to desktop development, code reusability, and the use of off-the-shelf software components. The discussion here refers to COCOMO 81.
COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first level, Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of software costs, but its accuracy is limited by its lack of factors to account for differences in project attributes (cost drivers). Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO additionally accounts for the influence of individual project phases.
Basic COCOMO computes software development effort (and cost) as a function of program size. Program size is expressed in estimated thousands of lines of code (KLOC).
COCOMO applies to three classes of software projects:
* Organic projects - "small" teams with "good" experience working with "less than rigid" requirements
* Semi-detached projects - "medium" teams with mixed experience working with a mix of rigid and less-than-rigid requirements
* Embedded projects - developed within a set of "tight" constraints (hardware, software, operational, ...)
The basic COCOMO equations take the form:

Effort Applied = a_b * (KLOC)^(b_b)  [man-months]
Development Time = c_b * (Effort Applied)^(d_b)  [months]
People Required = Effort Applied / Development Time  [count]

The coefficients a_b, b_b, c_b, and d_b are given in the following table.
Software project    a_b   b_b    c_b   d_b
Organic             2.4   1.05   2.5   0.38
Semi-detached       3.0   1.12   2.5   0.35
Embedded            3.6   1.20   2.5   0.32
Basic COCOMO is good for quick estimates of software costs. However, it does not account for differences in hardware constraints, personnel quality and experience, use of modern tools and techniques, and so on.
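As a minimal sketch, the equations and coefficient table above can be coded directly; the 32 KLOC example project is invented.

    # Basic COCOMO 81: effort, schedule, and average staffing.
    # Coefficients (a_b, b_b, c_b, d_b) are taken from the table above.
    COEFFICIENTS = {
        "organic": (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded": (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, project_class):
        a, b, c, d = COEFFICIENTS[project_class]
        effort = a * kloc ** b              # man-months
        time = c * effort ** d              # months
        people = effort / time              # average staff count
        return effort, time, people

    effort, time, people = basic_cocomo(32, "organic")
    print(f"{effort:.1f} MM, {time:.1f} months, {people:.1f} people")

For a 32 KLOC organic project this yields roughly 91 man-months over about 14 months.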
Intermediate COCOMO computes software development effort as a function of program size and a set of "cost drivers" that include subjective assessments of product, hardware, personnel, and project attributes. This extension considers four categories of cost drivers, each with a number of subsidiary attributes:
* Product attributes
  - Required software reliability
  - Size of application database
  - Complexity of the product
* Hardware attributes
  - Run-time performance constraints
  - Memory constraints
  - Volatility of the virtual machine environment
  - Required turnaround time
* Personnel attributes
  - Analyst capability
  - Software engineering capability
  - Applications experience
  - Virtual machine experience
  - Programming language experience
* Project attributes
  - Use of software tools
  - Application of software engineering methods
  - Required development schedule
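Intermediate COCOMO multiplies the nominal size-based estimate by an effort adjustment factor (EAF), the product of the multipliers chosen for these cost drivers. The sketch below shows just the mechanics; the coefficient and multiplier values are illustrative assumptions, not Boehm's published tables.

    from math import prod

    # Intermediate COCOMO shape: effort = a * KLOC**b * EAF, where EAF is
    # the product of the selected cost-driver multipliers. The numbers
    # below are illustrative placeholders, not the published values.
    def intermediate_cocomo(kloc, a, b, multipliers):
        eaf = prod(multipliers.values())   # effort adjustment factor
        return a * kloc ** b * eaf         # man-months

    drivers = {                            # hypothetical driver ratings
        "required reliability": 1.15,      # rated high, inflates effort
        "product complexity": 1.30,        # rated very high
        "analyst capability": 0.86,        # strong analysts reduce effort
    }
    print(intermediate_cocomo(32, a=3.0, b=1.12, multipliers=drivers))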
3. What is project scheduling? Explain different techniques for project scheduling.
Ans:
Project Scheduling
Project scheduling is concerned with the techniques that can be employed to manage the activities that need to be undertaken during the development of a project.
Scheduling is carried out in advance of the project commencing and involves:
• identifying the tasks that need to be carried out;
• estimating how long they will take;
• allocating resources (mainly personnel);
• scheduling when the tasks will occur.
Once the project is underway, control needs to be exerted to ensure that the plan continues to represent the best prediction of what will occur in the future:
• based on what occurs during the development;
• often necessitating revision of the plan.
Effective project planning will help to ensure that the systems are delivered:
• within cost;
• within the time constraint;
• to a specific standard of quality.
Two project scheduling techniques will be presented: the Milestone Chart (or Gantt Chart) and the Activity Network.
Milestone Charts
Milestones mark significant events in the life of a project, usually critical activities which must be achieved on time to avoid delay in the project. Milestones should be truly significant and be reasonable in terms of deadlines (avoid using intermediate stages). Examples include:
• installation of equipment;
• completion of phases;
• file conversion;
• cutover to the new system.
Gantt Charts
A Gantt chart is a horizontal bar or line chart which will commonly include the following features:
• activities identified on the left-hand side;
• a time scale drawn on the top (or bottom) of the chart;
• a horizontal open oblong or a line drawn against each activity, indicating estimated duration;
• dependencies between activities are shown;
• at a review point the oblongs are shaded to represent the actual time spent (an alternative is to represent actual and estimated by two separate lines);
• a vertical cursor (such as a transparent ruler) placed at the review point makes it possible to establish which activities are behind or ahead of schedule.
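As a toy illustration of these features, the sketch below prints a minimal text-mode Gantt chart (activities on the left, a time scale on top, a bar per activity); the task data is invented, and dependencies and shading are omitted.

    # Minimal text-mode Gantt chart: one row per activity, '#' marks the
    # estimated duration. Task data is invented for illustration.
    tasks = [
        ("Analysis", 0, 3),    # (name, start week, duration in weeks)
        ("Design", 3, 4),
        ("Coding", 5, 6),
        ("Testing", 9, 3),
    ]
    weeks = max(start + dur for _, start, dur in tasks)
    print(f"{'Activity':<10}" + "".join(f"{w:>3}" for w in range(1, weeks + 1)))
    for name, start, dur in tasks:
        print(f"{name:<10}" + "   " * start + "  #" * dur)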
Activity Networks
The foundation of the approach came from the Special Projects Office of the US Navy in 1958. It developed a technique for evaluating the performance of large development projects, which became known as PERT (Program Evaluation and Review Technique). Other variations of the same approach are known as the critical path method (CPM) or critical path analysis (CPA).
The heart of any PERT chart is a network of tasks needed to complete a project, showing the order in which the tasks need to be completed and the dependencies between them. This is represented graphically:
[EXAMPLE OF ACTIVITY NETWORK: diagram not reproduced]
The diagram consists of a number of circles, representing events within the development lifecycle, such as the start or completion of a task, and lines, which represent the tasks themselves. Each task is additionally labelled with its time duration. Thus the task between events 4 and 5 is planned to take 3 time units. The primary benefit is the identification of the critical path: the path through the network whose total activity time is greater than that of any other path, so that a delay in any task on the critical path leads to a delay in the whole project. Tasks on the critical path therefore need to be monitored carefully. (A small sketch of computing the critical path follows the stage list below.)
The technique can be broken down into 3 stages:
1. Planning:
• identify tasks and estimate their durations;
• arrange them in a feasible sequence;
• draw the diagram.
2. Scheduling:
• establish the timetable of start and finish times.
3. Analysis:
• establish float;
• evaluate and revise as necessary.
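To make the critical-path idea concrete, here is a minimal sketch that computes the earliest finish times of a small, invented task network in dependency order and then walks back to recover the critical path.

    # Critical path = longest path through an acyclic activity network.
    # Task names, durations, and dependencies are invented for illustration.
    duration = {"A": 3, "B": 5, "C": 2, "D": 4}
    depends_on = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

    finish, prev = {}, {}
    for task in ("A", "B", "C", "D"):      # a topological order
        start = max((finish[d] for d in depends_on[task]), default=0)
        finish[task] = start + duration[task]
        prev[task] = max(depends_on[task], key=finish.get) if depends_on[task] else None

    # Walk back from the last-finishing task to recover the critical path.
    path, task = [], max(finish, key=finish.get)
    while task:
        path.append(task)
        task = prev[task]
    print("Critical path:", " -> ".join(reversed(path)))   # A -> B -> D
    print("Project length:", max(finish.values()))          # 12 time units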
4. Explain mathematics in software development. Explain its preliminaries also.
Ans:
Mathematics in Software Development
Mathematics has many useful properties for the developers of large systems. One of its most useful properties is that it is capable of succinctly and exactly describing a physical situation, an object, or the outcome of an action. Ideally, the software engineer should be in the same position as the applied mathematician: a mathematical specification of a system should be presented, and a solution developed in terms of a software architecture that implements the specification should be produced. Another advantage of using mathematics in the software process is that it provides a smooth transition between software engineering activities. Not only functional specifications but also system designs can be expressed in mathematics, and of course, the program code is a mathematical notation – albeit a rather long-winded one.
The major property of mathematics is that it supports abstraction and is an excellent medium for modeling. As it is an exact medium, there is little possibility of ambiguity: specifications can be mathematically validated for contradictions and incompleteness, and vagueness disappears completely.
In addition, mathematics can be used to represent levels of abstraction in a system specification in an organized way. Mathematics is an ideal tool for modeling. It enables the bare bones of a specification to be exhibited and helps the analyst and system specifier to validate a specification for functionality without intrusion of such issues as response time, design directives, implementation directives, and project constraints. It also helps the designer, because the system design specification exhibits the properties of a model, providing only sufficient details to enable the task in hand to be carried out. Finally, mathematics provides a high level of validation when it is used as a software development medium. It is possible to use a mathematical proof to demonstrate that a design matches a specification and that some program code is a correct reflection of a design. This is preferable to current practice, where often little effort is put into early validation and where much of the checking of a software system occurs during system and acceptance testing.
Mathematical Preliminaries
To apply formal methods effectively, a software engineer must have a working knowledge of the mathematical notation associated with sets and sequences and the logical notation used in predicate calculus. The intent of this section is to provide a brief introduction. For a more detailed discussion, the reader is urged to examine books dedicated to these subjects.
Sets and Constructive Specification
A set is a collection of objects or elements and is used as a cornerstone of formal methods. The elements contained within a set are unique (i.e., no duplicates are allowed). Sets with a small number of elements are written within curly brackets (braces) with the elements separated by commas. For example, the set {C++, Pascal, Ada, COBOL, Java} contains the names of five programming languages. The order in which the elements appear within a set is immaterial. The number of items in a set is known as its cardinality. The # operator returns a set's cardinality; for example, the expression #{A, B, C, D} = 4 indicates that the cardinality operator has been applied to the set shown, with a result giving the number of items in the set.
There are two ways of defining a set. A set may be defined by enumerating its elements (this is the way in which the sets just noted have been defined). The second approach is to create a constructive set specification, in which the general form of the members of a set is specified using a Boolean expression. Constructive set specification is preferable to enumeration because it enables a succinct definition of large sets and explicitly defines the rule that was used in constructing the set. Consider the following constructive specification example:

{n : ℕ | n < 3 • n}

This specification has three components: a signature, n : ℕ; a predicate, n < 3; and a term, n. The signature specifies the range of values that will be considered when forming the set, the predicate (a Boolean expression) defines how the set is to be constructed, and, finally, the term gives the general form of the items of the set. In the example above, ℕ stands for the natural numbers; therefore, natural numbers are to be considered, the predicate indicates that only natural numbers less than 3 are to be included, and the term specifies that each element of the set will be of the form n.
Therefore, this specification defines the set {0, 1, 2}. When the form of the elements of a set is obvious, the term can be omitted; for example, the preceding set could be specified as {n : ℕ | n < 3}. All the sets that have been described here have elements that are single items. Sets can also be made from elements that are pairs, triples, and so on. For example, the set specification

{x, y : ℕ | x + y = 10 • (x, y²)}

describes the set of pairs of natural numbers of the form (x, y²) where the sum of x and y is 10. This is the set {(1, 81), (2, 64), (3, 49), . . .}. Obviously, a constructive set specification required to represent some component of computer software can be considerably more complex than those noted here. However, the basic form and structure remain the same.
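These constructive specifications map almost directly onto set comprehensions in an ordinary programming language; the sketch below mirrors the two examples above, using a finite range as a stand-in for ℕ.

    # {n : N | n < 3 . n} as a set comprehension; len() plays the role of #.
    small = {n for n in range(100) if n < 3}   # finite stand-in for N
    print(small, len(small))                   # {0, 1, 2} 3

    # {x, y : N | x + y = 10 . (x, y**2)}: pairs (x, y**2) with x + y == 10.
    pairs = {(x, y ** 2) for x in range(1, 10) for y in range(1, 10) if x + y == 10}
    print(sorted(pairs))                       # [(1, 81), (2, 64), (3, 49), ...]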
5. What is debugging? Explain the basic steps in debugging.
Ans:
Debugging is a methodical process of finding and reducing the number of bugs, or defects, in a computer program or a piece of electronic hardware, thus making it behave as expected. Debugging tends to be harder when various subsystems are tightly coupled, as changes in one may cause bugs to emerge in another. Many books have been written about debugging, as it involves numerous aspects, including: interactive debugging, control flow, integration testing, log files, monitoring (application, system), memory dumps, profiling, statistical process control, and special design tactics to improve detection while simplifying changes.
Step 1. Identify the error.
This is an obvious step but a tricky one; a bad identification of an error can cause a lot of wasted development time. Production errors reported by users are often hard to interpret, and sometimes the information we get from them is misleading.
A few tips to make sure you identify the bug correctly:
See the error. This is easy if you spot the error yourself, but not if it comes from a user; in that case, see if you can get the user to send you a few screen captures, or even use a remote connection to see the error for yourself.
Reproduce the error. You should never say that an error has been fixed if you were not able to reproduce it.
Understand what the expected behavior should be. In complex applications it can be hard to tell what the expected behavior around an error should be, but that knowledge is essential to being able to fix the problem, so we will have to talk with the product owner, check the documentation, and so on, to find this information.
Validate the identification. Confirm with the person responsible for the application that the error is actually an error and that the expected behavior is correct. The validation can also reveal situations where it is not necessary, or not worth it, to fix the error.
Step 2. Find the error.
Once we have an error correctly identified, it is time to go through the code to find the exact spot where the error is located. At this stage we are not interested in understanding the big picture of the error; we are just focused on finding it. A few techniques that may help to find an error are:
Logging. It can be to the console, a file, and so on. It should help you to trace the error in the code.
Debugging. Debugging in the most technical sense of the word: turning on whatever debugger you are using and stepping through the code.
Removing code. I discovered this method a year ago when we were trying to fix a very challenging bug. We had an application which, a few seconds after performing an action, was causing the system to crash, but only on some computers, and not always, only from time to time. When debugging, everything seemed to work as expected, and when the machine crashed it happened with many different patterns. We were completely lost, and then the removing-code approach occurred to us. It worked more or less like this:
We took out half of the code from the action causing the machine to crash and executed it hundreds of times, and the application crashed. We did the same with the other half of the code and the application didn't crash, so we knew the error was in the first half. We kept splitting the code until we found that the error was in a third-party function we were using, so we decided to rewrite it ourselves.
Step 3. Analyze the error.
This is a critical step. Use a bottom-up approach from the place the error was found and analyze the code so you can see the big picture of the error. Analyzing a bug has two main goals: to check that there aren't any other errors to be found around that error (the iceberg metaphor), and to establish the risk of introducing collateral damage with the fix.
Step 4. Prove your analysis.
This is a straightforward step. After analyzing the original bug, you may have come up with a few more errors that could appear in the application. This step is all about writing automated tests for these areas (it is better to use a test framework such as any from the xUnit family). Once you have your tests, you can run them, and you should see them all failing; that proves that your analysis is right.
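A minimal sketch of such a failing-first test using Python's built-in unittest (an xUnit-family framework); the function under test, parse_quantity, and its bug are hypothetical:

    import unittest

    # Hypothetical function under analysis: it should reject negative
    # quantities, but the current (buggy) version accepts them.
    def parse_quantity(text):
        return int(text)   # bug: "-3" is accepted instead of raising ValueError

    class QuantityTests(unittest.TestCase):
        def test_rejects_negative_quantities(self):
            # Written before the fix: it fails now, confirming the
            # analysis, and will pass once the bug is corrected.
            with self.assertRaises(ValueError):
                parse_quantity("-3")

    if __name__ == "__main__":
        unittest.main()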
Step 5. Cover lateral damage.
At this stage you are almost ready to start coding the fix, but before you change the code you should protect yourself: create or gather (if already created) all the unit tests for the code around where you will make the changes, so that after completing the modification you can be sure you haven't broken anything else. If you run these unit tests, they should all pass.
Step 6. Fix the error.
That's it; finally, you can fix the error!
Step 7. Validate the solution.
Run all the test scripts and check that they all pass.
6. What is a fishbone diagram? How is it helpful to project management?
Ans:
FISHBONE DIAGRAM
Ishikawa diagrams (also called fishbone diagrams, cause-and-effect diagrams, or Fishikawa) are causal diagrams, created by Kaoru Ishikawa, that show the causes of a certain event.[1] Common uses of the Ishikawa diagram are product design and quality defect prevention, to identify potential factors causing an overall effect. Each cause or reason for imperfection is a source of variation. Causes are usually grouped into major categories to identify these sources of variation. The categories typically include:
People: Anyone involved with the process
Methods: How the process is performed and the specific requirements for doing it, such as policies, procedures, rules, regulations, and laws
Machines: Any equipment, computers, tools, etc. required to accomplish the job
Materials: Raw materials, parts, pens, paper, etc. used to produce the final product
Measurements: Data generated from the process that are used to evaluate its quality
Environment: The conditions, such as location, time, temperature, and culture, in which the process operates
Ishikawa diagrams were proposed by Ishikawa [2] in the 1960s; he pioneered quality management processes in the Kawasaki shipyards and, in the process, became one of the founding fathers of modern management. The diagram was first used in the 1940s and is considered one of the seven basic tools of quality control.[3] It is known as a fishbone diagram because of its shape, which is similar to the side view of a fish skeleton.
Mazda Motors famously used an Ishikawa diagram in the development of the Miata sports car, where the required result was "Jinba Ittai", or "Horse and Rider as One". The main causes included such aspects as "touch" and "braking", with the lesser causes including highly granular factors such as "50/50 weight distribution" and "able to rest elbow on top of driver's door". Every factor identified in the diagram was included in the final design.
Causes
Causes in the diagram are often categorized, such as into the 8 Ms described below. Cause-and-effect diagrams can reveal key relationships among various variables, and the possible causes provide additional insight into process behavior.
Causes can be derived from brainstorming sessions. These groups can then be labeled as categories of the fishbone. They will typically be one of the traditional categories mentioned above, but may be something unique to the application in a specific case. Causes can be traced back to root causes with the 5 Whys technique.
Typical categories are:
• the 8 Ms (used in manufacturing)
• the 8 Ps (used in the service industry)
• the 4 Ss (used in the service industry)
One may find it helpful to use the fishbone diagram in the following cases:
• to analyze and find the root cause of a complicated problem;
• when there are many possible causes for a problem;
• if the traditional way of approaching the problem (trial and error, trying all possible causes, and so on) is very time consuming;
• when the problem is very complicated and the project team cannot identify the root cause.
When not to use it
Of course, the fishbone diagram isn't applicable to every situation. Here are just a few cases in which you should not use it, because the diagram either is not relevant or does not produce the expected results:
• the problem is simple or is already known;
• the team size is too small for brainstorming;
• there is a communication problem among the team members;
• there is a time constraint (all or sufficient headcount is not available for brainstorming);
• the team has experts who can fix any problem without much difficulty.