MVC Points – a new estimation methodology for web applications
PML 2005, Nagaraja Gundappa, Wipro Technologies
Abstract
As technology and development methodologies change, the older effort estimation methodologies become
less applicable or nearly irrelevant. When it comes to web application development using Object Oriented
Software Engineering methodology, it is tedious if not impossible to use older estimation methodologies
such as COCOMO or Function Points. One of the main reasons for the estimation methodologies to
become obsolete is that the methodologies are closed in the sense that the users cannot easily tailor or
extend the methodology. This paper defines a new paradigm of openness in estimation methodologies and
describes one estimation methodology, MVC Points, and its usage in detail. The methodology is open:
users can tailor or extend it. It has been developed indigenously and originally by the J2EE Centre Of
Excellence at Wipro Technologies. The initial results of piloting the methodology are very encouraging,
and it has started gaining acceptance.
(1) Introduction
Estimating the size of a software application has always been a challenge and continues to remain so. When
an estimation technique gains acceptability and tends to become universal, the technology and environment
change so much that the assumptions made in the well accepted methodology would become incomplete
and at times invalid. Established methodologies become less accepted in the new context and new
estimation methodologies emerge. Specifically, when it comes to estimating web applications, the current
estimation methods have the following problems:
The COCOMO methodology provides a way of estimating the effort given the size in Kilo Lines Of
Code (KLOC). However, it does not provide a methodology to estimate the size in KLOC.
KLOC is not the most appropriate unit for sizing web applications. In the event-driven
programming used for developing Graphical User Interfaces (GUIs), lines of code is not the
best indicator of the complexity involved in developing the GUI. Secondly, when both manual code
and tool-generated code are involved, the same number of lines requires different levels of effort.
Hence it becomes confusing to use KLOC as a sizing unit.
Due to the above complexities, estimating the size of an application in terms of KLOC, given the
requirements, is too complex if not impossible.
The Function Points methodology has the following disadvantages:
o It is difficult to visualize a web application using the paradigm of the Function Points
methodology. Function Points views the size of an application in terms of data and
transactions, and this is different from object oriented thinking, in which an application is
viewed as a group of classes related to and interacting with each other.
o There is no one-to-one mapping between the 5 elements of the methodology and the units
of code that actually get developed.
o The counting rules are too abstract and can be interpreted very differently by different
persons.
o As one function point does not directly translate into one unit of code that gets developed,
it is difficult to carry out a root cause analysis if there is a mismatch between the
estimated effort and actual effort.
The Use Case Points methodology addresses some of the above limitations, as it is based on use
cases and maps well to the object oriented view of applications. However, the Use Case Points
methodology has the following disadvantages:
o There is a large variation in the way use cases are written. If not balanced, an application
tends to have a smaller number of large use cases, and this can cause a large deviation in the
estimates.
o The use case to class mapping is one-to-many, and it is extremely cumbersome to verify the
effort per use case based on actual experience.
o Productivity norms per use case point have a degree of empiricism, and hence the
confidence level in the estimates would be low.
As a result of the above limitations, no estimation methodology has become universally accepted, and
experience-based free-format estimation continues on a large scale.
The above observations can be ascribed to the paradigms that these methodologies are based on; a new
paradigm is needed to develop new estimation methodologies.
(1.1) Current paradigm in estimation methodologies – Generic and closed
The current paradigm in estimation methodologies is to develop methodologies that are generic and applicable
to all technologies, and to handle technology specific complexities through adjustment factors. For instance,
the Function Points (FP) methodology was developed during the era of stand-alone applications but has
been abstracted and made applicable to estimating the size of any software. The FP methodology has
adjustment factors, called general system characteristics, that address the technology-specific complexities
of implementing the same functionality. The Use Case Points (UP) methodology is also considered generic
and has what are called Technical Complexity Factors and Environmental Complexity Factors to handle
technology specific variations. Another new estimation methodology, called Class Points, is similar in
nature.
In general, the paradigm of generic and closed estimation methodologies can be summarized as follows:
1. View the application top down. That is, visualize the application at the highest abstraction level,
either in terms of functionality or in terms of use cases.
2. Define adjustment factors to take care of technology and development environment specific
variations. These adjustment factors have empirical numbers that add to or multiply the size.
3. Define effort norms per abstracted sizing unit.
It is due to the above paradigm that these estimation methodologies have still not become universally
accepted. Specifically, the following factors pose a hurdle to the gradual maturing of these methodologies:
1. Because sizing units are generic and abstract, validation of effort norms is tedious and
approximate. That is, the sizing unit does not directly correspond to any coding unit that is
developed, and hence a unit-level validation of estimates is not possible. For instance, a function
point or a use case point does not correspond directly to a given set of classes. The relationship
between function points (or use case points) and the classes that get developed is many-to-many.
2. Adjustment factors involve empirical numbers, and this makes the methodology closed. That is,
if an application involves complexities not listed among the adjustment factors, users cannot
modify or extend the adjustment factors. For instance, if an application has higher reliability
requirements, this complexity is not a factor listed in Function Points or Use Case Points. A
user would not know how to tweak the adjustment factors, as they would not know what empirical
number to assign to this new complexity factor.
Therefore, the current paradigm of genericness and closedness does not allow organizations to adopt a
methodology and refine it continuously.
Hence, there is a need for a new paradigm of open and technology specific estimation methodologies.
Technology specific, because the sizing units can have a closer mapping to the code units developed, and
hence it should be easier to validate effort norms. Open, because it would not be possible for the authors of
any methodology to foresee and list all possible complexity factors. Users should be free to modify
or extend the complexity factors so that organizations can adopt them and refine them continuously.
(1.2) Experience at Wipro Technologies
At Wipro Technologies, experience with estimating for web applications reflects the above observations.
Firstly, it is observed that experience-based free-format estimation has several drawbacks. Most
significantly, it was found that there was no standard way to identify the work-break-down elements of an
application and hence lessons learnt from one estimation exercise could not be leveraged and used as
heuristics for the next estimation. Secondly, it has been found that the usage of estimation methodologies such
as Function Points and COCOMO for web applications is too tedious and cumbersome, if not impossible, for the
reasons explained earlier in this section. Hence, it was decided to develop a family of estimation
methodologies based on the new paradigm. Developing an open estimation methodology would essentially
involve:
1. Standardization of work-break-down units of a specific technology
2. Defining complexity indicators
3. Defining effort norms for sizing units of different complexities.
To begin with, it was decided to develop methodologies for J2EE and .Net technologies. This paper
describes the new methodology, called MVC Points. The rest of the paper is organized as follows:
Overview of the MVC Points methodology
Description of steps involved in the methodology
Illustration of the methodology for one use case
Results of initial usage of this methodology
Guidelines for adopting this methodology in an organization
Appendices
(2) MVC Points methodology
The Model-View-Controller pattern is quite popular and is a well accepted practice in developing J2EE and
.Net applications. Hence, standardization of work-break-down units for J2EE and .Net applications was
done based on the MVC pattern. That is, J2EE and .Net applications are considered to consist essentially
of a number of views, controllers and models; hence the name MVC Points for this methodology. A brief
overview of the MVC pattern, needed to appreciate this methodology, is provided in the next section.
(2.1) MVC Architectural pattern
Any interactive application can be decomposed into:
View Classes that present GUIs
Model Classes that handle business logic and interact with database
Controller classes which communicate between the View and Model Classes
Typically, in internet applications, user interfaces change often, look-and-feel being a competitive
issue. The same information is presented in different ways; however, the core business logic and data
are stable. So model classes would not change often, but view classes would. Hence, separating the Model
from the View (that is, separating data representation from presentation) results in the following
advantages:
easy to add multiple data presentations for the same data.
facilitates adding new types of data presentation as technology develops.
Model and View components can vary independently enhancing maintainability, extensibility,
and testability
In a typical J2EE application, the Model, View and Controller would map as shown in the following
diagram:
Diagram 1: MVC Pattern in J2EE technology
The View (GUIs) is implemented by JSP (Java Server Page), Controller is implemented using servlets and
Model is implemented by EJB (Enterprise Java Beans). It should be noted here that the Model includes
the database as well as the helper classes to go with the EJBs.
In the .Net technology, the View is implemented by ASP pages, the Controller by code-behind classes, and the
Model by COM+ components. The Model includes the COM+ components as well as the
database.
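As an aside, the separation described above can be illustrated with a minimal plain-Java sketch. It uses ordinary classes rather than JSPs, servlets and EJBs, and the class and method names are invented for illustration only:

```java
import java.util.HashMap;
import java.util.Map;

// Model: owns business data and logic; knows nothing about presentation.
class UserModel {
    private final Map<Integer, String> users = new HashMap<>();
    void addUser(int id, String name) { users.put(id, name); }
    String findUser(int id) { return users.get(id); }
}

// View: renders whatever data is handed to it; holds no business logic.
class UserView {
    String render(String name) {
        return name == null ? "<p>User not found</p>" : "<p>User: " + name + "</p>";
    }
}

// Controller: mediates between View and Model, as a servlet would.
class UserController {
    private final UserModel model;
    private final UserView view;
    UserController(UserModel model, UserView view) { this.model = model; this.view = view; }
    String handleShowUser(int id) { return view.render(model.findUser(id)); }
}
```

Replacing UserView with another presentation class requires no change to UserModel, which is precisely the maintainability benefit described above.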
(2.2) Overview of the methodology
The paradigm behind this methodology is that the size of a J2EE or a .Net application can be expressed in
terms of the number of views, models and controllers developed. The methodology involves listing the
use cases of the application and, for each use case, enumerating the models, views and controllers needed to
realize the use case. Each model, view and controller enumerated is classified as simple, medium or
complex and then effort is calculated based on norms. The major components of the methodology are
depicted in the diagram below:
Diagram 2: Components and work flow of MVCPoints methodology
As shown in the diagram above, the MVC Points methodology essentially consists of a step-by-step
procedure and guidelines to be used in each step. The procedure is as follows:
1. Identify the use cases for the application.
2. For each use case, estimate the number of models, views and controllers required to realize the use
case. The methodology provides tips to appropriately identify the Models, Views and Controllers.
3. Eliminate duplicates. That is, for instance, if a view identified for one use case is required for
another use case also, count it only once.
4. Classify the identified models, views and controllers as simple, medium or complex. The
methodology provides what are called complexity indicators to classify the identified models,
views and controllers.
5. Apply effort norms to the identified models, views and controllers. The MVC Points model
recommends that effort norms be defined only for the coding and unit testing (CUT) phase and
used as the basis for determining the life cycle effort.
6. Multiply the effort for CUT by 2.5 to get the life cycle effort (the reasons are explained in later sections).
7. The above steps typically cover 90% of the functionality. There will be miscellaneous work items
that cannot be modeled through the MVC pattern. Use a plain work break down structure and estimate
their effort based on experience.
8. Add buffer and project management effort to the above to get the total effort for the project.
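Steps 5 to 8 above can be sketched as a small calculation. The 2.5 life cycle multiplier comes from the methodology itself; the buffer and project management percentages are illustrative assumptions, since the paper leaves those to experience:

```java
// Rolls CUT effort up to total project effort per steps 5-8.
// The 2.5 life-cycle multiplier is the methodology's own; the buffer and
// project-management percentages passed in are illustrative assumptions.
class EffortRollup {
    static final double LIFE_CYCLE_MULTIPLIER = 2.5;

    static double totalEffort(double cutEffortDays, double miscEffortDays,
                              double bufferPct, double pmPct) {
        double lifeCycle = cutEffortDays * LIFE_CYCLE_MULTIPLIER; // step 6
        double base = lifeCycle + miscEffortDays;                 // step 7: ~10% non-MVC work
        return base * (1 + bufferPct + pmPct);                    // step 8
    }
}
```

For example, 100 person days of CUT effort with 25 days of miscellaneous work and 10% each of buffer and project management yields 330 person days in total under these assumptions.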
(2.3) Using the methodology
This section describes the implementation of each step in detail. J2EE terminology has been used wherever
applicable for ease of reading and it should be noted that the same steps are equally relevant for .Net
technology as well.
Step 1 - Identifying the use cases for the application
The first step in estimation using the MVC Points methodology is to identify the use cases. It is sufficient
to identify the title of each use case; it is not necessary to go into the details, although more detail is
beneficial. These use cases form the basis for identifying the models, views and controllers
(referred to as MVC Points in the rest of the paper) that will implement them.
Our experience has been that, for estimation carried out at the proposal stage, where only a
high level description of the application is available, it is still possible to identify the use cases for the
application at a title level. Experience has also shown that estimating the MVC Points based on use case
titles is effective, even though having detailed use cases can make the estimation more accurate.
Step 2 - Identify the Models, Views and Controllers
For each use case, visualize the scenario getting enacted in the to-be-implemented system. One should ask
what type of UI will be needed (if any), and how many. This step will yield the number of views
(JSPs). Secondly, one should visualize the control flow as well as transaction flow from GUI to the data
source and identify how many models (EJBs) and controllers (Servlets) will be required to implement this
use case. In an object oriented methodology, since we assume that the data design stems out of class
design, the effort to design the data model is accounted for in the effort to develop the Model itself.
Following are some tips to identify the MVCs:
Group the CRUDL (Create, Read, Update, Delete and List) transactions of the same entity into a
single use case. Each such unique use case will have common GUI. For instance add user, update
user, query user, delete user and list users transactions will be a part of the same use case and will
have either a single user screen or a main user screen and few small sub screens depending on the
complexity of the use case.
Deciding how many controllers are needed per use case is tricky. While at one extreme each view-
model combination can have its own controller, at the other extreme there are implementations
where an entire application has only one controller. One needs to use OOSE (Object Oriented
Software Engineering) best practices and guidelines to resolve this. One OOSE guideline
is to have an optimum number of attributes and methods in a class; too many attributes and
methods make a class unmaintainable. Typically, a class should not have more than 15
methods, and one method should not have more than 200 lines of code. Based on these guidelines,
one can have one controller for each view, or one controller for multiple views, ensuring that the
controller is not too large.
The effort to design and develop the model accounts for data design as well. However, some of
the administrative effort, such as database scripts written for archival etc., should be accounted for
separately, outside the MVC model. Experience shows that about 90% of the effort can be
estimated using the MVC model; the remaining miscellaneous 10% of requirements have
to be estimated separately using a plain WBS model. The model includes the EJBs and the
helper classes also.
External interfaces that the EJB connects to do not have to be counted separately as these are
accounted for in the complexity factor.
Helper classes for EJBs are accounted for as part of the EJB. One Model is equivalent to one EJB.
Step 3 - Eliminate duplicates
More than one use case can access the same model, view or controller. Therefore, while counting them for
a use case, give them suggestive names and eliminate redundant counting of the same models, views and
controllers.
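Steps 2 and 3 together amount to building a deduplicated inventory of suggestively named units. A minimal sketch, with invented names, could look like this:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Collects MVC units by suggestive name across use cases. A unit named
// identically for two use cases (e.g. a shared view) is counted only once,
// because sets discard duplicate names.
class MvcInventory {
    private final Set<String> views = new LinkedHashSet<>();
    private final Set<String> controllers = new LinkedHashSet<>();
    private final Set<String> models = new LinkedHashSet<>();

    void addView(String name) { views.add(name); }
    void addController(String name) { controllers.add(name); }
    void addModel(String name) { models.add(name); }

    int totalUnits() { return views.size() + controllers.size() + models.size(); }
}
```

If a "maintain user" use case and a "list users" use case both name UserMaintView and UserModel, the inventory still counts each of those once.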
Step 4 - Classify the identified units as simple, medium or complex
Each model, view and controller should be classified as simple, medium or complex based on the richness
of functionality required to support it. A view is classified based on the following parameters:
No. of data display and control elements on the screen
Type of formatting required for the page
Amount of validation logic to be implemented in the GUI itself
A controller is classified based on the following parameters:
No. of operations that the controller invokes on the model per request from the view
Complexity of the control logic such as determining what operations to invoke depending on the
results returned by the model.
Amount of functionality implemented in the controller such as storing session and state
information, maintenance of a cache etc.
A model is classified based on the following parameters:
Complexity of the business logic
Diversity of data sources that the model maintains
Complexity of the persistence mechanism used.
One should visualize the realization of a use case from end-to-end and estimate the factors influencing the
complexity and accordingly classify the models, views and controllers. Non-functional requirements
translate into complexity factors. A sample of the complexity indicators and indicative effort norms are
listed in Appendix A. Detailed complexity indicators are contained in the MVC Points manual.
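As an illustration of this step, the classification of a view can be sketched as a simple rule over its parameters. The numeric thresholds below are invented for illustration; the methodology's actual complexity indicators are those in Appendix A and the MVC Points manual.

```java
enum Complexity { SIMPLE, MEDIUM, COMPLEX }

// Classifies a view from the three parameters listed in the text.
// The thresholds are illustrative assumptions, not the methodology's norms.
class ViewClassifier {
    static Complexity classify(int uiElements, boolean richFormatting, int validationRules) {
        int score = 0;
        if (uiElements > 20) score++;       // many data display and control elements
        if (richFormatting) score++;        // heavy page formatting required
        if (validationRules > 10) score++;  // substantial validation logic in the GUI
        if (score >= 2) return Complexity.COMPLEX;
        if (score == 1) return Complexity.MEDIUM;
        return Complexity.SIMPLE;
    }
}
```

Analogous rules could be written for models and controllers using their respective parameters.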
Step 5 - Determining the effort for coding and unit testing
The identified and classified MVC Points are multiplied by their corresponding effort norms and added.
The effort norm is the effort in person days required to code (implement) and unit test a given MVC Point.
Following should be noted about the effort norm:
The effort norm is defined only for the coding and unit testing phase as this is the only phase in
which tasks are performed on a per MVC Point basis. That is, coding and unit testing phase
essentially consists of atomic activities to code and unit test the models, views and controllers and
nothing else. Hence, it is easier to validate the effort norm by comparing it with the actual effort
after the implementation. Other phases in the life cycle include work at an application level as
well and hence it is difficult to validate the effort norm per MVC Point. In other words, if the
effort norm defined for the models, views and controllers include the complete life cycle, then it
will be difficult to validate the same later on.
The effort norm is defined for each organization and will include the environmental and
productivity factors. That is, it is assumed that the environmental factors across an organization
are more or less similar.
The complexity factors such as those arising out of non functional requirements get factored into
the complexity indicators themselves.
The following formula summarizes the effort determination:
CUT Effort = SM * ESM + MM * EMM + CM * ECM + SC * ESC + MC * EMC + CC * ECC + SV * ESV + MV * EMV + CV * ECV
where SM, MM and CM are the numbers of simple, medium and complex models; SC, MC and CC the numbers of
simple, medium and complex controllers; SV, MV and CV the numbers of simple, medium and complex views; and
ESM, EMM, ..., ECV the corresponding effort norms in person days.
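The CUT effort is thus a sum of counts multiplied by norms over the nine categories, as in this sketch; the norm values an organization passes in are its own calibrated figures, not values from this paper.

```java
// Computes CUT effort (person days) from counts of simple/medium/complex
// models, views and controllers and the organization's per-unit effort norms.
// Index convention: 0 = simple, 1 = medium, 2 = complex.
class CutEffort {
    static double compute(int[] modelCounts, int[] viewCounts, int[] ctrlCounts,
                          double[] modelNorms, double[] viewNorms, double[] ctrlNorms) {
        double effort = 0;
        for (int i = 0; i < 3; i++) {
            effort += modelCounts[i] * modelNorms[i];
            effort += viewCounts[i] * viewNorms[i];
            effort += ctrlCounts[i] * ctrlNorms[i];
        }
        return effort;
    }
}
```

Because the norm applies only to coding and unit testing, the result feeds directly into the 2.5 life cycle multiplier of the following step.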