SEER-SEM

SEER for Software (SEER-SEM) is a project management application used to estimate resources required for software development.

History
1966 System Development Corporation Model based on regressions.

1980 Paper by Don Reifer and Dan Galorath that prompted the building of the JPL Softcost model. This model, an early example of software estimation, automated the estimate and performed risk analysis. Softcost was later made a commercial product by Reifer Consultants.

1984 Computer Economics released JS-2 and Galorath designed System-3, both based on the Jensen model.

The Jensen-inspired System-3, along with other models such as Barry Boehm's COCOMO and early work by Doty Associates, can be seen as a direct or indirect contributor to the software suite that Galorath would develop in the late 1980s.

In 1988, Galorath Incorporated began work on the initial version of SEER-SEM.

Group of Models
SEER for Software (SEER-SEM) is composed of a group of models working together to provide estimates of effort, duration, staffing, and defects. These models can be briefly described by the questions they answer:


 * Sizing. How large is the software project being estimated (lines of code, function points, use cases, etc.)?
 * Technology. What is the possible productivity of the developers (capabilities, tools, practices, etc.)?
 * Effort and Schedule Calculation. What amount of effort and time are required to complete the project?
 * Constrained Effort/Schedule Calculation. How does the expected project outcome change when schedule and staffing constraints are applied?
 * Activity and Labor Allocation. How should activities and labor be allocated into the estimate?
 * Cost Calculation. Given expected effort, duration, and the labor allocation, how much will the project cost?
 * Defect Calculation. Given product type, project duration, and other information, what is the expected, objective quality of the delivered software?
 * Maintenance Effort Calculation. How much effort will be required to adequately maintain and upgrade a fielded software system?
 * Progress. How is the project progressing, where will it end up, and how should it be replanned?
 * Validity. Is this development achievable based on the technology involved?

Software Sizing
Software size is a key input to SEER-SEM, as it is to most parametric software estimating models. Supported sizing metrics include source lines of code (SLOC), function points, function-based sizing (FBS), and a range of other measures. These are translated for internal use into effective size ($$S_{e}$$). $$S_{e}$$ serves as a common currency within the model, enabling new, reused, and even commercial off-the-shelf code to be mixed for an integrated analysis of the software development process. The generic calculation for $$S_{e}$$ is:


 * $$S_{e} = (\text{new size}) + (\text{existing size}) \times (0.4 \times \text{redesign} + 0.25 \times \text{reimpl} + 0.35 \times \text{retest})$$

As indicated, $$S_{e}$$ increases in direct proportion to the amount of new software being developed. $$S_{e}$$ increases by a lesser amount as preexisting code is reused in a project. The extent of this increase is governed by the amount of rework (redesign, re-implementation, and retest) required to reuse the code.
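The generic effective-size formula can be sketched in Python. The rework weights (0.4, 0.25, 0.35) come from the equation above; the input values in the example are purely illustrative:

```python
def effective_size(new_size, existing_size, redesign, reimpl, retest):
    """Generic effective size Se.

    new_size, existing_size: sizes in a common unit (e.g. SLOC).
    redesign, reimpl, retest: fractions (0.0-1.0) of the existing
    code that must be redesigned, re-implemented, and retested.
    """
    rework = 0.4 * redesign + 0.25 * reimpl + 0.35 * retest
    return new_size + existing_size * rework

# 10,000 new SLOC plus 20,000 reused SLOC that needs 20% redesign,
# 10% re-implementation, and 50% retest:
se = effective_size(10_000, 20_000, 0.2, 0.1, 0.5)  # ~15,600 effective SLOC
```

Note how reused code contributes only its rework fraction (here 28% of 20,000 SLOC) rather than its full size.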

Function-Based Sizing
While SLOC is an accepted way of measuring the absolute size of code from the developer's perspective, metrics such as function points capture software size functionally from the user's perspective. The function-based sizing (FBS) metric extends function points so that hidden parts of software such as complex algorithms can be sized more readily. FBS is translated directly into unadjusted function points (UFP).

In SEER-SEM, all size metrics are translated to $$S_{e}$$, including those entered using FBS. This is not a simple conversion, i.e., not a language-driven adjustment as is done with the much-derided backfiring method. Rather, the model incorporates factors, including phase at estimate, operating environment, application type, and application complexity. All these considerations significantly affect the mapping between functional size and $$S_{e}$$. After FBS is translated into function points, it is then converted into $$S_{e}$$ as:


 * $$S_{e} = L_{x} \cdot (AdjFactor \cdot UFP)^{\frac{Entropy}{1.2}}$$

where,


 * $$L_x$$ is a language-dependent expansion factor.
 * $$AdjFactor$$ is the outcome of calculations involving the other factors mentioned above.
 * $$Entropy$$ ranges from 1.04 to 1.2 depending on the type of software being developed.
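The FBS-to-$$S_{e}$$ conversion can be sketched as follows; the expansion factor and adjustment factor used in the example are illustrative placeholders, not calibrated SEER-SEM parameters:

```python
def fbs_to_effective_size(ufp, lx, adj_factor, entropy=1.2):
    """Convert unadjusted function points (UFP) to effective size Se.

    lx: language-dependent expansion factor.
    adj_factor: composite adjustment derived from phase at estimate,
    operating environment, application type, and complexity.
    entropy: 1.04 to 1.2 depending on the type of software.
    """
    return lx * (adj_factor * ufp) ** (entropy / 1.2)

# Illustrative values only: 100 UFP, expansion factor 55, neutral
# adjustment. With entropy = 1.2 the exponent is 1, so the mapping
# is linear: Se = 55 * 100 = 5500.
se = fbs_to_effective_size(ufp=100, lx=55, adj_factor=1.0)
```

With an entropy below 1.2 the exponent drops below 1, so effective size grows less than proportionally with functional size.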

Effort and Duration Calculations
A project's effort and duration are interrelated, as is reflected in their calculation within the model. Effort drives duration, notwithstanding productivity-related feedback between duration constraints and effort. The basic effort equation is:


 * $$K = D^{0.4}\cdot\left(\frac{S_{e}}{C_{te}}\right)^{E}$$

where,


 * $$S_{e}$$ is effective size, introduced earlier.
 * $$C_{te}$$ is effective technology, a composite metric that captures factors relating to the efficiency or productivity with which development can be carried out. An extensive set of people, process, and product parameters feed into the effective technology rating; a higher rating means that development will be more productive.
 * $$D$$ is staffing complexity, a rating of the project's inherent difficulty in terms of the rate at which staff are added to a project.
 * $$E$$ is entropy. Originally fixed at 1.2, it was later allowed to range from 1.04 to 1.2 depending on project attributes, with smaller IT-oriented projects tending toward the lower end. Currently entropy is observed to range from 1.0 to 1.2 depending on project attributes, and SEER allows an entropy below 1.0 if such a circumstance is observed.
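The basic effort equation can be expressed directly in Python. The function name and the sample inputs are illustrative; units follow the model's internal conventions rather than any published calibration:

```python
def effort(se, cte, d, entropy=1.2):
    """Basic effort equation: K = D**0.4 * (Se / Cte)**E.

    se: effective size; cte: effective technology;
    d: staffing complexity; entropy: typically 1.0 to 1.2.
    """
    return d ** 0.4 * (se / cte) ** entropy

# When effective size equals effective technology and staffing
# complexity is 1, effort is exactly 1 unit:
k = effort(se=5_000, cte=5_000, d=1)  # 1.0
```

Because the size term is raised to an exponent above 1, effort grows faster than proportionally with effective size, which is the diseconomy of scale the entropy parameter encodes.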

Once effort is obtained, duration is solved using the following equation:


 * $$t_{d} = D^{-0.2}\cdot\left(\frac{S_{e}}{C_{te}}\right)^{0.4}$$

The duration equation is derived from key formulaic relationships. Its $$0.4$$ exponent indicates that as a project's size increases, duration also increases, though less than proportionally. This size-duration relationship is also used in component-level scheduling algorithms with task overlaps computed to fall within total estimated project duration.
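The sub-proportional size-duration relationship can be checked numerically with a sketch of the duration equation (inputs illustrative, units internal to the model):

```python
def duration(se, cte, d):
    """Duration equation: td = D**-0.2 * (Se / Cte)**0.4.

    se: effective size; cte: effective technology;
    d: staffing complexity.
    """
    return d ** -0.2 * (se / cte) ** 0.4

# Doubling effective size stretches duration by a factor of
# 2**0.4, roughly 1.32x -- clearly less than the 2x a proportional
# relationship would give:
ratio = duration(20_000, 5_000, 4) / duration(10_000, 5_000, 4)
```

Note also the negative exponent on $$D$$: a higher staffing-complexity rating, which slows the rate at which staff can be added, compresses the computed duration term rather than extending it.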