In the first release of a new development project, the only thing to base an estimate on is the initial user stories. These stories often contain keywords that can be used to generate an initial rough estimate. The keywords have already been discussed in Add a General or Macro Process and Adding an Elementary Process. They are repeated here, grouped together in alphabetical order for ease of use.
The words that make up the user stories are searched to see if any appear in the lists above. If one is found, its function point value is used for the size estimate. If more than one is found, the largest function point value is chosen. If none is found, the user story must be decomposed into one or more of the processes described in the remainder of this post.
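The keyword scan described above can be sketched in a few lines of Python. The keyword-to-weight map here is a stand-in; the real keywords and weights come from the earlier posts referenced in the text.

```python
# Hypothetical keyword-to-function-point map; the actual keywords and
# weights are those listed in the earlier posts, not these examples.
KEYWORD_WEIGHTS = {
    "maintain": 21.1,   # e.g. a medium typical process
    "report": 5.2,      # e.g. a generic external output
    "interface": 4.2,   # e.g. a generic external input
}

def keyword_estimate(story):
    """Return the largest matching keyword weight, or None when no
    keyword is found (the story must then be decomposed further)."""
    matches = [w for kw, w in KEYWORD_WEIGHTS.items() if kw in story.lower()]
    return max(matches) if matches else None
```

If a story matches several keywords, only the largest weight is kept, exactly as the text prescribes; a `None` result signals that the story needs further decomposition.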
There are pros and cons to basing the estimate on the keywords in the user stories. The pro is that it is quick and easy: with no real analysis, it is possible to get a very rough estimate of size. This early estimate can alert people to misconceptions about the size of a project and gets them thinking about its proper order of magnitude. There are several cons. First, the approach does not always work; many user stories may contain none of the keywords and will have to be analyzed further before their size can be estimated. Even when user stories do contain one or more keywords, the resulting size estimate can be very inaccurate. It would be irresponsible to plan a project based solely on user story keywords.
When estimating the first release of a new development project, all of the functionality is being added. There will probably be several iterations (sprints) needed to deliver this release, and some of them may revisit a piece of functionality. However, the estimate is being performed for the release. Revisiting functionality, and refactoring in general, are not separately estimated; they are part of the work necessary to deliver the release. Therefore, the function point counts of the new functionality are the only relevant measure of software size.
Occasionally, a project will make changes to an existing application. It may be that a legacy application is being modernized or enhanced, or that some software has been purchased with the idea of enhancing it. These cases are not new development; they are enhancements, and they have to be estimated like the subsequent releases of a new development project. Since they involve both new and changed functionality, their size will be measured in both function points and SNAP points. It is as if someone else had done the first release of the application, and that time is not being estimated for this project. The elementary processes that are being modified as part of this release must be captured and used to generate the SNAP estimates. Any new functionality will be tracked and estimated in terms of function points.
User stories are usually decomposed into typical, general and macro processes. These processes were described in Add a Typical Process and Add a General or Macro Process. In many ways, typical and general processes represent the sweet spot in estimating based on initial user stories, which often lack the details necessary to get down to the elementary process level. A repeat of the typical process description and weights follows:
If there is an initial user story like “As a customer service representative, I can maintain a customer profile,” there is a question of what elementary processes might be included. Would the CSR be able to delete a customer profile? Would there be any type of report produced, like a list of the customers whose profiles were altered? These decisions might not be made until the story was being implemented. In the meantime, assuming that the story was a medium size typical process with 21.1 function points might be as precise an estimate as possible. Besides, the time that would be required to make a more precise decision might be better spent on other activities that a product owner has to perform.
A repeat of the general and macro process descriptions and weights follows:
Epic stories often correspond to general processes. There are times when an input file containing many different transaction types is being processed; this becomes a general process whose size depends upon the number of transaction types involved. Macro processes are usually based on an organization's previous experience. For example, the organization may have several systems in which it was required to assign a credit limit to a person or organization, each of which had between 5 and 7 general processes. This macro process would then be estimated as 285.9 function points. Care must be taken when using macro processes, because two processes may seem similar but be very different.
Sometimes the best approach is to decompose the user stories into elementary processes. This is not an all or nothing decision. Some stories are obviously typical processes and should be estimated as such. Others should be broken into elementary processes using the guidance discussed in Adding an Input Screen, Adding an Output Screen and Adding an Elementary Process. Often, a user story will contain several elementary processes. The following steps are used to assign weights to each one:
- Decide whether the process is an input or an output. Inputs are considered generic external inputs and estimated at 4.2 function points. Occasionally, it is not obvious whether the process is an input or an output; in these cases, estimate the process as an unspecified process with a weight of 4.4 function points.
- If the process is an output, then decide whether it is an external output or an external query. Generic outputs have a weight of 5.2 function points; generic queries, 3.9. It is not always possible to decide based on the initial user stories: external outputs perform calculations or update internal logical files, while queries do not. It is usually possible to make an educated guess, and the few times such guesses are wrong will tend to cancel out. If there is no basis for a guess, then estimate the process as an unspecified generic output with a function point weight of 4.6.
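The decision steps above reduce to a small lookup table. This is a minimal sketch using only the generic weights stated in the text; the process labels are illustrative.

```python
# Generic weights from the text. The "unspecified" entries cover the
# cases where the process type cannot be determined from the story.
ELEMENTARY_WEIGHTS = {
    "input": 4.2,                # generic external input
    "output": 5.2,               # generic external output
    "query": 3.9,                # generic external query
    "unspecified": 4.4,          # cannot tell input from output
    "unspecified_output": 4.6,   # an output, but cannot tell EO from EQ
}

# A hypothetical user story decomposed into three elementary processes:
processes = ["input", "query", "unspecified_output"]
story_size = sum(ELEMENTARY_WEIGHTS[p] for p in processes)  # 4.2 + 3.9 + 4.6
```

Each elementary process in a story gets one weight, and the story's size is simply their sum.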
Any of these estimates (external inputs, external outputs and external queries) can be made more precise by assigning function points based on the number of file types referenced (FTRs) and data element types (DETs). This process was explained in the posts referred to above. However, it seldom makes sense to go that far: the FTRs and DETs cannot usually be ascertained from the initial user stories, and even if this information could be found, the time spent counting function points would be better spent on other estimating or system development tasks.
Data groups must also be identified and counted. In user stories, the nouns usually correspond to data groups. For example, “As a trader, I can log my trades” would probably indicate a Trades data group, which would probably be an ILF. Unspecified ILFs can be estimated as 7.7 function points. The size of an ILF can be counted exactly if the number of DETs and record element types (RETs) is known, but these are seldom known from the initial user stories. Once again, it is usually not worth the time and effort to generate an exact count.
If there is a story “As an investor, I can have a portfolio valuation based on current stock prices,” then there are two groups of data. Portfolios is probably an ILF. Stock Prices may be an ILF or an EIF, depending on whether it is maintained by the application being developed or by another application. If it is an EIF, it will be difficult to estimate because it might correspond to more than one ILF in the application that maintains it; there might be a separate logical file for stock split information, for example. If Stock Prices were a single EIF, it should be estimated as an Unspecified EIF with a function point weight of 5.4. If the estimator thinks that it consists of a couple of logical files, then the table below would be used to estimate it as a general data group with a weight of 21.4 function points.
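Sizing the data groups follows the same lookup pattern. This sketch uses the weights given in the text and assumes, for illustration, that the estimator judged Stock Prices to span more than one logical file.

```python
# Data group weights from the text.
DATA_GROUP_WEIGHTS = {
    "ilf": 7.7,       # unspecified internal logical file
    "eif": 5.4,       # unspecified external interface file
    "general": 21.4,  # general data group (a couple of logical files)
}

# The portfolio-valuation story: Portfolios is an ILF; Stock Prices is
# assumed here to be a general data group rather than a single EIF.
data_groups = {"Portfolios": "ilf", "Stock Prices": "general"}
data_size = sum(DATA_GROUP_WEIGHTS[t] for t in data_groups.values())
```

Had Stock Prices been judged a single EIF instead, its entry would be `"eif"` and it would contribute 5.4 function points rather than 21.4.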
| Size | Unspecified Logical Files | Function Points |
|------|---------------------------|-----------------|
When sizing the data groups, it is critical not to double count. There may be several user stories that reference Portfolios but there is only one logical file or data group. There is not one for each file reference.
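A set is the natural way to avoid the double counting described above: however many stories mention a data group, it collapses to a single entry. The story lists here are illustrative.

```python
# Data groups referenced by each of several hypothetical user stories;
# Portfolios appears three times but must be counted only once.
story_refs = [
    ["Portfolios"],
    ["Portfolios", "Stock Prices"],
    ["Portfolios"],
]
unique_groups = {g for refs in story_refs for g in refs}
```

Only the members of `unique_groups` are sized; the number of references to each group is irrelevant.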
Once all of the above items have been identified and sized in function points, sizing the project is a matter of adding up the function points. Size is a tricky concept: the number of function points is a measure of functional size from the perspective of the application's users. This size must be weighted by the development tools, i.e. the computer language, being used for implementation. There are other factors to consider beyond the number of function points, but this is the main driver of any estimate being produced.
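Weighting by language is often done with a gearing factor (average source statements per function point). The factors below are illustrative placeholders, not values from the text, and the component totals are the ones worked in the earlier sketches.

```python
# Hypothetical gearing factors: source statements per function point.
# These numbers are illustrative assumptions, not figures from the post.
GEARING = {"Java": 53, "C": 97, "SQL": 13}

def implementation_size(function_points, language):
    """Weight a functional size by the implementation language."""
    return function_points * GEARING[language]

# Typical process + elementary processes + data groups from the examples:
fp_total = 21.1 + 12.7 + 29.1
estimated_statements = implementation_size(fp_total, "Java")
```

The same functional size yields very different implementation sizes in C and SQL, which is exactly why the functional total alone is not the whole estimate.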