Enhancing the Efficiency and Effectiveness of Application Development

Software has become critical for most large enterprises, which should adopt a reliable output metric integrated with the process for gathering application requirements.

October 2015 | by Steve Johnson

Most large companies invest heavily in application development, and they do so for a compelling reason: their future might depend on it. Software spending in the United States jumped from 32 percent of total corporate IT investment in 1990 to almost 60 percent as software gradually became critical to almost every company’s performance. Yet in our experience, few organizations have a viable means of measuring the output of their application-development projects. Instead, they rely on input-based metrics, such as the hourly cost of developers, variance to budget, or the percentage of delivery dates met. These metrics are useful because they indicate the level of effort that goes into application development, but they do not answer the essential question: how much software functionality did a team deliver in a given period? Or, put another way, how productive was the application-development group?

The Transformation Challenge

Organizations that have successfully adopted use cases (UCs) and use-case points (UCPs) have usually started with a pilot that may involve several teams and a portfolio of new projects on which to test the new approach. The organization will need to design the processes and tools that make UCs and UCPs operational. For example, it will need to address such questions as what template or tool teams should use for capturing UCs and calculating UCPs, how the organization will ensure that everyone follows the standard process, and how the metrics will be displayed and discussed.
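To make the calculation step concrete, the sketch below implements the standard use-case-points formula (Karner's method): UCP = (UAW + UUCW) × TCF × ECF, where actors and use cases are weighted by complexity class. The weights and factor ranges shown are the conventional published values for this method, not figures taken from this article, and the function names are illustrative.

```python
# Sketch of the standard use-case-points (UCP) calculation (Karner's method).
# Weights below are the conventional values for this method, shown for
# illustration only.

# Unadjusted Actor Weight (UAW): actors by complexity class
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
# Unadjusted Use Case Weight (UUCW): use cases by transaction count
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tcf=1.0, ecf=1.0):
    """Compute UCP = (UAW + UUCW) * TCF * ECF.

    actors / use_cases: dicts mapping complexity class -> count.
    tcf / ecf: technical and environmental complexity factors,
    each typically close to 1.0.
    """
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())
    uucw = sum(USE_CASE_WEIGHTS[c] * n for c, n in use_cases.items())
    return (uaw + uucw) * tcf * ecf

# Example: 2 simple and 3 complex actors, 4 average and 1 complex use case
points = use_case_points(
    {"simple": 2, "complex": 3},   # UAW = 2*1 + 3*3 = 11
    {"average": 4, "complex": 1},  # UUCW = 4*10 + 1*15 = 55
    tcf=1.05,
    ecf=0.95,
)
```

A spreadsheet template that fixes these weights and factor questionnaires is often enough to operationalize the calculation; the value comes from everyone applying the same classification rules, not from the arithmetic itself.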

Once the new design is complete, the pilot teams will train with the new processes and tools. Pilot teams can use previously completed projects to practice creating UCs and calculating UCPs. From there, the organization runs a pilot on actual projects to refine the processes and tools while addressing any gaps in the design. After completion of the pilot, organizations usually roll out UCs and UCPs more broadly in waves across the organization.

Throughout this process, it is critical to communicate a compelling change story. For example, the pilot team will need to explain the benefits of use cases to the business units, which naturally will be sensitive to any changes in the way requirements are gathered. Perhaps more important, there will likely be some resistance from within the development teams, whose members may not enjoy having their productivity measured.

What is critical to the ultimate acceptance of UCPs is how leadership uses them. Developers will understand the rationale for using metrics to identify projects at risk of going off track. They will also understand the benefits of more accurately determining resources and timelines for projects, without over- or under-scoping functional requirements. Little is more frustrating to application-development teams than pulling all-nighters to deliver what the business doesn’t want or need, and then having to redo much of their hard work. If, however, UCPs are used merely as a means of rewarding or penalizing application developers, serious resistance becomes much more likely.

The journey toward integrating a more efficient and effective way of gathering application-development requirements with a reliable output metric is not without its difficulties. However, the rewards are well worth the effort in a world where application development is an important key to success for almost any large enterprise.


Executive Editor

 Ms Anna Sullivan