A few months ago, whilst idly researching something via the Wikipedia rat-hole method (where you follow an interesting link, then another, eventually forgetting what you were looking for in the first place), I stumbled across Jevons's Paradox, an economic effect first noticed in 1865 by the English economist William Stanley Jevons. The paradox is that whilst technology improvements may make a single use of a resource more efficient, the overall effect is that demand for that resource increases. Whilst Jevons's initial focus was on coal consumption, subsequent research has concentrated on the economics, in particular the rebound effect: why improvements might not deliver the gains seen at an experimental level once they reach mainstream adoption.
In the 1980s the Khazzoom-Brookes postulate looked specifically at energy efficiency following the oil crises of the mid-to-late 70s, and at how a drive to produce more fuel-efficient cars would in fact drive up demand for oil rather than reduce it. The individual reaction to a more efficient car might be to drive further on the same budget, and it would encourage some people to use cars over other forms of travel (such as train, bus or walking). The overall collective impact would be increased consumption of the resource, in this case oil.
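As a rough illustration of that rebound logic, the sketch below uses a simple constant-elasticity demand curve. All the figures (fuel price, baseline mileage, elasticity) are invented for illustration, not empirical estimates:

```python
# Sketch of the rebound effect: better fuel economy lowers the effective
# cost per mile, and demand for driving responds to that lower cost.
# All figures here are illustrative assumptions.

def miles_demanded(cost_per_mile: float, elasticity: float = -1.2) -> float:
    """Very simple constant-elasticity demand curve (assumed form)."""
    base_cost, base_miles = 0.20, 10_000  # assumed baseline: $0.20/mile, 10k miles/yr
    return base_miles * (cost_per_mile / base_cost) ** elasticity

def fuel_consumed(miles: float, mpg: float) -> float:
    """Gallons of fuel needed to drive the given miles."""
    return miles / mpg

fuel_price = 4.0  # $/gallon, assumed

for mpg in (20, 26):  # before and after a 30% efficiency improvement
    miles = miles_demanded(fuel_price / mpg)
    print(f"{mpg} mpg: {miles:,.0f} miles/yr, "
          f"{fuel_consumed(miles, mpg):,.0f} gallons/yr")
```

With an assumed elasticity of -1.2, total fuel consumed actually rises after the efficiency improvement: the collective response more than cancels the per-mile gain, which is exactly the backfire case the postulate describes. With a smaller elasticity the saving would merely be eroded rather than reversed.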
Can this postulate be applied to IT? Certainly, technical developments since the 1960s and 70s have seen greater resources (power, people) being used to fulfil the collective demand for Information Technology solutions, as they've automated and improved upon analogue and manual systems. But this is a change of system, so the resource increase should be expected, alongside a reduction in resources used by the systems being replaced. Over the last five years, one of the arguments for DevOps, Platform-as-a-Service, microservices and the like is that this new wave of IT development is more efficient. Is it analogous to say that by having a more efficient system for developing and operating a specific application and workload, the overall demand for IT resources will increase?
For application development and operation, resources could include:
* developers, operators and other IT staff, such as project managers and architects. This can be measured in terms of hours worked, training needs, and recruiting and onboarding effort.
* hardware and infrastructure software (used as the basis for operations and development), in terms of licensing, support, CPU, storage and so on.
From a purely development point of view, the implementation of a more efficient platform and associated processes might reduce the time a developer needs for some tasks on a specific project, such as setting up development or staging environments or writing unique test scripts, as these will be automated or reusable from other projects on the shared platform. Using DevOps principles and removing bottlenecks in the process, the efficiency of developing a specific application should improve significantly. Metrics are used to record this for specific projects and for IT organisations as a whole, to show an improvement in efficiency and to highlight what work is being done.
For example, the DevOps scorecard shown above is from a short article by Payal Chakravarty of IBM, with some suggested metrics. It doesn't include any indicators on resources apart from #7, which is about customer usage, and it would be interesting to see whether the number of developers, the hours they work, or disk and CPU usage increased by moving to a DevOps process from one which was simply Agile or Waterfall. Additional metrics along those lines could be used to measure resource usage and to see whether it increased or decreased. This might be essential if one of the briefs for implementing DevOps, or for modernizing applications or development, is to reduce the overall cost of development. Specific application development costs might fall, but the overall IT spend might increase, just as the Khazzoom-Brookes postulate would predict.
To measure this, you would need to look at:
* overall developer hours for the IT department (as well as measuring PM, Ops and other staff)
* number of IT staff
* number of concurrent application workloads, in development and in production
* number of requests for application development, whether this is new, modernization or update
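A minimal sketch of how those measures might be combined into a single trend. The field names and quarterly figures below are invented sample data, purely to show the shape of the analysis:

```python
# Track whether IT resource usage is growing faster than the work it supports.
# Quarterly figures are invented sample data.
quarters = [
    {"dev_hours": 42_000, "staff": 120, "workloads": 35, "requests": 28},
    {"dev_hours": 45_500, "staff": 131, "workloads": 41, "requests": 36},
    {"dev_hours": 51_000, "staff": 140, "workloads": 52, "requests": 47},
]

for i, q in enumerate(quarters, start=1):
    # Hours per concurrent workload: if this ratio falls while total hours
    # rise, per-application efficiency improved but overall demand grew --
    # the pattern the Khazzoom-Brookes postulate would predict.
    hours_per_workload = q["dev_hours"] / q["workloads"]
    print(f"Q{i}: {q['dev_hours']:,} total hours, "
          f"{hours_per_workload:,.0f} hours/workload")
```

In this made-up data the cost per workload falls each quarter while the total developer hours (and headcount) keep rising, which is the signature you would be looking for.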
The aim would be to see if the resources applied to application development and operation are increasing and, if so, what proportion this is against the number of projects and applications being managed. As well as looking for evidence of Jevons's Paradox, an IT organisation will need to look for diminishing (or possibly negative) returns through development of the system. However, when looking at any proposed modernization or development of an IT technology, process or organisation, it might be prudent to assume that overall resource usage will go up. You might expect this for a number of reasons anyway, such as:
* cost of providing training on new techniques and methods
* need to design and deliver a new or migrated application platform
* increased complexity required with new technology
* need to maintain existing applications as well as develop new ones.
But even if these additional costs are taken on board, the increased adoption of and reliance on the improved systems, as well as the need to maintain the legacy (or technical debt), will mean that overall costs increase anyway. Therefore, in calculating return on investment, the payback time for any development decision should take into account the increased consumption of resources and the need to provide more. Some further analysis will appear in another article, along with a look at how the tragedy of the commons should be taken into account in measuring the impact of IT development and change.
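As a closing sketch, here is a naive payback calculation next to one that folds in rebound-driven demand growth. Every figure (investment, saving, growth rate) is an assumed number for illustration only:

```python
# Naive vs rebound-adjusted payback on a platform modernization investment.
# All figures are illustrative assumptions.
investment = 500_000          # up-front cost of the new platform
annual_saving = 150_000       # per-application efficiency saving
demand_growth = 0.15          # assumed 15%/yr growth in resource consumption
baseline_run_cost = 400_000   # current annual IT run cost

naive_payback = investment / annual_saving  # ignores induced demand
print(f"Naive payback: {naive_payback:.1f} years")

# Adjusted: each year, extra consumption erodes part of the saving.
cumulative, year, run_cost = 0.0, 0, baseline_run_cost
while cumulative < investment and year < 20:
    year += 1
    run_cost *= 1 + demand_growth            # consumption keeps rising
    extra_consumption = run_cost - baseline_run_cost
    cumulative += max(annual_saving - extra_consumption, 0)
print(f"Adjusted payback: {'>20' if cumulative < investment else year} years")
```

Under these assumed numbers the naive calculation suggests payback in a few years, while the adjusted one never pays back at all, because growing consumption swallows the saving. The point is not the specific figures but that the payback model should include the consumption-growth term.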