
Saving Money by Increasing CPU Efficiency with SCOM

Advances in processor technology continue to put substantial processing power within reach of pretty much anyone. Entry-level servers from Dell, HP, and IBM all ship with plenty of horsepower, so it’s no longer uncommon to have surplus processing capacity. The next logical question is about efficiency: are you effectively using all the available processing power? Unless a server is running north of 55% utilization, IMHO, it’s just wasting money because it is wasting electricity.

With respect to processor efficiency, I am referring to the amount of work a processor performs relative to its potential power. There isn’t really a universally accepted model for calculating processor efficiency, so I adopted my own. The inspiration for this effort came from a customer who specifically asked whether OpsMgr On-Line™ could measure overall server efficiency. I thought it was a very valuable metric, so I set out to build a comprehensive model to accomplish just that. Network, disk, and memory were easy to model, but CPU proved a little more of a challenge.

Performance Centric Resources

Computers draw from four core resource pools. This blog will only cover the processor because, as I mentioned earlier, it’s not as easy as you’d think. As for the other three resource pools:

- Physical Memory (RAM) – You either have enough memory or you don’t.
- Disk – The disk subsystem has two core metrics:
  - Capacity – It either has enough capacity or it does not.
  - Performance – Applications can either read from disk or write to disk fast enough to sustain acceptable performance.
- Network – Applications can either place data onto or read data from the network fast enough to sustain performance.

Defining ‘Work’

A computer’s ‘power’ comes from its CPU. CPU performance is measured in gigahertz (GHz), which refers to the clock speed. As a general rule, the faster the clock ticks, the faster data can be processed.

When it comes to measuring how much work a processor is performing, I am a fan of the Context Switches per Second performance counter. Context switching activity is important for three reasons:

1. A program that monopolizes the processor lowers the rate of context switches because it does not allow much processor time for other processes’ threads.
2. A high rate of context switching means that many threads of equal priority are sharing the processor repeatedly.
3. A high context-switch rate often indicates that there are too many threads competing for the processors on the system.

Because Context Switches/sec is a very accurate measurement of ‘what the processor is doing’, it is the ideal indicator of how much work a computer is performing. Naturally, the system must be healthy: no hardware malfunctions, properly configured, and all other resource pools sufficiently sized.
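
To make the counter arithmetic concrete, here is a minimal sketch (the sample values are hypothetical) of how a collector turns two cumulative context-switch readings into the per-second rate the counter reports:

```python
def rate_per_sec(count_t0, count_t1, elapsed_sec):
    """Convert two cumulative context-switch samples into a
    Context Switches/sec rate over the sampling interval."""
    return (count_t1 - count_t0) / elapsed_sec

# Hypothetical cumulative samples taken 15 seconds apart:
print(rate_per_sec(1_250_000, 1_340_000, 15))  # 6000.0 context switches/sec
```
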

The CPU Performance Index

The CPU performance index (CPI) is a number that describes the efficiency of a processor. The CPI identifies the workload of a computer’s processors at different points in time. The subsequent deltas indicate the efficiency momentum of the computer and will answer the following questions:

1. What is the maximum workload a processor can sustain?
2. Is a processor over-utilized or under-utilized?
3. How long before a processor can no longer sustain its current workload?

Deriving the CPU Performance Index

The CPU performance index, represented as CPI, is a numeric ratio used to identify the efficiency of a processor, measured as a function of work performed. Depending on the value, you can ascertain whether a processor is over-utilized (resource deficient), performing optimally, or under-utilized (has additional processing capacity).

Since work is work, I just borrowed the classic physics formula for work: W = F × D.

- Work (W) is already defined – it is represented by Context Switches/sec.
- Force (F) is also defined – it is represented by processor utilization.

Solving for distance (D) gives D = W/F, where ‘D’ is substituted with CPI. We have already established that Context Switches/sec is relative to the role of the computer and that the value is relatively steady under standard operating conditions, so the more ‘force’ applied to each thread, the more work accomplished.

The higher the processor utilization, the more work being accomplished. It is also critical to note that the processor can’t be backlogged under normal production workloads: the processor queue length must show no stacking. The CPI value will be meaningless if the processors are backlogged at any time. For example, you may see low utilization and a high queue length due to an insufficiently sized FSB (front-side bus).
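
The ‘no stacking’ precondition can be sketched as a simple guard before trusting a CPI sample. The per-core queue threshold of 2 below is an assumed rule of thumb for illustration, not a value from SCOM:

```python
def cpi_is_valid(processor_queue_length, core_count, threshold_per_core=2):
    """Return True when the System\\Processor Queue Length counter shows
    no stacking, so a CPI sample taken at the same moment is meaningful.
    The threshold of 2 waiting threads per core is an assumed rule of thumb."""
    return processor_queue_length <= core_count * threshold_per_core

print(cpi_is_valid(processor_queue_length=3, core_count=4))   # True – no stacking
print(cpi_is_valid(processor_queue_length=12, core_count=4))  # False – backlogged
```
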

So putting this all together we get:

CPU Performance Index = (Context Switches/sec) / (Percent Processor Time)

which can be simply represented as:

CPI = W / F
We are simply calculating a ratio of work to force. Assuming predefined performance variables remain within established operational limits, the greater the force, the more work accomplished. Given we are operating under the assumption that processing power is constant (for the purposes of this blog I am not addressing processors whose individual cores can be powered off), the only way to maximize processor efficiency is to ensure the maximum amount of work is being performed; otherwise, the surplus processing power is simply wasted in the form of heat and power consumption. In other words, if we can’t reduce the energy being consumed, let’s increase the workload so that the processor is accomplishing more work.
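
Putting the ratio into code, here is a minimal sketch of the calculation; the sample values (6,000 context switches/sec at 40% utilization) are hypothetical:

```python
def cpu_performance_index(context_switches_per_sec, percent_processor_time):
    """CPI = work / force = (Context Switches/sec) / (% Processor Time)."""
    if percent_processor_time <= 0:
        raise ValueError("percent_processor_time must be positive")
    return context_switches_per_sec / percent_processor_time

# Hypothetical sample: 6000 context switches/sec at 40% utilization
print(cpu_performance_index(6000, 40))  # 150.0
```
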

Interpreting the CPU Performance Index

A single CPI value is meaningless. Regularly measured values are needed to determine whether CPI is increasing or decreasing. The objective is to make CPI as small a value as possible and then ensure it remains relatively constant throughout the server’s remaining lifecycle. A computer’s processors have reached maximum utilization when the smallest possible CPI has been reached and remains steady. There is no accepted singular value for CPI; the objective is to have the CPI trend down and stay down over time, which indicates the computer’s processors are operating at maximum efficiency.
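
One way to check the trend, sketched with hypothetical daily CPI averages, is to compare the mean of the earlier half of the samples against the later half. A negative delta means CPI is trending down, toward maximum efficiency:

```python
def cpi_trend(samples):
    """Compare the average CPI of the first and second halves of a series
    of regularly spaced samples. Negative result = CPI trending down."""
    half = len(samples) // 2
    early = sum(samples[:half]) / half
    late = sum(samples[half:]) / (len(samples) - half)
    return late - early

# Hypothetical daily CPI averages over eight days:
readings = [180.0, 175.0, 172.0, 168.0, 150.0, 148.0, 149.0, 147.0]
print(cpi_trend(readings))  # negative value: CPI is trending down
```
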
