Archive for November, 2012

Monitoring Citrix with Operations Manager 2012

November 5, 2012 1 comment

Back in the Operations Manager 2007 days, you had MPs available for most of the Citrix products. The XenApp 6.5 installation media, for instance, included a management pack you could use in OpsMgr 2007.
Now with 2012, Citrix has said that it will no longer continue development of these management packs and has handed development over to a partner called ComTrade.

ComTrade has developed a bunch of management packs for most of Citrix's products, including:

* XenApp
* XenDesktop
* XenServer

Now, NetScaler for instance is primarily a network device, so you have "free" monitoring capabilities via SNMP, but for extended monitoring capabilities Citrix actually has a new MP which was released in September.
As for the MPs, you can sign up for a free trial on ComTrade's website.
I'm going to take a quick walkthrough of how XenApp monitoring is set up and how it works.

After you have received the user information you can start downloading the MPs.
The installation process is pretty straightforward (next, next, finish), and the setup will automatically import the management packs.




So if you open the console and check under Administration –> Management Packs,
you can now see the ComTrade management packs appear.


If you go back to the Monitoring pane, you will see that there are a bunch of new options under ComTrade XenApp.


Under Reports, there is also a bunch of new reports available for XenApp.


This will give you good insight into your Citrix environment: which applications users actually use, and what kind of performance issues they might be having.
We will take a further look at this later when we are finished setting up the connection to XenApp.

When the installation process is finished you will see a new Start menu shortcut called "XenApp Connector", which allows you to complete the process of setting up the monitoring.
Here you have to enter information about the XenApp farm: a farm administrator account and its password.


Now remember that the account has to be a farm administrator for the setup to work correctly, and you have to get a valid license from ComTrade in order to use it. After that you have to enable the SCOM agent to act as a proxy; you can do this under Managed Agents in the Administration pane in SCOM.

After this, go to the Monitoring pane and find ComTrade XenApp Servers; from there, choose the XenApp server you wish to monitor. On the right side you have the option to install a XenApp MP agent, so run that task.


When the installation is done (you can see this in the Event Viewer), data will start being populated into SCOM after a while.
So yay! Now we have a good and solid XenApp monitoring solution alongside the rest of the infrastructure.
Now we can start monitoring SLAs on our infrastructure (XenApp, NetScaler, SQL Server, Web Interface).

And it's as simple as that. (I have no real licenses on my XenApp server, so I get a licensing error message each time I log on to the server, and it also appeared in Operations Manager.)



Saving Money by Increasing CPU Efficiency with SCOM

November 5, 2012 Leave a comment

The advances in processor technology continue to ensure access to substantial processing power for pretty much anyone. Dell, HP, and IBM entry-level servers all come standard with substantial processing power, so it's no longer uncommon to have surplus processing power. The next logical question is about efficiency: 'Are you effectively using all the available processing power?' Unless a server is running north of 55% utilization, IMHO, it's just wasting money because it is wasting electricity.

With respect to processor efficiency, I am referring to the amount of work a processor is performing relative to its potential power. There isn’t really a universally accepted model for calculating processor efficiency so I adopted my own. The inspiration for this effort was from one customer who specifically asked if OpsMgr On-Line™ could measure overall server efficiency. I thought it was a very valuable metric so I set out to build a comprehensive model to accomplish just that. Network, Disk and Memory were easy to model but CPU proved a little more of a challenge.

Performance Centric Resources

Computers draw from four core resource pools. This blog will only cover the processor because, as I mentioned earlier, it's not as easy as you'd think. As for the other three resource pools:

* Physical Memory (RAM) – You either have enough memory or you don't.
* Disk – The disk subsystem has two core metrics:
  * Capacity – It has enough capacity or it does not.
  * Performance – Applications can either read from disk or write to disk fast enough to sustain acceptable performance.
* Network – Applications can either place data onto or read data from the network fast enough to sustain performance.

Defining 'Work'

A computer’s ‘power’ is generated from its CPU. The CPU’s performance is measured in Gigahertz (GHz) and refers to the clock speed. As a general rule, the faster the clock can tick the faster data can be processed.

When it comes to measuring how much work a processor is performing, I am a fan of the Context Switches per Second performance counter. Context switching activity is important for three reasons:

1. A program that monopolizes the processor lowers the rate of context switches because it does not allow much processor time for the other processes' threads.
2. A high rate of context switching means that many threads of equal priority are sharing the processor repeatedly.
3. A high context-switch rate often indicates that there are too many threads competing for the processors on the system.

Because Context Switches/sec is a very accurate measurement of 'what the processor is doing', it is the ideal indicator for measuring how much work a computer is performing. Naturally, the system must be healthy: no hardware malfunctions, properly configured, and all other resource pools sufficiently sized.

The CPU Performance Index

The CPU performance index (CPI) is a number that describes the efficiency of a processor. The CPI identifies the workload of a computer’s processors at different points in time. The subsequent deltas indicate the efficiency momentum of the computer and will answer the following questions:

1. What is the maximum workload a processor can sustain?
2. Is a processor over-utilized or under-utilized?
3. How long before a processor can no longer sustain its current workload?

Deriving CPU Performance Index

The CPU performance index, represented as CPI, is a numeric ratio used to identify the efficiency of a processor, measured as a function of work performed. Depending on the value, you can ascertain whether a processor is over-utilized (resource deficient), performing optimally, or has additional potential processing capacity (under-utilized).

Since work is work, I just borrowed the physics formula for work: W = F × D.

* Work is already defined – it is represented by Context Switches/sec.
* Force is also defined – it is represented by processor utilization.

By solving for distance (D) we get W/F = D, where 'D' is substituted with CPI. We have already established that Context Switches/sec is relative to the role of the computer and that the value is relatively steady under standard operating conditions, so the more 'force' applied to each thread, the more work accomplished.

The higher the processor utilization, the more work being accomplished. It is also critical to note that the processor can't be backlogged under normal production workloads; the relative processor queue length must reflect no stacking. The CPI value will be meaningless if the processors are backlogged at any time. For example, you may have low utilization and a high queue length due to an insufficiently sized FSB.
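Since the queue-length caveat is easy to overlook, here is a minimal sketch of that sanity check in Python. The function name is my own, and the threshold of roughly two queued threads per core is a commonly cited rule of thumb, not part of the model above:

```python
def queue_is_backlogged(queue_length, core_count, threshold_per_core=2):
    """Return True when the processor queue shows stacking.

    Any CPI computed from the same sample interval should be
    discarded in that case, per the caveat above.
    threshold_per_core=2 is a rule-of-thumb default, not a standard.
    """
    return queue_length > threshold_per_core * core_count

# 10 queued threads on a 4-core box: backlogged, CPI is meaningless
print(queue_is_backlogged(10, 4))  # True
# 3 queued threads on 4 cores: fine, CPI is usable
print(queue_is_backlogged(3, 4))   # False
```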

So putting this all together we get:

CPU Performance Index = (Context Switches/sec) / (Percent Processor Time)

which can be simply represented as:

CPI = W / F
We are simply calculating a ratio of work to force. Assuming predefined performance variables remain within established operational limits, the greater the force, the more work accomplished. Given that we are operating under the assumption that processing power is constant (for the purposes of this blog I am not addressing processors whose individual cores can be powered off), the only way to maximize processor efficiency is to ensure the maximum amount of work is being performed; otherwise, the surplus processing power is simply wasted in the form of heat and power consumption. In other words, if we can't reduce the energy being consumed, let's increase the workload so that the processor is accomplishing more work.
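The formula is simple enough that a sketch fits in a few lines. Assuming you have already sampled the two counters (from Performance Monitor, an OpsMgr export, or anywhere else), the computation itself is just the ratio; the function name here is my own:

```python
def cpu_performance_index(context_switches_per_sec, percent_processor_time):
    """CPI = W / F, where work (W) is Context Switches/sec and
    force (F) is % Processor Time, per the derivation above.

    Returns None for an idle sample, since the ratio is undefined
    at zero utilization.
    """
    if percent_processor_time == 0:
        return None
    return context_switches_per_sec / percent_processor_time

# Example: 45,000 context switches/sec at 60% processor time
print(cpu_performance_index(45_000, 60))  # 750.0
```

Remember that a single value like 750.0 means nothing on its own; it only becomes useful when sampled regularly and compared over time.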

Interpreting the CPU Performance Index

A single CPI value is meaningless. Regularly measured values are needed to determine whether CPI is increasing or decreasing. The objective is to make CPI as small as possible and then ensure it remains relatively constant throughout the server's remaining lifecycle. A computer's processors have reached maximum utilization when the smallest possible CPI has been reached and remains steady. There is no accepted singular value for CPI. The objective is to have the CPI trend down and remain down over time, which will indicate the computer's processors are operating at maximum efficiency.
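To make "trending down and remaining down" concrete, here is a small sketch that fits a least-squares slope to a series of regularly spaced CPI samples. The function name and the sample values are my own invented illustrations, not part of the model: a negative slope means CPI is still falling, and a slope near zero means it has stabilized at its minimum.

```python
def cpi_trend(samples):
    """Least-squares slope of regularly spaced CPI samples.

    Negative slope: CPI is trending down (efficiency improving).
    Slope near zero: CPI has stabilized, i.e. the processors are
    at their sustained maximum efficiency per the model above.
    """
    n = len(samples)
    if n < 2:
        raise ValueError("need at least two samples to measure a trend")
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

falling = [900, 870, 850, 820, 800]  # hypothetical: CPI still dropping
steady = [750, 752, 749, 751, 750]   # hypothetical: CPI has stabilized

print(cpi_trend(falling))            # -25.0 (trending down)
print(abs(cpi_trend(steady)) < 1)    # True (effectively flat)
```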