Archive for April, 2011

How migrating to Exchange Server 2010 can save money on storage

April 7, 2011

When Steve Derbyshire, IT operations director at NEC Philips Unified Solutions UK, decided to implement Exchange Server 2010, he says he did so for three primary reasons: to get everyone on the same level of server software, to improve resiliency and to reduce storage costs, if possible.

Migrating to Exchange Server 2010 — More bang for the buck

Since migrating to Exchange Server 2010 in July 2009 as part of the Microsoft Technology Adoption Program, the company has increased its email system storage capacity by a factor of eight through the use of serial ATA (SATA) disks, Derbyshire said. This comes at only 25% of the cost of new Fibre Channel (FC) disks, which would have been required if the company had maintained its mixed Exchange Server 2003 and Exchange Server 2007 environment, he added.

While NEC Philips’ small Exchange Server 2003 system had been on a single direct-attached storage (DAS) server, its Exchange Server 2007 environment was backed by a Fibre Channel storage area network (SAN). “We were nearing 80% or higher capacity on the SAN, and we would have had to extend it had we stayed with Exchange [Server] 2007,” said Matt Hawkins, consulting team leader at NEC Philips.

In terms of storage volume, NEC Philips had 500 GB of storage allocated to Exchange Server 2007. The company now has 6 TB for Exchange Server 2010 and spent only a quarter of what it would have cost to double the Fibre Channel SAN capacity to 1 TB, said Hawkins. “We’ve even done away with mailbox limits; we’ve got so much SATA that mailbox storage size is not a problem.”

“We knew going to SATA gave us the potential to do something like this, but until we got Exchange [Server] 2010 up and running, did all the calculations and made sure we had sufficient bandwidth between our two sites, we weren’t sure we were going to be able to do it,” Derbyshire added. “I thought we would [save money], but I hadn’t expected it to be quite as dramatic as this.”

To date, NEC Philips has saved roughly $3,000 in storage costs, but this is a drop in the bucket, added Derbyshire. “Ours is a small installation; extrapolate that up to a large organization and the numbers get interesting.”

More significantly, NEC Philips has eliminated third-party cold-standby and associated costs using Exchange Server 2010’s database availability group (DAG) capability. With DAG, NEC Philips can run a secondary, active server for disaster recovery. Using DAG on SATA will save the company roughly $19,000 a year, Derbyshire said.

Spinning up Exchange Server 2010 savings

Unlike its predecessors, Exchange Server 2010 offers disk input/output (I/O) characteristics that are suitable for economical SATA disks. According to Microsoft, the latest version of its mail server software lowers overall disk I/O by up to 70% compared with Exchange Server 2007.

“Exchange [Server] 2003 is a high I/O, read/write technology because it’s reading and writing all over the platter — wherever it finds open spots,” said Rand Morimoto, president of Convergent Computing, a Microsoft consulting firm in Oakland, Calif. “It’s designed that way because when it was built, there were no fast hard drives or large memory spaces. But now, when we can put 16 GB or 32 GB of RAM in a 64-bit computer and have four or eight cores for processing the information, that [technology] doesn’t make sense.”

In the background, Exchange Server 2010 defragments the disk and cleans up all open spots so that it writes information sequentially. “This might make it 20% to 30% slower to write, but the read time is 40% faster,” Morimoto said. At the end of the day, this means storage cost reductions for enterprises.

“The fabric that organizations have to lay down for Exchange [Server] for storage went from $10,000 to $25,000 per terabyte of very high-speed Fibre Channel to $500 hard drives,” Morimoto said. “Do the math — $25,000 a terabyte or $500 a terabyte. Which one would you choose?”
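Morimoto’s per-terabyte figures make for simple arithmetic. The sketch below works through the math, taking the upper end of his Fibre Channel estimate and the 6 TB capacity NEC Philips reported; both numbers come from this article, and the function itself is purely illustrative:

```python
# Back-of-the-envelope math using the per-terabyte figures quoted above.
# Prices are Morimoto's 2011 estimates, not current quotes; the 6 TB
# capacity is the figure NEC Philips reported for its Exchange 2010 setup.

def storage_cost(capacity_tb: float, cost_per_tb: float) -> float:
    """Total cost of provisioning capacity_tb terabytes at cost_per_tb dollars/TB."""
    return capacity_tb * cost_per_tb

FC_PER_TB = 25_000   # high-speed Fibre Channel, upper end of the quoted estimate
SATA_PER_TB = 500    # commodity SATA drives

fc_total = storage_cost(6.0, FC_PER_TB)
sata_total = storage_cost(6.0, SATA_PER_TB)

print(f"Fibre Channel: ${fc_total:,.0f}")              # $150,000
print(f"SATA:          ${sata_total:,.0f}")            # $3,000
print(f"Cost ratio:    {fc_total / sata_total:.0f}x")  # 50x
```

At those prices the gap is a factor of 50, which is why even NEC Philips’ “small installation” numbers extrapolate so dramatically for larger shops.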

Like NEC Philips, Subaru Canada of Mississauga, Ontario, relishes that it no longer needs to invest in Fibre Channel drives for its Exchange Server environment, said George Hamin, the company’s director of e-business and information systems.

“Originally, we had only a small portion of our user base email — basically for just the most important people, like directors and vice presidents — on the SAN, backed up and replicated,” Hamin said. “The rest of the user community had local storage. Now we’re in the process of moving everybody over to SAN without having to upgrade the SAN itself. We’ll have to add disk to accommodate the additional mailboxes, but the actual processor itself doesn’t need to change.”

This is the kind of capability that makes Exchange Server 2010 more affordable than Exchange Server 2007, which Subaru Canada migrated from under the terms of its maintenance agreement, Hamin said.

SAN sensibility

With examples like these, Microsoft widely touts storage cost reductions among the chief benefits of migrating to Exchange Server 2010. In a return on investment/cost savings analysis Microsoft completed with 100 early adopters, the company found the average savings was in the 50% to 80% range, said Julia White, director of Exchange product management at Microsoft.

“We saw a ton of savings around the storage side, as well as high availability and the DAG architecture,” said White. “That’s where you see the hard-cost savings as companies look to increase mailbox sizes and use lower-cost storage to make that economically sensible,” she said.

As an example, White points to financial services firm BGC Partners. Once BGC Partners migrated to Exchange Server 2010, she said, the company was able to redeploy a $1 million SAN from email to another project, replacing it with a DAS-based storage solution that cost a couple hundred thousand dollars.

“Across all of the early adopters that deployed on a lower-cost storage model, the numbers are pretty staggering,” White said. “Cost savings is, if not the first, then the second reason in terms of what’s compelling people to migrate to Exchange [Server] 2010.”

Morimoto, however, says he hasn’t seen storage cost savings as a primary migration driver among his clients, although he has no doubt that there are savings to be had. But migrating doesn’t come for free, he added, even if you’ve got a Software Assurance license for free upgrades.

Schultz is a longtime IT writer based in Chicago.


Hyper-V R2 SP1 guide: Dynamic Memory and RemoteFX

April 7, 2011

Microsoft Hyper-V R2 Service Pack 1 (SP1), part of the new Windows Server 2008 R2 service pack, is a significant update. Hyper-V R2 SP1 sports the much-anticipated Dynamic Memory feature and a new virtual desktop protocol, RemoteFX.

Dynamic Memory, a virtual memory management technology, is Microsoft’s answer to VMware’s memory overcommit. Instead of administrators assigning static quantities of memory to virtual machines (VMs), Dynamic Memory pools the host’s memory and sends resources to memory-starved VMs, rebalancing the host’s memory at one-second intervals.

On the desktop virtualization front, RemoteFX is Microsoft’s new streaming protocol, built upon Remote Desktop Protocol (RDP). It can deliver three-dimensional graphics and dense display resolutions, and it provides USB support.

Despite the new additions, Hyper-V has yet to reach parity with vSphere in such areas as virtual networking and independent software vendor (ISV) support. But Microsoft is used to playing catch-up, whether in the server market or the video-game industry. And with each revision of Hyper-V, Microsoft narrows the feature gap with vSphere.

This guide takes a closer look at two features that shrink the disparity between Hyper-V R2 SP1 and vSphere: Dynamic Memory and RemoteFX.

Dynamic Memory in Hyper-V R2 SP1

Administrators must configure Dynamic Memory before Hyper-V can automatically rebalance a host’s RAM. If the parameters are set incorrectly, a host’s memory will be allocated incorrectly, causing performance issues. To ensure that each virtual machine receives enough memory, review the tips below.

How virtual memory allocation works with Hyper-V Dynamic Memory
With Dynamic Memory, the hypervisor is responsible for virtual memory allocation. It pools the host’s memory and distributes it to virtual machines as needed. Users can set the parameters on how much memory a VM can use and let Hyper-V R2 SP1 adjust it on the fly.

Virtual memory settings in Hyper-V Dynamic Memory
Dynamic Memory’s virtual memory settings are adjustable, which offers more flexibility. The Memory Buffer feature, for example, reserves a predetermined amount of RAM for a VM, just in case it requires more memory before the host’s RAM is rebalanced. And the Memory Priority setting designates which VMs receive additional memory first during periods of high RAM utilization.

How to monitor virtual memory with Hyper-V Dynamic Memory
If there isn’t enough RAM to go around, Dynamic Memory will shift it to the high-priority VMs. That can hurt the performance of less-important VMs if proper monitoring isn’t in place. But you don’t have to wait for user complaints to roll in before you take action. The Hyper-V Manager Console can monitor virtual memory settings with two new reports.

Dynamic Memory best practices
Dynamic Memory requires manual configuration of the Memory Buffer and Memory Priority settings. It’s also a Dynamic Memory best practice to provide Startup RAM and Maximum RAM numbers. The Startup RAM refers to the amount of memory a VM uses to boot, and the Maximum RAM is the highest amount of memory that Hyper-V R2 SP1 allocates to a VM.
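In effect, Startup RAM and Maximum RAM put a floor and a ceiling on what a VM can be assigned. A minimal sketch of that bounding logic follows; the function name and values are hypothetical illustrations, not part of Hyper-V’s actual API:

```python
# Illustrative sketch only: Startup RAM acts as a floor and Maximum RAM
# as a ceiling on what Dynamic Memory will assign a VM. The function name
# and the example values are hypothetical, not Hyper-V's real interface.

def clamp_assignment(requested_mb: int, startup_mb: int, maximum_mb: int) -> int:
    """Bound a requested allocation between the Startup and Maximum RAM settings."""
    return max(startup_mb, min(requested_mb, maximum_mb))

print(clamp_assignment(512, 1024, 8192))    # 1024: never below Startup RAM
print(clamp_assignment(16384, 1024, 8192))  # 8192: capped at Maximum RAM
print(clamp_assignment(4096, 1024, 8192))   # 4096: demand within bounds
```

Setting these two numbers deliberately, rather than accepting defaults, is what keeps one runaway VM from starving its neighbors.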

Hyper-V Dynamic Memory vs. VMware memory overcommit
Hyper-V Dynamic Memory and VMware memory overcommit address dynamic memory allocation in different ways. With memory overcommit, users can allocate more memory to virtual machines than a host has available. In Hyper-V R2 SP1, Dynamic Memory continually rebalances the host memory, according to parameters set by the administrator. But it can’t allocate more memory than the host has available.
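The distinction can be shown with a toy allocator: grant memory in priority order, but never hand out more than the host physically has. Everything below (VM names, priorities, sizes) is hypothetical, and the code is a simplification for illustration, not how either hypervisor actually works internally:

```python
# Toy model of Dynamic Memory-style allocation: satisfy demand in priority
# order, capped at the host's physical RAM. All names and numbers are
# hypothetical; neither hypervisor is actually implemented this way.

def rebalance(host_ram_mb: int, vms: list[dict]) -> dict:
    """Grant each VM's demand in descending priority order, never
    allocating more in total than the host physically has."""
    remaining = host_ram_mb
    allocation = {}
    for vm in sorted(vms, key=lambda v: -v["priority"]):
        grant = min(vm["demand_mb"], remaining)
        allocation[vm["name"]] = grant
        remaining -= grant
    return allocation

vms = [
    {"name": "db",  "priority": 2, "demand_mb": 6144},
    {"name": "web", "priority": 1, "demand_mb": 4096},
]

alloc = rebalance(8192, vms)  # 8 GB host, 10 GB of combined demand
print(alloc)                  # {'db': 6144, 'web': 2048}

# Unlike VMware's overcommit model, the total granted never exceeds the host:
assert sum(alloc.values()) <= 8192
```

The high-priority “db” VM gets its full demand while “web” is squeezed, which is exactly the monitoring concern raised earlier: low-priority VMs absorb the shortfall.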

RemoteFX in Hyper-V R2 SP1

The infrastructure requirements for RemoteFX are restrictive, to say the least. RemoteFX, for example, can stream only Windows 7 SP1 virtual desktops, so IT shops with Windows XP virtual desktops are out of luck. Also, you need a Hyper-V R2 SP1 back end, which means other virtualization platforms cannot run RemoteFX.

What you need to know about Microsoft RemoteFX
Microsoft RemoteFX is a powerful protocol, designed to make the virtual desktop experience almost indistinguishable from using a local machine. To use RemoteFX, however, you must meet Microsoft’s strict requirements. So read the fine print.

Comparing Microsoft RemoteFX to VMware PCoIP
Microsoft RemoteFX and VMware PCoIP are similar virtual desktop technologies. Both protocols stream desktops to users, with the hosts handling the processing on the back end. But RemoteFX requires the hosts to have a GPU add-in card. PCoIP, on the other hand, can run on standard hardware, but performance can suffer if users run multimedia-intensive applications.

The differences between Microsoft RemoteFX and Citrix HDX
Comparing Microsoft RemoteFX and Citrix HDX is not apples to apples. For one, HDX works on a wide variety of platforms and hardware, unlike RemoteFX, which has strict software and hardware requirements. Additionally, RemoteFX works only on LANs. To stream desktops across a wide area network, you’ll need to use RDP, which doesn’t perform as well as HDX.

The power and promise of RemoteFX
Microsoft’s streaming technologies have come a long way since the Terminal Services days. RemoteFX supports several advanced codecs (programs that encode and decode a digital data stream) that provide a richer user experience. It also provides USB redirection, which lets users attach USB peripherals to virtual desktops with no client-side drivers to load.

How to tell if you’re actually using RemoteFX
If you have the proper configuration, it’s easy to enable the RemoteFX role under Remote Desktop Services. But how can you tell if you’re actually using RemoteFX? Well, there are certain clues — such as Start menu options and Event Viewer prompts — that will let you know for sure.


Office 365 will change your job, not kill it

April 7, 2011

Tony Redmond, an Exchange MVP, told IT managers that if they saw their future merely as Exchange administrators, they would be “toast,” asserting that many Exchange organizations are ready to consider hosted messaging services. Redmond said those who embrace and educate themselves about the cloud and Office 365 could prolong their careers.

Microsoft Office 365 bundles Microsoft Office, SharePoint Online, Exchange Online and Lync Online into a single package. The company plans to release it sometime later this year.

If willing and able, Exchange administrators can become Office 365 administrators. While Office 365 support takes care of things like backup and recovery, patch management and virus and spam prevention, enterprises will still need in-house administrators to focus on messaging records management, transport rules, email disclaimers, retention policies, role management and more.

So how can admins prepare for Office 365? Mike Crowley, an Exchange MVP and enterprise infrastructure architect with Planet Technologies, Inc., suggested admins regularly visit the Office 365 technical blog.

Other experts suggested IT staff familiarize themselves with remote PowerShell in order to modify user properties in Office 365. PowerShell commands also allow you to perform homegrown application moves. At a separate DevConnections session, Crowley also mentioned that you can use PowerShell to prepare Active Directory for a move to Office 365.

Crowley noted that admins, particularly those managing BlackBerry users, should be energized about Office 365, confirming that the product will offer free BlackBerry licenses to current customers. RIM will also offer a cloud-based BES service.