Archive

Archive for the ‘Hyper-V R2 & virtualization’ Category

Deploy Exchange 2013 using Service Templates!

October 20, 2013

An awesome, must-see post on how to deploy Exchange 2013 using Service Templates!
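
Since the linked post walks through a VMM Service Template deployment, here is a minimal PowerShell sketch of the general pattern, using VMM 2012-era cmdlets (the template name "Exchange 2013" and cloud name "Gold" are hypothetical placeholders):

    # Fetch the service template and the target cloud from the VMM library
    $template = Get-SCServiceTemplate -Name "Exchange 2013"
    $cloud    = Get-SCCloud -Name "Gold"

    # Build a deployment configuration, let VMM resolve placement, then deploy
    $config = New-SCServiceConfiguration -ServiceTemplate $template -Name "Exchange" -Cloud $cloud
    Update-SCServiceConfiguration -ServiceConfiguration $config
    New-SCService -ServiceConfiguration $config

The post itself covers the Exchange-specific settings; the sketch above is just the generic template-to-service flow.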


Microsoft Private Cloud

September 9, 2013

Microsoft Hyper-V | Take Your Virtualization Skills to the Next Level.

http://www.virtualizationsquared.com/

Best of TechEd 2013 – What’s New in Private Cloud with System Center 2012 R2 VMM

July 20, 2013

System Center 2012 R2 VMM – What’s New in Private Cloud

Vijay Tewari, Principal Group Program Manager on the VMM team, delivered an awesome demo-heavy session last week at TechEd 2013 that provided an overview of new and enhanced Private Cloud management capabilities that will be available in the forthcoming release of System Center 2012 R2 VMM.  Below, I’ve provided a clickable index to Vijay’s recorded session, so that you can easily concentrate on learning about the areas that are most important to you.

http://media.ch9.ms/ch9/7f27/fb3f5915-4bfb-491f-8a05-b4a84e437f27/MDC-B357.wmv

Download the deck and video for offline viewing.

Session Index – What’s New in Private Cloud with System Center 2012 R2 VMM

    • Customer Private Clouds
    • Service Provider Clouds
    • Windows Azure Cloud
    • Consistent Building Blocks
    • Consistent Management Experience
  • [ 08:25 ] Cloud – Demystified
    • Pool of compute, storage and networking resources
    • Elastic – Allocable on demand to your “customers”
    • Automate everything – 540+ PowerShell Cmdlets in VMM!
    • Usage-based Metering – Chargeback / Showback
    • Self-Service – Role-based Delegation and Access Control
  • [ 13:28 ] Enabling Private Cloud with System Center 2012 R2 VMM
    • Storage – Use any kind of Storage: DAS, SAN, NAS, Windows Server 2012 File Server, Scale-out File Server Cluster
    • Networking – Management of physical network switches via OMI as well as virtual network infrastructure ( PVLANs, NV-GRE Virtualized Networks, NV-GRE Gateways )
    • Virtualization host agnostic – Intel/AMD/OEM Hardware running Windows Server 2012/R2/2008 R2 Hyper-V, VMware or Citrix XenServer
    • Configure and deploy physical Storage, Network and Virtualization hosts as Private Cloud Fabric
    • Define Pooled Resources as “Clouds” based on SLA
    • Delegate Cloud Capacity to Self-Service Users
    • Model Applications using VMM Service Templates for consistent deployment of application workloads
  • [ 19:47 ] Announcing Cisco Nexus 1000V Switch
    • Advanced NX-OS feature-set
    • Innovative Services architecture ( vPath )
    • Consistent operational model
    • Integration today with Windows Server 2012 and System Center 2012 SP1 VMM
    • Already working together on future release of 1000V for Windows Server 2012 R2 and System Center 2012 R2 VMM
  • [ 20:36 ] 3rd Party Storage Management in VMM
    • Standards-based management using SMI-S
    • Test and confirm interoperability at ongoing storage industry plug-fests
    • Dramatically simplifies provisioning and management of storage
  • [ 22:03 ] Key Investment Areas in System Center 2012 R2 VMM
    • Services
    • VMs
    • Clouds
    • Networking
    • Storage
    • Infrastructure
  • [ 23:36 ] Introduction to Private Cloud Case Study Scenarios
    • Wingtip Toys – Enterprise company
    • Contoso Hosting – Service Provider
  • [ 29:44 ] Think “Stamps” for Private Cloud Consistency
    • Stamps are a unit of compute, storage and networking ( scale unit )
    • Managed by System Center
    • One datacenter could have multiple stamps
    • Disaster recovery across stamps
    • Logical view of a Stamp
    • Physical view of a Stamp
  • [ 33:24 ] Bootstrapping a repeatable architecture
    • Create initial storage and management infrastructure first
    • Use VMM and service templates to deploy other management scale units
      • System Center components planned to be made available as VMM Service Templates in the future.
    • Configure storage, networking and edge scale units
    • Deploy more templates for more stamps
  • [ 34:24 ] Network Overview
    • Public Internet
    • Corporate
    • Management
    • Internal – Live Migration, Cluster, Storage
    • Tenant Networks
    • Isolation – Internet vs Datacenter vs Tenants
  • [ 35:54 ] Storage Overview
    • 3rd Party SAN – iSCSI or Fibre Channel
    • 3rd Party NAS
    • Windows Server 2012 R2 Scale-Out File Server Cluster
  • [ 37:18 ] DEMO: Bare-Metal Provisioning Scale-Out File Server Cluster and Storage Spaces
    • SAN-like features at a commodity cost point
    • Just a Bunch Of Disks ( JBOD ) with Enterprise SSD / SAS disks for inexpensive shared storage
    • Storage Spaces for resilient tiering
    • VMM used for bare-metal provisioning of Scale-Out File Server Cluster and Storage Spaces
    • Library > Physical Computer Profile
    • Fabric > Create File Server Cluster
    • Fabric > Manage Pools > New Storage Pool Wizard
    • Fabric > Create File Share
    • Fabric > Cluster > Properties > Add File Share Path for VM Deployments
  • [ 45:55 ] DEMO: Provisioning Synthetic Fibre Channel in Guest VMs using VMM
    • Provides VMs with access to shared Fibre Channel Storage
    • Preserve investment in existing Fibre Channel SANs
    • Simplifies Fibre Channel Zone Management
    • VMs and Services > Guest VM > Properties > Storage > Add Fibre Channel Array > Create New Zone
    • VMs and Services > Guest VM > Properties > Storage > Add Disk > Create LUN
  • [ 50:03 ] Reduce VM Provisioning Time with ODX in VMM
    • Use ODX from VMM Library to Hyper-V virtualization hosts
  • [ 51:17 ] DEMO: Guest Clustering with Shared VHDXs
    • Automate Creation of Guest Clusters using new Script Options within VMM Service Templates ( see the PowerShell sketch after this index )
    • First VM can have its own script to provision new cluster
    • Second and Subsequent VMs can run separate script to join cluster
    • Shared VHDX stored on Scale-Out File Server Cluster
    • Library > Service Template Designer > Tier Properties > Hardware Configuration > Bus Configuration > SCSI Disk > Share this disk across the service tier
    • Library > Service Template Designer > Tier Properties > Application Configuration > Scripts > Creation: First VM
    • Library > Service Template Designer > Tier Properties > Application Configuration > Scripts > Creation: Other VMs
  • [ 58:56 ] VMM Integration with IP Address Management ( IPAM )
    • Centralized Management of Logical and Physical Networks
    • VMM Logical Networks appear in Windows Server 2012 IPAM Tools
    • Windows Server 2012 IPAM can be used to provision new VMM Logical Networks side-by-side with Physical Networks
  • [ 59:46 ] DEMO: Managing Top of Rack ( ToR ) Network Switch Compliance
    • Uses OMI to communicate with physical network switches for configuration validation and remediation
    • Fabric > Logical Networks > Host > NIC > Compliance errors
    • Fabric > Logical Networks > Host > NIC > Remediate
  • [ 1:03:24 ] DEMO: Hybrid Networking with Windows Azure Pack and System Center 2012 R2 VMM
    • Self-service virtual network provisioning and management via Windows Azure Pack
    • Builds on foundation in Windows Server 2012 R2 and System Center 2012 R2
      • NVGRE, IPA, Switch Extensions
      • Uses VMM as control plane
    • Isolated virtual networks running on shared network infrastructure
    • In-box multitenant edge gateway in Windows Server 2012 R2
      • Provides connectivity between physical and virtualized networks
      • VMM Service Template planned for automating the provisioning of edge gateways
    • VMs and Services > VM Networks > Properties
    • Library > Service Templates > Windows Server 2012 R2 Edge Gateway
  • [ 1:11:02 ] DEMO: Windows Azure Hyper-V Recovery Manager
    • Automate and Orchestrate Disaster Recovery between Private Clouds at different datacenters
    • Windows Azure Management Portal > Recovery Services > Recovery Plan
      • Protected Items defined for each Private Cloud
      • Define Recovery Plan for orchestrated DR failover process
    • VMs and Services > VM > Enable Recovery
  • [ 1:15:02 ] Delegating Access Per Private Cloud
    • Assign different permissions for different clouds
    • Restricted permissions to the Gold cloud
    • Full permissions to the Silver cloud
  • [ 1:15:37 ] DEMO: OM Dashboard for VMM Fabric Monitoring
    • Operations Manager > Monitoring > Cloud Health > Fabric Health
  • [ 1:17:28 ] Session Summary and Q&A
    • System Center 2012 R2 VMM builds on investments in Windows Server 2012 and System Center 2012 SP1
    • Enables End-to-End Scenarios for CloudOS
    • Full list of new features in System Center 2012 R2 VMM in slide deck appendix
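
As a companion to the [ 51:17 ] guest-clustering demo, here is a minimal PowerShell sketch of the shared-VHDX plumbing underneath it, using the Hyper-V module that ships with Windows Server 2012 R2 (the VM name and Scale-Out File Server path are hypothetical):

    # Create a data disk on the Scale-Out File Server share
    New-VHD -Path "\\SOFS\VMs\SQLData.vhdx" -SizeBytes 100GB -Dynamic

    # Attach it to a guest cluster node on the SCSI bus, marked as shareable;
    # this is roughly what "Share this disk across the service tier" configures
    Add-VMHardDiskDrive -VMName "SQL-Node1" -ControllerType SCSI `
        -Path "\\SOFS\VMs\SQLData.vhdx" -SupportPersistentReservations

Repeating the Add-VMHardDiskDrive step on the second and subsequent nodes is what the per-tier creation scripts in the service template automate.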

Hyper-V R2 SP1 guide: Dynamic Memory and RemoteFX

April 7, 2011

Microsoft Hyper-V R2 Service Pack 1 (SP1), part of the new Windows Server 2008 R2 service pack, is a significant update. Hyper-V R2 SP1 sports the much-anticipated Dynamic Memory feature and a new virtual desktop protocol, RemoteFX.

Dynamic Memory, a virtual memory management technology, is Microsoft’s answer to VMware’s memory overcommit. Instead of administrators providing static quantities of memory to virtual machines (VMs), Dynamic Memory pools the host’s memory and sends resources to memory-starved VMs. It also rebalances the host’s memory in one-second intervals.

On the desktop virtualization front, RemoteFX is Microsoft’s new streaming protocol, built upon Remote Desktop Protocol (RDP). It can deliver three-dimensional graphics and dense display resolutions, and it provides USB support.

Despite the new additions, Hyper-V has yet to gain parity with vSphere in such areas as virtual networking and support from independent software vendors. But Microsoft is used to playing catch-up, whether in the server market or the video-game industry. And with each revision of Hyper-V, Microsoft narrows the feature gap with vSphere.

This guide takes a closer look at two features that shrink the disparity between Hyper-V R2 SP1 and vSphere: Dynamic Memory and RemoteFX.

                  DYNAMIC MEMORY IN HYPER-V R2 SP1

Administrators must configure Dynamic Memory before Hyper-V can automatically rebalance a host’s RAM. If the parameters are set incorrectly, a host’s memory will be allocated incorrectly, causing performance issues. To ensure that each virtual machine receives enough memory, review the tips below.

How virtual memory allocation works with Hyper-V Dynamic Memory
With Dynamic Memory, the hypervisor is responsible for virtual memory allocation. It pools the host’s memory and distributes it to virtual machines as needed. Users can set the parameters on how much memory a VM can use and let Hyper-V R2 SP1 adjust it on the fly.

Virtual memory settings in Hyper-V Dynamic Memory
Dynamic Memory’s virtual memory settings are adjustable, which offers more flexibility. The Memory Buffer feature, for example, reserves a predetermined amount of RAM for a VM, just in case it requires more memory before the host’s RAM is rebalanced. And the Memory Priority setting designates which VMs receive additional memory first during periods of high RAM utilization.
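
For reference, these settings map onto a handful of per-VM memory parameters. A minimal sketch using the Hyper-V PowerShell module (which arrived later, with Windows Server 2012; on Hyper-V R2 SP1 itself the same settings are exposed through Hyper-V Manager or WMI), with a hypothetical VM name:

    # Enable Dynamic Memory with a 1 GB Startup RAM, an 8 GB Maximum RAM,
    # a 20% Memory Buffer and a relatively high Memory Priority of 80
    Set-VMMemory -VMName "Web01" -DynamicMemoryEnabled $true `
        -StartupBytes 1GB -MaximumBytes 8GB -Buffer 20 -Priority 80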

How to monitor virtual memory with Hyper-V Dynamic Memory
If there isn’t enough RAM to go around, Dynamic Memory will shift it to the high-priority VMs. That can hurt the performance of less-important VMs if proper monitoring isn’t in place. But you don’t have to wait for user complaints to roll in before you take action. The Hyper-V Manager Console can monitor virtual memory settings with two new reports.
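
Beyond the two console reports, the hypervisor also exposes Dynamic Memory performance counters you can sample from PowerShell. A hedged sketch (counter-set names as commonly documented; confirm them on your host with the first command):

    # Discover the Dynamic Memory counter sets registered on this host
    Get-Counter -ListSet "*Dynamic Memory*" | Select-Object -ExpandProperty CounterSetName

    # Continuously sample per-VM memory pressure
    Get-Counter -Counter "\Hyper-V Dynamic Memory VM(*)\Average Pressure" -Continuous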

Dynamic Memory best practices
Dynamic Memory requires manual configuration of the Memory Buffer and Memory Priority settings. It’s also a Dynamic Memory best practice to provide Startup RAM and Maximum RAM numbers. The Startup RAM refers to the amount of memory a VM uses to boot, and the Maximum RAM is the highest amount of memory that Hyper-V R2 SP1 allocates to a VM.

Hyper-V Dynamic Memory vs. VMware memory overcommit
Hyper-V Dynamic Memory and VMware memory overcommit address dynamic memory allocation in different ways. With memory overcommit, users can allocate more memory to virtual machines than a host has available. In Hyper-V R2 SP1, Dynamic Memory continually rebalances the host memory, according to parameters set by the administrator. But it can’t allocate more memory than the host has available.

                                      REMOTEFX IN HYPER-V R2 SP1

The infrastructure requirements for RemoteFX are restrictive, to say the least. RemoteFX, for example, can stream only Windows 7 SP1 virtual desktops, so IT shops with Windows XP virtual desktops are out of luck. Also, you need a Hyper-V R2 SP1 back end, which means other virtualization platforms cannot run RemoteFX.

What you need to know about Microsoft RemoteFX
Microsoft RemoteFX is a powerful protocol, designed to make the virtual desktop experience almost indistinguishable from using a local machine. To use RemoteFX, however, you must meet Microsoft’s strict requirements. So read the fine print.

Comparing Microsoft RemoteFX to VMware PCoIP
Microsoft RemoteFX and VMware PCoIP are similar virtual desktop technologies. Both protocols stream desktops to the users, with the hosts handling the processing on the back end. But RemoteFX requires the hosts to have a GPU add-in card. PCoIP, on the other hand, can run on normal hardware, but performance can suffer if users run multimedia-intensive applications.

The differences between Microsoft RemoteFX and Citrix HDX
Comparing Microsoft RemoteFX and Citrix HDX is not apples to apples. For one, HDX works on a wide variety of platforms and hardware, unlike RemoteFX, which has strict software and hardware requirements. Additionally, RemoteFX works only on LANs. To stream desktops across a wide area network, you’ll need to use RDP, which doesn’t perform as well as HDX.

The power and promise of RemoteFX
Microsoft’s streaming technologies have come a long way since the Terminal Services days. RemoteFX supports several advanced codecs (programs that encode and decode a digital data stream) that provide a richer user experience. It also provides USB redirection, which lets users attach USB peripherals to virtual desktops with no client-side drivers to load.

How to tell if you’re actually using RemoteFX
If you have the proper configuration, it’s easy to enable the RemoteFX role under Remote Desktop Services. But how can you tell if you’re actually using RemoteFX? Well, there are certain clues — such as Start menu options and Event Viewer prompts — that will let you know for sure.
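
One extra command-line clue to go along with the Start menu and Event Viewer hints: RemoteFX components register their own event channels. A hedged sketch (the exact log names vary by role and version, hence the wildcard):

    # Find RemoteFX-related event logs registered on this machine
    Get-WinEvent -ListLog "*RemoteFX*" -ErrorAction SilentlyContinue

    # Peek at the most recent entries in each of them
    Get-WinEvent -ListLog "*RemoteFX*" -ErrorAction SilentlyContinue |
        ForEach-Object { Get-WinEvent -LogName $_.LogName -MaxEvents 5 }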

Windows Server 2008 R2 SP1 & Windows 7 SP1 bring maturity, but not much else

February 16, 2011

The first service packs for Windows Server 2008 R2, Windows 7 and Hyper-V R2 will be released to customers later this month, giving IT pros support for more Windows 7 guests in Hyper-V, the Dynamic Memory feature and the RemoteFX desktop virtualization protocol.

Microsoft said today that both service packs are now available for OEM partners and will be available for customers on February 22. The company also added a Software Assurance benefit, called Windows Thin PC, which is a smaller version of Windows 7 for IT shops that may want to repurpose PCs as thin client devices.

Many IT pros consider the first service pack the point when it is safe to upgrade to a new product, so the official release of these upgrades is significant. Microsoft released the Windows SP1 betas during TechEd in June 2010 and the second betas became available in July.

The SP1 for Windows 7 doesn’t include any new features. It is simply a combination of security updates and hot-fixes for bugs that are already available through Windows Update.

However, SP1 is still an important milestone, because once a service pack is ready, the previous release moves closer to its end of life. In this case it’s the original Windows 7 release, said Michael Silver, Gartner Inc.’s Mobile and Client Computing analyst.

“Once SP1 ships, there are only 24 months to deploy it before security fixes are discontinued for SP0,” Silver said. “Many organizations recently got bitten by the end-of-support for XP SP2 and had to pay Microsoft for Custom Support ($200,000 to $500,000 for one year) because they never moved to SP3. Therefore, all organizations need to plan to deploy Win7 SP1 and have it done by 24 months after it ships.”

Because SP1 doesn’t bring significant improvements, Microsoft had been telling customers not to wait for the first service pack before upgrading to Windows 7.  

A number of customers took that advice: Windows 7 accounted for 20% of global operating system usage share in December 2010, up 1.18% from November. Windows XP still had 56.72% market share in December, and Windows Vista had only 12.11% that month, according to NetMarketShare, an Internet technology statistics website run by Net Applications.

Windows Server 2008 R2 SP1
The first service pack for Windows Server is significant, particularly for desktop virtualization users who will want Windows Server 2008 R2 SP1. This release includes Dynamic Memory, Microsoft’s new virtual machine memory management feature, and the VDI remote protocol RemoteFX.

RemoteFX is essentially a set of Remote Desktop Protocol technologies that deliver videos and graphics to virtual desktops. It’s similar to Citrix Systems’ HDX technology and VMware’s PCoIP.

Michael Cherry, an analyst at Directions on Microsoft, a Kirkland, Wash.-based consulting firm, said both Windows 7 and Windows Server 2008 R2 are good releases, and SP1 appears to have been well tested, so IT pros shouldn’t hesitate to deploy them.

What is Hyper-V Cloud Fast Track ?

November 24, 2010

At TechEd Europe 2010 in Berlin, Microsoft introduced several new initiatives and solutions that enable customers to start using cloud computing.
Hyper-V Cloud Fast Track is a complete turnkey deployment solution delivered by several server vendors. It enables customers to deploy cloud computing quickly, with reduced risk of technical issues, by purchasing a virtualized infrastructure designed around the best practices of Microsoft and the hardware vendor. Customers can build the infrastructure themselves based on the reference architecture or use one of the server vendor’s many partners.

The solution is based on Microsoft best practices and design principles for Hyper-V and SCVMM, and on partner best practices and design principles for the parts of the solution delivered by the partner (storage hardware, blades, enclosures, rack-mounted servers, networking and so on).
Some parts of the architecture are required by Microsoft (redundant NICs, iSCSI for clustering at the virtual machine level) and some are recommended. There is enough room for server vendors to create added value by delivering their own software solutions with the Fast Track.

The solution is targeted at large infrastructures running at least 1,000 virtual machines per cluster. So it is an enterprise solution, not one targeted at small and medium businesses.

This posting is a detailed summary of session VIR201 ‘Hyper-V Cloud Fast Track ‘ given at TechEd Europe 2010. The session can be seen and listened to via this link.

Cloud computing is the next revolution in computing. Once every five to ten years there is a dramatic change in the IT landscape. It all started with mainframes and dumb terminals; then came stand-alone PCs, desktops connected to servers, Server Based Computing, Software as a Service, virtualization and now (private) cloud computing.

Cloud computing delivers exciting new features to the business consuming IT services, making it possible to respond quickly to new business needs. Self-service portals enable business units to submit change requests (for new virtual machines, additional storage and computing resources) using web-based portals. After a request has been approved by the IT department, resources like virtual machines, CPU, memory or storage are provisioned automatically.

On the producing side (the IT department), cloud computing delivers the functionality to keep control over the life cycle of virtual machines, forecast the need for additional resources, monitor and respond to alarms, report, and charge back the costs of computing to the consumer.

If an organization decides to build a private cloud, three options are possible.
The first is to build the cloud computing infrastructure yourself on purchased hardware and software located on-premises.
Another option is to use the services of a Hyper-V Cloud Service Provider. Servers are located in an off-premises datacenter, and the service provider makes sure networking, storage and computing power are provided. The provider also makes sure the software delivers cloud computing functions like chargeback and a self-service portal, ready to use. Doing it yourself takes the longest to implement; using a service provider is the quickest.

There is a third option, between doing it yourself and outsourcing: Hyper-V Cloud Fast Track. This is a set of Microsoft-validated blueprints and best practices developed by Microsoft Consulting Services and six server vendors. Those six represent over 80% of the server market. Instead of re-inventing the wheel, an organization wanting to jump on cloud computing can obtain proven technology from six hardware vendors (Dell, HP, IBM, Fujitsu, NEC and Hitachi). See the Microsoft site for more information.
The technology is a set of hardware (servers and storage), software (Hyper-V/SCVMM and Self Service Portal 2.0) and services (experience and knowledge delivered by the hardware vendor).

Choosing the Hyper-V Cloud Fast Track solution has a couple of advantages:
- Reduced time to deploy. The hardware vendor has a select number of configurations and best practices built on proven technology. The solution is ready to be installed without having to spend much time on inventory and design.
- Reduced risk. The configurations are validated by the vendor to work. There is no risk of components not working together, and performance is as designed and expected.
- Flexibility and choice. Several configurations can be chosen. Dell, for example, offers iSCSI storage, Fibre Channel storage, blade and rack server configurations.

See a video of the Dell Hyper-V Cloud Fast Track solution.

To me, at the moment, Hyper-V Fast Track seems to be mostly marketing, meant to impress the world with the solutions Microsoft can deliver for cloud computing. Microsoft is far behind VMware in its feature offering for Infrastructure as a Service (IaaS). ESX is superior to Hyper-V as a hypervisor. The same goes for vCenter Server versus System Center Virtual Machine Manager for management, and Self Service Portal 2.0 is far behind in functionality compared to VMware vCloud Director and additional software like vShield App.
While VMware has always been good at delivering superior technology in its features (vMotion, Storage vMotion), which appeals to IT technicians, Microsoft has always been very good at luring IT decision makers and higher management with polished marketing material that distracts from the functional shortcomings.

The websites of Fujitsu, IBM, Hitachi and NEC only mention Hyper-V Fast Track; there is no reference architecture or detailed information to be found on them.
Dell has four reference architectures available for download on its website, but none of them even mentions the VMM Self Service Portal 2.0 software! Delivering a self-service portal to business units is what distinguishes cloud computing from server virtualization. It is even a requirement for Hyper-V Cloud Fast Track!
I guess it is only a matter of time before most of the six server vendors offer a true private cloud reference architecture.

The Hyper-V Cloud Fast Track solution consists of Hyper-V, System Center and partner software technology. It is an open solution; the partner is free to add software solutions of its own (like management software).

One of the design principles for hardware used in the Hyper-V Cloud Fast Track is that components and access to network and storage must be redundant. Each server needs to have multiple NICs in a team. For iSCSI connections, at least two 10 GbE NICs or HBAs are recommended. For the storage path, MPIO must be used. VLAN trunks need to be used to separate the different types of networks and keep control over the bandwidth usage of each network type by capping bandwidth based on priorities. iSCSI traffic will most likely be given more bandwidth than Live Migration traffic: on a 10 Gbps pipe, iSCSI will typically get 5 Gbps, while Live Migration runs perfectly well on 1 Gbps.

Although both iSCSI and Fibre Channel storage can be used, iSCSI storage is always required as part of the Fast Track solution. That is because clustering needs to be provided at the individual virtual machine level. Clustering at the host level (which ensures a VM is restarted on a remaining host if a host fails) is not enough to provide redundancy for cloud computing. Disk volumes inside a virtual machine can only be made available to multiple virtual machines using iSCSI; there is no such thing as a virtual Fibre Channel HBA in Hyper-V virtual machines.
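
To make that concrete, each guest cluster node logs in to the shared LUN with the iSCSI initiator inside the VM. A minimal sketch using the iSCSI cmdlets that arrived with Windows Server 2012 (2008 R2-era guests would use iscsicli.exe or the iSCSI Initiator control panel instead; the portal address is a hypothetical placeholder):

    # Point the in-guest initiator at the array's portal, then log in persistently
    New-IscsiTargetPortal -TargetPortalAddress "10.10.10.50"
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true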

If a converged network is used, Quality of Service needs to be applied so that certain types of network traffic can be prioritized and the virtual machines get their guaranteed performance.
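
As an illustration of what such a policy can look like, here is a hedged sketch using the weight-based software QoS that later arrived in Windows Server 2012 Hyper-V (all names are hypothetical; 2008 R2-era Fast Track builds would typically enforce this in the switch hardware or with DCB instead):

    # Converged virtual switch with weight-based minimum-bandwidth QoS
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NICTeam" -MinimumBandwidthMode Weight

    # Host vNICs per traffic class, each with a guaranteed share of the 10 Gbps pipe
    Add-VMNetworkAdapter -ManagementOS -Name "iSCSI" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "iSCSI" -MinimumBandwidthWeight 50
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 10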

Management is an important part of the Hyper-V Cloud Fast Track. Continuous availability is a very important aspect of cloud computing. To deliver that, the infrastructure needs to be monitored. If a failure is about to happen, action needs to be taken automatically to prevent downtime. For example, if the temperature in a Hyper-V server gets too high, System Center Operations Manager will notice it and initiate a Live Migration of all virtual machines running on that host.

For file systems, Cluster Shared Volumes can be used, but so can Melio FS, for example. The server vendor delivering the Hyper-V Cloud Fast Track is free to select the cluster-aware file system.

A lot more information can be found at the Microsoft.com Private Cloud website.

New features in System Center Virtual Machine Manager 2012

November 24, 2010

At TechEd 2010, held in November 2010 in Berlin, Microsoft announced lots of new solutions, all targeted at cloud computing. We saw news on the Software as a Service (SaaS) side (Office 365) and the Platform as a Service (PaaS) side (Azure). Microsoft is putting a lot of effort into private cloud computing, of which Hyper-V and SCVMM are the foundation.

For more information on Microsoft Hyper-V Cloud Fast Track, see this post. Fast Track is a program similar to Vblock from VMware, Cisco and EMC. It is a turnkey solution with storage, server hardware, networking, hypervisor and management, based on proven technologies and best practices, enabling customers to deploy private cloud computing quickly and with little risk.

In Berlin, new features of System Center Virtual Machine Manager 2012 (SCVMM) were presented. It will have a lot more features than the current version, SCVMM 2008 R2.
SCVMM 2012 will be used to manage private clouds. It will be able to do delegation of administration, quotas, self-service, capacity and capability management, deployment, network and storage management, and a lot more.

SCVMM 2012 will add support for management of Citrix XenServer and ESX 4.1. This adds to the current hypervisor support for Hyper-V, ESX 3.x, ESX 4.0 and Virtual Server.

Bare-metal deployment of Hyper-V hosts will also be added. A new server will boot from PXE, download a WinPE image, download a VHD, join a domain and install the Hyper-V role, all automated and orchestrated from SCVMM 2012.
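
For a rough idea of how that flow is scripted, here is a hedged sketch based on the VMM 2012 cmdlet surface as announced (the host profile name, BMC address and Run As account are hypothetical, and the exact parameter names should be verified against the shipping cmdlet help):

    # Pick a host profile from the VMM library (OS VHD, domain join, Hyper-V role)
    $hostProfile = Get-SCVMHostProfile -Name "HyperV-Gold"

    # Provision the bare-metal server: VMM contacts the BMC, PXE-boots WinPE
    # and lays down the VHD before enabling the Hyper-V role
    New-SCVMHost -ComputerName "HV-07" -VMHostProfile $hostProfile `
        -BMCAddress "10.0.0.57" -BMCProtocol "IPMI" `
        -BMCRunAsAccount (Get-SCRunAsAccount -Name "BMC-Admin") -RunAsynchronously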

In this article I will focus on resource optimization. At the moment, SCVMM 2008 needs to be integrated with SCOM 2007 to be able to dynamically load-balance virtual machines over Hyper-V hosts. Integration between SCOM and SCVMM is quite difficult to establish, especially compared to VMware vCenter Server, which offers DRS and is easy to configure.

Dynamic Optimization (DO) is the Microsoft feature equivalent to VMware’s Distributed Resource Scheduler (DRS). What DO does is balance workloads over the Hyper-V nodes in a cluster based on resource needs and available resources (computing, storage and network). It does this automatically, based on user-defined policies controlling frequency and aggressiveness, and it can also be run manually.
There will be no dependency on Operations Manager (SCOM). That is good news, as SCVMM currently depends on SCOM, which is difficult to configure.
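
For the manual case, the cmdlet surface looks roughly like this (a hedged sketch; the host group name is hypothetical and the cmdlet names should be checked against the final VMM 2012 release):

    # Trigger an on-demand Dynamic Optimization pass for a host group
    $hostGroup = Get-SCVMHostGroup -Name "Production"
    Start-SCDynamicOptimization -VMHostGroup $hostGroup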

Enhanced Placement is a feature that determines on which host a workload will run. It has new placement algorithms, with over 100 placement checks and validations. Administrators can also define custom rules for placement. It supports multi-VM placement, so that VMs with dependencies are placed in a way that optimizes the communication between them. This is the same function as VMware’s DRS affinity rules.
SCVMM 2012 will also have a Power Management feature, similar to the Distributed Power Management (DPM) feature offered by VMware. SCVMM will power down Hyper-V hosts during times of low utilization. It will use Dynamic Optimization and Live Migration to balance workloads without interruption. Administrators can create policies to control placement and are able to schedule consolidations.