Archive for November, 2010

Exchange 2010 Hosting

November 29, 2010 2 comments

This topic is intended to address a specific issue called out by the Exchange Server Analyzer Tool. You should apply it only to systems that have had the Exchange Server Analyzer Tool run against them and are experiencing that specific issue. The Exchange Server Analyzer Tool, available as a free download, remotely collects configuration data from each server in the topology and automatically analyzes the data. The resulting report details important configuration issues, potential problems, and non-default product settings. By following these recommendations, you can achieve better performance, scalability, reliability, and uptime.

When Microsoft® Exchange 2010 Setup is started by using the /Hosting option, Microsoft Exchange Best Practices Analyzer examines the Active Directory directory service to determine whether Active Directory has been prepared for a hosting Exchange environment.

Specifically, the Analyzer tool performs the following examinations:

  • It determines whether the Microsoft Exchange container is present in the Configuration container in Active Directory. If the Microsoft Exchange container exists, Active Directory is prepared for Exchange.
  • It determines whether a ConfigurationUnits container is present in the Microsoft Exchange container. The ConfigurationUnits container appears when Active Directory is prepared for hosting.

If the ConfigurationUnits container is not present in the Microsoft Exchange container, the hosting installation is unsuccessful.

To prepare Active Directory for Exchange 2010 hosting, you must run the following command: /PrepareAD /Hosting
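A minimal sketch of the full command, run from the root of the Exchange 2010 installation media in an elevated prompt. "ContosoHosting" is a placeholder organization name, not a value from this article:

```powershell
# "ContosoHosting" is a placeholder - substitute your own organization name.
# /OrganizationName is only needed the first time Active Directory is prepared;
# if an Exchange organization already exists, /PrepareAD /Hosting alone suffices.
Setup.com /PrepareAD /OrganizationName:ContosoHosting /Hosting
```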

If Active Directory has been prepared for Exchange but not for hosting, you must perform the following actions:

  1. Remove any objects in the following containers:
    • CN=Microsoft Exchange/CN=Services
    • CN=Microsoft Exchange/CN=ConfigurationUnits
  2. Run the /PrepareAD /Hosting command.
  3. Restart Setup by using the /Hosting option.

View the Microsoft Exchange container in Active Directory

  1. On a domain controller, click Start, click Run, type adsiedit.msc, and then click OK.
  2. Expand the Configuration container.
  3. Expand CN=Configuration,DC=Contoso,DC=com.
  4. Expand CN=Services.
  5. Expand CN=Microsoft Exchange.
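If you prefer to script the check instead of browsing with ADSI Edit, a quick PowerShell sketch run on a domain-joined machine can test for the ConfigurationUnits container directly (a hedged example, assuming default LDAP connectivity to your forest):

```powershell
# Bind to RootDSE to discover this forest's configuration naming context.
$rootDse  = [ADSI]"LDAP://RootDSE"
$configNC = $rootDse.configurationNamingContext

# The ConfigurationUnits container sits directly under CN=Microsoft Exchange.
$path = "LDAP://CN=ConfigurationUnits,CN=Microsoft Exchange,CN=Services,$configNC"

# Returns True when Active Directory has been prepared for hosting.
[ADSI]::Exists($path)
```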

For more information about how to prepare Active Directory, see Prepare Active Directory and Domains.


Migrating from Exchange 2003 to Exchange 2010

November 25, 2010 1 comment

I mentioned I would provide more details on the steps needed to migrate from Exchange 2003 to Exchange 2010.  In this post, I’m going to outline the sequence and provide tips, tricks, and best practices for the migration process.
The migration from Exchange 2003 to Exchange 2010 is similar, if not almost identical, to migrating from Exchange 2003 to Exchange 2007, so if you have already done your design and planning for a migration to Exchange 2007, you’ll find the process for getting to Exchange 2010 much the same.
The sequence for a migration from Exchange 2003 to Exchange 2010 is as follows:
1. Bring the Exchange organization to Exchange Native Mode.
2. Upgrade all Exchange 2003 servers to Exchange Server 2003 Service Pack 2.
3. Bring the AD forest and domains to Windows Server 2003 functional (or higher) levels.
4. Upgrade at least one Global Catalog domain controller in each AD Site that will house Exchange Server to Windows Server 2003 SP1 or greater.
5. Prepare a Windows Server 2008 (RTM or R2) x64 edition server for the first Exchange 2010 server.
6. Install the AD LDIFDE tools on the new Exchange 2010 server (to upgrade the schema).
7. Install any necessary prerequisites (WWW for the CAS server role).
8. Run Setup on the Exchange 2010 server, upgrade the schema, and prepare the forest and domains. (Setup runs all in one step or separately at the command line.)
9. Install CAS server role servers and configure per your 2010 design. Validate functionality.
10. Transfer OWA, ActiveSync, and Outlook Anywhere traffic to the new CAS servers.
11. Install the Hub Transport role and configure per your 2010 design.
12. Transfer inbound and outbound mail traffic to the HT servers.
13. Install Mailbox servers and configure databases (DAG if needed).
14. Create public folder replicas on Exchange 2010 servers using the pfmigrate.wsf script, AddReplicaToPFRecursive.ps1, or the Exchange 2010 Public Folder tool.
15. Move mailboxes to Exchange Server 2010 using the Move Mailbox wizard or PowerShell.
16. Rehome the Offline Address Book (OAB) generation server to Exchange Server 2010.
17. Rehome the Public Folder Hierarchy on the new Exchange Server 2010 Admin Group.
18. Transfer all Public Folder replicas to Exchange Server 2010 Public Folder store(s).
19. Delete the Public and Private Information Stores from the Exchange 2003 server(s).
20. Delete the Routing Group Connectors to Exchange Server 2003.
21. Delete the Recipient Update Service agreements using ADSIEdit.
22. Uninstall all Exchange 2003 servers.
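Steps 14 through 16 can be sketched in the Exchange Management Shell. This is a hedged example rather than the only way to do it; "EX2010-MBX" and "MBX-DB01" are placeholder server and database names, not values from any real design:

```powershell
# Step 14: add an Exchange 2010 replica to the whole public folder tree.
# AddReplicaToPFRecursive.ps1 ships in the Exchange 2010 Scripts folder.
.\AddReplicaToPFRecursive.ps1 -TopPublicFolder "\" -ServerToAdd "EX2010-MBX"

# Step 15: move a mailbox from Exchange 2003 to an Exchange 2010 database.
New-MoveRequest -Identity "jsmith@contoso.com" -TargetDatabase "MBX-DB01"

# Step 16: rehome OAB generation to the Exchange 2010 server.
Move-OfflineAddressBook -Identity "Default Offline Address Book" -Server "EX2010-MBX"
```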
Key to note in the migration from Exchange 2003 to Exchange 2010 is that many concepts go away, such as routing groups and administrative groups.  Routing groups and administrative groups were legacy from the days before Active Directory, when Exchange 5.5 and earlier needed their own configuration settings for creating administrators and routing mail.  These concepts were carried forward in Exchange 2000 and continued with Exchange 2003; with Exchange 2007, however, Microsoft did away with them and began leveraging the administrative roles and the Sites and Services routing built into Active Directory.  With the elimination of administrative groups and routing groups, a major tip is to make sure your Active Directory is set up and working properly.  One reason for the Active Directory 2003 native mode requirement is that instead of Exchange distribution lists, Exchange 2010 uses universal groups in Active Directory.  Where in the past we used to create global security groups in Active Directory, you now want any group that’ll be mail-enabled for Exchange to be a universal group.  And for the proper routing of mail, make sure that Active Directory Sites and Services is set up properly with regard to subnets and site links between subnets, so that mail between different Exchange servers follows the shortest or fastest path designated in AD Sites and Services.
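Converting an existing global group to universal scope is a one-liner in the Exchange Management Shell; for example (the group name here is a placeholder, not one from this article):

```powershell
# "Sales Team" is a placeholder group name - substitute your own.
# Converts the group's scope to Universal so it can be mail-enabled.
Set-Group -Identity "Sales Team" -Universal
```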
Also important to note is that all roles (bridgehead, front-end, and back-end) in Exchange 2003 need to remain until all users are migrated to Exchange 2010.  Exchange 2010 CAS, Hub Transport, and Mailbox servers are not backwards compatible with Exchange 2003, so for a user to access Outlook Web Access on Exchange 2003, they still need to hit the Exchange 2003 front-end and access their mailbox on the Exchange 2003 back-end server.  After their mailbox is migrated to Exchange 2010, the user will hit the Exchange 2010 CAS server and access their mailbox on the Exchange 2010 Mailbox server.  Because Exchange 2010 has a proxy service on the CAS server, your external URL for OWA can point to the Exchange 2010 CAS server, and if the user’s mailbox is still on Exchange 2003, the CAS/2010 server will automatically redirect the client connection to the FE/2003 server for OWA.

Lastly, after moving mailboxes off of Exchange 2003 to Exchange 2010, leave the Exchange 2003 infrastructure in place for a couple of weeks.  By leaving the old Exchange 2003 server(s) in place, when an Outlook client tries to connect to the old Exchange 2003 server for its mail, the old server will notify the Outlook client software that the user’s mail has moved to the Exchange 2010 server and will automatically update the user’s Outlook profile with the new destination server information.  Thereafter, when the Outlook client is launched, Outlook will access the user’s mailbox on the new Exchange 2010 server.  By leaving the old Exchange 2003 infrastructure in place for a couple of weeks, nearly all of your users will launch Outlook and have their profiles updated automatically, requiring no client-system intervention during the migration.  The only users whose Outlook profiles you will likely need to reset manually are those on extended leave who did not access their mail during the two weeks the Exchange 2003 environment was still in place.

Hopefully these steps provide guidance for your migration from Exchange 2003 to Exchange 2010.

What is Hyper-V Cloud Fast Track?

November 24, 2010 1 comment

At TechEd Europe 2010 in Berlin, Microsoft introduced several new initiatives and solutions that enable customers to start using cloud computing.
Hyper-V Cloud Fast Track is a complete turnkey deployment solution delivered by several server vendors that enables customers to quickly deploy cloud computing, with reduced risk of technical issues, by purchasing a virtualized infrastructure designed around the best practices of Microsoft and the hardware vendor. Customers can build the infrastructure themselves based on the reference architecture or use one of the server vendor’s many partners.

The solution is based on Microsoft best practices and design principles for Hyper-V and SCVMM, and on partner best practices and design principles for the parts of the solution delivered by the partner (storage hardware, blades, enclosures, rack-mounted servers, networking, and so on).
Some parts of the architecture are required by Microsoft (redundant NICs, iSCSI for clustering at the virtual machine level) and some are recommended. There is enough room for server vendors to create added value by delivering their own software solutions with the Fast Track.

The solution is targeted at large infrastructures running at least 1,000 virtual machines per cluster. It is an enterprise solution, not targeted at small and medium businesses.

This posting is a detailed summary of session VIR201 ‘Hyper-V Cloud Fast Track’ given at TechEd Europe 2010. The session can be watched via this link.

Cloud computing is the next revolution in computing. Roughly every 5 to 10 years there is a dramatic change in the IT landscape. It started with mainframes and dumb terminals; then came stand-alone PCs, desktops connected to servers, server-based computing, Software as a Service, virtualization, and now (private) cloud computing.

Cloud computing delivers exciting new features to the business consuming IT services, making it possible to respond quickly to new business needs. Self-service portals enable business units to submit change requests (for new virtual machines, additional storage, and computing resources) using web-based portals. After a request has been approved by the IT department, resources like virtual machines, CPU, memory, or storage are provisioned automatically.

On the producing side (the IT department), cloud computing delivers functionality to keep control over the life cycle of virtual machines, forecast the need for additional resources, monitor and respond to alarms, report, and charge back the costs of computing to the consumer.

If an organization decides to build a private cloud, three options are possible.
One is to build the cloud computing infrastructure yourself on purchased hardware and software located on-premises.
Another option is to use the services of a Hyper-V Cloud Service Provider. Servers are located in an off-premises datacenter, and the service provider makes sure networking, storage, and computing power are provided. The provider also makes sure the software is ready to use and able to deliver cloud computing functions like chargeback and a self-service portal. Doing it yourself takes the longest to implement; using a service provider is the quickest.

There is a third option, between doing it yourself and outsourcing: Hyper-V Cloud Fast Track. This is a set of Microsoft-validated blueprints and best practices developed by Microsoft Consulting Services and six server vendors. Those six represent over 80% of the server market. Instead of re-inventing the wheel, an organization wanting to jump on cloud computing can obtain proven technology from six hardware vendors (Dell, HP, IBM, Fujitsu, NEC, and Hitachi). See the Microsoft site for more information.
The offering is a combination of hardware (servers and storage), software (Hyper-V/SCVMM and Self Service Portal 2.0), and services (experience and knowledge delivered by the hardware vendor).

Choosing the Hyper-V Cloud Fast Track solution has a couple of advantages:
-reduced time to deploy. The hardware vendor has a select number of configurations and best practices built on proven technology. It is ready to be installed without having to spend much time on inventory and design.
-reduced risk. The configurations are validated by the vendor to work, with no risk of components not working together. Performance is as designed and as expected.
-flexibility and choice. Several configurations can be chosen. Dell, for example, offers iSCSI storage, Fibre Channel storage, blade, and rack server configurations.

See a video of the Dell Hyper-V Cloud Fast Track solution.

To me, at the moment, Hyper-V Cloud Fast Track seems to be more about marketing, to impress the world with the solutions Microsoft can deliver for cloud computing. Microsoft is far behind VMware in its feature offering for Infrastructure as a Service (IaaS). ESX is superior to Hyper-V as a hypervisor. The same goes for vCenter Server versus System Center Virtual Machine Manager for management. Self Service Portal 2.0 lags far behind in functionality compared to VMware vCloud Director and additional software like vShield App.
While VMware has always been good at delivering superior technology in its features (vMotion, Storage vMotion), which appeals to IT technicians, Microsoft has always been very good at luring IT decision makers and higher management with polished marketing material that distracts from the functional shortcomings.

The websites of Fujitsu, IBM, Hitachi, and NEC only mention Hyper-V Fast Track; no reference architecture or detailed information is to be found there.
Dell has four reference architectures available for download on its website, but none of them even mentions the VMM Self Service Portal 2.0 software! Delivering a self-service portal to business units is what distinguishes cloud computing from server virtualization.  It is even a requirement for Hyper-V Cloud Fast Track!
I guess it is only a matter of time before most of the six server vendors offer a real private cloud computing reference architecture.

The Hyper-V Cloud Fast Track solution consists of Hyper-V, System Center, and partner software technology. It is an open solution; the partner is free to add software solutions of its own (like management software).

One of the design principles for hardware used in the Hyper-V Cloud Fast Track is that components and access to network and storage must be redundant. Each server needs to have multiple NICs in a team. For iSCSI connections, at least two 10 GbE NICs or HBAs are recommended. For the storage path, MPIO must be used. VLAN trunks need to be used to separate the different types of networks and to control the bandwidth usage of each network type by capping bandwidth based on priorities. iSCSI traffic will most likely be given more bandwidth than live migration traffic; on a 10 Gb pipe, iSCSI will typically get 5 Gb, while live migration runs perfectly well on 1 Gb.

Although both iSCSI and Fibre Channel storage can be used, iSCSI storage is always required as part of the Fast Track solution. That is because clustering needs to be provided at the individual virtual machine level. Clustering at the host level (which ensures a VM is restarted on a remaining host if a host fails) is not enough to provide redundancy for cloud computing. Disk volumes inside a virtual machine can only be made available to multiple virtual machines using iSCSI; there is no such thing as a virtual Fibre Channel HBA in Hyper-V virtual machines.

If using a converged network, Quality of Service needs to be applied so that certain types of network traffic can be prioritized and the virtual machines get their guaranteed performance.

Management is an important part of the Hyper-V Cloud Fast Track. Continuous availability is a very important aspect of cloud computing. To deliver it, the infrastructure needs to be monitored. If a failure is about to happen, action needs to be taken automatically to prevent downtime. For example, if the temperature in a Hyper-V server gets too high, System Center Operations Manager will notice and initiate a live migration of all virtual machines running on that host.

For file systems, Cluster Shared Volumes can be used, but also Melio FS, for example. The server vendor delivering the Hyper-V Cloud Fast Track is free to select the cluster-aware file system.

A lot more information can be found at the Private Cloud website.

New features in System Center Virtual Machine Manager 2012

November 24, 2010 1 comment

At TechEd 2010, held in November 2010 in Berlin, Microsoft announced lots of new solutions, all targeted at cloud computing. We saw news on the Software as a Service (SaaS) side (Office 365) and the Platform as a Service (PaaS) side (Azure). Microsoft is putting A LOT of effort into private cloud computing, of which Hyper-V and SCVMM are the foundation.

For more information on Microsoft Hyper-V Cloud Fast Track, see this post. Fast Track is a program similar to the Vblock offering from VMware, Cisco, and EMC. It is a turnkey solution with storage, server hardware, networking, hypervisor, and management, based on proven technologies and best practices, enabling customers a quick and low-risk deployment of private cloud computing.

In Berlin, new features of System Center Virtual Machine Manager 2012 (SCVMM) were presented. It will have a lot more features than the current version, SCVMM 2008 R2.
SCVMM 2012 will be used to manage private clouds. It will be able to do delegation of administration, quotas, self-service, capacity and capability management, deployment, network and storage management, and a lot more.

SCVMM 2012 will add support for management of Citrix XenServer and ESX 4.1. This adds to the current hypervisor support for Hyper-V, ESX 3.x, ESX 4.0, and Virtual Server.

Also, bare-metal deployment of Hyper-V hosts will be added. A new server will boot from PXE, download a WinPE image, download a VHD, join a domain, and install the Hyper-V role, all automated and orchestrated from SCVMM 2012.

In this article I will focus on resource optimization. At the moment, SCVMM 2008 needs to be integrated with SCOM 2007 to be able to dynamically load-balance virtual machines over Hyper-V hosts. Integration between SCOM and SCVMM is quite difficult to establish, especially compared to VMware vCenter Server, which offers DRS and is easy to configure.

Dynamic Optimization (DO) is the Microsoft feature equivalent to VMware’s Distributed Resource Scheduler (DRS). What DO does is balance workloads over the Hyper-V nodes in a cluster based on resource needs and available resources (computing, storage, and network). It does this automatically, based on user-defined policies controlling frequency and aggressiveness, and it can also be used manually.
There will be no dependency on Operations Manager (SCOM). That is good news, as currently SCVMM depends on SCOM, which is difficult to configure.

Enhanced Placement is a feature to determine on which host a workload will run. It has new placement algorithms, with over 100 placement checks and validations. Rules to determine placement can also be custom-made by administrators. It allows multi-VM placement, so that VMs with dependencies are placed in a way that optimizes communication between them. This is the same function as VMware’s DRS affinity rules.
SCVMM 2012 will have a Power Management feature, similar to the Distributed Power Management (DPM) feature offered by VMware. SCVMM will power down Hyper-V hosts during times of low utilization. It will use Dynamic Optimization and Live Migration to balance workloads without interruption. Administrators can create policies to control placement and are able to schedule consolidations.

Microsoft challenges VMware vCloud Director with SCVMM 2012

November 22, 2010 4 comments

First came VMware’s vCloud Director. This “manager of all managers” suite aims to enable the move toward private cloud computing with a centralized management layer. Now, Microsoft is poised to compete in the private cloud computing market as well with the latest version of System Center Virtual Machine Manager (SCVMM).

Due out in the second half of 2011, SCVMM 2012 is still in the early stages of beta testing. But its additions may bring greater feature parity between VMware’s vCloud Director and Virtual Machine Manager, as well as support for Citrix Systems’ XenServer. And users see promise in it. New features demonstrated at the company’s recent TechEd conference in Berlin include new administrative roles and workflows for self-service portals, which are meant to support Infrastructure as a Service (IaaS), and new support for automated, wizard-driven provisioning of server, network and storage hardware for virtual machine deployments through Virtual Machine Manager.

In addition, Microsoft is introducing a feature it calls Dynamic Optimization and Power Management, akin to VMware Distributed Resource Scheduler (DRS) and Distributed Power Management (DPM) for load balancing virtual machines across a cluster in response to performance or power requirements.

Beefed-up role-based IaaS
Currently, Microsoft’s SCVMM 2008 is limited to lifecycle management operations — creating, starting, stopping, reallocating, decommissioning and deleting — for Hyper-V and vSphere 4.0 virtual machines. SCVMM 2012 will add support for managing Citrix XenServer and vSphere 4.1 virtual machines. It will also add roles beyond the VMM Administrator to support IaaS.

According to the TechEd Europe session “System Center Virtual Machine Manager vNext: Service Lifecycle Management for the Private Cloud,” the new roles, which will be administered through Microsoft’s Active Directory, include a delegate VMM administrator; a cloud manager, a read-only admin role; and a self-service user, which will be able to see and act on different levels of cloud resources, based on their role.

Another alternative with SCVMM 2012 will be for the end user to create and share service or virtual machine templates that can be configured, limited or revoked by higher-level administrators. Microsoft already has a similar self-service portal tool on the market, SCVMM Self Service Portal (SSP) 2.0, but users say they hope the new self-service features will be easier to work with. “SSP right now is a little clunky,” without finer-grained roles and restrictions for different levels of admins and users, said Robert McShinsky, senior systems engineer at Dartmouth Hitchcock Medical Center in Lebanon, N.H. With SCVMM 2012, “we could use the multi-tiered console and different user roles to offer a full feature set but ratchet down what different people can do.”

Aidan Finn, an infrastructure team lead at System Dynamics, a Microsoft consulting company based in Ireland, said it’s unclear to him whether SSP 2.0 will be folded into SCVMM 2012. “My bet would be SCVMM 2012 will be self-contained,” he said. “If people want to build a private cloud right now, they can use SSP 2.0, but next year there will be a real evolution with VMM.” Microsoft did not respond to requests for clarification by press time.

Physical infrastructure provisioning for virtualization
According to the TechEd SCVMM vNext session “Fabric Management for the Private Cloud,” SCVMM 2012 will add two more roles at the delegate administrator level: network administrator and storage administrator. Thus, the new version of the software will be the first version of VMM to provision bare-metal server hardware, host clusters, load balancers, and network and storage resources for virtual machines.

With SCVMM 2012, according to the session, users will be able to automate OS and application deployment as virtual machines are created. Users can also use the SCVMM graphical user interface to create physical host clusters and connect their shared storage volumes.

“[Automated] clustering is huge,” said Seth Mitchell, infrastructure team manager for Slumberland Inc., which runs about 120 Hyper-V guests and uses SCVMM “heavily.” “It’s one of the areas we spend more time on than we’d like to.”

Storage provisioning will include the automated creation of shared storage volumes and their attachment to host clusters, according to a panel of Microsoft officials who spoke at the TechEd session. The presentation specified that SCVMM 2012 will connect with storage arrays through the SMI-S standard, which is an inconsistently applied standard in the storage world.

“It’s an interesting concept, but it will be interesting to see how [the storage integration] plays out in practice. There are fairly granular things we do with storage on our clusters, and if we can’t do them, that part might not get used,” Mitchell said. Slumberland uses storage arrays from Compellent Technologies Inc., which have automated tiered-storage features Mitchell said he doubts can be integrated using SMI-S alone, but Mitchell hopes Compellent will instead offer integration based on PowerShell scripts.

On the networking front, SCVMM 2012 will be able to attach virtual machines to load balancers from F5 Networks and Citrix Systems Inc. automatically, complete with customizable virtual IP templates that instruct the load balancer which protocols and load-balancing methods should be used, as well as settings for persistence and health monitoring.

IP addresses, MAC addresses and virtual IP (VIP) addresses will be pooled and provisioned by network administrators through a new SCVMM 2012 feature called Logical Networks, which allows network administrators to create and monitor a network connectivity service catalog. With address pooling, IP and VIP addresses are “checked out” and matched with static MAC addresses when virtual machines are created, and checked back in when virtual machines are deleted or decommissioned.

It’s good to see Microsoft beefing up these capabilities, McShinsky said, but it’s unclear yet how SCVMM will fit in with existing provisioning tools his shop has running, such as Symantec’s Altiris. “It’s another thing to coordinate within the data center,” he said.

Upping the ante with VMware
Of all the new enhancements to SCVMM 2012, McShinsky said he’s most interested in Dynamic Optimization and Power Management, which will consist of a series of PowerShell extensions to perform distributed resource scheduling using live migrations or to migrate virtual machines in groups for power management efficiency.

This isn’t a net-new capability for SCVMM, but it’s much improved, McShinsky said. “Previously, Operations Manager had a connector that you could use to move things around in some ways, but it involved a separate infrastructure with Operations Manager, which is a bear in itself,” he said. “Having that [utility] baked into the tool it’s meant for is a real step in the right direction.”

System Dynamics’ Finn notes that TechEd also saw the announcement of elastic migration features to come for Windows Azure, Microsoft’s cloud computing platform, which closely match the model VMware calls a “hybrid cloud.” Finn, who attended VMworld, said he remembered thinking “that vCloud Director looked a lot like VMM 2012.”

Mitchell said the new release could “help people like me try to position Hyper-V as real for business use. I’m happy to see Microsoft get into this and become more competitive.”

For now, however, VMware’s vCloud Director is already shipping, while SCVMM 2012 remains theoretical until next year. And regardless of which IaaS tools they use, users face other challenges in building private clouds.

“The [Microsoft] tools are definitely maturing, and tying in to a lot of what VMware has talked about in terms of managing the whole data center and clouds,” said McShinsky, “But we’ll have to see where it ends up.”

Deploying Forefront Client Security Using SCCM 2007 – Step-By-Step

November 18, 2010 6 comments

This is a Step-By-Step guide for using SCCM 2007 to deploy Forefront Client Security client agents.


Prerequisites:

1. An installed and configured FCS management server.

2. FCS Policy configured and deployed on client machines.

3. Windows Update policy configured and deployed on client machines.

4. Client Installation Files (the Client directory on the installation CD) on a shared directory on the FCS server (only read permissions needed).

Creating the Installation Package

1. Open the SCCM 2007 console, go to Computer Management -> Software Distribution, and right-click Packages -> New -> Package.

2. Configure all package details and click next.

3. On the Data Source tab, configure the data source as the file share you created with the client setup files on the installation server. For scheduling, you can leave the default or configure a schedule for updating the client package.
When finished with all the settings, click Finish.
I’ve chosen 6 hours since I’m downloading the new definitions every day using a script and updating the installation package daily so it installs with the newest definitions.

4. Now go back and expand the newly created package. The first thing we need to do is configure a distribution point for the package. To do that, right-click Distribution Points -> New Distribution Points.

5. On the distribution points wizard, walk through the welcome screen and on to the Copy Package window. Then select the distribution point you wish to distribute your package from (the default choice should be the SCCM server itself). Then click Next and Close.

6. The next phase is creating the program to run clientsetup.exe. In order to do that, go back to the SCCM console and expand the FCS package. Right-click Programs -> New -> Program.

7. On the General page, type a program name and comment, and then configure the command line you need to run clientsetup.exe with. It should be something like:
clientsetup.exe /CG ForefrontClientSecurity /MS
On the Run selection, I recommend using Hidden in order not to disturb your users while deploying FCS.
Then click Next.

8. On the Requirements page, enter a 350 MB disk-space limit (the limit set by the FCS prerequisites). Then limit the platforms this program can run on: since we are currently building a package using the x86 client agent version, we need to select only x86 platforms. In addition, we cannot select all x86 Windows 2000 and XP versions, since the FCS client is limited to 2000 SP4 and XP SP2, so pay attention and check only the proper platforms.
Then click Next.

9. On the Environment page, choose that the program can run whether or not the user is logged on (which automatically checks the “Run with administrative rights” option).
Note: you should already have configured the administrative account used to install programs. If not, you can find more information about configuring SCCM accounts in the SCCM documentation.
Then click Next.

10. Go through the Advanced, Windows Installer, MOM Maintenance, and Summary pages and click Close.
Note: you can configure settings under Advanced or MOM Maintenance if you wish, but this is not necessary.

Note: The package we just created is used for installing the x86 client agent. If you have x64 platforms in your domain, you need to repeat the process and create an x64 package. Just pay attention when choosing the running platforms and select only the x64 systems.

Creating a Task Sequence to Remove the Existing AV Solution and Deploy the FCS Package

1. Open the SCCM 2007 console, go to Computer Management -> Operating System Deployment, and right-click Task Sequences -> New -> Task Sequence.

2. On the create new task sequence page, select “Create a new custom task sequence” and click next.

3. On the Task Sequence Information page, type the task sequence name and choose the x86 boot image (or x64, depending on your client agent deployment). Then click Next and Close.

4. Now go back to the console and on the task sequence window, right click the newly created task sequence and select edit.

5. Now we create the task sequence that will run on the client.
Click Add -> General -> Run Command Line.

6. Fill in the proper details, and on the command line, write the full path to the removal script.
Some AV solutions require a reboot after removal and won’t let anything else be installed on the system until you reboot it.
If yours is one of those, then after adding the remove XXX task, click Add -> General -> Restart Computer.
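For illustration, a minimal removal script for an MSI-based AV product might look like the batch file below. The product code GUID is a hypothetical placeholder; substitute the one for your actual AV solution (you can find it under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall):

```
@echo off
rem Silently uninstall the existing AV product (hypothetical product code GUID)
rem /qn suppresses the UI; /norestart defers the reboot so the task sequence can handle it
msiexec /x {11111111-2222-3333-4444-555555555555} /qn /norestart /l*v %windir%\temp\av-removal.log
```

If the uninstall succeeds, msiexec exits with code 0 (or 3010 when a reboot is still pending), which the task sequence can evaluate before moving on to the restart step.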

7. Now we need to add the FCS deployment package. Click Add -> General -> Install Software.

8. Now fill in the name and description of the installation task, select Install single application, click Browse, and select the FCS package you created earlier.

9. This phase is optional, although I recommend working through it, since it is one of the greatest added values of deploying FCS using SCCM.
After configuring the SCCM WSUS distribution point settings and syncing with Microsoft Update, you should be able to see Forefront updates (hotfixes) in the Software Update Deployment part of the SCCM console.
Go to Computer Management -> Software Updates -> Update Repository -> Updates -> Microsoft -> Forefront Client Security.

10. Select the Updates that relate to FCS and right click -> Deploy Software Updates. Make sure you choose only updates named “Update for Microsoft Forefront Client Security” and not the “Client Update for Microsoft Forefront Client Security”.

11. On the Software updates general page, type a name for the software update deployment and click next.

12. On the deployment template, click create new (unless you already have a deployment template you wish to use – then you can skip this step).

13. On the collection page, choose the collection to which you wish to deploy Forefront and click next.

14. On the Display/Time Settings page, choose to suppress display notifications on clients, use client local time, and set the deadline to 1 hour. Then click Next.

15. On the Restart Settings page, check the option to suppress restarts on servers and workstations, and click Next.

16. Go through the Event Generation and Download Settings pages (leaving them at their default settings), and on the Create Template page, give the template a new name and click Next.

17. On the Deployment Package page, name the newly created package and fill in the package source UNC path (this specifies the location of the software update source files; when the deployment is generated, the source files are compressed and copied to the distribution points associated with the deployment package).
Note: The shared folder for the deployment package source files must be manually created before proceeding to the next page.

18. On the distribution points page, click browse and add your default Distribution point. Then click next.

19. On the download location page, choose from the internet and click next.

20. On the language selection page, select the relevant languages and click next.

21. Move through the Schedule, NAP Evaluation and Summary pages, and click Close.

22. Now we want to add all the updates to the installation package, thereby making sure our clients are installed from the start with the most up-to-date versions of all the client engines.
Go back to the task sequence you created earlier and edit it. Click Add -> General -> Install Software Updates.

23. Type a name for this task, choose Mandatory software updates, and click OK.
Note: another optional way of adding the updates to the package is to download them directly from the Microsoft Update Catalog, package them, and add them as an Install Software task in the task sequence.

Advertising the Task Sequence

1. Go back to the SCCM console and right click the task sequence you created and choose advertise.

2. Fill the name and comment for the advertisement and choose the collection where you wish to distribute FCS. Then click next.

3. On the schedule page, select your preferred schedule for deployment. I usually work with “As soon as possible”. Then click Next.

4. On the distribution point page, select the Access content directly option and click next.

5. Go through the Interaction, Security and Summary pages, leaving everything at the default settings, and click Close.

That’s it! You’ve deployed FCS using SCCM 2007. Congratulations!

Pros & Cons Exchange Server in the Cloud

November 18, 2010 10 comments

The decision to run Exchange Server in the cloud is not an easy one; for every advantage there is a disadvantage. This tip goes beyond the hype to give you a realistic look at what to expect from a cloud-based Exchange Server deployment.

What’s good about Exchange Server in the cloud?

Two main advantages of running Exchange Server in the cloud involve cost and manageability. Cloud services are subscription-based, meaning that there are no upfront costs for server hardware or software licenses. Therefore, it costs much less to implement a cloud-based Exchange Server deployment than an on-premises one.

In the past, the cost to deploy Exchange Server was prohibitive for most SMBs, who had the option to purchase the less-expensive Essential Business Server instead. Microsoft has since discontinued that product, though. Cloud-based Exchange subscriptions are typically priced on a per-mailbox basis; the monthly cost of a hosted mailbox averages around $5, making it feasible for even very small businesses. Hosting companies also provide a level of fault tolerance that was previously cost-prohibitive for smaller businesses.

Running Exchange Server in the cloud also lessens much of the administrative burden associated with managing Exchange. Service providers deal with ongoing tasks like patch management, server backups and meeting Microsoft’s constantly changing best practices. These companies also supply customers with proprietary, Web-based management tools, which are often easier to use than the Exchange Management Console (EMC) or Exchange Management Shell (EMS).

What’s bad about Exchange Server in the cloud?

The biggest criticism of running cloud-based Exchange is that servers are accessed over the Internet. If your Internet connection fails, Exchange becomes inaccessible. If Exchange was deployed on premise, an Internet connection failure would prevent email from traveling in and out of the organization, but users could still send mail to each other. They could also use their calendars, view contacts, etc.

The management tools that some admins consider a pro can be a con for others. Administrators who have become comfortable using the EMS and EMC may have trouble adapting to the Web-based management tools many hosting companies provide.

The management tools often are designed to prevent subscribers from managing certain aspects of the Exchange deployment. For example, a hosting provider may prevent subscribers from managing their own mailbox quotas.

And although a cloud-based Exchange Server deployment may simplify Exchange administration, it can actually complicate Active Directory administration. All versions of Exchange since Exchange 2000 Server have depended on Active Directory, and the AD requirement doesn’t just disappear when you run Exchange in the cloud. Organizations with an on-premises Active Directory will likely need to perform directory synchronization to the cloud.

Exchange in the cloud also has an inherent lack of flexibility. If you have a third-party antivirus, antispam or Exchange management product that you want to use, for example, you’ll have to ditch that product when you move to the cloud.

Finally, when you outsource Exchange to a cloud service provider, your data is stored on the host’s Exchange servers, putting it out of your direct control. It is the hosting company, not you, that retains the data backups.