Moving a vSAN cluster from one vCenter Server to another
I have a vSAN-enabled cluster running on the vCenter DEMOVCSA08.ads.com.
Now I am going to migrate it to the new vCenter DEMOVCSA01.
Both vCenters are running the same version, 8.0. Always choose a target vCenter that is either the same version as the source or later.
Phase 1
Perform pre-checks
Collect the vSAN network list
esxcli vsan network list
Run the command below on one member ESXi host to get the vSAN cluster details.
esxcli vsan cluster get
It is important to note the number of cluster members listed here.
esxcli vsan health cluster list
There are some alerts on my vSAN due to resource limitations; I am keeping one member node down and running with a reduced member count.
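As an alternative to running esxcli on every host, the PowerCLI sketch below can collect the vSAN-enabled VMkernel adapters of all cluster members in one go (assuming the source cluster object is called vSAN-Cluster and you are connected to the source vCenter with Connect-VIServer):
# List the vSAN VMkernel adapters of every host in the source cluster
Foreach ($vmhost in (Get-Cluster -Name vSAN-Cluster | Get-VMHost)) {
  Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel |
    Where-Object { $_.VsanTrafficEnabled } |
    Select-Object @{N='Host';E={$vmhost.Name}}, Name, IP, PortGroupName
}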
Phase 2
Preparation of the target vCenter instance
1. The target vCenter must be at the same build level as the source vCenter or higher.
2. Register all licenses for vCenter, vSAN and ESXi in target vCenter
3. Connect to Active-Directory if that connection existed in the source vCenter
4. Create a datacenter object on target site
5. Create a cluster object on target site and enable vSAN
6. Enable HA and DRS (manual or semi-automatic mode) [optional]
7. Configure deduplication and compression according to source vCenter settings
Note: If encryption was enabled on the source side, the target vCenter must also be connected to the KMS and a trust must be established with the identical cluster ID.
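Steps 4 to 6 can also be done with PowerCLI. A minimal sketch, assuming a connection to the target vCenter via Connect-VIServer and using the placeholder names Datacenter-01 and Target_vSAN_Cluster:
# Create the datacenter object in the root folder of the target vCenter
$dc = New-Datacenter -Location (Get-Folder -NoRecursion) -Name "Datacenter-01"
# Create the cluster with vSAN, HA and DRS (partially automated) enabled
New-Cluster -Location $dc -Name "Target_vSAN_Cluster" -VsanEnabled:$true -HAEnabled:$true -DrsEnabled:$true -DrsAutomationLevel PartiallyAutomated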
Phase 3
Migrate storage policies.
Export storage policies
Check policies in use
There can be a large number of storage policies in a vSAN cluster. However, I only need to migrate the ones that are actually in use in my lab.
esxcli vsan debug object list | grep spbmProfileId | sort | uniq
Only policies that have been applied to an object will be returned.
You can export all policies with a PowerCLI command.
Get-SpbmStoragePolicy | Export-SpbmStoragePolicy -FilePath C:\temp\
This command exports all policies as XML files. The file name equals the policy name, for example SP-ErasureCoding-R5.xml.
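On the target vCenter the exported policies can be imported again with PowerCLI. A minimal sketch, assuming the file from the export above and a connection to the target vCenter:
# Import a previously exported storage policy on the target vCenter
Import-SpbmStoragePolicy -Name "SP-ErasureCoding-R5" -FilePath "C:\temp\SP-ErasureCoding-R5.xml"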
Phase 4
a. Export of vDS configuration
We now export the vDistributed Switch (vDS) settings from the source cluster.
Networking > select vDS > Settings > Export Configuration
The configuration of the switch will be downloaded to the client as a ZIP archive.
b. Import of vDS configuration
In the target vCenter, select Datacenter > Distributed Switch > Import Distributed Switch.
We’ll import the previously exported ZIP file. In the import dialog, do not select the option “Preserve original distributed switch port group identifiers”.
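The export and import can also be scripted with the PowerCLI VDS cmdlets, assuming they are available in your PowerCLI version and that the switch is called vDS-01 (a placeholder):
# Source vCenter: export the vDS configuration to a ZIP archive
Get-VDSwitch -Name "vDS-01" | Export-VDSwitch -Destination "C:\temp\vDS-01-backup.zip"
# Target vCenter: re-create the switch from the backup in the new datacenter
New-VDSwitch -Location (Get-Datacenter -Name "Datacenter-01") -BackupPath "C:\temp\vDS-01-backup.zip"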
Phase 5
Capture the uplink and port group details that will be required later when you add the ESXi hosts to the vDS on the target vCenter.
Do the same with the remaining ESXi hosts and collect the physical interface details.
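On each host, the following commands can be used to record the physical NICs, the uplinks currently claimed by the vDS and the VMkernel interfaces:
esxcli network nic list
esxcli network vswitch dvs vmware list
esxcli network ip interface list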
The transfer of the hosts to the new vCenter is carried out one at a time. To keep the vSAN cluster intact in the process, we need to put it into a protected mode: cluster member list updates (ClusterMemberListUpdates) from the vCenter are ignored from now on. This means no host is going to leave the cluster and no host is going to be added. This is a crucial point, because we will remove hosts from the source vCenter one by one. Under normal conditions, this would result in member list updates from the vCenter to the remaining hosts and would split our cluster. Therefore, we instruct the hosts to ignore these member list updates coming from the vCenter.
In order to do this, either execute the command shown below on each host, or activate it globally for the cluster using PowerCLI.
esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
We can use PowerCLI as an alternative. You need to adjust the name of the vSAN cluster (here: vSAN-Cluster).
Foreach ($vmhost in (Get-Cluster -Name vSAN-Cluster | Get-VMHost)) { $vmhost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 1 -Confirm:$false }
If executed successfully, the PowerCLI command returns one row of results per host.
Before we start the host migration, DRS in the source cluster should be set to either semi-automatic or manual mode.
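This can also be done with PowerCLI, for example (again assuming the source cluster is called vSAN-Cluster):
# Set DRS in the source cluster to partially automated before the host migration
Get-Cluster -Name vSAN-Cluster | Set-Cluster -DrsAutomationLevel PartiallyAutomated -Confirm:$false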
Phase 6
Migrate Host
Note: Before migrating a host to another vCenter, migrate the management, vSAN and remaining vDS port groups to a standard vSwitch (VSS).
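A temporary standard vSwitch for these port groups can be prepared on each host with esxcli, for example as sketched below (vSwitch1, vmnic1 and vSAN-VSS are placeholders, and the chosen vmnic must first be released from the vDS uplinks on that host):
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vSAN-VSS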
The following procedure is executed with each host one after the other. The sequence is marked with sequence start and sequence end.
Sequence Start
We will disconnect the first host from the source vCenter, acknowledge the warning and after the task is complete, remove the host from inventory.
In the target vCenter we’ll select the datacenter object (not the Target_vSAN_Cluster) and add the host.
Enter FQDN of the host
Enter root password
Accept certificate warning
Check host details
Assign license. The host usually comes with its original license. This can be reassigned.
Configure lockdown mode (disable)
Choose datacenter as VM-target
Read summary
Finish
Do the same with the remaining ESXi hosts and add them to the datacenter; make sure you are not adding them directly to the cluster.
After the action is complete, the host is located outside the new vSAN cluster. We now drag it into the vSAN cluster object by using the mouse. This intermediate step is necessary because a direct import into the vSAN cluster would trigger a maintenance mode on the host. This must not happen since we are actively running VMs on the host. However, the move action doesn’t trigger a maintenance mode.
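The add-and-move sequence can also be scripted per host. A minimal PowerCLI sketch, assuming the placeholder host name esx01.ads.com and the target objects Datacenter-01 and Target_vSAN_Cluster:
# Add the host to the datacenter first, not directly to the cluster
$esx = Add-VMHost -Name "esx01.ads.com" -Location (Get-Datacenter -Name "Datacenter-01") -User root -Password 'your-root-password' -Force
# Then move it into the vSAN cluster; the move action does not trigger maintenance mode
Move-VMHost -VMHost $esx -Destination (Get-Cluster -Name "Target_vSAN_Cluster")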
Phase 7
Adjust vSAN kernel port (source) and target IP address of other cluster members.
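If this refers to the vSAN unicast agent entries (an assumption on my part), each host's list of the other members' vSAN IP addresses can be checked with:
esxcli vsan cluster unicastagent list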
Add host to imported vDS
Our vSAN network communication remains functional even though the vDS in the new vCenter is still empty. This is because a distributed vSwitch creates “hidden” standard vSwitches on each host. These move with the host and remain active. In order to be able to manage and monitor the vDS properties of the host in the future, we add it to the imported vDS.
Networking > select vDS
Add and manage hosts
Select migrated host
Define uplinks (same as before)
Assign kernel ports to port groups (vSAN, vMotion, Provisioning, etc.)
Assign VM networks (if applicable)
Check summary
Finish
Follow these instructions to add each remaining ESXi host to the vDS.
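A PowerCLI sketch of the same attach step, assuming the imported switch is called vDS-01 and using the placeholder host and uplink names from before:
# Attach the migrated host to the imported vDS
$vds = Get-VDSwitch -Name "vDS-01"
$esx = Get-VMHost -Name "esx01.ads.com"
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $esx
# Assign the physical uplink recorded in Phase 5
$vmnic = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name "vmnic1"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $vmnic -Confirm:$false
The VMkernel adapters can then be re-assigned to the vDS port groups either through the wizard above or with Set-VMHostNetworkAdapter.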
Now migrate the port groups.
The vSAN datastore now reflects a healthy state.
You can run the vSAN proactive VM creation test to see whether test VMs are created successfully on the datastore.
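If your PowerCLI version includes the vSAN test cmdlets, the same check can be scripted against the migrated cluster:
# Proactive test: creates and removes a small test VM to verify the vSAN datastore
Test-VsanVMCreation -Cluster (Get-Cluster -Name "Target_vSAN_Cluster")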