Storage Spaces Direct (S2D) is one of the core features that was improved with Windows Server 2019. S2D also supports two-node deployments with a cloud, file share or USB witness. This article describes such a two-node deployment with HPE ProLiant DL380 Gen10 servers.
Hardware
It’s important that you use certified hardware from the vendors.
You can check whether your components are certified here: https://www.windowsservercatalog.com
Systems, components, devices, and drivers must be Windows Server 2016 / 2019 certified and listed in the Windows Server Catalog. You should also check for the Software-Defined Data Center (SDDC) Standard and/or Software-Defined Data Center (SDDC) Premium qualification.
The following hardware components are used per node:
Component | Description | Quantity |
---|---|---|
Server | HPE DL380 Gen10 8SFF CTO Server | 1 |
CPU | Intel Xeon Gold 6130 2.10 GHz (16 Cores) | 2 |
Memory | HPE 32GB 2Rx4 PC4-2666V-R | 12 |
NVMe Bays | HPE DL38X NVMe 8 SSD Express Bay | 2 |
SAS Riser 1 | HPE DL38X Gen10 4-port Slim SAS Riser | 1 |
SAS Riser 2 | HPE DL38X Gen10 4p Slim SAS 2nd Riser | 1 |
SAS Riser 3 | HPE DL38X Gen10 2 x8 Tertiary Riser Kit | 1 |
Ethernet Adapter RDMA | HPE Eth 10/25Gb 2P 640SFP28 Adptr | 1 |
Array Controller | HPE Smart Array P408i-a SR Gen10 with Smart Storage Battery | 1 |
Ethernet Adapter 10Gb | HPE FlexFabric 10Gb 4P 536FLR-T Adptr | 1 |
Fan Kit | HPE DL38X Gen10 High Perf Fan | 1 |
Power Supply | HPE 800W FS Plat Ht Plg LH Pwr Sply Kit | 2 |
Cable for Storage Network | HPE 25Gb SFP28 to SFP28 3m DAC | 1 |
System Disks | HPE 240GB SATA RI SFF SC DS SSD | 2 |
NVMe for S2D Usage | HPE 1.6TB NVMe x4 MU SFF SCN DS SSD | 5 |
So each node gets 384 GB of memory and 6.4 TB (5.84 TB usable) of storage, as I keep one disk as a hot spare. Since we only use two nodes, we don’t need a switch for the storage network, which reduces complexity and saves costs.
Install Windows Server Core
After placing the servers into the rack and doing the cabling, we start the servers and create a RAID 1 with the two 240 GB SATA SSDs. The NVMe drives stay in the default HBA mode. This is new with the Gen10 array controllers: they support a mixed configuration of RAID and HBA.
Now we install Windows Server 2019 Datacenter Edition as Server Core from a USB stick. After the installation we need to set the local administrator password and log in.
After the installation, don’t forget to install all Windows Updates.
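On Server Core you can do this via sconfig (option 6). If you prefer PowerShell, a minimal sketch using the community PSWindowsUpdate module (not built in; assumes internet access from the node) could look like this:
Install-PackageProvider -Name NuGet -Force
Install-Module -Name PSWindowsUpdate -Force
# Download and install all available updates, rebooting automatically if required
Get-WindowsUpdate -AcceptAll -Install -AutoReboot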
Then install the HPE Service Pack for ProLiant (SPP) to get the latest firmware and drivers.
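Once the SPP is installed, you can double-check the controller configuration with the ssacli utility that ships with it (the slot number below is an assumption, adjust it to your system):
ssacli ctrl all show status
ssacli ctrl slot=0 show config detail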
Basic network configuration and domain join
As you usually don’t want to stay down in the server room, now is the time (at the latest) to configure iLO for remote administration.
You can also configure the basic network settings for management access right away:
First we rename two of the 1 Gb Ethernet adapters and create a team for the management interface:
Rename-NetAdapter -Name "Embedded LOM 1 Port 1" -NewName "ManagementAdapter1" Rename-NetAdapter -Name "Embedded LOM 1 Port 2" -NewName "ManagementAdapter1" New-NetLbfoTeam -Name Management -TeamMembers ManagementAdapter1,ManagementAdapter2 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
After that we configure the IP address and DNS Servers for this interface:
New-NetIPAddress -InterfaceAlias "Management" -IPAddress "192.168.10.101" -PrefixLength "24" -DefaultGateway "192.168.10.1" Set-DnsClientServerAddress -InterfaceAlias "Management" -ServerAddresses "192.168.10.10,192.168.10.11"
Now we make the server accessible through RDP. For this we deactivate the firewall (or, alternatively, just open the RDP port) and enable Remote Desktop connections:
Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Terminal Server" -Name "fDenyTSConnections" -Value 0
From now on we can administer our servers through RDP.
The next step will be to join the server to the domain.
Add-Computer -DomainName intra.domain.ch -NewName "HYV01" -DomainCredential domain\administrator -Restart -Force
Now we do some further configuration on the network side.
Configuration of the 25 Gb storage adapters:
New-NetIPAddress -InterfaceAlias "StorageA" -IPAddress 10.10.10.10 -PrefixLength 24 Set-NetIPInterface -InterfaceAlias "StorageA" -Dhcp Disabled New-NetIPAddress -InterfaceAlias "StorageB" -IPAddress 10.10.11.10 -PrefixLength 24 Set-NetIPInterface -InterfaceAlias "StorageB" -Dhcp Disabled
Renaming the adapters used for the VM network (for hyper-converged deployments only):
Rename-NetAdapter -Name "LOM 1 Port 1" -NewName "VMAdapter1" Rename-NetAdapter -Name "LOM 1 Port 2" -NewName "VMAdapter2" Rename-NetAdapter -Name "LOM 1 Port 3" -NewName "VMAdapter3" Rename-NetAdapter -Name "LOM 1 Port 4" -NewName "VMAdapter4"
Setting the interface metric:
Set-NetIPInterface -InterfaceAlias Management -InterfaceMetric 1
Set-NetIPInterface -InterfaceAlias StorageA -InterfaceMetric 2
Set-NetIPInterface -InterfaceAlias StorageB -InterfaceMetric 2
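To verify that the metrics and DHCP settings were applied as intended, a quick check could be:
Get-NetIPInterface -AddressFamily IPv4 | Sort-Object InterfaceMetric | Format-Table InterfaceAlias, InterfaceMetric, Dhcp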
Configure RDMA over Converged Ethernet (RoCE) / PFC
How to get RDMA working and test it is described in the article Configure RDMA over Converged Ethernet (RoCE).
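As a quick reference, here is a minimal host-side sketch of the DCB/PFC configuration for the two storage ports, assuming priority 3 for SMB Direct and a 50% bandwidth reservation (adapt the values to your environment). Since the storage ports are direct-attached in this two-node setup, no switch configuration is needed:
Install-WindowsFeature -Name Data-Center-Bridging
# Tag SMB Direct traffic (TCP 445) with priority 3 and enable PFC only for that priority
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "StorageA","StorageB"
Enable-NetAdapterRdma -Name "StorageA","StorageB"
# Verify that RDMA is enabled on the storage adapters
Get-NetAdapterRdma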
Create the Cluster and enable Storage Spaces Direct (S2D)
After the network configuration is done and the necessary features are installed, we can build the cluster.
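For reference, the roles and features I install on both nodes typically look like this (Hyper-V is only needed for a hyper-converged deployment, and Data-Center-Bridging may already be present from the RoCE configuration; adjust the list to your needs):
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, FS-FileServer, Data-Center-Bridging -IncludeManagementTools -Restart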
Before we can create a new cluster, we need to do a cluster validation:
Test-Cluster -Node HYV01,HYV02 -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
If the HTML Report is fine we can proceed with creating the cluster.
New-Cluster -Name HYVCLU01 -Node HYV01,HYV02 -NoStorage -StaticAddress 192.168.10.100
As we have a two-node deployment, we need to configure a file share witness (a cloud witness would also be possible):
Set-ClusterQuorum -FileShareWitness \\server\Witness -Credential (Get-Credential)
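You can verify the quorum configuration afterwards with:
Get-ClusterQuorum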
Now you can simply run the Enable S2D command:
Enable-ClusterStorageSpacesDirect
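A quick sanity check after enabling S2D (the pool name starting with S2D is what Enable-ClusterStorageSpacesDirect creates by default):
Get-StoragePool S2D*
Get-StorageSubSystem Cluster* | Get-PhysicalDisk | Sort-Object FriendlyName | Format-Table FriendlyName, MediaType, Usage, HealthStatus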
Modify the usage of one disk per node to hot spare (optional). Replace the placeholders with the UniqueId of the disk you want to reserve on each node:
Get-PhysicalDisk -UniqueId "<UniqueId of disk on HYV01>" | Set-PhysicalDisk -Usage HotSpare
Get-PhysicalDisk -UniqueId "<UniqueId of disk on HYV02>" | Set-PhysicalDisk -Usage HotSpare
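To look up the UniqueId values used above, you can list the physical disks first:
Get-PhysicalDisk | Sort-Object FriendlyName | Format-Table FriendlyName, SerialNumber, UniqueId, Usage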
Now we can create the CSV volumes:
New-Volume -FriendlyName "CSV1" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -ResiliencySettingName Mirror -Size 2.5TB New-Volume -FriendlyName "CSV2" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -ResiliencySettingName Mirror -Size 2.5TB