Connecting Qsan storage systems to servers running Linux

We continue our series of how-to articles on using Qsan data storage systems (DSS) in typical tasks. This time we will cover the initial configuration of servers running Linux family operating systems (OS) when connecting block volumes presented by the storage system.

As practice shows, users of Linux-based OS tend to be more technically skilled than their colleagues working on other OS families. However, all of them were once beginners and had to learn a great deal from scratch. And although the Linux community has historically been very open, as confirmed by the wide selection of documentation and forums for discussion and help, we believe one more thematic article will certainly not hurt. Especially since more and more people are coming to the world of Linux, some of them not entirely by choice, due to current political and economic events in the world.

As an example, we will consider connecting Qsan XCubeSAN series block storage systems to servers running a Linux family OS via the Fibre Channel and iSCSI interfaces. In our case, this is RED OS 8, under the hood of which lies the good old CentOS. The text below also applies to other Linux distributions, since the setup stages are exactly the same; the only differences may be in slightly different command syntax. Note right away that the commands below must be executed as the superuser (root). Part of the text covering general rules (where applicable) is taken from our previous publications on setting up VMware ESXi and Microsoft Windows.

So, in general, the setup process can be divided into several main stages:

  • Physical and logical switching

  • Actions on the storage side

  • Actions on the host(s) side

Physical and logical switching

The set of equipment and communication lines between the storage system and servers forms the so-called SAN (Storage Area Network). A fault-tolerant connection of SAN participants implies the constant presence of at least one path between the initiator (host) and the target (storage system). Since storage systems now almost always have at least two controllers, each server must have a connection to each of them. In the simplest case, servers are connected to the storage system directly; this operating mode is called Direct Attach and is supported by Qsan storage systems. In this case, each server must have a dual-port HBA so that it can connect to both storage controllers, which gives 2 paths between the server and the storage system. With the maximum number of optional ports, up to 12 servers can be connected to the storage system via iSCSI, or up to 8 servers via Fibre Channel, in this mode.

In most cases, servers are connected to the storage system via switches. For greater reliability, there should be two of them (in general, of course, there can be more, but they are still divided into two groups, called fabrics). This provides protection against failure of a switch, a link, or a storage controller/HBA port. In this case, each server and each storage system controller is connected to each switch, which gives 4 paths between each server and the storage system (in the case of two switches).

Important notes regarding SAN network parameters:

  • Fabrics are not connected to each other, so that network errors remain isolated within a single fabric;

  • If iSCSI shares switches with other services, all iSCSI traffic should be placed in a dedicated VLAN;

  • For the Fibre Channel protocol, it is necessary to configure zoning on switches based on the principle of “one initiator – one or more targets” to eliminate the influence of servers on each other;

  • For iSCSI connections, it is recommended to enable jumbo frames (MTU = 9000) to increase performance. Keep in mind that the MTU must be changed for all participants in the chain: storage controller ports, physical switches, and the physical and virtual ports of the server network cards.

For Qsan, the MTU parameter is changed separately on each port of each controller in the iSCSI Ports menu.

On the host side, the MTU parameter also changes for each network port individually:

  • Display the list of network adapters with the ifconfig command

  • Set the MTU value, for example for the eth0 adapter, with the command ifconfig eth0 mtu 9000 (see the sketch below)
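Note that a change made with ifconfig is lost after a reboot. Below is a minimal sketch of how the change could be made persistent, assuming the interface is named eth0 and is managed by NetworkManager, as is typical for RED OS/CentOS-like systems (the connection name is an assumption, adjust it to your environment). It also includes a ping check of jumbo frames: 8972 bytes of payload plus 28 bytes of headers give exactly 9000.

# temporary change (lost after reboot)
ifconfig eth0 mtu 9000

# persistent change via NetworkManager (connection name "eth0" is an assumption)
nmcli connection modify eth0 802-3-ethernet.mtu 9000
nmcli connection up eth0

# verify that jumbo frames pass end to end to a storage port
ping -M do -s 8972 192.168.2.1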

For instructions on changing the MTU on physical switches if you are using them, we recommend that you refer to the documentation of the specific manufacturer.

Actions on the storage side

The necessary settings on the storage system can be divided into two stages: configuring the interfaces and creating the storage space itself.

The interfaces only need to be configured if the iSCSI protocol is used: specify the IP addresses of the ports on the iSCSI Ports tab. The port IP addresses must belong to different subnets so that traffic is routed unambiguously on the host side. For Fibre Channel ports, as a rule, nothing needs to be configured.

Next, you need to create the storage space. First, a pool is created – a group of physical drives working together. There can be several pools within a storage system. Drives within a pool are combined according to the RAID level selected at creation time, which provides the specified level of reliability. Pools are created on the Storage → Create Pool tab, which launches a step-by-step wizard.

  • You must select the pool type: thick (space is allocated immediately) or thin (space is allocated as it fills up). Note that thick pools deliver higher performance. With the appropriate license activated and different types of disks installed, the Auto Tiering pool type can also be selected.

  • Select specific disks

  • Set the RAID level and, where applicable to that level, its additional parameters:

    • Subgroups – the number of disks in a subgroup in case of using RAID 50/60

    • Spares – the number of Spare disks within a group in case of using RAID EE

Volumes are then created, either within the current wizard or later. The only option here is the block size. Note that the block size cannot be set smaller than the physical sector size of the drives. It is recommended to keep the default value of 512B (for 512n/512e drives) or 4KB (for 4Kn drives) as the most optimal.

The final step in setting up the storage system is publishing volumes for host access via the LUN mapping functionality. To do this, host groups are created on the storage system; a group combines a set of hosts and a set of volumes to which these hosts should have access. Creating a host group always begins with specifying the interface type it will work with: iSCSI or Fibre Channel. Next, the hosts included in the group are specified. By default, access is open to all hosts, but it is highly recommended to specify particular hosts to prevent unauthorized access and accidental data damage. For iSCSI, hosts are specified manually by their IQN. To do this, add them using the Add host button (it is enough to do this once within one group, after which the list of hosts becomes available at the global level). For FC, hosts are detected automatically in the SAN based on their WWPN.

On the RED OS side, the IQN and WWPN of the host can be found with the commands:

cat /etc/iscsi/initiatorname.iscsi

cat /sys/class/fc_host/host?/port_name

In the next step, the physical ports of the storage system through which input/output will be performed are specified (iSCSI only).

Finally, a list of volumes to which access will be granted is specified.

Actions on the host(s) side

Initially, you need to start the Multipath IO service on the server once; it ensures the operation of multipath input/output:

service multipathd start
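On systemd-based distributions such as RED OS, a hedged sketch of generating a default configuration and enabling the daemon at boot could look as follows, assuming the device-mapper-multipath package (which provides the mpathconf helper on RHEL-like systems) is installed:

mpathconf --enable              # creates a default /etc/multipath.conf if it does not exist
systemctl enable --now multipathd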

When connecting via FC, no special manipulations on the host are required. You only need to detect the volumes published on the storage side by performing a rescan on the FC HBA ports. To do this, first identify the required adapter:

[root@localhost admin]# find /sys -iname 'scan'

/sys/devices/pci0000:80/0000:80:03.0/0000:82:00.0/host1/scsi_host/host1/scan

/sys/devices/pci0000:80/0000:80:03.0/0000:82:00.1/host2/scsi_host/host2/scan

/sys/devices/pci0000:00/0000:00:03.2/0000:04:00.0/host0/scsi_host/host0/scan

In our case, these are the first two lines (easily identified by the similar PCI IDs, since this is physically a single HBA with two ports). Then we perform the rescan:

[root@localhost admin]# echo "- - -" > /sys/devices/pci0000:80/0000:80:03.0/0000:82:00.0/host1/scsi_host/host1/scan

[root@localhost admin]# echo "- - -" > /sys/devices/pci0000:80/0000:80:03.0/0000:82:00.1/host2/scsi_host/host2/scan

As a result, two new disk devices with a size of 77 GB were found (in our case, the server is connected directly by two links).
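One simple way to review the block devices the kernel has created is lsblk; the column set below is only an example, and the output will of course differ in your environment:

lsblk -o NAME,SIZE,TYPE,VENDOR,MODEL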

In the case of an iSCSI connection, you first need to obtain the list of targets. This action must be performed for both controllers:

[root@localhost admin]# iscsiadm -m discovery -t st -p 192.168.2.1

Next, having received the list of targets, we log in to each of them:

[root@localhost admin]# iscsiadm -m node -T iqn.2004-08.com.qsan:xs3226-000d60af8:dev0.ctr1 -p 192.168.2.1 -l

[root@localhost admin]# iscsiadm -m node -T iqn.2004-08.com.qsan:xs3226-000d60af8:dev0.ctr1 -p 192.168.3.1 -l

You can check the list of active sessions with:

[root@localhost admin]# iscsiadm -m session

We then check the list of available disk devices and see the new 55 GB disks (again, the server is connected directly to the storage system by two links).
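To confirm which SCSI disk devices each iSCSI session has attached, the verbose session output can be used (the print level option is part of standard iscsiadm):

iscsiadm -m session -P 3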

Now we run multipath and see that the paths belonging to the same disk device are “glued together” into a single multipath device.
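A minimal sketch of this check with the standard multipath tools (device names and WWIDs will differ in your environment):

multipath            # assemble multipath devices from the detected paths
multipath -ll        # list the resulting devices and their paths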

By default, multipath uses only one of the available paths for input/output, and the remaining paths are kept in standby. To use all of them simultaneously for load balancing, you need to enable round-robin mode by editing the configuration file /etc/multipath.conf.

An example of the file is shown below (the wwid of each disk is taken from the output of the previous command):

multipaths {
    multipath {
        wwid 3201b0013780ffac0
        alias Qsan-iSCSI
        path_selector "round-robin 0"
        path_grouping_policy multibus
        failback immediate
        rr_weight priorities
        no_path_retry 5
        rr_min_io 1
    }
    multipath {
        wwid 3201c0013780ffac0
        alias Qsan-FC
        path_selector "round-robin 0"
        path_grouping_policy multibus
        failback immediate
        rr_weight priorities
        no_path_retry 5
        rr_min_io 1
    }
}

After any modification of the file, you must restart the multipath service for the changes to take effect; the devices will then appear under the aliases specified above.
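On a systemd-based system, the restart and subsequent check could look like this (the legacy service command shown earlier would also work):

systemctl restart multipathd
multipath -ll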

Now all that remains is to format and mount the new disks for further use; a minimal example is shown below.
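A hedged sketch of formatting and mounting one of the multipath devices, assuming the Qsan-iSCSI alias from the configuration above, XFS as an example filesystem, and an arbitrary mount point /mnt/qsan:

mkfs.xfs /dev/mapper/Qsan-iSCSI        # create a filesystem on the multipath device
mkdir -p /mnt/qsan                     # create a mount point
mount /dev/mapper/Qsan-iSCSI /mnt/qsan

For a persistent mount, add a corresponding entry to /etc/fstab.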

This article covered mainly the basic operations required to connect servers running a Linux OS to a Qsan storage system. For more complete information, it is highly recommended to consult the user manuals provided by both vendors.
