
If you have ever managed servers or a corporate computer network, you have probably faced the problem of transparently increasing the capacity of the existing infrastructure. And although such solutions exist in principle, they are usually characterized by high prices and low flexibility.

19" systems usually don't have enough space to accommodate additional hard drives. As a result, the only alternative appears: connecting individual 19″ storage devices to the server via SCSI interface or Fiber Channel. However, at the same time, we still mix server tasks and data storage functions.

Large server cases with additional drive bays are not an ideal solution either - again, we get a mixture of tasks.

You will agree that ideal storage should be very flexible: easy to deploy, usable from many parts of the network and from different operating systems, and, of course, easy to expand. Performance should not be overlooked either. The answer to all of these requirements is iSCSI - Internet SCSI. This solution packages the SCSI protocol into TCP/IP packets, resulting in a universal storage interface for the entire network infrastructure. In addition, iSCSI allows you to consolidate your existing storage systems.

How does iSCSI work?



The diagram shows how iSCSI works. Storage subsystems use the existing network infrastructure, independently of the servers. The storage consolidation mentioned above simply means that storage should be accessible from any server, minimizing management costs. Moreover, extra capacity can be added to existing systems.

The advantages of this approach are many, and they are quite obvious. Many corporations already have an efficient network infrastructure in place, often using time-tested technologies such as Ethernet. There is no need to implement or test any new technologies to use iSCSI or other systems such as SAN (Storage Area Networks). Of course, here you can save on expensive implementation specialists.

In general, any network administrator can manage iSCSI clients and servers with a little training. After all, iSCSI is deployed on existing infrastructure. Additionally, iSCSI is highly available because iSCSI servers can be connected to multiple switches or network segments. Finally, the architecture is inherently highly scalable thanks to Ethernet switching technologies.

In principle, an iSCSI server can be implemented in either software or hardware. But because a software solution places a high load on the processor, it is better to stick to the latter option. The main burden on an iSCSI server is encapsulating SCSI packets into TCP/IP packets, all of which must be done in real time. It is clear that in a software server all these tasks are performed by the CPU, while in a hardware solution they are handled by dedicated TCP/IP and SCSI engines.

Thanks to the iSCSI client, the storage resources of an iSCSI server can be integrated into the client system as a device that behaves much like a local hard drive. Compared to ordinary network shares, a big advantage is higher security: iSCSI emphasizes proper authentication of iSCSI packets, and they can be transmitted over the network in encrypted form.

Of course, you will get slightly lower performance than with local SCSI systems - after all, the network introduces its own delays. However, modern networks with a bandwidth of up to 1 Gbit/s (128 MB/s) already provide sufficient speed, much of which often goes unused anyway.

Each iSCSI node is assigned its own name (up to a maximum of 255 bytes in length) and an alias (short name), which do not depend on the IP address. Thus, access to the storage will be ensured even after it is transferred to another subnet.

iSCSI in action

Of course, apart from the network, the main requirement for implementing iSCSI is the organization of an iSCSI server. We tested several solutions, both software and hardware.

Both types of solutions satisfy all iSCSI requirements, providing storage access to client computers. The client system can be equipped with an iSCSI adapter, which will reduce the load on the central processor (very convenient for workstations).

In principle, iSCSI can be used on a 100 Mbit/s network, but then, compared to local drives, you will experience a significant slowdown. Naturally, Gigabit Ethernet is a much more efficient solution - bandwidth is unlikely to become a bottleneck even when using multiple RAID 5 arrays. The same cannot be said about RAID 0 arrays, but such storage is rarely connected over the network.

On the client side, an iSCSI initiator is needed. Initiators have been released for almost all operating systems; a Google search for the combination of "Microsoft", "iSCSI" and "Initiator" is a good example of this.

Then, in the initiator software, you need to configure a connection to the server. The connected server drives will appear on the computer as hard drives, and they can be used like regular drives.

The iSCSI protocol provides IPsec-based packet encryption, although it is not required. For example, it does not always make sense to encrypt packets within a corporate network. This option will be most interesting for WAN.

Additional Applications

iSCSI is also an excellent means of data backup, because information can be easily copied to another hard drive, even online, using the Windows shadow copy feature. iSCSI can even be used over a DSL connection, but there the limiting factor will be the line speed. However, it all depends on the nature of the application.

The big advantage of iSCSI is that classic redundancy is no longer limited to one location - and this should not be underestimated. For example, devices such as tape drives can now be installed anywhere on the network. Even if the worst happens, iSCSI data can be recovered in minimal time.

If the iSCSI solution is implemented in software, then the network adapter will have to transfer a lot of data. Since conventional network adapters do not always use various hardware acceleration technologies, some of the load may be transferred to the central processor. SCSI is a block protocol, while Ethernet is a packet protocol. That is, a lot of the workload will be related to encapsulating and extracting SCSI information from TCP/IP packets. Such a task can load even a modern processor to capacity.

To solve the problem, special TOE engines (TCP/IP Offload Engines) were developed, which take care of the complex iSCSI operations right on the network adapter. As a result, the load on the system processor is reduced, and users and the system can continue to work normally.

I hope it has now become a little clearer what iSCSI network storage is and how it works.

After five years of working with Fiber Channel storage area networks (SANs), I was very puzzled by the advent of iSCSI: what the protocol does and, more importantly, how it works and how iSCSI can be used to solve real-world problems for users. So, after several intense months of talking with many experts on this topic, I present in this article a few of my own views on iSCSI.

What exactly is iSCSI?

iSCSI sends SCSI commands in IP packets. In more detail, iSCSI is designed as a protocol for a storage initiator (usually a server) to send SCSI commands to an executor (usually tape or disk) via IP.

Other protocols: FCIP - sends Fiber Channel blocks over IP, essentially extending Fiber Channel connections; doesn't really have anything to do with SCSI. On the other hand, iFCP provides mapping of FCP (serial SCSI over Fiber Channel) to and from IP. In other words, it offers a Fiber Channel (fabric) routing protocol that allows connectivity over IP.

In other words, iSCSI is a SCSI protocol over IP that connects the server to the data storage. Other protocols provide Fiber Channel to Fiber Channel connections with varying degrees of intelligence.

How do iSCSI devices find each other?

In the case of regular SCSI connections and Fiber Channel loops, the device discovery method is quite primitive. Fiber Channel fabric networks have a required service called the Simple Name Server, which works with hundreds or thousands of devices. But in IP, theoretically, there can be several million devices.

There are currently two mechanisms used to discover iSCSI devices in the IP world. The first is SLP (Service Location Protocol), a protocol of the TCP/IP family that allows automatic discovery and configuration of various resources; this service discovery protocol has been around in the IP world for some time. However, more recently many manufacturers, including Microsoft, began developing a new protocol, iSNS (Internet Storage Name Service). Simply put, it takes the principles of the Fiber Channel Simple Name Server and scales them up to the size of IP networks, without losing the storage-related features of SLP.

How can iSCSI be used?

There are three main ways to use iSCSI:
  1. A specialized iSCSI server that accesses specialized iSCSI storage.
  2. A specialized iSCSI server that accesses Fiber Channel-attached storage through an iSCSI-to-Fiber Channel router.
  3. Fiber Channel server accessing iSCSI storage through a Fiber-Channel-to-iSCSI router.
Of course, in some cases a Fiber Channel storage device accesses another Fiber Channel storage device (for example, for disk copying or off-server backup), and an iSCSI storage device can do the same.

So what is most likely and/or practical to use? To answer this question, we need to step back a little and remember that networked storage requires flexibility, using products in different ways. Today, using iSCSI in servers is relatively new, but easy, given Microsoft's initiator support for Windows 2000 and Windows Server 2003.

For this reason, one way to use iSCSI is to use iSCSI servers attached to existing Fiber Channel storage via an iSCSI-to-Fiber Channel router, most likely in a Fiber Channel SAN. This means that the same ports on the same storage arrays can provide storage service to both Fiber Channel and iSCSI servers. Therefore, this allows you to get more benefits from using SAN and Fiber Channel storage than you already have, and you can do it right now - the market offers all the necessary products.

According to my assumptions, similar events will occur in the NAS market; in fact, they are already happening. Since NAS devices already connect drives to IP networks, sharing them via the Network File System (NFS) and/or the Common Internet File System (CIFS) protocol, it is easy for a NAS to serve block-level data through the same ports using iSCSI, again allowing existing storage solutions to be used in new ways.

There are also several interesting, less conventional scenarios awaiting the emergence of dedicated iSCSI-only storage, which can work perfectly in a new location where storage consolidation has not yet been carried out and only products of a single solution are in use.

Who will use iSCSI?

As someone who has worked in the Fiber Channel field for several years, I have to point out to the Fiber Channel world that iSCSI can run at wire speed and can certainly run as fast as any normal server running any normal application needs. For the IP community, it is worth noting how widespread Fiber Channel already is, especially if you compare its port count with the number of 1 Gbit network ports rather than with the number of all network ports. And it is important for the Fiber Channel community to note that while a lot of storage and even a significant number of powerful servers are connected to Fiber Channel, there are still plenty of unconnected Unix servers and a huge number of Intel servers that do not work with Fiber Channel at all.

So, iSCSI can work for everyone, but perhaps the biggest potential market is Intel servers, as well as high-density and ultra-thin servers (Intel or others). In addition, iSCSI can sometimes be used for high-performance servers, for remote offices accessing a central data center via a SAN, and in other cases where it is too early to use Fiber Channel - after all, there are still many servers and storage devices not connected to any storage network.

NIC, TOE and HBA: When should they be used?

In conclusion, there are three approaches to connecting a server:
  1. Standard Interface Card (NIC) with iSCSI Driver
  2. TOE (TCP Offload Engine) NIC with iSCSI driver
  3. HBAs (Host Bus Adapter) created for iSCSI by traditional Fiber Channel adapter manufacturers.
In what cases should each of them be used? An interesting question. The initial assumption is that the more performance you need, the more likely you are to use a TOE card or host bus adapter instead of a standard interface card (NIC), which of course will be more expensive. Another school of thought suggests that some high-end servers have enough spare clock cycles, so why not save money and use a cheap network card.

The key point here is that, unlike Fiber Channel adapters, iSCSI pricing ranges from low-end (free) to high-performance (accelerators) and can therefore be tailored to application requirements. Also, fan-out (oversubscription) allows the use of more cost-effective Ethernet ports (both Fast Ethernet and GE) instead of ports on specialized FC switches, which further reduces costs. With iSCSI TOE cards costing $300 or less, host attachment costs are significantly lower than with FC, even at TOE-level performance.

Since FC can run at 2 Gbit/s, Fiber Channel is preferable for high-end servers (2 Gbit Ethernet does not exist), although to be fair there are not many servers that use that much throughput, even on Fiber Channel. Of course, from the storage side, 2 Gbit/s matters more, at least until we see 10 Gbit FC or even 10 Gbit Ethernet/iSCSI ports. iSCSI opens the door to hundreds or thousands of servers, especially Intel systems, many of which may be less demanding, and many of which have yet to benefit from network-attached storage.

Only time will tell what exactly will happen, although one thing is certain - it will be a very interesting year for network storage and iSCSI.

Internet Small Computer System Interface (iSCSI) is a data transfer protocol designed for exchanging data between servers and storage systems (Storage Area Network, SAN). iSCSI is a combination of the SCSI protocol and the TCP/IP protocol stack and is designed to transfer blocks of data over Ethernet networks. SCSI control commands are sent within IP packets, and TCP provides flow control and reliability of data transfer.

When using iSCSI, data between the server and the storage system is transferred in blocks, in raw form. This allows SAN devices to be used almost as if they were connected directly to the server rather than over the network. The host system can create logical partitions on them, format them and use them like regular local hard drives. This is the main difference between a SAN and Network Attached Storage (NAS), which operates at the file-system level and uses file transfer protocols such as SMB or CIFS.

iSCSI technology was developed as a cheaper alternative to Fiber Channel (FC). iSCSI-based systems support standard protocols and can be built on top of any existing network infrastructure that supports IP. iSCSI can use the most common network devices (switches, routers, network adapters, etc.), while FC requires special HBAs, optical cables and other expensive equipment.

The iSCSI architecture is client-server and includes the following components:

iSCSI Initiator - a client component that sends connection requests to the iSCSI Target component located on the server side. The initiator can be implemented in software, as a driver, or in hardware, as a dedicated iSCSI adapter.

iSCSI Target - a server component that listens for client requests and establishes the connection between the client and the iSCSI server. In addition, the target is associated with iSCSI virtual disks, and after the connection is established, all virtual disks associated with this target become available through the initiator. An iSCSI Target can be either a specialized storage system or a regular Windows server with the iSCSI Target role installed.

iSCSI virtual disks - used to divide disk space into logical units (Logical Unit Number, LUN). In Windows Server 2012 and 2012 R2, iSCSI LUNs are regular virtual disks in VHD/VHDX format. Note that Windows Server 2012 supported only the VHD format for iSCSI, which placed a 2TB limit on the maximum LUN size. Windows Server 2012 R2 uses the VHDX format, which allows LUNs of up to 64TB.

Now let’s stop and clarify some points:

Each iSCSI server can have one or more iSCSI Targets;
Each iSCSI Target can be connected to one or more virtual disks;
Each iSCSI Target can serve one or more connections from an iSCSI Initiator;
In turn, each iSCSI Initiator can connect to one or more iSCSI Targets and, therefore, to one or more virtual disks.

Additionally, Windows Server 2012 supports a loopback configuration in which both the Target and Initiator can reside on the same server.

Support for iSCSI has been around in Microsoft operating systems for quite some time. The first version of the Microsoft iSCSI Initiator was installed as a separate component in Windows 2000, Windows XP SP2 and Windows Server 2003 SP1; starting with Windows Server 2008 and Windows Vista, the iSCSI Initiator is built into the operating system.

As for the iSCSI Target, it was originally part of a special server OS, Windows Storage Server 2003, which was intended for building storage systems and was supplied only preinstalled. However, since 2011, Microsoft iSCSI Software Target 3.3 has been available for download and installation on Windows Server 2008 R2, and in Windows Server 2012 it is fully integrated into the system and installed as a server role.

Let's finish the theoretical part and get to practice. For the setup we'll take the simplest option: two servers running Windows Server 2012 R2, SRV2 for the iSCSI Target role and SRV3 for the iSCSI Initiator role.

Starting the iSCSI Initiator service

First, let's check the status of the initiator service on SRV3. To do this, open Server Manager and select “iSCSI Initiator” from the “Tools” menu.

As you can see, by default the service is not running. By clicking on “Yes” in the dialog box, we will start the iSCSI Initiator service and put it in automatic startup mode.

Then in the properties window go to the “Configuration” tab and remember the IQN value, it will be useful to us when setting up the server.

IQN (iSCSI qualified name) is a unique identifier assigned to each iSCSI Target and Initiator. An IQN is formed from the date (month and year) of domain registration, the official domain name written in reverse order, and an arbitrary name, such as the server name. It turns out something like this: iqn.1991-05.com.microsoft:srv3.contoso.com

You can start the iSCSI Initiator service and set its launch mode from the PowerShell console using the following commands:

Start-Service msiscsi
Set-Service msiscsi -StartupType automatic
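The initiator's IQN can also be read without opening the GUI; a minimal sketch, assuming the Storage module's Get-InitiatorPort cmdlet is available on SRV3:

# NodeAddress holds the IQN of the local initiator
Get-InitiatorPort | Select-Object NodeAddress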

Installing the iSCSI Target Server role

Now let's move on to SRV2 and start setting up the server part. The first thing we need to do is install the iSCSI Target role on the server. Open Server Manager, follow the link “Add roles and features”

And select the “iSCSI Target Server” role, which is located in the File and Storage Services\File and iSCSI Services section.

Or use the PowerShell command:

Install-WindowsFeature -Name FS-iSCSITarget-Server

Preparing the Disk

Now let's prepare a physical disk that will be used to store virtual iSCSI disks. A new 120GB hard drive has been connected to the server specifically for this purpose. At the moment the disk is inactive (Offline). To activate it, in Server Manager go to the File and Storage Services -> Disks section, click on the disk and bring it Online.

Now you need to create a new partition (or volume) on this disk; to do this, select New Volume from the context menu.

Select the physical disk on which the volume will be created

indicate the volume size

and select the drive letter.

Then we select the file system for the disk, the sector size, and specify the volume label. Let me remind you here that iSCSI virtual disks can only be created on NTFS volumes; the new ReFS (Resilient File System) is not supported.

We look at the summary information, and if everything is correct, then click “Create”, starting the creation of the volume.

The same steps can be done using PowerShell. Find the required disk:

Get-Disk | where {$_.OperationalStatus -eq "Offline"}

We translate it online:

Set-Disk -Number 1 -IsOffline $false

Initialize:

Initialize-Disk -Number 1

Create a section:

New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter D

And format it to NTFS:

Format-Volume -DriveLetter D -FileSystem NTFS -NewFileSystemLabel "iSCSI Storage"

Creating iSCSI virtual disks

The next item on our program is creating virtual iSCSI disks. To do this, go to the iSCSI section and click on the link to launch the wizard.

Select the volume on which the virtual disk will be stored.

Give the disk a name and description.

Specify the size of the virtual disk and its type. You can choose from three options:

Fixed size - the created disk immediately occupies the entire allocated space. This is the most productive, but least economical option;
Dynamically expanding - a disk of minimal size is created initially and then grows as data is written to it. The best option in terms of disk space usage;
Differencing - in this option you need to specify the location of a parent disk with which the created disk will be associated. A differencing disk can be either fixed or dynamic, depending on the type of the parent. This type of disk has its advantages, but I personally don't see much point in using them for iSCSI.

Now you need to specify the iSCSI Target to which you will connect this disk. Since no target has been created on the server, select “New iSCSI target”.

We give the target a name and description.

And we indicate the servers that can access it.

When choosing servers, you can use two methods. If the initiator is running Windows Server 2012 or Windows 8, you can simply click "Browse" and select the desired server from the list. For older systems, you must enter the server identifier manually. As an identifier, you can specify the initiator's IQN, the DNS name or IP address of the server, or the MAC address of its network adapter.

Let's move on. On the next page you can configure CHAP authentication between the servers. CHAP (Challenge Handshake Authentication Protocol) is a protocol for verifying the identity of a connection partner, based on a shared password or secret. For iSCSI, you can enable either one-way or two-way (reverse) CHAP authentication.
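For reference, CHAP can also be configured on an existing target from PowerShell; a minimal sketch with hypothetical values, using the iSCSITarget module's parameters as I understand them:

# -Chap expects a credential object holding the CHAP user name and secret
# (CHAP secrets are typically required to be 12-16 characters long)
$chap = Get-Credential -Message "CHAP user and secret"
Set-IscsiServerTarget -TargetName iscsi-target-2 -EnableChap $true -Chap $chap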

We check that the settings are correct and start creating the disk.

Let's try to do the same using PowerShell. Let's create another 20GB virtual iSCSI disk with the command:

New-IscsiVirtualDisk -Path D:\iSCSIVirtualDisks\iSCSI2.vhdx -SizeBytes 20GB

Please note that by default a dynamic disk is created; to create a fixed-size VHD you must use the -UseFixed switch.

Now we create a second iSCSI Target named iscsi-target-2 and specify the IQN of SRV3 as the allowed initiator:

New-IscsiServerTarget -TargetName iscsi-target-2 -InitiatorIds "IQN:iqn.1991-05.com.microsoft:srv3.contoso.com"
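The disk created earlier still needs to be mapped to this target before it will show up under LunMappings; as far as I know, this is done with Add-IscsiVirtualDiskTargetMapping:

Add-IscsiVirtualDiskTargetMapping -TargetName iscsi-target-2 -Path D:\iSCSIVirtualDisks\iSCSI2.vhdx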

And check the result with the command:

Get-IscsiServerTarget | fl TargetName, LunMappings

Connection

We return to SRV3, open the initiator properties window, go to the Discovery tab and click the Discover Portal button.

Enter the name or IP address of the portal and click OK.

By default, iSCSI uses all available IP addresses, and if you want iSCSI traffic to go only through a specific network interface, then you need to go to the advanced settings and specify the desired IP in the “Connect using” field.
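For reference, the portal discovery step and the binding to a specific interface also have a PowerShell equivalent; a minimal sketch with purely illustrative addresses:

New-IscsiTargetPortal -TargetPortalAddress 192.168.1.2 -InitiatorPortalAddress 192.168.1.10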

Now go to the Targets tab, where all iSCSI Targets available for connection should be displayed. Select the desired target and click “Connect”.

Don't forget to check the "Add this connection to the list of Favorite Targets" checkbox, which ensures the target is reconnected automatically after a shutdown or reboot.

The connection is successful, and if you open the Disk Management snap-in, a new disk appears there. We then treat this disk exactly like a regular locally connected hard drive: bring it Online, initialize it, create partitions and format them.

The same thing can be done using PowerShell. We display a list of available targets:

Get-IscsiTarget | fl

And connect to the one we need:

Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:srv2-iscsi-target-2-target" -IsPersistent $true

The -IsPersistent $true switch provides automatic reconnection after a shutdown or reboot.

Well, to disconnect, you can use the Disconnect-IscsiTarget command, like this:

Disconnect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:srv2-iscsi-target-2-target" -Confirm:$false

Conclusion

This completes the setup. As I said, this is the simplest, most basic option for setting up storage. There are many more interesting features in iSCSI. For example, you can use iSCSI Name Service (iSNS) for ease of management, multipath input/output (MPIO) for fault tolerance, and configure CHAP authentication and IPSec traffic encryption for security. I plan to write about some of these features in future articles.

In conclusion, here are some important points that must be taken into account when building an iSCSI storage system:

It is advisable to deploy iSCSI on a fast network, at least Gigabit Ethernet;
It is recommended to separate iSCSI network traffic from other traffic and place it on a separate network, for example, using a VLAN or physical division into subnets;
To ensure high availability at the network level, it is necessary to use MPIO or multiple connections per session (MCS). NIC Teaming is not supported for connecting to iSCSI storage;
When using Storage Spaces technology, you can store iSCSI virtual disks on Storage Spaces, but you cannot use iSCSI LUNs to create Storage Spaces;
Cluster Shared Volume (CSV) cannot be used to store iSCSI virtual disks.

Abstract: how open-iscsi (ISCSI initiator in Linux) works, how to configure it and a little about the ISCSI protocol itself.

Lyrics: There are many articles on the Internet that explain quite well how to configure an ISCSI target, however, for some reason, there are practically no articles about working with the initiator. Despite the fact that the target is technically more complex, there is more administrative fuss with the initiator - there are more confusing concepts and not very obvious operating principles.

ISCSI

Before talking about ISCSI, a few words about the different types of remote access to data in modern networks.

NAS vs SAN

There are two methods of accessing data located on another computer: file-level, when a file is requested from the remote computer and nobody cares which file system it is stored on (typical representatives are NFS and CIFS (SMB)); and block-level, when blocks of the disk medium are requested from the remote computer (similar to how they are read from a hard drive). In this case the requesting side creates a file system on the block device itself, while the server that exports the block device knows nothing about the file systems on it. The first approach is called NAS (network attached storage), the second SAN (storage area network). Strictly speaking, the names describe different aspects (SAN implies a dedicated storage network), but in practice NAS has come to mean files over the network and SAN block devices over the network. And although everyone (?) understands that these names are not quite correct, they only become more entrenched over time.

scsi over tcp

One of the protocols for accessing block devices is iscsi. The letter "i" in the name has nothing to do with Apple products; it simply stands for Internet. At its core, it is "SCSI over TCP". The SCSI protocol itself (without the letter "i") is a rather complex design, since it can work over different physical media (for example, UWSCSI is a parallel bus and SAS is serial, yet the protocol is the same). This protocol allows much more than just "connecting disks to a computer" (as SATA was designed to do): it supports device names, multiple links between a block device and its consumer, switching (yes, SAS switches exist in nature), connecting several consumers to one block device, and so on. In other words, this protocol was simply begging to become the basis of a network block device.

Terminology

The following terms are accepted in the SCSI world:
target - the one that provides the block device. The closest analogue from the ordinary computer world is a server.
initiator - the client, the one that uses the block device. The analogue of a client.
WWID - a unique device identifier, its name. An analogue of a DNS name.
LUN - the number of the "piece" of the disk being accessed. The closest analogue is a partition on a hard drive.

ISCSI brings the following changes: WWID disappears, and in its place comes the concept of IQN (iSCSI Qualified Name) - that is, a pure name, confusingly similar to DNS (with minor differences). Here is an example of an IQN: iqn.2011-09.test:name.

IETD and open-iscsi (the server and client for Linux) bring another very important concept, which is rarely covered in iscsi manuals: the portal. A portal is, roughly speaking, several targets that are advertised by one server. There is no direct analogy with www, but if a web server could be asked to list all of its virtualhosts, that would be it. A portal specifies a list of targets and the IPs at which they can be reached (yes, iscsi supports multiple paths from initiator to target).

target

This article is not about targets, so I will give only a very short description of what a target does. It takes a block device, slaps a name and a LUN on it and publishes it on its portal, after which it allows everyone (authorization to taste) to access it.

Here is an example of a simple configuration file; I think it makes clear what a target does (the configuration file uses IET as an example):

Target iqn.2011-09.example:data
    IncomingUser username Pa$$w0rd
    Lun 0 Path=/dev/md1

(a complex configuration differs from a simple one only in export options). So, if we have a target, we want to connect to it. And this is where things get complicated, because the initiator has its own logic; it is nothing like a trivial NFS mount.

Initiator

Open-iscsi is used as the initiator. The most important thing to know is that it has operating modes and state. If we issue a command in the wrong mode or ignore the state, the result will be extremely discouraging.

So, operating modes:

  • Search for targets (discovery)
  • Connecting to target
  • Working with a connected target
From this list, the life cycle is quite clear - first find, then connect, then disconnect, then connect again. Open-iscsi keeps the session open even if the block device is not in use. Moreover, it keeps the session open (up to certain limits, of course), even if the server is rebooted. An iscsi session is not the same as an open TCP connection; iscsi can transparently reconnect to the target. Disconnecting/connecting are operations that are controlled “from the outside” (either from other software or by hand).

A little about state. After discovery, open-iscsi remembers all found targets (they are stored in /etc/iscsi/); in other words, discovery is a persistent operation, NOT at all like, say, DNS resolution. Found targets can be deleted manually (by the way, a common mistake: after a series of experiments and reconfiguration, open-iscsi has accumulated a bunch of found targets, and login attempts produce many errors because half of those targets are old config lines that have long since disappeared from the server, but are still remembered by open-iscsi). Moreover, open-iscsi allows you to change the settings of a remembered target, and this "memory" affects further work with targets even after a reboot/restart of the daemon.

Block device

The second question that puzzles many at first is where the device ends up after connecting. Although it is a network device, open-iscsi creates a BLOCK device of the SCSI class (it is SCSI, after all), that is, it gets a letter from the /dev/sd family, for example /dev/sdc. The first free letter is used, because for the rest of the system this block device is a typical hard drive, no different from one connected via usb-sata or plugged directly into sata.

This often causes panic of “how can I find out the block device name?” It appears in the verbose output of iscsiadm (# iscsiadm -m session -P 3).

Authorization

Unlike SAS/UWSCSI, ISCSI is available for anyone to connect to. To protect against this, there is a login and password (CHAP), and passing them to iscsiadm is another headache for novice users. It can be done in two ways: by changing the properties of an already-found target, or by writing the login/password into the open-iscsi configuration file.
The reason for these difficulties is that the password and the login process are attributes not of the user but of the system. ISCSI is a cheap version of FC infrastructure, and the concept of a "user" in the sense of a person at the keyboard does not apply here. If your sql database sits on an iscsi block device, then of course you want the sql server to start by itself, and not after a minute of personal attention from the operator.
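As a sketch of the first approach (per-target CHAP written into the found node record), reusing the example target and credentials from the IET config above - exact parameter names are worth checking against your open-iscsi version:

iscsiadm -m node -T iqn.2011-09.example:data --op=update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2011-09.example:data --op=update -n node.session.auth.username -v username
iscsiadm -m node -T iqn.2011-09.example:data --op=update -n node.session.auth.password -v Pa$$w0rd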

The configuration file

This is a very important file, because in addition to the login/password, it also describes the behavior of open-iscsi when finding errors. It may not return the error immediately, but after a certain pause (for example, about five minutes, which is enough to reboot the server with the data). It also controls the login process (how many times to try, how long to wait between attempts) and any fine tuning of the work process itself. Note that these parameters are quite important for operation and you need to understand how your iscsi will behave if you remove the power cord for 10-20 seconds, for example.
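For orientation, the parameters in question live in /etc/iscsi/iscsid.conf and look roughly like this (the values here are illustrative, not recommendations):

node.startup = automatic
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.session.initial_login_retry_max = 8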

Quick reference

I don't really like quoting easily found manuals and man pages, so here is a typical scenario for using iscsi:

First we find the targets we need; for this we must know the IP/DNS name of the target portal: iscsiadm -m discovery -t st -p 192.168.0.1 (-t st is the sendtargets command).

iscsiadm -m node (list the found targets available for login)
iscsiadm -m node -l -T iqn.2011-09.example:data (log in, that is, connect and create a block device).
iscsiadm -m session (list what you are connected to)
iscsiadm -m session -P3 (print the same, but in more detail - at the very end of the output there will be an indication of which block device belongs to which target).
iscsiadm -m session -u -T iqn.2011-09.example:data (log out of a specific target)
iscsiadm -m node -l (log in to all detected targets)
iscsiadm -m node -u (log out of all targets)
iscsiadm -m node --op delete -T iqn.2011-09.example:data (remove target from detected ones).

multipath

Another issue that matters in serious deployments is support for multiple paths to the storage. The beauty of iscsi lies in its use of plain IP, which can be handled in the usual way like any other traffic (although in practice it is usually not routed, only switched - the load is too high). So, iscsi supports multipath in a "does not object" mode. By itself, open-iscsi cannot aggregate several IPs of one target; if it is connected to several IPs of one target, this simply results in several block devices.

However, there is a solution - multipathd, which finds disks with the same identifier and processes them as expected in multipath, with customizable policies. This article is not about multipath, so I won’t explain the mystery of the process in detail, however, here are some important points:

  1. When using multipath, you should set small timeouts - switching between failed paths should happen quickly enough (see the configuration sketch after this list)
  2. On a more or less fast channel (10G and above, in many cases even gigabit), load parallelism should be avoided, since the ability to use bio coalescing is lost, which for some load types can hit the target unpleasantly hard.
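A minimal illustrative fragment of /etc/multipath.conf in the spirit of point 1 (frequent path checks, limited retries); the actual values depend on your hardware and workload:

defaults {
    polling_interval  2
    path_selector     "round-robin 0"
    failback          immediate
    no_path_retry     5
}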

The NAS supports built-in iSCSI (Internet Small Computer System Interface) service for use in server clusters and virtualized environments.

On this page, users can enable/disable the iSCSI service, change the iSCSI portal port, enable/disable the iSNS service, and list and manage all iSCSI targets and LUNs. The NAS supports multiple iSCSI targets and multiple LUNs per target. iSCSI LUNs can be mounted and unmounted for a specific purpose. This chapter contains the following sections.

The table below shows the features supported by block LUNs and file LUNs.

Feature | Block LUN | File LUN (old type)
Full copy via VAAI | Supported | Supported
Block zeroing via VAAI | Supported | Supported
Hardware-assisted locking via VAAI | Supported | Supported
Thin provisioning and space reclamation via VAAI | Supported | Not supported
Dynamic (thin) capacity provisioning | Supported | Supported
Space reclamation | Supported (via VAAI or Windows 2012/Windows 8) | Not supported
Microsoft ODX | Supported | Not supported
LUN backup | Supported | Supported
LUN snapshot | Supported | 1 snapshot

Please note that block LUNs typically provide higher system performance and therefore it is recommended to use block LUNs where possible.

There are two ways to provision LUNs: thin provisioning and instant (thick) allocation.

You can create up to 256 iSCSI targets and LUNs in total. For example, if 100 targets are created on the NAS, the maximum number of LUNs that can still be created is 156. Multiple LUNs can be created for each target. However, the maximum number of concurrent connections to iSCSI targets supported by the NAS varies depending on the network infrastructure and application performance. Too many concurrent connections may impact NAS performance.

iSCSI Quick Setup Wizard

To configure the iSCSI target service on the NAS, follow these steps:

6. Specify authentication options and click Next. If you enable the "Use CHAP authentication" option, only the initiator is authenticated by the iSCSI target, and initiator users will be prompted for the username and password specified here to access the target. Enabling the "Mutual CHAP" option turns on two-way authentication between the target and the iSCSI initiator: the target authenticates the initiator using the first set of username and password, while the initiator authenticates the target using the credentials specified in the Mutual CHAP section. The usernames and passwords in both fields are subject to the restrictions described below.

Creating iSCSI Targets

To create an iSCSI target, follow these steps:

5. Provide a username and password for the "Use CHAP authentication" and/or "Mutual CHAP" option and click Next. If you enable the "Use CHAP authentication" option, only the initiator is authenticated by the iSCSI target, and initiator users will be prompted for the username and password specified here to access the target. Enabling the "Mutual CHAP" option turns on two-way authentication between the target and the iSCSI initiator: the target authenticates the initiator using the first set of username and password, while the initiator authenticates the target using the credentials specified in the Mutual CHAP section.

Creating an iSCSI LUN

To create a LUN for an iSCSI target, follow these steps:

To create an unbound iSCSI LUN, select the "Do not bind to target" option in step 4.

An unbound LUN will be created and will appear in the list of unbound iSCSI LUNs.

The table below describes the possible statuses of iSCSI targets and LUNs.

Item | Status | Description
iSCSI target | Ready | The iSCSI target is ready, but no initiators are connected to it.
iSCSI target | Connected | An initiator is connected to the iSCSI target.
iSCSI target | Disconnected | Connections to the iSCSI target have been lost.
iSCSI target | Offline | The iSCSI target is deactivated and initiators cannot connect to it.
LUN | Enabled | The LUN is enabled for connections and is visible to authenticated initiators.
LUN | Disabled | The LUN is disabled and is not visible to initiators.

The table below describes the actions available to manage iSCSI targets and LUNs (Action button).

Action | Description
Deactivate | Deactivates a target in the Ready or Connected state. Note that all connections from initiators will be terminated.
Activate | Activates a target in the Offline state.
Modify | Changes target settings (target alias, CHAP information, checksum settings) or LUN settings (LUN allocation, name, disk volume directory, etc.).
Delete | Removes an iSCSI target. All connections will be terminated.
Disable | Disables a LUN. All connections will be terminated.
Enable | Enables a LUN.
Unbind | Unbinds a LUN from a target. Note that a LUN must be disabled before it can be unbound. Clicking this button moves the LUN to the list of unbound iSCSI LUNs.
Bind | Binds a LUN to an iSCSI target. This action is only available from the list of unbound iSCSI LUNs.
View connections | Shows the connection status of the iSCSI target.

Switching iSCSI LUNs between targets

To switch iSCSI LUNs between targets, follow these steps:

After you create iSCSI targets and LUNs on the NAS, you can use the iSCSI initiator installed on your computer (Windows, Mac, or Linux PC) to connect to the iSCSI target and LUNs and use disk volumes as virtual disks on your computer.

Increasing iSCSI LUN capacity

The NAS supports capacity expansion for iSCSI LUNs. To do this, follow these steps:
