This is unreleased documentation for SUSE® Storage 1.10.0 (Dev).
# UBLK Frontend Support
Starting with v1.9.0, SUSE Storage supports the UBLK front-end for V2 Data Engine volumes. This feature exposes V2 Data Engine volumes as block devices using the SPDK UBLK framework. In certain high-specification environments (for example, machines with fast SSDs capable of millions of IOPS and equipped with 32 CPU cores), the UBLK front-end might offer better performance than the default NVMe-oF front-end for V2 Data Engine volumes. For performance comparisons, see the SUSE Storage Performance Investigation wiki page. However, the UBLK front-end is less mature than the default NVMe-oF front-end (see Known Limitations) and has additional restrictions, as detailed below.
## Prerequisites

- The kernel version on nodes must be v6.0 or later. The UBLK kernel driver is available only from kernel v6.0 onwards.
- The kernel module `ublk_drv` must be loaded on each node where UBLK volumes are to be attached. For testing, you can load it manually on each relevant node using the command:

  ```shell
  modprobe ublk_drv
  ```
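A module loaded with `modprobe` does not persist across reboots. One common way to make it persistent is a `modules-load.d` entry; the file path below is an illustrative example, not something SUSE Storage creates for you:

```
# /etc/modules-load.d/ublk.conf (example path)
# Load the UBLK kernel driver at boot so attached UBLK volumes
# survive node restarts.
ublk_drv
```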
## How to use

### When creating a V2 volume from a manifest

1. Create a `StorageClass` that specifies the UBLK front-end. For example:

   ```yaml
   kind: StorageClass
   apiVersion: storage.k8s.io/v1
   metadata:
     name: my-ublk-frontend-storageclass
   provisioner: driver.longhorn.io
   allowVolumeExpansion: true
   reclaimPolicy: Delete
   volumeBindingMode: Immediate
   parameters:
     numberOfReplicas: "1"
     staleReplicaTimeout: "2880"
     fsType: "ext4"
     dataEngine: "v2"
     frontend: "ublk"
   ```
2. Create a `PersistentVolumeClaim` (PVC) that references the `StorageClass` created in the previous step. For example:

   ```yaml
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: my-ublk-frontend-pvc
     namespace: default
   spec:
     accessModes:
       - ReadWriteOnce
     storageClassName: my-ublk-frontend-storageclass
     resources:
       requests:
         storage: 1Gi
   ```
3. SUSE Storage automatically provisions a V2 volume using the UBLK front-end based on the PVC and `StorageClass` definitions.
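To exercise the provisioned volume, you can mount the PVC in a workload. The Pod below is an illustrative sketch; the pod name, image, and mount path are assumptions, not part of SUSE Storage:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-ublk-frontend-pod   # example name
  namespace: default
spec:
  containers:
    - name: app
      image: busybox           # example image
      command: ["sh", "-c", "sleep infinity"]
      volumeMounts:
        - name: data
          mountPath: /data     # example mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-ublk-frontend-pvc   # PVC created in step 2
```

Because the PVC uses `volumeBindingMode: Immediate`, the volume is provisioned as soon as the PVC is created; the Pod simply attaches and mounts it.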
## Known Limitations
When an instance-manager pod crashes, it might leave orphaned UBLK devices on the node. Removing these orphaned devices manually can currently be difficult and might sometimes require a node reboot. This issue is being investigated further in GitHub Issue #10738.
## Reference
Original GitHub issue for UBLK front-end support: GitHub Issue #9456.