
UDisk Dynamic Expansion

This document describes how to expand a UDisk-backed PVC in UK8S, covering both online and offline expansion scenarios.

1. Limitations

  1. The UK8S Node instance must have been created later than May 2020. If this condition is not met, you must first shut down and then start the Node.

  2. The Kubernetes version must be 1.14 or higher. If the cluster version is 1.14 or 1.15, add --feature-gates=ExpandCSIVolumes=true to the /etc/kubernetes/apiserver file on all three Master nodes and restart the APIServer with systemctl restart kube-apiserver; then add --feature-gates=ExpandCSIVolumes=true to the /etc/kubernetes/kubelet file on each node and restart the kubelet with systemctl restart kubelet. For 1.14 clusters that need online expansion (without restarting the Pod), you must also enable the ExpandInUsePersistentVolumes=true feature gate. Versions 1.13 and below do not support this feature; versions 1.16 and above need no extra configuration.

  3. The CSI-UDisk version must be no lower than 20.08.1. For CSI version updates and upgrades, please refer to: CSI Update Record and Upgrade Guide.

  4. The expected capacity declared during expansion must be a multiple of 10 Gi.

  5. Only dynamically created PVCs can be expanded, and the StorageClass must explicitly declare that it is expandable (see below).
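Limitation 4 is easy to trip over when scripting expansions. Below is a minimal helper, illustrative only and not part of any UCloud tooling, that validates a requested size string before you edit the PVC:

```shell
#!/bin/sh
# Check that a requested PVC size such as "30Gi" is a positive multiple
# of 10 Gi, as UDisk expansion requires. Prints "ok" or "invalid".
check_udisk_size() {
  size="$1"
  case "$size" in
    *Gi) n="${size%Gi}" ;;                 # strip the Gi suffix
    *)   echo "invalid"; return 1 ;;       # must use the Gi unit
  esac
  case "$n" in
    ''|*[!0-9]*) echo "invalid"; return 1 ;;  # must be a whole number
  esac
  if [ "$n" -gt 0 ] && [ $((n % 10)) -eq 0 ]; then
    echo "ok"
  else
    echo "invalid"; return 1
  fi
}

check_udisk_size 25Gi || true   # prints: invalid (not a multiple of 10)
check_udisk_size 30Gi           # prints: ok
```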

2. UDisk Expansion Demonstration

2.1 Create a UDisk storage class, explicitly declare expandability

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-udisk-ssd
provisioner: udisk.csi.ucloud.cn   # provisioner must be udisk.csi.ucloud.cn
parameters:
  type: "ssd"
  fsType: "ext4"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true         # must declare that this storage class supports the expansion feature
```

2.2 Create PVC through this storage class, and mount to Pod

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: udisk-volume-expand
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-udisk-ssd
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: udisk-expand-test
  labels:
    app: udisk
spec:
  containers:
    - name: http
      image: uhub.surfercloud.com/ucloud/nginx:1.17.10-alpine
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: udisk
          mountPath: /data
  volumes:
    - name: udisk
      persistentVolumeClaim:
        claimName: udisk-volume-expand
```

After the Pod starts, let’s check the PV, PVC, and the filesystem size inside the container separately. You can see that they are all currently 10Gi.

```shell
# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS    REASON   AGE
pvc-25b83584-35de-43e4-ad23-c1fc638a09e2   10Gi       RWO            Delete           Bound    default/udisk-volume-expand   csi-udisk-ssd            2m26s
# kubectl get pvc
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
udisk-volume-expand   Bound    pvc-25b83584-35de-43e4-ad23-c1fc638a09e2   10Gi       RWO            csi-udisk-ssd   2m30s
# kubectl exec -it udisk-expand-test -- df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/vdc        9.8G   37M  9.7G   1% /data
...
```

2.3 Online Expansion of PVC

Execute kubectl edit pvc udisk-volume-expand and change spec.resources.requests.storage to 20Gi. After saving and exiting, within about a minute the capacity of the PV and PVC, as well as the filesystem size inside the container, all become 20Gi.

```shell
# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS    REASON   AGE
pvc-25b83584-35de-43e4-ad23-c1fc638a09e2   20Gi       RWO            Delete           Bound    default/udisk-volume-expand   csi-udisk-ssd            2m26s
# kubectl get pvc
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
udisk-volume-expand   Bound    pvc-25b83584-35de-43e4-ad23-c1fc638a09e2   20Gi       RWO            csi-udisk-ssd   2m30s
# kubectl exec -it udisk-expand-test -- df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/vdc         20G   37M 19.7G   1% /data
...
```
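If you prefer a non-interactive command over kubectl edit, the same change can be made with kubectl patch. This is a sketch using the PVC name from this example; the kubectl invocation is shown in a comment because it needs a live cluster:

```shell
#!/bin/sh
# Strategic-merge patch that raises the requested size to 20Gi.
PATCH='{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Against a live cluster, the expansion would then be triggered with:
#   kubectl patch pvc udisk-volume-expand --type merge -p "$PATCH"
echo "$PATCH"
```

This is convenient in scripts and CI pipelines, where an interactive editor session is not available.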

At the same time, log in to the UDisk console, and you will find that the displayed capacity of UDisk has also increased to 20Gi. In this way, we have completed the online expansion of the data volume without restarting the Pod and without stopping the service.

In the above example, we completed the online expansion of the data volume. In high-IO scenarios, however, there is a small probability of filesystem abnormalities when the volume is expanded without restarting the Pod. The most stable expansion scheme is to stop the application service first, unmount the volume, and then expand it. Next, we demonstrate this offline procedure.

2.4 Offline Expansion of PVC

After step 2.3 is completed, we have a Pod with a 20Gi data volume. Now we expand the data volume offline.

  1. Based on the example yaml above, remove the PVC-related content and save the Pod definition alone in a separate yaml file named udisk-expand-test.yml. Then delete the Pod, keeping the PVC and PV.
```shell
# kubectl delete po udisk-expand-test
pod "udisk-expand-test" deleted
```

At this point, the PV and PVC are still bound to each other, and the corresponding UDisk has been detached from the cloud host and is in the Available state.

  2. Edit the PVC, change spec.resources.requests.storage to 30Gi, then save and exit.

Wait about a minute and execute kubectl get pv. When the PV capacity has increased to 30Gi, recreate the Pod. Note that kubectl get pvc still reports a capacity of 20Gi at this point: the filesystem has not yet been expanded, and the PVC is in the FileSystemResizePending state.

```shell
# kubectl edit pvc udisk-volume-expand
persistentvolumeclaim/udisk-volume-expand edited
# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS    REASON   AGE
pvc-25b83584-35de-43e4-ad23-c1fc638a09e2   30Gi       RWO            Delete           Bound    default/udisk-volume-expand   csi-udisk-ssd            20m
# kubectl create -f udisk-expand-test.yml
```
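The wait above can be scripted instead of eyeballed. The jsonpath query is standard kubectl; the polling loop is only a sketch and is shown in a comment because it needs a live cluster, while the small gi_of helper it relies on can be exercised locally:

```shell
#!/bin/sh
# Extract the numeric Gi count from a size string such as "30Gi".
gi_of() { echo "${1%Gi}"; }

# Against a live cluster you would poll until the PV reports the new capacity:
#   PV=pvc-25b83584-35de-43e4-ad23-c1fc638a09e2
#   until [ "$(kubectl get pv "$PV" -o jsonpath='{.spec.capacity.storage}')" = "30Gi" ]; do
#     sleep 5
#   done
#   kubectl create -f udisk-expand-test.yml

# Local check of the helper:
gi_of 30Gi   # prints: 30
```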

Once the Pod is recreated, the capacities of the PV and PVC are both 30Gi, and df inside the container also reports a 30Gi filesystem.