Why does mounting existing block storage to a pod fail with the wrong file system?
Applies to: Virtual Private Cloud and Classic infrastructure
When you run `kubectl describe pod <pod_name>`, you see the following error:

```
failed to mount the volume as "ext4", it already contains xfs. Mount error: mount failed: exit status 32
```
You have an existing block storage device that is set up with an XFS file system. To mount this device to your pod, you created a PV that specified `ext4` as the file system, or no file system at all, in the `spec/flexVolume/fsType` section. If no file system is defined, the PV defaults to `ext4`.
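A PV for existing block storage with the mismatched file system type looks similar to the following sketch. The name, capacity, and driver value are placeholders for illustration, not values from your cluster:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv                    # example name
spec:
  capacity:
    storage: 20Gi               # example capacity
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: "example/driver"    # placeholder driver value
    fsType: ext4                # mismatch: the device is formatted as XFS
```

If `fsType` is omitted here, the PV behaves as if `ext4` were set, which produces the same mount error.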
The PV was created successfully and was linked to your existing block storage instance. However, when you try to mount the PV to your cluster by using a matching PVC, the volume fails to mount. You can't mount your XFS block storage instance as an `ext4` file system in the pod.
To resolve the error, update the file system in the existing PV from `ext4` to `xfs`.
- List the existing PVs in your cluster and note the name of the PV that you used for your existing block storage instance.
  ```
  kubectl get pv
  ```
- Save the PV YAML on your local machine.
  ```
  kubectl get pv <pv_name> -o yaml > <filepath/xfs_pv.yaml>
  ```
- Open the YAML file and change the `fsType` from `ext4` to `xfs`.
- Replace the PV in your cluster.
  ```
  kubectl replace --force -f <filepath/xfs_pv.yaml>
  ```
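The edit-and-replace steps above can be sketched as a small script. The manifest here is a minimal local stand-in so that the `sed` edit itself is visible; the file path, PV name, and `kubectl` commands (shown as comments) are examples that you would adapt to your cluster:

```shell
#!/bin/sh
# In a real cluster, first save the PV manifest locally:
#   kubectl get pv <pv_name> -o yaml > /tmp/xfs_pv.yaml
# Here we create a minimal stand-in manifest to demonstrate the edit.
cat > /tmp/xfs_pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
spec:
  flexVolume:
    fsType: ext4
EOF

# Change the file system type from ext4 to xfs.
sed -i 's/fsType: ext4/fsType: xfs/' /tmp/xfs_pv.yaml
grep fsType /tmp/xfs_pv.yaml

# Then replace the PV in your cluster:
#   kubectl replace --force -f /tmp/xfs_pv.yaml
```

This assumes `fsType` appears only once in the saved manifest; if your PV YAML mentions `ext4` elsewhere, edit the file by hand instead.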
- Log in to the pod where you mounted the PV.
  ```
  kubectl exec -it <pod_name> -- sh
  ```
- Verify that the file system changed to XFS.
  ```
  df -Th
  ```
  Example output:
  ```
  Filesystem                                   Type  Size  Used Avail Use% Mounted on
  /dev/mapper/3600a098031234546d5d4c9876654e35 xfs    20G   33M   20G   1% /myvolumepath
  ```