PVE cannot remove a VM disk from its own ZFS pool

Check the ZFS pool status

root@pve:/dev/zvol/zdata# zpool status
   pool: zdata
  state: ONLINE
   scan: scrub repaired 0B in 2h35m with 0 errors on Sun Mar  8 02:59:48 2020
 config:
 	NAME                                           STATE     READ WRITE CKSUM
 	zdata                                          ONLINE       0     0     0
 	  mirror-0                                     ONLINE       0     0     0
 	    wwn-0xDC_WD10EZEX-08WN4A0_WD-WCC6Y3LRYL6X  ONLINE       0     0     0
 	    wwn-0xDC_WD10EZEX-08WN4A0_WD-WCC6Y2SYFNY2  ONLINE       0     0     0
 errors: No known data errors

List the ZFS volumes

root@pve:/dev/zvol/zdata# zfs list -t volume
 NAME                  USED  AVAIL  REFER  MOUNTPOINT
 zdata/vm-100-disk-0  33.0G   175G    56K  -
 zdata/vm-100-disk-1   683G   142G   683G  -
 zdata/vm-101-disk-0  27.0G   142G  27.0G  -
 zdata/vm-102-disk-0  10.3G   142G  10.3G  -
 zdata/vm-103-disk-0  4.12G   142G  4.12G  -

Try to delete one of the VM disks, but it fails

root@pve:/dev/zvol/zdata# zfs destroy zdata/vm-103-disk-0
 cannot destroy 'zdata/vm-103-disk-0': dataset is busy
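
The "dataset is busy" error means the zvol's block device is still held open somewhere. Before stopping any services it can help to look for the holder; on a host with multipath-tools installed, the device-mapper/multipath layer is a common culprit because it can create a map on top of the zvol's /dev/zd* node. A quick check, assuming multipath-tools and lsblk are available (commands only, output omitted):

root@pve:/dev/zvol/zdata# lsblk /dev/zvol/zdata/vm-103-disk-0
root@pve:/dev/zvol/zdata# multipath -ll
root@pve:/dev/zvol/zdata# dmsetup ls

If a dm-/mpath device shows up stacked on top of the zvol, multipathd is what is keeping the dataset busy.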

Stop the multipathd service, then try the delete again; this time it succeeds

root@pve:/dev/zvol/zdata# systemctl stop multipathd.service
 Warning: Stopping multipathd.service, but it can still be activated by:
   multipathd.socket
root@pve:/dev/zvol/zdata# zfs destroy zdata/vm-103-disk-0
root@pve:/dev/zvol/zdata# zfs list -t volume
 NAME                  USED  AVAIL  REFER  MOUNTPOINT
 zdata/vm-100-disk-0  33.0G   179G    56K  -
 zdata/vm-100-disk-1   683G   146G   683G  -
 zdata/vm-101-disk-0  27.0G   146G  27.0G  -
 zdata/vm-102-disk-0  10.3G   146G  10.3G  -
 root@pve:/dev/zvol/zdata# 
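
As the warning above notes, stopping multipathd.service is only temporary, since multipathd.socket can start it again. If multipath does not actually need to manage the ZFS zvols on this host, a more durable option (an assumed configuration sketch, adjust to your own multipath setup) is to blacklist the zvol device nodes in /etc/multipath.conf so multipathd stops claiming them, then restart the service:

 # /etc/multipath.conf — assumed snippet: ignore ZFS zvol block devices (zd*)
 blacklist {
     devnode "^zd[0-9]*"
 }

root@pve:/dev/zvol/zdata# systemctl restart multipathd.service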
