Incremental size way bigger than expected after fstrim #139
-
Version used:

Describe the bug:
-rw-r--r-- 1 root root 13G Oct 3 01:12 sda.full.data

Expected behavior:

Hypervisor information:

Logfiles:

Workaround:
-
The backup operates on block level, and I think fstrim results in changed blocks. So qemu marks these blocks as dirty and they end up in the backup (they are part of the qcow bitmap, i.e. marked "dirty"). I don't know if there is a way to tell qemu to "discard" changes done by fstrim, and I don't think I can change anything in virtnbdbackup to behave differently. Skipping blocks marked as dirty is not an option: it results in unusable disks after restore. Maybe the libvirt/qemu projects have some documentation on that.
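For anyone who wants to verify that it really is the trim inflating the bitmap, here is a minimal sketch of how one could total up what a qemu dirty bitmap currently marks as dirty before starting the incremental. It assumes libnbd's Python bindings and a disk that is already exported over NBD with the bitmap's meta context enabled (for example via `qemu-nbd -B <name>` or a libvirt pull-mode backup job); the bitmap name and export URI below are placeholders, not virtnbdbackup's actual values.

```python
#!/usr/bin/env python3
# Sketch: sum up how much data a qemu dirty bitmap currently marks as
# "dirty", i.e. roughly what the next incremental backup will copy.
# Assumes libnbd's Python bindings and an NBD export that exposes the
# bitmap's meta context; names and paths below are placeholders.

import nbd

BITMAP = "qemu:dirty-bitmap:backup-bitmap"            # placeholder bitmap name
URI = "nbd+unix:///sda?socket=/var/tmp/backup.sock"   # placeholder export URI

state = {"next": 0, "dirty": 0}

def collect(metacontext, offset, entries, err):
    # entries is a flat [length, flags, length, flags, ...] list; for the
    # qemu:dirty-bitmap:* context, flag bit 0 set means the extent is dirty.
    if metacontext != BITMAP:
        return 0
    pos = offset
    for length, flags in zip(entries[0::2], entries[1::2]):
        if flags & 1:
            state["dirty"] += length
        pos += length
    state["next"] = max(state["next"], pos)
    return 0

h = nbd.NBD()
h.add_meta_context(BITMAP)
h.connect_uri(URI)

size = h.get_size()
chunk = 1 << 30  # request status for at most 1 GiB per call
while state["next"] < size:
    start = state["next"]
    h.block_status(min(chunk, size - start), start, collect)
    if state["next"] <= start:          # server returned nothing new
        state["next"] = start + chunk   # skip ahead to avoid looping forever

print("bitmap marks %.1f GiB as dirty" % (state["dirty"] / 2**30))
h.shutdown()
```

Running this right before and right after an fstrim in the guest should show the trimmed regions appearing as dirty, which is exactly what ends up in the next incremental.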
-
Fair enough. I thought that might be the case here; it was just a slight hope that maybe you had encountered this and found a fix for such cases.
-
I don't have a solution (other than timing fstrim to coincide with full backups). Or check the libvirt/qemu docs for options that might help.
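A minimal sketch of that timing idea: run fstrim in the guest immediately before the periodic full backup, so the blocks dirtied by the trim land in the full image instead of inflating the next incremental. It assumes the qemu-guest-agent in the guest (for `virsh domfstrim`) and virtnbdbackup's `-d`/`-l`/`-o` options; the domain name and backup path are placeholders.

```python
#!/usr/bin/env python3
# Sketch: trim guest filesystems right before taking a full backup, so the
# dirty blocks produced by the trim are absorbed by the full rather than
# by the next incremental. Domain name and target path are placeholders.

import subprocess
import sys

DOMAIN = "vm1"            # placeholder domain name
TARGET = "/backup/vm1"    # placeholder backup directory

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

try:
    # Trim first, then take the full backup immediately afterwards.
    run(["virsh", "domfstrim", DOMAIN])
    run(["virtnbdbackup", "-d", DOMAIN, "-l", "full", "-o", TARGET])
except subprocess.CalledProcessError as exc:
    sys.exit(f"backup run failed: {exc}")
```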
-
Other solutions based on dirty bitmaps have the same "issue": https://forum.proxmox.com/threads/huge-dirty-bitmap-after-sunday.110233/ So I'd say it works as designed. Maybe running fstrim on the host makes more sense.