We delete old logs at the same pace as we write new ones.
So disk usage should stay roughly constant over the short term, but it keeps increasing (as reported by the df or zfs list commands).
However, the disk usage reported by the du command remains constant, so I don't think this is an asynchronous-delete or recently-freed issue. Is there something I could do to make the deletions take effect?
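A quick way to compare the two views of usage (a sketch; the /var/log mountpoint is an assumption, substitute the mountpoint of your own dataset):

```shell
# Space as the filesystem/pool reports it (allocated blocks):
df -h /var/log

# Space as seen by walking the visible directory tree:
du -sh /var/log

# If df keeps growing while du stays flat, the space is being held by
# something other than visible files (open file handles, snapshots, ...).
```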
I have already stopped the writing process for several hours, so it is not a load issue either. The space used by the deleted files is only released when I unmount the filesystem. Also, is there a command to manually release the space without having to unmount the filesystem?
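Space that only comes back at unmount is often held by processes that still have deleted files open. On Linux you can check for that through /proc (a sketch; lsof +L1 shows the same information if lsof is installed):

```shell
# List file descriptors that still point at deleted files.
# The PID is the path component right after /proc; restarting
# that process releases the space.
find /proc/[0-9]*/fd -lname '*(deleted)' 2>/dev/null
```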
Since your pool history doesn't show any file system creations, I'd imagine so.
I've discovered that some objects remain in the root dataset after deleting all the files in it.
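Those leftover objects can be inspected directly with zdb (a sketch; the pool name tank is an assumption, and zdb's output format varies between ZFS versions):

```shell
# Dump the object list of the root dataset; each extra -d adds detail.
zdb -dd tank

# Compare with the space accounting ZFS itself reports:
zfs list -o space tank
```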
Here's the most basic example, from Linux (the root filesystem): it is a 10 GB filesystem with 5.5 GB used and 4.6 GB available, i.e. 55% used (which is easy to verify with round numbers like these).
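In df terms that example would look roughly like this (the device name and exact layout are hypothetical; the numbers are the ones described above):

```shell
df -h /
# Filesystem      Size  Used Avail Use% Mounted on
# /dev/sda1        10G  5.5G  4.6G  55% /
```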
My current Linux system has only one filesystem, which is mounted as the root directory (/).
I've been able to reproduce this problem and figured out why I had difficulties doing so previously.
In my test, I'm rsyncing data from my /etc and /usr directories onto my test dataset.
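The copy step can be sketched like this (the mountpoint /testpool/test is an assumption; adjust it to wherever your test dataset is mounted):

```shell
# Populate the test dataset with a realistic mix of small files
# by archiving /etc and /usr into it:
rsync -a /etc /usr /testpool/test/
```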
You can also see that the format of this output is a little different from the format on my CentOS Linux system.