2013-03-24, 17:34
Hi there,
Every day using Linux you learn something new, and it seems today was no different.
DISCLAIMER:
Please note that, in the case explained below, my drives are formatted with ext2 or ext4, and the solution is specific to the ext family. The fix described below will work with any ext2, ext3 or ext4 file system. For others, like ReiserFS, Reiser4, XFS, JFS or whatever you may use, the details will differ (especially the tune2fs command). Windows file systems don't have this behaviour.
END OF DISCLAIMER
I've had a problem with FlexRAID reporting the wrong amount of free space, and from there learned that even df was giving out apparently incorrect information.
Code:
# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/sdd1 1877791 1771357 11048 100% /flexraid_drives/HD_02
/dev/sdb1 1877793 1770639 11768 100% /flexraid_drives/HD_03
/dev/sdc1 1877792 1778356 4050 100% /flexraid_drives/HD_01
FlexRAIDFS 5633375 5320351 26865 100% /media/FlexRAID_POOL
At first sight, this looks absolutely normal; however, if you check closely, for example HD_02:
1877791 - 1771357 = 106434, while df shows only 11048 MiB available. You can check your own output too and compare the different figures.
There's an explanation for this: in Linux, when you create a new ext filesystem, 5% of the space is reserved for root and system duties (syslogs and so on), and ordinary users can't write data to that 5%. On a 2 TB disk, that's about 100 GB; after formatting and so on, probably about 90 GB. That's a whole lot of space, and even worse, for a pure data disk it's not necessary at all!
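You can sanity-check this with the figures df itself printed for HD_02 above; the space that is neither "Used" nor "Available" is exactly the reserved portion, and it comes out close to 5% of the filesystem:

```shell
# Figures for HD_02, taken from the df -m output above (1 MiB blocks)
total=1877791
used=1771357
available=11048

# Space df reports as neither used nor available = the reserved blocks
reserved=$(( total - used - available ))
echo "hidden by reservation: ${reserved} MiB"   # 95386 MiB, roughly 93 GiB

# Compare with the default 5% reservation
five_pct=$(( total * 5 / 100 ))
echo "5% of the filesystem:  ${five_pct} MiB"   # 93889 MiB
```

The two numbers don't match to the MiB because df rounds to 1M blocks and the filesystem keeps a little metadata of its own, but the reserved 5% clearly accounts for almost all of the "missing" space.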
You can, of course, change this behaviour and reserve less space, even 0%. I've used 0.05%, which gives nearly 1 GiB of reserved space, which should be enough. To change it, you can use the following command (in this case, I set the reserved space to 0.05% on /dev/sdb1). I would advise unmounting the partition you are going to change first, to be on the safe side.
Code:
sudo tune2fs -m 0.05 /dev/sdb1
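If you want to verify the change, `sudo tune2fs -l /dev/sdb1` prints the current "Block count" and "Reserved block count" (it needs root and a real ext device, so I can't run it here). The figures below are illustrative for a roughly 2 TB disk with 4 KiB blocks, not copied from my drives; the awk snippet just shows how the two counts relate to the percentage you set:

```shell
# Illustrative sample of the two relevant lines from `tune2fs -l <device>`;
# on a real system you would pipe the actual command output instead.
sample_output='Block count:              480766208
Reserved block count:     240383'

# Compute the reserved percentage from the two counts
pct=$(echo "$sample_output" | awk -F': *' '
  /^Block count/          { total = $2 }
  /^Reserved block count/ { reserved = $2 }
  END { printf "%.2f", reserved / total * 100 }')
echo "reserved: ${pct}%"   # reserved: 0.05%
```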
After changing the reserved space, the output is quite different, with about 280 GB of extra free space across my three 2 TB hard drives, which is quite a lot, don't you think?
Code:
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/sdc1 1877792 1778356 98482 95% /flexraid_drives/HD_01
/dev/sdd1 1877791 1771357 105481 95% /flexraid_drives/HD_02
/dev/sdb1 1877793 1772131 104709 95% /flexraid_drives/HD_03
FlexRAIDFS 5633375 5321843 308671 95% /media/FlexRAID_POOL
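The gain can be checked directly from the two df tables, summing the "Available" column for the three drives before and after the change:

```shell
# Available space (1 MiB blocks) before and after tune2fs -m 0.05,
# taken from the two df outputs in this post
before=$(( 4050 + 11048 + 11768 ))
after=$(( 98482 + 105481 + 104709 ))
gain=$(( after - before ))
echo "freed across the three drives: ${gain} MiB"   # 281806 MiB
```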
You can read a bit more about this in the links below, then decide whether you want to take advantage of it or not.
http://askubuntu.com/questions/79981/df-...free-space
http://unix.stackexchange.com/questions/...system-why
http://oss.sgi.com/archives/xfs/2001-07/msg00136.html