Filesystem tests
See also:
Note: you should really read the articles above. They contain some important details that I do not repeat here, and you can also compare our results with theirs. So, here is my own benchmark. Take the files! The machine specs:
CPU: Intel i7 920 (@2.67GHz, default clock)
HDD (sda): Samsung HD103SJ (main system disk)
HDD (sdb): Samsung HD103SJ (tested disk, T=30-32°C during tests)
Controller: Intel SATA 82801JI (ICH10)
RAM: 12GB GEIL 1066MHz
The system specs:
System: Gentoo
Arch: x86_64
CFLAGS: -O2 -march=native
Kernel: 2.6.31-zen7
GCC: 4.3.4
All the tests were the same as Justin Piszcz's, with some more filesystems added. Filesystems tested: ext2, ext3, ext4, ext4dev, reiserfs, reiser4, jfs, xfs, btrfs and ntfs (via ntfs-3g). Note: ALL filesystems were created with their native, default values. The only options added to the command line were -y or similar, to avoid any interactive delays during filesystem creation. Therefore the benchmark does not measure MY answering time :)
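For illustration, such a creation run could be scripted as below. This is only a sketch: the device path, the `DRY_RUN` guard, and the exact prompt-suppressing flags per filesystem are my assumptions, not the actual command lines used for the benchmark.

```shell
#!/bin/sh
# Hypothetical sketch: time mkfs for each filesystem with default options,
# adding only a flag that suppresses interactive prompts.
# DEV and the per-fs flags are assumptions, not the original commands.
DEV=${DEV:-/dev/sdb1}
DRY_RUN=${DRY_RUN:-1}   # set to 0 to actually format $DEV (destructive!)

mkfs_cmd() {
    case "$1" in
        xfs)      echo "mkfs.xfs -f $DEV" ;;      # -f: overwrite old fs
        btrfs)    echo "mkfs.btrfs -f $DEV" ;;
        reiserfs) echo "mkfs.reiserfs -f $DEV" ;; # -f: skip confirmation
        jfs)      echo "mkfs.jfs -q $DEV" ;;      # -q: no confirmation
        *)        echo "mkfs.$1 -q $DEV" ;;       # ext2/3/4: -q quiet
    esac
}

for fs in ext2 ext3 ext4 reiserfs jfs xfs btrfs; do
    cmd=$(mkfs_cmd "$fs")
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $cmd"
    else
        # 'time' here would give the creation times shown in the results
        time $cmd >/dev/null
    fi
done
```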
Tests carried out (the same set J. Piszcz used)
- Create 10,000 files with touch in a directory.
- Run 'find' on that directory.
- Remove the directory.
- Create 10,000 directories with mkdir in a directory.
- Run 'find' on that directory.
- Remove the directory containing the 10,000 directories.
- Copy kernel tarball from other disk to test disk.
- Copy kernel tarball from test disk to other disk.
- Untar kernel tarball on the same disk.
- Tar kernel tarball on the same disk.
- Remove kernel source tree.
- Copy kernel tarball 10 times.
- Create 1GB file from /dev/zero.
- Copy the 1GB file on the same disk.
- Split a 10MB file into 1000/1024/2048/4096/8192 byte pieces.
- Copy kernel source tree on the same disk.
- Cat a 1GB file to /dev/null.
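The sequence above can be sketched as a shell script. This is a reconstruction, not the original script: the mount points, the tarball name, and the use of `time` around each step are placeholders I chose for illustration.

```shell
#!/bin/sh
# Sketch of the test sequence, written as functions so each step can be
# timed with 'time <step>'. Paths and names are assumptions.

make_files() {   # create $2 files with touch inside directory $1
    mkdir -p "$1"
    i=1
    while [ "$i" -le "$2" ]; do touch "$1/$i"; i=$((i+1)); done
}

make_dirs() {    # create $2 directories with mkdir inside directory $1
    mkdir -p "$1"
    i=1
    while [ "$i" -le "$2" ]; do mkdir "$1/$i"; i=$((i+1)); done
}

split_file() {   # split file $1 into pieces of $2 bytes each
    split -b "$2" "$1" "$1.piece."
}

run_all() {      # full sequence on the tested disk, e.g. run_all /mnt/test
    cd "$1"
    time make_files touch-dir 10000
    time find touch-dir >/dev/null
    time rm -rf touch-dir
    time make_dirs mkdir-dir 10000
    time find mkdir-dir >/dev/null
    time rm -rf mkdir-dir
    time cp /mnt/other/linux-2.6.31.tar.bz2 .     # other disk -> test disk
    time cp linux-2.6.31.tar.bz2 /mnt/other/back.tar.bz2
    time tar xjf linux-2.6.31.tar.bz2             # untar on the same disk
    time tar cjf repacked.tar.bz2 linux-2.6.31    # tar on the same disk
    time rm -rf linux-2.6.31                      # remove source tree
    for i in 1 2 3 4 5 6 7 8 9 10; do cp linux-2.6.31.tar.bz2 copy-$i; done
    time dd if=/dev/zero of=big bs=1M count=1024  # 1GB file from /dev/zero
    time cp big big.copy
    dd if=/dev/zero of=ten bs=1M count=10         # 10MB file to split
    for sz in 1000 1024 2048 4096 8192; do
        time split_file ten "$sz"; rm -f ten.piece.*
    done
    tar xjf linux-2.6.31.tar.bz2
    time cp -a linux-2.6.31 tree-copy             # copy source tree
    time cat big >/dev/null
}
```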
My own additions (by Piotao):
- Filesystem creation time.
- Overall average comparison.
Results
All times are given in seconds. Some graphs are enlarged to allow a comparison between similar timings.
Create 10,000 files with touch in a directory
Run find on that directory
Remove the directory
Please note the scale of the graphs.
Create 10,000 directories with mkdir in a directory
Run find on that directory
Remove the directory containing the 10,000 directories
Copy kernel tarball from other disk to test disk
Copy kernel tarball from test disk to other disk
Untar kernel tarball on the same disk
Tar kernel tarball on the same disk
Remove kernel source tree
Copy kernel tarball 10 times
Create 1GB file from /dev/zero
Copy the 1GB file on the same disk
Split a 10MB file into 1000/1024/2048/4096/8192 byte pieces
Split by 1000
Split by 1024
Split by 2048
Split by 4096
Split by 8192
Copy kernel source tree on the same disk
Cat a 1GB file to /dev/null
NEW: added by Piotao (me)
Filesystem creation time
Filesystem overall timings (time of all tests)
Filesystem overall timings (time of all tests, w/o fs creation time)
Closing words
These tests are not directly related to J. Piszcz's tests; they were run independently and are meant only for comparison. I also planned to check btrfs and compare it with the others. Credit certainly goes to J. Piszcz for his work; the whole idea is great.
The results presented above might seem a bit biased, but I did not set out to prove that any particular filesystem is better than another. It was JUST A TEST, made as plainly as I could. I had a new, unoccupied machine with a not-so-important setup, so I thought: why not measure something? Also, please note that the real efficiency of NTFS is probably much better, or at least different, than presented here; the slowdown may be caused by FUSE and ntfs-3g. I cannot prepare an equivalent test procedure under Windows, because there is no command like find there, nor a cat to /dev/null, so any NTFS test would be somewhat incomparable.
If you would like to make comments or suggestions, feel free to contact me via my homepage.