Yesterday, I had a chance to rebuild our NAS system, which consisted of 6 disks (the smallest being a 1.5 TB SATA III 7.2k rpm Seagate Barracuda, the largest a 2 TB SATA III 5.9k rpm Seagate Barracuda Green). I took this opportunity to document the necessary steps in rebuilding this system, since the last time I created a soft RAID5 system I did not fully document the process.
Each disk was partitioned with a single partition spanning from the first block to the last, using partition id fd (Linux raid autodetect). I used the old msdos disk label, since none of the disks had more than 2 TB of space anyway.
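For reference, that per-disk partitioning can be scripted. The sketch below only prints the parted commands it would run (the device names are assumptions for this box; drop the leading echo to actually execute them):

```shell
# Dry-run sketch: print one parted command per assumed member disk.
# "set 1 raid on" marks partition 1 with the Linux raid autodetect id (fd)
# on an msdos disk label. Remove the "echo" to really partition the disks.
for d in a b c d e f; do
  echo parted -s "/dev/sd$d" mklabel msdos mkpart primary 0% 100% set 1 raid on
done
```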
The following command was used to create the RAID5 array:
mdadm --create /dev/md0 --chunk=512 --level=5 --raid-devices=6 /dev/sd[abcdef]1
I used a chunk (or stripe unit) size of 512 KiB because this NAS would likely contain mostly multimedia files (photos, videos, etc.) that were bigger than 1 MB anyway.
Linux then rebuilt the RAID5 array (it took a whole day and night). When it finished, I rebooted first, since I wanted to make sure that in the event of a power failure the array would be discovered automatically when the system started up again; /proc/mdstat then showed:
Personalities : [raid6] [raid5] [raid4]
md127 : active (auto-read-only) raid5 sdc1[2] sdd1[3] sdf1[6] sda1[0] sde1[4] sdb1[1]
7325675520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
Note: auto-read-only was shown because no data had been written to the array yet.
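To help the array get assembled reliably at boot, its definition can also be recorded in mdadm.conf. A sketch, with the path and the resulting line's shape as assumptions (the file lives at /etc/mdadm.conf on Red Hat-style systems, /etc/mdadm/mdadm.conf on Debian-style ones; the UUID shown is made up):

```shell
# Append the scanned array definition to mdadm.conf (run as root):
mdadm --detail --scan >> /etc/mdadm.conf
# which yields a line of roughly this shape (hostname and UUID are examples):
# ARRAY /dev/md127 metadata=1.2 name=nas-1:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
```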
Output of hdparm -t was:
/dev/md127:
Timing buffered disk reads: 1538 MB in 3.00 seconds = 512.63 MB/sec
/dev/md127:
Timing buffered disk reads: 1552 MB in 3.00 seconds = 516.71 MB/sec
/dev/md127:
Timing buffered disk reads: 1542 MB in 3.00 seconds = 513.72 MB/sec
Now came the business of creating the filesystem, in this case XFS (who wouldn't?).
[root@nas-1 ~]# mkfs.xfs -d su=512k,sw=5 -l lazy-count=1,version=2 /dev/md127
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md127 isize=256 agcount=32, agsize=57231872 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=1831418880, imaxpct=5
= sunit=128 swidth=640 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=521728, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@nas-1 ~]# xfs_admin -L ARRAY0 /dev/md127
OK, so I opted to use an internal log in case I had a need to move this array to another system. I used su=512k,sw=5 because the stripe unit of this array was 512k (the chunk size of /dev/md127) and each full stripe contained 5 data segments + 1 parity segment (that was my thought; anybody is free to correct me if I was wrong).
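As a sanity check, the sunit/swidth values in the mkfs.xfs output above (which are in 4096-byte filesystem blocks) can be derived from the mdadm geometry:

```shell
# su (stripe unit) = md chunk size; sw (stripe width) = data disks = 6 - 1.
chunk_kib=512        # from mdadm --chunk=512
data_disks=5         # 6 raid devices minus 1 parity segment per stripe
bsize=4096           # XFS block size from the mkfs.xfs output
sunit=$(( chunk_kib * 1024 / bsize ))
swidth=$(( sunit * data_disks ))
echo "sunit=$sunit swidth=$swidth"   # sunit=128 swidth=640, matching the output above
```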
The /etc/fstab for the intended filesystem was:
LABEL=ARRAY0 /mnt/array0 xfs logbufs=8,logbsize=256k,noatime,nodiratime,attr2,nobarrier,largeio,grpquota 1 2
That is it.
#2 by Kevin Schlichter on 17/08/2012 - 10:54
http://ubuntuforums.org/showthread.php?t=1764861
You might check out that link to get your array to show up as md0 instead of md127.