After an unclean restart, the RAID 5 array came back inactive and unreachable. D’oh!
$ cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : inactive sdd3[3](S) sde3[4](S) sdf3[6](S) sdg3[5](S)
6546850144 blocks super 1.2
As you can see, all the sd[defg]3 member partitions are marked as (S)pare and the array is inactive.
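Before stopping anything, it doesn’t hurt to double-check the RAID superblocks and make sure every member still remembers the array (the device names below are the ones from this box, adjust them to yours; --examine is read-only, so it’s safe to run):
# mdadm --examine /dev/sd[cdefg]3 | grep -iE 'uuid|events|state'
If the Array UUIDs match and the event counters are close to each other, a plain re-assemble should be enough.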
First, we had to stop the array:
# mdadm --stop /dev/md3
and then reassemble it:
# mdadm --assemble /dev/md3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 --verbose
The output should look something like this:
$ mdadm --assemble /dev/md3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 --verbose
mdadm: looking for devices for /dev/md3
mdadm: Fail create md3 when using /sys/module/md_mod/parameters/new_array
mdadm: /dev/sdc3 is identified as a member of /dev/md3, slot 0.
mdadm: /dev/sdd3 is identified as a member of /dev/md3, slot 2.
mdadm: /dev/sde3 is identified as a member of /dev/md3, slot 3.
mdadm: /dev/sdf3 is identified as a member of /dev/md3, slot -1.
mdadm: /dev/sdg3 is identified as a member of /dev/md3, slot 1.
mdadm: added /dev/sdg3 to /dev/md3 as 1
mdadm: added /dev/sdd3 to /dev/md3 as 2
mdadm: added /dev/sde3 to /dev/md3 as 3
mdadm: added /dev/sdf3 to /dev/md3 as -1
mdadm: added /dev/sdc3 to /dev/md3 as 0
mdadm: /dev/md3 has been started with 4 drives and 1 spare.
The cool part of it:
mdadm: /dev/md3 has been started with 4 drives and 1 spare.
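In this case a plain --assemble was enough. If mdadm had refused to start the array (typically because the event counters on the members had drifted apart), there is also a --force flag that tells it to accept a slightly out-of-date member; that can cost the last few writes on that member, so keep it as a last resort:
# mdadm --assemble --force /dev/md3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 --verbose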
Let’s check the result:
$ cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid5 sdc3[0] sdf3[6](S) sde3[4] sdd3[3] sdg3[5]
4910137344 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/13 pages [0KB], 65536KB chunk
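For a bit more detail than /proc/mdstat gives (array state, which disk is the spare, rebuild progress if one is running), --detail on the array is handy:
# mdadm --detail /dev/md3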
And it’s done, huh 🙂