[kwlug-disc] btrfs(raid5) or ext4/mdadm(raid5) ?
B. S.
bs27975 at yahoo.ca
Thu Jun 4 00:00:52 EDT 2015
Others can chime in with more expertise, but ...
Keep your thinking separate (for your sanity).
Data availability, data reliability, and disk concatenation are all separate and very different beasties.
btrfs can sit on top of mdadm / raid5 in exactly the same way ext# sits on top of mdadm / raid5.
(Plus it can do more on its own, but that's where the weirdness and complexity start.)
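If you go the mdadm-underneath route, the commands are nearly identical either way. A rough sketch only, with /dev/sdb1 etc. as placeholder device names:

    # build the array once; this part is the same for either filesystem
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # then put whichever filesystem you like on top of it
    mkfs.ext4 /dev/md0
    # ... or ...
    mkfs.btrfs /dev/md0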
Think of btrfs as merely a replacement for ext#, bringing a lot of wonderful goodies with it. Like checksumming.
Which is to say ... it's a good, easy start, and if that's all you ever do with btrfs then you're ahead of the game for little effort.
When ready / interested, expand your use of btrfs beyond that for more wonderful goodness.
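To get a feel for the checksumming goodness: a periodic scrub re-reads everything and verifies it against the stored checksums. A minimal sketch, assuming your btrfs filesystem is mounted at /mnt/data (a placeholder):

    btrfs scrub start /mnt/data      # kick off a background scrub
    btrfs scrub status /mnt/data     # progress, and any checksum errors found
    btrfs device stats /mnt/data     # per-device read/write/corruption counters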
[Some people argue that a good nightly rsync / mondoarchive backup to a different physical system, plus a btrfs partition, negates the need for raid5, at home, anyways.]
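The nightly rsync part can be as simple as a one-liner in cron; host and path names here are made up, adjust to your own layout:

    # push /data to another physical machine, preserving permissions, ACLs, xattrs, hardlinks
    rsync -aAXH --delete /data/ backupbox:/backups/data/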
{RAID5 = data availability, which is not the same thing as data reliability. (At home, lose a drive and you're going to be all over it, beating upon it until it's fixed, so I question whether the complexity of RAID brings anything to that party.)}
(RAID is also often used for disk concatenation, but there are a number of ways to skin that cat without RAID or mdadm.)
Lori can chime in as to whether one can (superficially?) think of zfs in the same way, but IIRC, zfs straddles this boundary.
Now, what's best beyond this line ... mdadm, LVM (which can sit on top of mdadm?), or something else ... now the confusion starts.
btrfs brings additional goodness to such line-crossing SuchBeasties (tm) parties, but you can grow into that.
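btrfs itself is one of those ways to span disks with no mdadm or LVM underneath: a multi-device filesystem. A sketch only, device names are placeholders; -d single spreads data across the disks unreplicated, -m raid1 keeps two copies of the metadata:

    mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc
    btrfs filesystem show            # confirm both devices are in the filesystem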
> Is BTRFS with raid5 options better than EXT4/MDADM for handling bad
> sectors?
Aside from the wrinkle below: at least with btrfs, when the disk runs out of self-remapping replacement sectors, the next bad block will throw an error. You might fail to write that block, but all your other current data will be safe. (Whether you notice or not, ext# or btrfs, depends upon where it hits and whether you're monitoring your logging. On a seldom-used data file you may never notice; on a write failure during a kernel update, you'll notice, with either fs.)
The wrinkle being, I forget ... on a bad sector, will you always be able to read it? Is it only upon write (any fs?) that the failure is detected? Which is to say, if a sector goes bad under a .txt block you never use or change, does it matter, and will you ever notice?
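Either way, you don't have to wait for a write to trip over a bad sector; you can go looking. For the mdadm/ext4 setup you already have, a periodic array check plus the drive's own SMART counters will surface them. A sketch only; md0 / sdb are placeholders:

    echo check > /sys/block/md0/md/sync_action   # have md read and verify the whole array
    cat /proc/mdstat                             # watch progress
    smartctl -a /dev/sdb | grep -i -e reallocated -e pending   # the drive's own bad-sector view

On the btrfs side, the scrub mentioned above does the equivalent full read-and-verify, checksums included.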
Last time I was in your situation, disks getting questionable ... I bought a replacement disk big enough to replace them all, plus some room for growth. I moved the data over, 'converting' from ext4 to btrfs in the process, and haven't looked back since. Once done, I redeployed the questionable disk, with btrfs, and also have not looked back since. For my own sanity, too, I no longer have filesystems that cross disk boundaries; I just ln -s things like /data upon growth. (And mount common cifs shares across the network to /data, etc. Thus one master copy for the network, rsync'ed / mondoarchived nightly.)
Which is to say, I suggest avoiding the complexity of in-place conversions. K.I.S.S.
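From memory, the move amounted to something like this; device, mount, and path names are placeholders. (The in-place route would have been btrfs-convert, which is exactly the complexity I'm suggesting you skip.)

    mkfs.btrfs /dev/sde1                  # the new, larger disk
    mount /dev/sde1 /mnt/new
    rsync -aAXH /olddata/ /mnt/new/       # copy everything off the old ext4 filesystem
    # verify, add the new filesystem to /etc/fstab, retire or re-purpose the old disk
    ln -s /mnt/new /data                  # or just mount the new filesystem at /data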
I'll also suggest always (now, with btrfs) using a system (ext#) partition and btrfs'ing the rest. I believe the change I mentioned above bit me later: when a disk became problematic, the system partition being btrfs meant no remedial tools were available within the kernel / system space I had left. (Being ext# means all the current intelligence for problematic disk handling stays effective, such as being able to repair the system partition so it can then mount the btrfs data and call upon the btrfs tools within it.)
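In /etc/fstab terms the split looks something like this; devices and options are placeholders, adjust to taste:

    /dev/sda1   /       ext4    defaults           0  1
    /dev/sda2   /data   btrfs   defaults,noatime   0  0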
----- Original Message -----
> From: William Park <opengeometry at yahoo.ca>
> To: kwlug-disc at kwlug.org
> Cc:
> Sent: Wednesday, June 3, 2015 8:53 PM
> Subject: [kwlug-disc] btrfs(raid5) or ext4/mdadm(raid5) ?
>
> Hi all,
>
> Since BTRFS has become meeting topics lately...
>
> I have EXT4 filesystem on top of MDADM raid5. I'm very comfortable with
> this setup, because I know all the commands. Lately, though, it's
> showing bad sectors, but the read errors are "corrected", so far.
>
> Replacing the suspect disks is the correct thing to do. But, I would
> like to continue to use the disks until they really die, because it's
> only personal stuffs, and it's good excuse to try out something else.
>
> BTRFS has raid features built into the filesystem. So, my question is,
> Is BTRFS with raid5 options better than EXT4/MDADM for handling bad
> sectors?