On Mon, Sep 23, 2024, at 11:17, Doug Moen wrote:
> > This functionality is reportedly "coming soon", ...
>
> Wow, I didn't realize that RAIDZ Expansion is *still* coming soon.
> * In June 2021, Ars Technica announced that "Matthew Ahrens opened a pull request last week" and that "RAIDz expansion will be a thing very soon".
> * In November 2023, Phoronix announced that the PR was merged into master. <https://github.com/openzfs/zfs/pull/15022>
>
> More discussion in <https://github.com/openzfs/zfs/discussions/15232>
> * Sep 8 2024: "Yes, special thank you to everybody on the OpenZFS side who helped get this one over the finish line. We've got this in our TrueNAS 24.10 BETA1 release already for testing, anticipating updating to OpenZFS 2.3 as soon as it is tagged."
>
> So we can update the ETA to Very Soon.

TrueNAS currently has it functioning in their beta release. It works, but the results are not the same as what you get with mdadm or btrfs.

When you expand a raidz1 vdev from 4 to 5 devices, it moves data onto the new disk as appropriate -- but it doesn't recalculate the stripe for existing data. So all existing data is still stripe width 4, spread across 5 disks. It's kinda weird. They suggest "rewriting all data" as a workaround, as if that wouldn't cause a snapshot apocalypse.

You still can't change from raidz1 to raidz2, though.

> I read something concerning in the final discussion page I linked to.
> @yorickdowne said
> > Raidz1 with larger drives has a tendency to break during resilver.
> > With 14TB disks raidz-1 is risky when rebuilding. I recommend you do a 4-wide raidz2, which you can then expand later if desired.

I've got backups if things really fail. But the problem with a 4-wide raidz2, or any combination of mirroring (Bob's suggestion in another mail), is the reduction in overall capacity. I'm limited by SATA ports and power supply, but I still want to maximize storage while reducing the need to reach for backups. raidz1 seems like a decent trade-off.

--
Chris Irwin

email: chris@chrisirwin.ca
  web: https://chrisirwin.ca
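
P.S. For anyone curious what the expansion step actually looks like, here's a rough sketch (Python shelling out to zpool). The pool, vdev, and device names are placeholders, and it assumes the zpool attach <pool> <raidz-vdev> <new-device> form from the OpenZFS PR above -- nothing lifted from the TrueNAS docs, so treat it as a sketch rather than a recipe.

    #!/usr/bin/env python3
    # Rough sketch, placeholder names throughout: widen an existing raidz1
    # vdev by one disk using OpenZFS 2.3's RAIDZ expansion.
    import subprocess

    POOL = "tank"          # placeholder pool name
    VDEV = "raidz1-0"      # the existing raidz1 vdev being widened
    NEW_DISK = "/dev/sdf"  # placeholder for the newly added disk

    # OpenZFS reuses "zpool attach" for this: the new device is attached to
    # the raidz vdev itself, not to an individual member disk.
    subprocess.run(["zpool", "attach", POOL, VDEV, NEW_DISK], check=True)

    # Show the pool status so you can watch the expansion progress. Existing
    # data keeps its old (narrower) stripe width until it gets rewritten.
    status = subprocess.run(["zpool", "status", POOL],
                            capture_output=True, text=True, check=True)
    print(status.stdout)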