• RAID60 question

    From Greg@21:1/5 to All on Sun Dec 1 13:40:02 2024
    Hi there,

    I'm setting up MD-RAID0 on top of HW-RAID6 devices (long story). I
    would like to confirm the following:

    1. The RAID0 chunk size should be the stripe width of the
    underlying RAID6 volumes.

    2. The RAID0 metadata should be at the end of the device (metadata ver.
    1.0).

    3. The stride and stripe-width of the ext4 fs should be set to the ones
    used when creating the RAID6 volumes.
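
    For concreteness, here is roughly what I have in mind (device names and
    the 8-data-disk / 256 KiB chunk geometry below are just placeholders for
    my actual HW-RAID6 layout; please correct me if I am misreading mdadm(8)
    or mke2fs(8)):

        # Two HW-RAID6 LUNs, each e.g. 8 data disks x 256 KiB HW chunk
        # => HW stripe width = 2048 KiB (placeholder numbers).

        # Points 1 and 2: RAID0 across the LUNs, chunk (in KiB) equal to the
        # HW stripe width, metadata 1.0 so the superblock sits at the end of
        # each member device:
        mdadm --create /dev/md0 --level=0 --raid-devices=2 \
              --chunk=2048 --metadata=1.0 /dev/sda /dev/sdb

        # Point 3: ext4 stride/stripe-width in 4 KiB blocks, taken from the
        # HW-RAID6 geometry: stride = 256 KiB / 4 KiB = 64,
        # stripe-width = 8 data disks x 64 = 512:
        mkfs.ext4 -E stride=64,stripe-width=512 /dev/md0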

    Thanks in advance for any help
    Greg

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Christensen@21:1/5 to Greg on Sun Dec 1 19:30:02 2024
    On 12/1/24 04:27, Greg wrote:
    Hi there,

    I'm setting up MD-RAID0 on top of HW-RAID6 devices (long story). I
    would like to confirm the following:

    1. The RAID0 chunk size should be the stripe width of the
    underlying RAID6 volumes.

    2. The RAID0 metadata should be at the end of the device (metadata ver.
    1.0).

    3. The stride and stripe-width of the ext4 fs should be set to the ones
    used when creating the RAID6 volumes.

    Thanks in advance for any help
    Greg


    I have a SOHO network and I implemented file sharing many years ago.
    Around 2019, I switched from md to ZFS. The learning curve has been
    non-trivial, but I am pleased with the results and expect ZFS will be a
    better solution going forward.


    Regarding data migration in general, my previous approach had been
    in-place using one server and minimal disks (e.g. "cheap"). An operator
    error on a prior migration resulted in loss of some archival backups.
    So, I threw money at the problem this time around -- buy another server,
    buy more disks, build the new server, migrate the data, make the new
    server primary, rebuild the old server, and make the old server
    secondary. And, backups before, during, and after. So, more time and
    money, less risk, no data loss, and rebalanced data / maximum
    performance afterwards.


    David

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Greg@21:1/5 to David Christensen on Sun Dec 1 20:00:01 2024
    On 12/1/24 19:19, David Christensen wrote:
    On 12/1/24 04:27, Greg wrote:
    Hi there,

    I'm setting up MD-RAID0 on top of HW-RAID6 devices (long story). I
    would like to confirm the following:

    1. The RAID0 chunk size should be the stripe width of the
    underlying RAID6 volumes.

    2. The RAID0 metadata should be at the end of the device (metadata
    ver. 1.0).

    3. The stride and stripe-width of the ext4 fs should be set to the ones
    used when creating the RAID6 volumes.

    Thanks in advance for any help
    Greg


    I have a SOHO network and I implemented file sharing many years ago.
    Around 2019, I switched from md to ZFS. The learning curve has been
    non-trivial, but I am pleased with the results and expect ZFS will be a
    better solution going forward.


    Regarding data migration in general, my previous approach had been
    in-place using one server and minimal disks (e.g. "cheap"). An operator
    error on a prior migration resulted in loss of some archival backups.
    So, I threw money at the problem this time around -- buy another server,
    buy more disks, build the new server, migrate the data, make the new
    server primary, rebuild the old server, and make the old server
    secondary. And, backups before, during, and after. So, more time and
    money, less risk, no data loss, and rebalanced data / maximum
    performance afterwards.

    I'm familiar with ZFS. Thanks for your suggestions but I have to stick
    to RAID60 (long story, like I said). I would be grateful for an answer
    to my original question.

    Regards
    Greg

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Paoli@21:1/5 to pld@sojka.co on Mon Dec 2 04:50:01 2024
    If you're not fully sure of any of those parameters, I'd suggest doing
    some testing. E.g. if your target storage is quite large, maybe use only
    2% of that, or possibly less, in total (approximate) size, but with the
    presumed (or to-be-tested) stripe/cluster/etc. sizes. Then test the
    performance; you may want to use, e.g., the sync option on mounted
    filesystems or the like, to mostly cancel out any host caching
    advantages and get (at least closer to) the actual drive (+RAID, etc.)
    performance, and see how it would behave on cache misses (especially
    for writes, which will be your lowest performance for RAID6/RAID60).

    You can also inspect the data on the underlying devices (at least as far
    down as you can go) with, e.g., od ... you can even put marker patterns
    in the data to more easily identify exactly what data is landing where
    on the back-end storage.

    As for the md metadata, I believe the default puts it at the start
    rather than the end. In any case, I'd probably be inclined to go with
    the default - likely less confusing for anyone (e.g. even future you)
    trying to figure out exactly how it's laid out, if/when that becomes a
    question. Likewise, you can test that, scaled down, and examine the
    resultant data and where it lands. You can use partitioning or losetup,
    etc. to limit the size of the target to less than the full physical
    capacity, e.g. for testing.
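
    A rough sketch of the kind of scaled-down test I mean (all file names,
    sizes, and device names below are made up for illustration; adjust to
    the actual hardware and double-check the flags before running anything):

        # Small backing files stand in for the two HW-RAID6 LUNs, so this
        # only exercises the md layout, not the controller itself:
        truncate -s 1G /var/tmp/r6a.img /var/tmp/r6b.img
        LOOP_A=$(losetup --find --show /var/tmp/r6a.img)
        LOOP_B=$(losetup --find --show /var/tmp/r6b.img)

        # Candidate RAID0 layout on the small devices:
        mdadm --create /dev/md100 --level=0 --raid-devices=2 \
              --chunk=2048 --metadata=1.0 "$LOOP_A" "$LOOP_B"

        # Confirm where the superblock actually landed:
        mdadm --examine "$LOOP_A"

        # Write an easily recognizable marker pattern across the array,
        # then look at where it lands on each member:
        seq -f 'MARKER-%09g' 1 100000 > /dev/md100
        od -A d -c "$LOOP_A" | head
        od -A d -c "$LOOP_B" | head

        # For performance runs against the real LUNs, mount with -o sync
        # to reduce the effect of host caching:
        mkfs.ext4 /dev/md100
        mkdir -p /mnt/raidtest && mount -o sync /dev/md100 /mnt/raidtest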

    And ... though not quite what you asked, device mapper, dmsetup, etc.
    can be used to construct somewhat arbitrary layouts ... but that might
    be even more confusing for anyone looking at it later. Sometimes,
    however, that can be quite useful for special circumstances and
    requirements. Speaking of which, not too long ago I did that for
    demonstration purposes, to help someone out with a data migration
    issue. They essentially wanted to go from a quite large hardware RAID-5
    to md raid5 - quite similar sets of drives for each (new ones for the
    md set). Conceptually, I basically thought: layer RAID-1 atop that,
    sync, then break the mirror. That's not quite so easy, as most any
    (especially software) RAID-1 would typically want to write its metadata
    on the same devices - very undesirable in that case. So I did it with
    device mapper using dmsetup, essentially mirroring the underlying
    devices while storing the metadata external to them. Anyway, quite a
    bit more detail on that example run is here:
    https://lists.balug.org/mailman3/hyperkitty/list/balug-talk@lists.balug.org/message/CGZUVCF5WFM5I6GPKK5NW5DDK4OCMERK/
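
    The core of that trick looked very roughly like the following (device
    names and sizes are invented here; verify the table syntax and the
    resync direction against dmsetup(8) and the kernel's dm mirror target
    documentation before trusting it with real data):

        # Old hardware RAID LUN (source), new md array (destination), and a
        # small scratch device holding the mirror's dirty-region log, so no
        # mirror metadata ever touches the two data devices:
        SRC=/dev/sdX
        DST=/dev/md1
        LOG=/dev/sdY1
        SECTORS=$(blockdev --getsz "$SRC")

        # dm "mirror" target with an on-disk log; all I/O has to go through
        # /dev/mapper/migrate while the copy is in progress:
        dmsetup create migrate --table \
          "0 $SECTORS mirror disk 2 $LOG 1024 2 $SRC 0 $DST 0"

        # Watch the resync, then tear the mapping down once it is in sync:
        dmsetup status migrate
        dmsetup remove migrate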

    On Sun, Dec 1, 2024 at 4:36 AM Greg <pld@sojka.co> wrote:

    Hi there,

    I'm setting up MD-RAID0 on top of HW-RAID6 devices (long story). I
    would like to confirm the following:

    1. The RAID0 chunk size should be the stripe width of the
    underlying RAID6 volumes.

    2. The RAID0 metadata should be at the end of the device (metadata ver.
    1.0).

    3. The stride and stripe-width of the ext4 fs should be set to the ones
    used when creating the RAID6 volumes.

    Thanks in advance for any help
    Greg

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Christensen@21:1/5 to Jeffrey Walton on Tue Dec 3 01:40:01 2024
    On 12/2/24 00:02, Jeffrey Walton wrote:
    On Sun, Dec 1, 2024 at 3:47 PM David Christensen <dpchrist@holgerdanske.com> wrote:

    On 12/1/24 04:27, Greg wrote:
    Hi there,

    I'm setting up MD-RAID0 on top of HW-RAID6 devices (long story). I
    would like to confirm the following:

    1. The RAID0 chunk size should be the stripe width of the
    underlying RAID6 volumes.

    2. The RAID0 metadata should be at the end of the device (metadata ver.
    1.0).

    3. The stride and stripe-width of the ext4 fs should be set to the ones
    used when creating the RAID6 volumes.

    I have a SOHO network and I implemented file sharing many years ago.
    Around 2019, I switched from md to ZFS. The learning curve has been
    non-trivial, but I am pleased with the results and expect ZFS will be a
    better solution going forward.


    Regarding data migration in general, my previous approach had been
    in-place using one server and minimal disks (e.g. "cheap"). An operator
    error on a prior migration resulted in loss of some archival backups.
    So, I threw money at the problem this time around -- buy another server,
    buy more disks, build the new server, migrate the data, make the new
    server primary, rebuild the old server, and make the old server
    secondary. And, backups before, during, and after. So, more time and
    money, less risk, no data loss, and rebalanced data / maximum performance
    afterwards.

    ???

    What questions did you answer?

    Jeff


    I was providing anecdotal information for the OP and other readers to
    consider, because sometimes we lose sight of the forest for the trees.


    David

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)