• Re: Problem With Old Zyxel NSA 221 NASs & Seagate HDs - Part 2 - PART S

    From Java Jive@21:1/5 to Java Jive on Wed Jun 4 15:43:43 2025
    XPost: alt.os.linux

    On 2025-06-02 13:44, Java Jive wrote:

    Searching on "Magic mismatch, very weird" comes up with some threads.
    One is
    hardware failure, the other is about using a non-1k blocksize with
    (old) mke2fs
    and a 2007-era ramdisk implementation that doesn't support other than 1k:
    https://sourceforge.net/p/e2fsprogs/bugs/175/#b0df

    Perhaps you could try -b1024 on the mkfs.ext2 command?  Or experiment
    with
    other blocksizes?

    Thanks, may try that later this afternoon,

And it worked: adding the -b1024 parameter makes my manual copy of the original procedure work! Thanks for that.
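For reference, the fix amounts to something like the following minimal sketch of building a small ext2 ramdisk image with a forced 1 KiB block size (the file name and size here are illustrative, not the NAS's actual values):

```shell
# Create an empty 4 MiB image file (name and size are illustrative)
dd if=/dev/zero of=initrd.img bs=1024 count=4096

# Force a 1 KiB block size: old in-kernel ramdisk code only handles 1k blocks.
# -F lets mkfs.ext2 operate on a plain file instead of a block device.
if command -v mkfs.ext2 >/dev/null 2>&1; then
    mkfs.ext2 -q -F -b 1024 initrd.img
fi
```

The image can then be populated by loop-mounting it and copying the initrd contents in.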

    I'm making some progress now, I've managed to clean up Zyxel's original
    scripts somewhat, the originals gave lots of spurious errors in 'dmesg'.
However, the fundamental plan, of doing an automatic reboot one time
only if no storage is detected, didn't work from 'init', I think because
'init' cannot be broken into, apparently not even programmatically.
The 'Rebooting' message is displayed, but no reboot actually occurs.

    So I tried moving that bit of code to rcS, but I still can't get it to
    reboot. Again all the messages are correctly displayed, but no reboot
    actually occurs.

    Still the above problem has been solved thanks to your help, much obliged.

    --

    Fake news kills!

    I may be contacted via the contact address given on my website:
    www.macfh.co.uk

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Java Jive@21:1/5 to Java Jive on Fri Jun 6 12:25:35 2025
    XPost: alt.os.linux

    On 2025-06-04 15:43, Java Jive wrote:
    On 2025-06-02 13:44, Java Jive wrote:

    Searching on "Magic mismatch, very weird" comes up with some threads.
    One is
    hardware failure, the other is about using a non-1k blocksize with
    (old) mke2fs
    and a 2007-era ramdisk implementation that doesn't support other than
    1k:
    https://sourceforge.net/p/e2fsprogs/bugs/175/#b0df

    Perhaps you could try -b1024 on the mkfs.ext2 command?  Or experiment
    with
    other blocksizes?

    Thanks, may try that later this afternoon,

And it worked: adding the -b1024 parameter makes my manual copy of the original procedure work!  Thanks for that.

    [...]

    So I tried moving that bit of code to rcS, but I still can't get it to reboot.  Again all the messages are correctly displayed, but no reboot actually occurs.

I now have this fully working. If it's of any interest, here's the code
from rcS. If, on the first boot, fewer than 2 HDs are found, it sets a
flag in the U-Boot environment, which survives a reboot, and then
reboots. On the second boot, it wipes the reboot flag and carries on the
boot regardless of how many HDs are found. In my case, the reboot
allows the second HD to be detected during the second boot, so the XFS
storage area spread across both HDs becomes available.

    [Beware unintended line wrap, and note that the variables ECHO, WC, etc
    contain the full initrd path to the binaries concerned]

# Check for HDs
${ECHO} "Checking for found hard drives ..."
HDs="$(${SGMAP} | ${GREP} 'ATA' | ${WC} -l)"
if [ "${HDs}" -lt 2 ]
then
    case "${HDs}" in
        0)  ${ECHO} "WARNING: No hard drives found!"
            ;;
        1)  ${ECHO} "WARNING: Only 1 hard drive found!"
            ;;
    esac
    ${ECHO} "Checking firmware for reboot flag ..."
    REBOOTED="$(${PRINTENV} ${REBOOTFLG})"
    if [ -z "${REBOOTED}" ] || [ "${REBOOTED}" == "## Error: \"${REBOOTFLG}\" not defined" ]
    then
        # Set flag and reboot
        ${SETENV} ${REBOOTFLG} true
        ${ECHO} "Rebooting to try to pick up slow-spin-up drives ..."
        # The following command is valid according to the help parameter, but fails
        # ${UMOUNT} -a
        ${SLEEP} 5
        ${REBOOT}
        exit
    else
        # This is already a reboot, but still fewer than two HDs;
        # nothing further can be done here
        ${ECHO} "Less than 2 hard drives found even after reboot"
    fi
else
    ${ECHO} "2 hard drives found!"
fi
# 2 hard drives were found or this is already a reboot,
# so just clear the flag and continue
${ECHO} "Clearing reboot flag in firmware ..."
${SETENV} ${REBOOTFLG}


    --

    Fake news kills!

    I may be contacted via the contact address given on my website:
    www.macfh.co.uk

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lew Pitcher@21:1/5 to Java Jive on Fri Jun 6 12:45:13 2025
    XPost: alt.os.linux

    On Fri, 06 Jun 2025 12:25:35 +0100, Java Jive wrote:

    [snip]

I now have this fully working. If it's of any interest, here's the code
from rcS. If, on the first boot, fewer than 2 HDs are found, it sets a
flag in the U-Boot environment, which survives a reboot, and then
reboots. On the second boot, it wipes the reboot flag and carries on the
boot regardless of how many HDs are found. In my case, the reboot
allows the second HD to be detected during the second boot, so the XFS
storage area spread across both HDs becomes available.

    [snip]
    ${SETENV} ${REBOOTFLG} true
    ${ECHO} "Rebooting to try to pick up slow-spin-up drives ..."
    # The following command is valid according to the help parameter, but fails
    # ${UMOUNT} -a

Yah, assuming ${UMOUNT} resolves to something like /bin/umount, then
${UMOUNT} -a
probably would fail here: primarily while trying to umount the filesystem
that holds your script's cwd, and (because that umount failure left the
filesystem still mounted) the root filesystem.


Remember, umount can't unmount an active mountpoint (one with mount points, open files, or directories in use on it), and

    a) your script's cwd is most likely located in one of the filesystems
    mentioned in /etc/mtab (and, of course, open, because your active
    process lives in that cwd),

    b) / is probably in your /etc/mtab, and can't be umounted until all
    the filesystems that reside on it are umounted, and

    c) your use of the -a option effectively asks umount to unmount /all/
    filesystems listed in /etc/mtab ("except the proc filesystem")
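The point about the cwd can be illustrated with a small sketch that walks /proc/mounts (the live equivalent of the /etc/mtab list that umount -a consults on many systems) and flags every mount that this very process is holding busy through its working directory; `classify_mounts` is a name invented for this illustration:

```shell
#!/bin/sh
# Sketch: flag the mounts that `umount -a` could never detach from within
# this process, because the process's cwd lives on them.
classify_mounts() {
    cwd=$(pwd)
    # /proc/mounts fields: device mountpoint fstype options dump pass
    while read -r dev mnt fstype rest; do
        # A mount holds our cwd if cwd equals the mountpoint or sits below it
        case "$cwd" in
            "$mnt" | "${mnt%/}/"*) echo "BUSY (holds our cwd): $mnt" ;;
            *)                     echo "candidate: $mnt" ;;
        esac
    done < /proc/mounts
}
classify_mounts
```

Run from anywhere, at least the root filesystem (and any mounts between / and the cwd) will come out flagged BUSY, which is exactly why a blanket `umount -a` from a running script cannot fully succeed.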

    [snip]

    HTH
    --
    Lew Pitcher
    "In Skills We Trust"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Purgert@21:1/5 to Andy Burns on Fri Jun 6 13:02:22 2025
    XPost: alt.os.linux

    On 2025-06-06, Andy Burns wrote:
    Java Jive wrote:

    I now have this fully working.

    Now, how long until the drives fail :-P

    If it's anything like my luck, they actually failed 3 weeks ago, and all
    of this fighting is BECAUSE the drives are bad :)


    --
    |_|O|_|
    |_|_|O| Github: https://github.com/dpurgert
    |O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to Java Jive on Fri Jun 6 13:54:21 2025
    XPost: alt.os.linux

    Java Jive wrote:

    I now have this fully working.

    Now, how long until the drives fail :-P

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Java Jive@21:1/5 to Lew Pitcher on Fri Jun 6 16:46:44 2025
    XPost: alt.os.linux

    On 2025-06-06 13:45, Lew Pitcher wrote:
    On Fri, 06 Jun 2025 12:25:35 +0100, Java Jive wrote:

    [snip]

I now have this fully working. If it's of any interest, here's the code
from rcS. If, on the first boot, fewer than 2 HDs are found, it sets a
flag in the U-Boot environment, which survives a reboot, and then
reboots. On the second boot, it wipes the reboot flag and carries on the
boot regardless of how many HDs are found. In my case, the reboot
allows the second HD to be detected during the second boot, so the XFS
storage area spread across both HDs becomes available.

    [snip]
    ${SETENV} ${REBOOTFLG} true
    ${ECHO} "Rebooting to try to pick up slow-spin-up drives ..."
# The following command is valid according to the help parameter, but fails
# ${UMOUNT} -a

Yah, assuming ${UMOUNT} resolves to something like /bin/umount, then
${UMOUNT} -a
probably would fail here: primarily while trying to umount the filesystem that holds your script's cwd, and (because that umount failure left the filesystem still mounted) the root filesystem.


Remember, umount can't unmount an active mountpoint (one with mount points, open files, or directories in use on it), and

    a) your script's cwd is most likely located in one of the filesystems
    mentioned in /etc/mtab (and, of course, open, because your active
    process lives in that cwd),

    b) / is probably in your /etc/mtab, and can't be umounted until all
    the filesystems that reside on it are umounted, and

    c) your use of the -a option effectively asks umount to unmount /all/
    filesystems listed in /etc/mtab ("except the proc filesystem")

Thanks for the explanation, which led me to look back through PuTTY's
log files and investigate further. I think your explanation probably
does fit my current situation, because I've now reinstated the command,
and this is the result as of now ...

    BusyBox v1.17.2 (2017-09-14 21:33:20 BST) multi-call binary.

    Usage: umount [OPTIONS] FILESYSTEM|DIRECTORY

    umount: can't umount /proc: Device or resource busy

... which originally confused me, because seeing an abbreviated usage
summary and not noticing the last line led me to believe that the '-a'
parameter had not been accepted. However, I have another log file of
apparently the same command used in the same situation that contains
only the last line above, which is a much more reasonable message. In
both cases, the system does still reboot.
However, there are other places in the boot scripts, particularly
Zyxel's original scripts, where 'umount -a' appears to fail completely
and just displays the help; here's an example of that ...

    Usage: umount [-hV]
    umount -a [-f] [-r] [-n] [-v] [-t vfstypes] [-O opts]
    umount [-f] [-r] [-n] [-v] special | node...

    ... so I'm not really sure what is going on in that case, perhaps an
    invisible character such as a non-breaking space has found its way into
    the script. Generally, the command's output is somewhat confusing and
    seems to have been rather poorly written, at least in the cut-down
    BusyBox version used on this NAS box.

    --

    Fake news kills!

    I may be contacted via the contact address given on my website:
    www.macfh.co.uk

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)