• Re: Best practice for fresh install on UEFI with multiple disks?

    From Andy Smith@21:1/5 to Boyan Penkov on Thu Sep 19 04:00:01 2024
    Hi,

    On Wed, Sep 18, 2024 at 08:21:10PM -0400, Boyan Penkov wrote:
    So, what are folks doing these days to mirror /efi and /boot?

    [ TL;DR: You already found it - have two separate EFI System
    Partitions, sync one to the other manually using e.g. rsync
    whenever one changes, add paths to both in your UEFI firmware. ]
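    For concreteness, a minimal sketch of that manual sync. The mount
    point names are assumptions, and the demo simulates the two ESPs
    with temporary directories so it can be run anywhere (using cp -a
    where a real setup would use rsync):

```shell
#!/bin/sh
set -e

# Simulate two EFI System Partitions with temporary directories.
# On a real system esp1/esp2 would be the mounted ESPs, e.g.
# /boot/efi and /boot/efi2 (that second mount point name is an
# assumption, not a Debian default).
esp1=$(mktemp -d)
esp2=$(mktemp -d)

mkdir -p "$esp1/EFI/debian"
printf 'grub config\n' > "$esp1/EFI/debian/grub.cfg"

# Mirror the primary ESP onto the secondary one.  On a real
# system this would be:
#   rsync -a --delete /boot/efi/ /boot/efi2/
# cp -a is used here only so the demo has no rsync dependency.
cp -a "$esp1/." "$esp2/"

cat "$esp2/EFI/debian/grub.cfg"
```

    Each ESP then still needs its own entry in the firmware's boot
    menu (e.g. added with efibootmgr) so either disk can boot.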

    I don't think the answer, on Debian, has changed since I asked the
    same question in 2020:

    https://lists.debian.org/debian-user/2020/11/msg00455.html

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Florent Rougon@21:1/5 to All on Thu Sep 19 12:10:01 2024
    Hi,

    On 19/09/2024, Andy Smith <andy@strugglers.net> wrote:

    I don't think the answer, on Debian, has changed since I asked the
    same question in 2020:

    https://lists.debian.org/debian-user/2020/11/msg00455.html

    There is a script at [1] to install as, e.g.,
    /etc/grub.d/90_copy_to_boot_efi2, so that it is automatically run every
    time grub updates its configuration file. I believe the script is fine,
    except I would do

    mount /boot/efi2

    rather than

    mount /boot/efi2 || :

    Maybe the intent is for the script not to return a non-zero exit status
    when /boot/efi2 can't be mounted; however, in that case I certainly don't
    want the rsync command to be run.
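    The difference matters under set -e. A minimal demonstration, with
    `false` standing in for a mount command that fails (no real
    mounting involved):

```shell
#!/bin/sh
# "false" stands in for a "mount /boot/efi2" that fails.

# With "|| :" the failure is swallowed, so under set -e the script
# carries on and the sync step still runs:
out1=$(sh -c 'set -e; false || :; echo sync-ran')

# With the bare command, set -e aborts before the sync step:
out2=$(sh -c 'set -e; false; echo sync-ran' || true)

echo "with || : -> ${out1:-nothing}"
echo "bare mount -> ${out2:-nothing}"
# Prints:
#   with || : -> sync-ran
#   bare mount -> nothing
```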

    Regards

    [1] https://wiki.debian.org/UEFI#RAID_for_the_EFI_System_Partition

    --
    Florent

  • From Tim Woodall@21:1/5 to Florent Rougon on Fri Sep 20 04:30:01 2024
    On Thu, 19 Sep 2024, Florent Rougon wrote:

    Hi,

    On 19/09/2024, Andy Smith <andy@strugglers.net> wrote:

    I don't think the answer, on Debian, has changed since I asked the
    same question in 2020:

    https://lists.debian.org/debian-user/2020/11/msg00455.html

    There is a script at [1] to install as, e.g.,
    /etc/grub.d/90_copy_to_boot_efi2, so that it is automatically run every
    time grub updates its configuration file. I believe the script is fine,
    except I would do

    mount /boot/efi2

    rather than

    mount /boot/efi2 || :

    Maybe the intent is for the script not to return a non-zero exit status
    when /boot/efi2 can't be mounted, however in this case I certainly don't
    want the rsync command to be run.


    Haven't looked at the script, but assuming it's run with set -e, your
    suggestion will fail if it's already mounted.

    Best would be to check that, and unmount again only if the script
    mounted.
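    A sketch of that approach, assuming the second ESP lives at
    /boot/efi2 and testing the state with mountpoint(1); the actual
    mount/rsync/umount commands are replaced by echoes so only the
    control flow is shown:

```shell
#!/bin/sh
set -e

esp2=/boot/efi2   # assumed mount point of the second ESP

# Mount only if not already mounted, and remember whether we did.
we_mounted=0
if ! mountpoint -q "$esp2" 2>/dev/null; then
    echo "would run: mount $esp2"
    we_mounted=1
fi

echo "would run: rsync -a --delete /boot/efi/ $esp2/"

# Unmount only if this script did the mounting.
if [ "$we_mounted" -eq 1 ]; then
    echo "would run: umount $esp2"
fi
```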

    Tim.

  • From Florent Rougon@21:1/5 to All on Fri Sep 20 09:50:01 2024
    On 20/09/2024, Tim Woodall <debianuser@woodall.me.uk> wrote:

    Haven't looked at the script but assuming it's run set -e, then your
    suggestion will fail if it's already mounted.

    Why?

    --
    Florent

  • From Tim Woodall@21:1/5 to Florent Rougon on Fri Sep 20 12:20:01 2024
    On Fri, 20 Sep 2024, Florent Rougon wrote:

    On 20/09/2024, Tim Woodall <debianuser@woodall.me.uk> wrote:

    Haven't looked at the script but assuming it's run set -e, then your
    suggestion will fail if it's already mounted.

    Why?


    Because the script will abort after the mount fails.

    root@dirac:~# cat test.sh
    #!/bin/bash

    set -e

    mount /boot/efi2

    echo "do important stuff"

    root@dirac:~# ./test.sh
    mount: /boot/efi2: /dev/sda2 already mounted on /boot/efi2.
    dmesg(1) may have more information after failed mount system call.


    Note that do important stuff is never reached.

  • From Florent Rougon@21:1/5 to All on Fri Sep 20 12:50:01 2024
    On 20/09/2024, Tim Woodall <debianuser@woodall.me.uk> wrote:

    Because the script will abort after the mount fails.

    root@dirac:~# cat test.sh
    #!/bin/bash

    set -e

    mount /boot/efi2

    echo "do important stuff"

    root@dirac:~# ./test.sh
    mount: /boot/efi2: /dev/sda2 already mounted on /boot/efi2.
    dmesg(1) may have more information after failed mount system call.


    Note that do important stuff is never reached.

    That's interesting because my system doesn't behave the same. I had of
    course checked, before writing my first message, that 'mount /boot/efi2'
    returns exit status 0 even when /boot/efi2 is already mounted. With your
    script (called foo.sh here), here is what I get:

    # mount | grep efi2
    /dev/sda1 on /boot/efi2 type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
    # /tmp/foo.sh
    do important stuff
    # mount | grep efi2
    /dev/sda1 on /boot/efi2 type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
    /dev/sda1 on /boot/efi2 type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
    #

    Every invocation adds a new, duplicate entry in the output of 'mount'.

    This is Debian sid amd64; /usr/bin/mount is from the 'mount' package,
    version 2.40.2-8.

    Regards

    --
    Florent

  • From Tim Woodall@21:1/5 to Florent Rougon on Fri Sep 20 20:20:01 2024

    On Fri, 20 Sep 2024, Florent Rougon wrote:

    On 20/09/2024, Tim Woodall <debianuser@woodall.me.uk> wrote:

    Because the script will abort after the mount fails.

    root@dirac:~# cat test.sh
    #!/bin/bash

    set -e

    mount /boot/efi2

    echo "do important stuff"

    root@dirac:~# ./test.sh
    mount: /boot/efi2: /dev/sda2 already mounted on /boot/efi2.
    dmesg(1) may have more information after failed mount system call.


    Note that do important stuff is never reached.

    That's interesting because my system doesn't behave the same. I had of
    course checked, before writing my first message, that 'mount /boot/efi2'
    returns exit status 0 even when /boot/efi2 is already mounted. With your
    script (called foo.sh here), here is what I get:

    # mount | grep efi2
    /dev/sda1 on /boot/efi2 type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
    # /tmp/foo.sh
    do important stuff
    # mount | grep efi2
    /dev/sda1 on /boot/efi2 type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
    /dev/sda1 on /boot/efi2 type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
    #

    Every invocation adds a new, duplicate entry in the output of 'mount'.

    This is Debian sid amd64; /usr/bin/mount is from the 'mount' package,
    version 2.40.2-8.


    That's very interesting; it looks like it's probably due to a kernel
    change.

    Tim.

  • From Florent Rougon@21:1/5 to All on Mon Sep 30 18:30:01 2024
    Hi,

    On 30/09/2024, Boyan Penkov <boyan.penkov@gmail.com> wrote:

    -- If I have multiple drives, do I modify the script to have multiple
    efi2, efi3, ..., efiX ?

    I think yes.
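    i.e. something along these lines, looping over every secondary ESP
    (the mount point names and the per-ESP sync step are assumptions
    based on the script's structure; the real work is replaced by an
    echo):

```shell
#!/bin/sh
set -e

# Secondary ESPs, one per extra drive (names are assumptions).
count=0
for esp in /boot/efi2 /boot/efi3 /boot/efi4; do
    # Real script: mount "$esp", rsync /boot/efi/ onto it, umount.
    echo "would sync /boot/efi -> $esp"
    count=$((count + 1))
done
echo "synced to $count secondary ESPs"
```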

    -- it seems that the script above privileges /boot/efi over /boot/efi2
    -- in this case, if /boot/efi becomes corrupted, won't this just copy
    the errors to /boot/efi2 and thus destroy it as well, on the next run?

    My understanding of how the script was designed is the following:
    - if the disk containing /boot/efi is fine, no problem using it as the
    “master copy”;
    - if it has a silent corruption problem, we're screwed and the
    corruption may be copied to other disks, but that's already the case
    with other partitions in a raid (nowadays there are consistency
    checks...);
    - if it has a problem that is visible enough for the md layer to
    remove the disk from the array, then /boot/efi won't be a mount
    point anymore, so the script will do nothing from this point on.
    Thus, you can boot from another disk until you have a replacement
    drive; during this time (and unless you changed /etc/fstab), the
    script won't sync anything.
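    The third case is the important safety property. A sketch of that
    guard (paths assumed; the sync itself replaced by a message):

```shell
#!/bin/sh
# If the md layer kicked the disk out, /boot/efi is no longer a
# mount point, so the script should detect that and do nothing.
if mountpoint -q /boot/efi 2>/dev/null; then
    msg="primary ESP mounted: would sync to /boot/efi2"
else
    msg="primary ESP not mounted: doing nothing"
fi
echo "$msg"
```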

    HTH, regards

    --
    Florent
