• Question on umount: Why is "-l" the magic bullet?

    From Kenny McCormack@21:1/5 to All on Fri May 19 16:43:12 2023
    One thing I've noticed through the years is that if a "umount" command is failing (some error message, e.g., "target is busy" or whatever), you can
    often get around this (*) by adding "-l" to the command line. "-l" is documented as "lazy" (yes, I'm familiar with what the man page says about
    it), but it seems, in practice to be more of a "do it anyway" - a
    functionality that is usually coded as "-f" (meaning "force") or similar.

    Note that umount *does* have a "-f" option, but in my experience that never seems to work; "-f" is documented as having something to do with NFS and I don't use NFS.

    Here's a recent example of this in action (Note that I am describing this
    in detail, but the details don't really matter; I'm not looking for tech support on the problem, but rather an answer to the question in the Subject line). This example is only one of many similar things I've noticed over
    the years.

    --- Example ---
    I have one Linux machine ("Machine #1") that has an external hard disk
    attached as /dev/sda1. I have /dev/sda1 mounted as /mnt/something and have /mnt/something Samba shared on my LAN. Other Linux machines on the LAN
    Samba mount this drive (e.g., "Machine #2"). Now, while trying to fix some other problem, I decided to umount and re-mount the drive (on Machine #1).

    Then I go over to Machine #2 and try to access the Samba-mounted drive and
    get some weird error message. I note that this is probably because of it having been umounted and re-mounted (on Machine #1), so I probably need
    to unmount and remount the Samba connection. But whenever I try to access
    /mnt/SambaDrive, I get some weird error message (something like
    "/mnt/SambaDrive is not a directory"). When I look at it with "ls -lsa",
    the "mode" string in the "ls" output displays as something like
    "d?????????". If I try to umount it (as root or as a user), it says "Not a
    directory". But, and here's the key: If I umount it with "-l", it works.

    So, my question is: Why does "-l" make it magically work? And thus why
    isn't it the default?
  • From Scott Lurndal@21:1/5 to Kenny McCormack on Fri May 19 17:13:23 2023
    gazelle@shell.xmission.com (Kenny McCormack) writes:
    > One thing I've noticed through the years is that if a "umount" command is
    > failing (some error message, e.g., "target is busy" or whatever), you can
    > often get around this (*) by adding "-l" to the command line. "-l" is
    > documented as "lazy" (yes, I'm familiar with what the man page says about
    > it), but it seems, in practice to be more of a "do it anyway" - a
    > functionality that is usually coded as "-f" (meaning "force") or similar.

    > Note that umount *does* have a "-f" option, but in my experience that never
    > seems to work; "-f" is documented as having something to do with NFS and I
    > don't use NFS.

    > Here's a recent example of this in action (Note that I am describing this
    > in detail, but the details don't really matter; I'm not looking for tech
    > support on the problem, but rather an answer to the question in the Subject
    > line). This example is only one of many similar things I've noticed over
    > the years.

    > --- Example ---
    > I have one Linux machine ("Machine #1") that has an external hard disk
    > attached as /dev/sda1. I have /dev/sda1 mounted as /mnt/something and have
    > /mnt/something Samba shared on my LAN. Other Linux machines on the LAN
    > Samba mount this drive (e.g., "Machine #2"). Now, while trying to fix some
    > other problem, I decided to umount and re-mount the drive (on Machine #1).

    Why on earth would you use Samba for linux-to-linux file sharing?

    Just export the NFS filesystem to the other linux machines directly
    or via autofs (preferred, but generally requires NIS/YP as well).

    In general, the behaviour of either NFS or CIFS when remounting
    an underlying physical drive is somewhat unspecified and should
    generally be avoided. It is possible that using the -o remount
    option to mount rather than using umount/mount may ameliorate
    your client issues.
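
    Something along these lines on Machine #1, instead of a full umount/mount
    cycle; an untested sketch reusing the paths from the example above:

      mount -o remount /mnt/something      # remount in place, no umount involved
      mount -o remount,ro /mnt/something   # e.g. to change options such as ro/rw

    Whether that helps depends on why the umount was wanted in the first place.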

  • From Lew Pitcher@21:1/5 to Kenny McCormack on Fri May 19 17:19:46 2023
    On Fri, 19 May 2023 16:43:12 +0000, Kenny McCormack wrote:

    > One thing I've noticed through the years is that if a "umount" command is
    > failing (some error message, e.g., "target is busy" or whatever), you can
    > often get around this (*) by adding "-l" to the command line. "-l" is
    > documented as "lazy" (yes, I'm familiar with what the man page says about
    > it), but it seems, in practice to be more of a "do it anyway" - a
    > functionality that is usually coded as "-f" (meaning "force") or similar.
    [snip]
    > So, my question is: Why does "-l" make it magically work? And thus why
    > isn't it the default?

    Forgive the redundancy, but let's first review what the manpage says about
    the umount -l option:

    -l, --lazy
    Lazy unmount. Detach the filesystem from the file hierarchy
    now, and clean up all references to this filesystem as soon as
    it is not busy anymore
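
    (For the record, umount(8) maps these options onto different flags of the
    umount2(2) system call, so "-f" and "-l" are not two strengths of the same
    knob; a rough sketch, where /mnt/nfsshare is a made-up path:

      umount -l /mnt/something   # umount2("/mnt/something", MNT_DETACH)
      umount -f /mnt/nfsshare    # umount2("/mnt/nfsshare", MNT_FORCE), meant
                                 # mainly for mounts whose NFS server went away
    )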

    Now, consider this scenario: You have a physically detachable hard drive
    (say a USB drive), mounted, and processes with files open in the mount
    space of this drive.

    You issue the umount command
    umount /where-ever
    and, because there are open files on that mount's mountspace, umount tells
    you that the target is busy. So, you issue the
    umount -l /where-ever
    and the hard drive is lazy unmounted. External commands show that the
    hard drive no longer hosts the mountspace.

    And, now you physically disconnect the hard drive (as you might do with
    an unmounted USB drive). What happens to all the files that processes had
    open on the drive? They certainly did not get whatever close()/sync() housekeeping that a proper close and umount would provide.

    Without "lazy umount", umount won't unmount the media unless all activity
    on that media has cleanly terminated (no files open, no processes using
    the media as cwd).

    With "lazy umount", umount performs /some/ of the umount activity immediatly; removal from the mountpoint means no new file or chdir activity can occur
    on the media. But, "lazy umount" leaves some umount activity (directory updates, buffer flush, etc) to later, as processes terminate. Until those activities complete, it is unsafe to physically disconnect the media, even though it has been "unmounted".
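
    A rough way to watch this happen, reusing the /where-ever mount from above
    (a sketch only; don't try it on a drive holding anything you care about):

      ( cd /where-ever && sleep 300 ) &    # a process whose cwd pins the mount
      umount /where-ever                   # fails: "target is busy"
      umount -l /where-ever                # succeeds: detached from the tree
      grep /where-ever /proc/self/mounts   # nothing -- it *looks* unmounted
      ls -l /proc/$!/cwd                   # ...but the background job still
                                           # holds a reference to it
      # only when that last user goes away does the kernel flush and release
      # the filesystem; unplug the drive before then and you risk losing data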

    [snip]

    HTH
    --
    Lew Pitcher
    "In Skills We Trust"

  • From Ivan Shmakov@21:1/5 to All on Fri May 19 18:10:45 2023
    On 2023-05-19, Scott Lurndal wrote:

    > Just export the NFS filesystem to the other linux machines directly
    > or via autofs (preferred, but generally requires NIS/YP as well).

    I've used LDAP for the maps last time I've used Autofs, but
    given that NFSv4 apparently doesn't require submounts to be
    mounted explicitly at the client(s) (though each still needs
    to be exported at the server), I'm not sure it's all that
    necessary anymore.

    Without Autofs, it's along the lines of:

    mysrv.example.com:/srv/nfsv4 /com/nfsv4/mysrv.example.com nfs4 nosuid,nodev
    other.example.com:/srv/nfsv4 /com/nfsv4/other.example.com nfs4 nosuid,nodev

    > In general, the behaviour of either NFS or CIFS when remounting
    > an underlying physical drive is somewhat unspecified and should
    > generally be avoided. It is possible that using the -o remount
    > option to mount rather than using umount/mount may ameliorate
    > your client issues.

    One problem I could think of is that (AIUI) NFSv3 uses
    filesystem ids (i. e., struct statfs f_fsid field values),
    and those aren't necessarily preserved when the filesystems
    get unmounted and mounted back.

    OTOH, NFSv4 uses filesystem "UUIDs," where available, so
    unless you do something funny, unmounting a given ("idle")
    filesystem and mounting it back tends to "just work" IME.
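
    (Both identifiers are easy enough to inspect, assuming GNU stat and
    util-linux blkid; /srv/nfsv4 and /dev/sda1 are just the paths already
    used in this thread:

      stat -f -c %i /srv/nfsv4   # the filesystem ID (f_fsid), in hex
      blkid /dev/sda1            # the on-disk UUID NFSv4 can key off instead
    )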

    --
    FSF associate member #7257 http://am-1.org/~ivan/

  • From Ivan Shmakov@21:1/5 to All on Fri May 19 17:45:42 2023
    On 2023-05-19, Lew Pitcher wrote:

    (Not sure I quite understand how the question is related to
    Unix /programming./)

    Without "lazy umount", umount won't unmount the media unless all
    activity on that media has cleanly terminated (no files open, no
    processes using the media as cwd).

    With "lazy umount", umount performs /some/ of the umount activity immediatly; removal from the mountpoint means no new file or
    chdir activity can occur on the media. But, "lazy umount" leaves
    some umount activity (directory updates, buffer flush, etc) to
    later, as processes terminate. Until those activities complete,
    it is unsafe to physically disconnect the media, even though it
    has been "unmounted".

    +1. Can only add that you won't be able to mount it back,
    either (until all the files there are closed and such), as,
    so far as the kernel is concerned, the filesystem's still mounted.

    --
    FSF associate member #7257 http://am-1.org/~ivan/

  • From Ivan Shmakov@21:1/5 to All on Fri May 19 19:10:18 2023
    On 2023-05-19, Scott Lurndal wrote:
    > Ivan Shmakov <ivan@siamics.netNOSPAM.invalid> writes:

    >> I've used LDAP for the maps last time I've used Autofs, but given
    >> that NFSv4 apparently doesn't require submounts to be mounted
    >> explicitly at the client(s) (though each still needs to be exported
    >> at the server), I'm not sure it's all that necessary anymore.

    >> Without Autofs, it's along the lines of:

    >> mysrv.example.com:/srv/nfsv4 /com/nfsv4/mysrv.example.com nfs4 nosuid,nodev
    >> other.example.com:/srv/nfsv4 /com/nfsv4/other.example.com nfs4 nosuid,nodev

    (In the client's fstab(5), that is.)

    > Along with updating /etc/exports on the server and running the
    > exportfs(1) command.

    Yep; exportfs(8) rather, though.

    >> One problem I could think of is that (AIUI) NFSv3 uses
    >> filesystem ids (i. e., struct statfs f_fsid field values),
    >> and those aren't necessarily preserved when the filesystems
    >> get unmounted and mounted back.

    >> OTOH, NFSv4 uses filesystem "UUIDs," where available, so
    >> unless you do something funny, unmounting a given ("idle")
    >> filesystem and mounting it back tends to "just work" IME.

    > Yes. It can't always be counted on, however.

    I don't seem to be aware of any potential issues with doing so,
    provided that UUIDs are properly maintained and such.

    > I tend to comment out the filesystem in /etc/exports and re-run
    > exportfs to ensure that no clients are using it when remounting
    > (which, granted, is extremely rare in these modern times).

    Well, alright, Linux doesn't seem to allow me to unmount an
    exported filesystem in the first place. For that matter, it
    doesn't seem to allow me to unexport a filesystem currently
    "used" by a client, either. So the OP scenario isn't something
    I've had to deal with myself, at least in recent years.

    Still, when I've had, say, /srv/nfsv4/foo mounted and exported,
    and I've had /srv/nfsv4 mounted at the client, but foo not
    actually used there at the moment, I've had no issue with
    unexporting and unmounting the former at the server, then
    mounting and exporting the same filesystem back: the client just
    picks the "new" mount up transparently when it's accessed.

    --
    FSF associate member #7257 np. Frog Dreaming by Nodal

  • From Scott Lurndal@21:1/5 to Ivan Shmakov on Fri May 19 18:24:27 2023
    Ivan Shmakov <ivan@siamics.netNOSPAM.invalid> writes:
    > On 2023-05-19, Scott Lurndal wrote:

    >> Just export the NFS filesystem to the other linux machines directly
    >> or via autofs (preferred, but generally requires NIS/YP as well).

    > I've used LDAP for the maps last time I've used Autofs, but
    > given that NFSv4 apparently doesn't require submounts to be
    > mounted explicitly at the client(s) (though each still needs
    > to be exported at the server), I'm not sure it's all that
    > necessary anymore.

    > Without Autofs, it's along the lines of:

    > mysrv.example.com:/srv/nfsv4 /com/nfsv4/mysrv.example.com nfs4 nosuid,nodev
    > other.example.com:/srv/nfsv4 /com/nfsv4/other.example.com nfs4 nosuid,nodev

    Along with updating /etc/exports on the server and running the exportfs(1) command.
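
    Something like the following on the server, say; the host pattern and the
    export options here are only an illustration:

      # /etc/exports
      /srv/nfsv4    *.example.com(rw,no_subtree_check)

      # then activate the change:
      exportfs -ra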


    >> In general, the behaviour of either NFS or CIFS when remounting
    >> an underlying physical drive is somewhat unspecified and should
    >> generally be avoided. It is possible that using the -o remount
    >> option to mount rather than using umount/mount may ameliorate
    >> your client issues.

    > One problem I could think of is that (AIUI) NFSv3 uses
    > filesystem ids (i. e., struct statfs f_fsid field values),
    > and those aren't necessarily preserved when the filesystems
    > get unmounted and mounted back.

    > OTOH, NFSv4 uses filesystem "UUIDs," where available, so
    > unless you do something funny, unmounting a given ("idle")
    > filesystem and mounting it back tends to "just work" IME.

    Yes. It can't always be counted on, however. I tend to
    comment out the filesystem in /etc/exports and re-run exportfs
    to ensure that no clients are using it when remounting (which,
    granted, is extremely rare in these modern times).
