One thing I've noticed through the years is that if a "umount" command is
failing (some error message, e.g., "target is busy" or whatever), you can
often get around this (*) by adding "-l" to the command line. "-l" is
documented as "lazy" (yes, I'm familiar with what the man page says about
it), but it seems, in practice, to be more of a "do it anyway" - a
functionality that is usually coded as "-f" (meaning "force") or similar.

Note that umount *does* have a "-f" option, but in my experience that never
seems to work; "-f" is documented as having something to do with NFS and I
don't use NFS.
Here's a recent example of this in action (Note that I am describing this
in detail, but the details don't really matter; I'm not looking for tech
support on the problem, but rather an answer to the question in the Subject
line). This example is only one of many similar things I've noticed over
the years.
--- Example ---
I have one Linux machine ("Machine #1") that has an external hard disk
attached as /dev/sda1. I have /dev/sda1 mounted as /mnt/something and have
/mnt/something Samba shared on my LAN. Other Linux machines on the LAN
Samba mount this drive (e.g., "Machine #2"). Now, while trying to fix some
other problem, I decided to umount and re-mount the drive (on Machine #1).
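
To make that concrete, a minimal sketch of the pattern (using the
/mnt/something mount point from the example; the exact error text may vary
between util-linux versions):

  # a plain umount refuses while something still holds the filesystem busy
  umount /mnt/something
  #   umount: /mnt/something: target is busy.

  # "force" rarely helps here; it is mainly meant for unreachable NFS mounts
  umount -f /mnt/something

  # lazy umount detaches the mount point right away and finishes the
  # cleanup later, once the last user of the filesystem goes away
  umount -l /mnt/something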
So, my question is: Why does "-l" make it magically work? And thus why
isn't it the default?
On 2023-05-19, Scott Lurndal wrote:
Just export the NFS filesystem to the other linux machines directly
or via autofs (preferred, but generally requires NIS/YP as well).
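
(As an aside, autofs can also be driven from plain file maps rather than
NIS/YP; a minimal sketch, with a hypothetical map name, mount point, and
server export:

  # /etc/auto.master -- mount point, map file, options
  /mnt/net    /etc/auto.net    --timeout=60

  # /etc/auto.net -- key, mount options, location
  something   -fstype=nfs4,nosuid,nodev   mysrv.example.com:/srv/nfsv4

The share would then be mounted on demand under /mnt/net/something and
unmounted again after the timeout.)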
In general, the behaviour of either NFS or CIFS when remounting
an underlying physical drive is somewhat unspecified and should
generally be avoided. It is possible that using the -o remount
option to mount rather than using umount/mount may ameliorate
your client issues.
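
Concretely, the remount form being suggested looks roughly like this; the
mount point and the options are illustrative:

  # re-apply or change options in place; NFS/CIFS clients never see
  # the mount disappear
  mount -o remount /mnt/something

  # e.g. flip it read-only and back again while it stays mounted
  mount -o remount,ro /mnt/something
  mount -o remount,rw /mnt/something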
On 2023-05-19, Lew Pitcher wrote:
Without "lazy umount", umount won't unmount the media unless all
activity on that media has cleanly terminated (no files open, no
processes using the media as cwd).
With "lazy umount", umount performs /some/ of the umount activity immediatly; removal from the mountpoint means no new file or
chdir activity can occur on the media. But, "lazy umount" leaves
some umount activity (directory updates, buffer flush, etc) to
later, as processes terminate. Until those activities complete,
it is unsafe to physically disconnect the media, even though it
has been "unmounted".
On 2023-05-19, Scott Lurndal wrote:
Ivan Shmakov <ivan@siamics.netNOSPAM.invalid> writes:
I've used LDAP for the maps last time I've used Autofs, but given
that NFSv4 apparently doesn't require submounts to be mounted
explicitly at the client(s) (though each still needs to be exported
at the server), I'm not sure it's all that necessary anymore.
Without Autofs, it's along the lines of:
mysrv.example.com:/srv/nfsv4 /com/nfsv4/mysrv.example.com nfs4 nosuid,nodev
other.example.com:/srv/nfsv4 /com/nfsv4/other.example.com nfs4 nosuid,nodev
Along with updating /etc/exports on the server and running the
exportfs(1) command.
One problem I could think of is that (AIUI) NFSv3 uses
filesystem ids (i. e., struct statfs f_fsid field values),
and those aren't necessarily preserved when the filesystems
get unmounted and mounted back.
OTOH, NFSv4 uses filesystem "UUIDs," where available, so
unless you do something funny, unmounting a given ("idle")
filesystem and mounting it back tends to "just work" IME.
Yes. It can't always be counted on, however.
I tend to comment out the filesystem in /etc/exports and re-run
exportfs to ensure that no clients are using it when remounting
(which, granted, is extremely rare in these modern times).
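
Concretely, that server-side workflow might look something like this; the
export path, the options (including fsid=0 as the NFSv4 pseudo-root), and
the assumption of an /etc/fstab entry for the remount are illustrative
rather than taken from the thread:

  # /etc/exports -- comment the export out to fence the clients off
  # /srv/nfsv4    *.example.com(rw,sync,no_subtree_check,fsid=0,crossmnt)

  exportfs -ra          # apply: the filesystem is no longer exported

  umount /srv/nfsv4     # now remount the underlying disk
  mount /srv/nfsv4      # (assumes an /etc/fstab entry for it)

  # uncomment the line in /etc/exports, then export it again
  exportfs -ra

An explicit fsid= value in the export options is also one way to keep the
filesystem identity stable across remounts, which touches on the NFSv3
concern above.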