• From SSD to NVME

    From Hans@21:1/5 to All on Mon Dec 2 17:50:01 2024
    Hi folks,

    as my old notebook died, I intend to buy a new notebook.
    The old one has got an SSD drive, the new one an NVMe.

    I want to clone the whole system 1 to 1 to the new NVMe.

    In my /etc/fstab I am using UUID entries instead of /dev/sdX.
    The new one would then have /dev/nvme* entries (that is clear), but since I am using only UUIDs, the question is:

    Will the UUID change when cloning, even if the partition sizes do not change at all? IMHO the UUID will not change, but I am not quite sure.

    When cloning from SSD to SSD this works, but I have no experience cloning from SSD to NVMe.

    Thanks for some quick feedback.

    Best

    Hans

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Greg Wooledge@21:1/5 to Hans on Mon Dec 2 18:00:03 2024
    On Mon, Dec 02, 2024 at 17:49:18 +0100, Hans wrote:
    I want to clone the whole system 1 to 1 to the new NVMe.

    Will the UUID change when cloning, even if the partition sizes do not change at all? IMHO the UUID will not change, but I am not quite sure.

    Depends on what you mean by "clone". If you mean a bit-for-bit copy
    using dd or an equivalent, then you're correct. The file system UUID
    will be copied along with all the other bits of the old file system.

    If you mean "create a new file system on the new drive, then rsync
    the files over", then the file system UUID will not be the same. Unless
    of course you specifically go out of your way to copy the UUID as well.
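
    A quick way to check the claim above after a bit-for-bit copy (device names here are just assumptions) is to compare the filesystem UUIDs that blkid reports on source and target:

```shell
# Compare the filesystem UUID of two block devices; succeeds (exit 0)
# when both report a UUID and the UUIDs match.
# /dev/sda2 and /dev/nvme0n1p2 below are only example names.
same_fs_uuid() {
    a=$(blkid -s UUID -o value "$1")
    b=$(blkid -s UUID -o value "$2")
    [ -n "$a" ] && [ "$a" = "$b" ]
}
# e.g. after "dd if=/dev/sda of=/dev/nvme0n1 ...":
#   same_fs_uuid /dev/sda2 /dev/nvme0n1p2 && echo "UUID preserved"
```

    Run as root, since blkid needs to read the devices.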

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Hans@21:1/5 to All on Mon Dec 2 18:10:01 2024
    Hi Greg,
    Depends on what you mean by "clone". If you mean a bit-for-bit copy
    using dd or an equivalent, then you're correct. The file system UUID
    will be copied along with all the other bits of the old file system.
    I mean clone bit by bit. The software I am using is "Clonezilla", which depends on partclone and dd.


    If you mean "create a new file system on the new drive, then rsync
    the files over", then the file system UUID will not be the same. Unless
    of course you specifically go out of your way to copy the UUID as well.
    No, not rsync. That would be an option, but only if the above method
    fails (i.e. the target drive is smaller than the source drive).

    Hans

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bret Busby@21:1/5 to Hans on Mon Dec 2 18:50:01 2024
    On 3/12/24 00:49, Hans wrote:
    Hi folks,

    as my old notebook died, I intend to buy a new notebook.
    The old one has got an SSD drive, the new one an NVMe.

    I want to clone the whole system 1 to 1 to the new NVMe.

    In my /etc/fstab I am using UUID entries instead of /dev/sdX.
    The new one would then have /dev/nvme* entries (that is clear), but since I am using only UUIDs, the question is:

    Will the UUID change when cloning, even if the partition sizes do not change at all? IMHO the UUID will not change, but I am not quite sure.

    When cloning from SSD to SSD this works, but I have no experience cloning from SSD to NVMe.

    Thanks for some quick feedback.

    Best

    Hans


    If you simply clone the system from one hardware system to another, are
    you confident that it will work?

    I expect that the two different hardware systems would require separate
    sets of drivers and configurations for those drivers.

    Also, depending on the operating system and package versions, you could
    end up with a Frankenstein system.

    Will the two primary drives be the same, in terms of total hard drive
    capacity, partition sizes and formatted/usable capacities?

    Will the UEFI partitions on each system, be compatible?

    It seems to me that this would make a mess.

    I believe (and I am no expert; this list has far more knowledgeable
    people than me) that it would be simpler to install the latest versions
    of whatever packages you have/had on your older system onto the new
    system, then create your partitions and copy the data across.

    What you are intending to do, reminds me of a movie that I once watched,
    named Pet Semetary (sic).

    ..
    Bret Busby
    Armadale
    West Australia
    (UTC+0800)
    ..............

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From basti@21:1/5 to All on Mon Dec 2 19:40:02 2024
    Am 02.12.24 um 17:49 schrieb Hans:
    Hi folks,

    as my old notebook died, I intend to buy a new notebook.
    The old one has got an SSD drive, the new one an NVMe.

    I want to clone the whole system 1 to 1 to the new NVMe.

    In my /etc/fstab I am using UUID entries instead of /dev/sdX.
    The new one would then have /dev/nvme* entries (that is clear), but since I am using only UUIDs, the question is:

    Will the UUID change when cloning, even if the partition sizes do not change at all? IMHO the UUID will not change, but I am not quite sure.

    When cloning from SSD to SSD this works, but I have no experience cloning from SSD to NVMe.

    Thanks for some quick feedback.

    Best

    Hans





    If you have LVM, you can create a PV on the NVMe and add it to the VG.
    After that, move the LVs to the new PV and remove the old SSD from the VG.
    Don't forget to update the initramfs.
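
    The steps above could look like this (the VG and device names are assumptions; shown as a dry run that only prints the commands for review):

```shell
# Sketch of the LVM migration described above. vg0, /dev/sda2 and
# /dev/nvme0n1p2 are made-up names -- check yours with vgs/pvs/lsblk.
VG=vg0
OLD_PV=/dev/sda2
NEW_PV=/dev/nvme0n1p2

# Print the command sequence instead of executing it:
cat <<EOF
pvcreate $NEW_PV            # make the NVMe partition a physical volume
vgextend $VG $NEW_PV        # add it to the volume group
pvmove $OLD_PV              # migrate all extents off the old SSD
vgreduce $VG $OLD_PV        # drop the old SSD from the VG
update-initramfs -u         # rebuild the initramfs
EOF
```

    pvmove can run while the system is live; the old SSD can be detached once vgreduce succeeds.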

    Best Regards

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Felix Miata@21:1/5 to All on Mon Dec 2 19:50:01 2024
    Hans composed on 2024-12-02 11:49 (UTC-0500):

    as my old notebook died, I intend to buy a new notebook.
    The old one has got an SSD drive, the new one an NVMe.

    I want to clone the whole system 1 to 1 to the new NVMe.

    In my /etc/fstab I am using UUID entries instead of /dev/sdX.
    The new one would then have /dev/nvme* entries (that is clear), but since I am using only UUIDs, the question is:

    Will the UUID change when cloning, even if the partition sizes do not change at all? IMHO the UUID will not change, but I am not quite sure.

    When cloning from SSD to SSD this works, but I have no experience cloning from SSD to NVMe.

    You may find it necessary to regenerate your UEFI boot entry in NVRAM
    using efibootmgr.

    You would very likely need to add drivers to your initrds first, or else rescue-boot afterwards to rebuild them:

    # inxi -Sd
    System:
    Host: ab250 Kernel: 6.1.0-25-amd64 arch: x86_64 bits: 64
    Console: pty pts/0 Distro: Debian GNU/Linux 12 (bookworm)
    Drives:
    Local Storage: total: 476.94 GiB used: 60.56 GiB (12.7%)
    ID-1: /dev/nvme0n1 vendor: Patriot model: M.2 P300 512GB size: 476.94 GiB
    Optical-1: /dev/sr0 vendor: Optiarc model: DVD RW AD-7200S
    dev-links: cdrom
    Features: speed: 48 multisession: yes audio: yes dvd: yes
    rw: cd-r,cd-rw,dvd-r,dvd-ram
    # lsinitramfs /boot/initrd.img-6.1.0-25-amd64 | egrep 'nvme|ata|ahci|piix'
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme-core.ko
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme.ko
    usr/lib/udev/ata_id
    usr/bin/fatattr
    #

    I forgot to do it first the last time.
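
    Regenerating the NVRAM entry and the initrd might look like this (the disk, partition number and loader path are assumptions for a typical Debian/UEFI setup; printed for review rather than run):

```shell
# Sketch only: print the repair commands instead of executing them.
DISK=/dev/nvme0n1   # assumed new NVMe disk
ESP=1               # assumed EFI system partition number

cat <<EOF
update-initramfs -u -k all   # make sure the nvme driver lands in the initrd
efibootmgr --create --disk $DISK --part $ESP --label debian --loader '\\EFI\\debian\\shimx64.efi'
EOF
```

    If the clone no longer boots, the same two commands can be run from a rescue boot after chrooting into the cloned system.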

    Oh, and I never use UUIDs, only LABELs.
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Erwan David@21:1/5 to All on Mon Dec 2 20:20:01 2024
    On 02/12/2024 at 19:41, Bruno Schneider wrote:
    In my /etc/fstab I am using UUID entries instead of /dev/sdX.
    The new one then would have /dev/nvme* as entries (that is clear), but if I am
    using only UUID, the question:
    I would recommend changing from UUID to labels. That way, all you need
    to worry about is that the new partitions have the same labels as the
    old ones.
    https://wiki.debian.org/fstab#Labels

    On a side note, last time I tried to install Debian on NVME, it
    wouldn't even find the storage device. I hope this has improved since
    then.

    In 2019 I installed Debian on NVMe without any problem...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Hans@21:1/5 to All on Mon Dec 2 20:30:01 2024
    Yes, I read about labels in other Debian threads. What is the advantage of labels over UUIDs? I always thought labels can be easily changed, and then at boot Linux would mount some other partition with the same label.

    But it would be rather difficult to create a partition with the same UUID as an existing one (but a different size and content), except by cloning, of course.

    Using labels seems rather insecure, in my opinion.

    Hans
    In 2019 I installed debian on nvme without any problem...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Hans@21:1/5 to All on Mon Dec 2 20:20:02 2024
    Thank you all for your response.

    Just to explain: I have only "standard" partitions: one each for /boot, /, /usr, /var and /home. Most of them are LUKS encrypted.

    I have done this cloning often over the years. My Debian is rather old (meaning the first install was years ago, but it has of course been upgraded), and over the years I cloned it from a mechanical hard drive to an SSD, then to a bigger SSD, and so on.

    This worked well and without any issues, using Clonezilla and resizing
    intelligently with gparted and resize2fs.

    At first it was a change from /dev/hdaX to /dev/sdaX; this was easily done, until I changed to UUIDs. Even with those, the cloning worked perfectly without any flaws.

    But /dev/hda and /dev/sda are very similar, except for the naming scheme.

    But I have never used NVMe drives before and (shame on me!) do not know much about them. If NVMe drives are just super-fast SSDs, then it will be easy; but if NVMe is completely alien hardware, then I might get into trouble (nothing that cannot be fixed!).

    So I asked here; maybe someone has already done the same thing I intend to do and could give me some clues.

    In the next days I will get my new notebook and will report on my success.

    Maybe it will be helpful for other people, too.

    Have fun!

    Hans

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Hans@21:1/5 to All on Mon Dec 2 20:30:01 2024
    If you simply clone the system from one hardware system to another, are
    you confident that it will work?
    Yes.

    I expect that the two different hardware systems would require separate
    sets of drivers and configurations for those drivers.
    Nope, kernel knows.
    Also, depending on the operating system and packages versions, you could
    end up with a frankenstein system.

    Will the two primary drives be the same, in terms of total hard drive capacity, partition sizes and formatted/usable capacities?

    Yes, they will. But it makes sense to resize the partitions to your needs (using gparted), as the new hard drive is usually bigger than the old one.
    This works without data loss.

    Will the UEFI partitions on each system, be compatible?


    Yes.
    It seems to me, to be making a mess.

    Nope, not if you do it correctly:
    1. Clone.
    2. Resize the partitions to your needs with gparted.
    3. Run resize2fs on all resized partitions.
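
    For step 3, a sketch for one ext4 partition (device and mapping names are assumptions; with LUKS in between, as in this setup, the mapping must be grown before the filesystem):

```shell
# Assumed names: adjust to your own layout (check with lsblk).
PART=/dev/nvme0n1p5    # the partition already grown in gparted
LUKS_NAME=cr_home      # LUKS mapping opened from it

# Print the commands instead of executing them:
cat <<EOF
cryptsetup resize $LUKS_NAME       # grow the LUKS mapping (encrypted only)
e2fsck -f /dev/mapper/$LUKS_NAME   # offline resize2fs wants a clean fs
resize2fs /dev/mapper/$LUKS_NAME   # grow ext4 to fill the space
EOF
```

    For an unencrypted partition, skip the cryptsetup step and run e2fsck/resize2fs directly on the partition device.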

    This also works with a combination of Windows and Linux (I have Windows on my hard drive too, and Linux on multiple partitions, some of them encrypted).

    I believe (and, I am no expert, and, this list will have much more knowledgeable people than me, available) that it would be simpler, to
    install the latest versions and packages of whatever you have/had on
    your older system, on your new system, and, then create your partitions,
    and copy data to corresponding partitions.

    What you are intending to do, reminds me of a movie that I once watched, named Pet Semetary (sic).

    ..
    Bret Busby
    Armadale
    West Australia
    (UTC+0800)
    ..............

    Best

    Hans

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andrew M.A. Cater@21:1/5 to Hans on Mon Dec 2 20:50:01 2024
    On Mon, Dec 02, 2024 at 05:49:18PM +0100, Hans wrote:
    Hi folks,

    as my old notebook died, I intend to buy a new notebook.
    The old one has got an SSD drive, the new one an NVMe.

    I want to clone the whole system 1 to 1 to the new NVME.


    It might be easier to produce a clean new install and then just rsync
    data from the SSD drive to the appropriate directories on the NVME.


    In my /etc/fstab I am using UUID entries instead of /dev/sdX.
    The new one would then have /dev/nvme* entries (that is clear), but since I am using only UUIDs, the question is:

    Will the UUID change when cloning, even if the partition sizes do not change at all? IMHO the UUID will not change, but I am not quite sure.


    I'm fairly sure this was brought up just about at the end of last month.

    When cloning from SSD to SSD this works, but I have no experience cloning from SSD to NVMe.


    Hoping to keep partition sizes etc. identical across drives is hard so it
    does seem easier to just copy data from one drive to the other.

    Thanks for some quick feedback.


    All the very best, as ever,

    Andy
    (amacater@debian.org)

    Best

    Hans





    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to Bret Busby on Mon Dec 2 21:30:01 2024
    Hi,

    On Tue, Dec 03, 2024 at 01:49:12AM +0800, Bret Busby wrote:
    If you simply clone the system from one hardware system to another, are you confident that it will work?

    You clearly aren't, but I think Hans should be, yes.

    Worst case is that Hans ends up with something that doesn't boot, but
    Hans would still have the thing that boots, so this is pretty low risk.

    I expect that the two different hardware systems would require separate sets of drivers and configurations for those drivers.

    Sure. But there is only one NVMe driver in Linux and it will be baked
    into any recent kernel. Hans would have had to go out of their way to
    build a custom kernel that won't work.

    The other potential stumbling block is that sometimes a system's legacy
    BIOS can't boot off of NVMe while its UEFI can. That's entirely outside
    the world of Linux though.

    Also, depending on the operating system and packages versions, you could end up with a frankenstein system.

    There is no mention in Hans's email of different versions of Debian
    being involved here.

    Will the two primary drives be the same, in terms of total hard drive capacity, partition sizes and formatted/usable capacities?

    Will the UEFI partitions on each system, be compatible?

    None of that matters. If it boots now, it will boot afterwards as long
    as the machine supports booting off of NVMe and the kernel has the drivers.

    I believe (and, I am no expert, and, this list will have much more knowledgeable people than me, available) that it would be simpler, to
    install the latest versions and packages of whatever you have/had on your older system, on your new system, and, then create your partitions, and copy data to corresponding partitions.

    Huge waste of time I'm afraid.

    What you are intending to do, reminds me of a movie that I once watched, named Pet Semetary (sic).

    Changing a storage drive reminds you of a horror movie? Okay, it may be
    time to put the Internet away…

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Felix Miata@21:1/5 to All on Mon Dec 2 21:40:02 2024
    Hans composed on 2024-12-02 20:20 (UTC+0100):

    Yes, I read about labels in other Debian threads. What is the advantage of labels over UUIDs? I always thought labels can be easily changed, and then at boot Linux would mount some other partition with the same label.

    But it would be rather difficult to create a partition with the same UUID as an existing one (but a different size and content), except by cloning, of course.

    Using labels seems rather insecure, in my opinion.

    Labels not intended to be unique enough would indeed pose a threat to filesystems.
    Mine are unique enough to pose nominal risk. I typically make up a LABEL based upon
    some substring from the disk's serial and/or model number, a shorthand name for the
    OS/version or the usage of the filesystem, and the partition number: 5-13 characters
    I can remember and type from a Grub prompt, unlike a UUID. Nothing forces the use
    of special characters, upper case, lower case, numbers or the like, as with online
    passwords. Use whatever works in your brain. When cloning, tools are readily
    available to re-unique labels, e.g. tune2fs -L. I clone often, as part of my
    backup strategy.
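
    A sketch of that naming scheme, and of re-uniquing a label after a clone (all values here are invented):

```shell
# Build a label from a model substring, an OS shorthand and the
# partition number, per the scheme described above (values invented).
MODEL_FRAG=P300
OS_TAG=bkwm
PARTNUM=2
LABEL="${MODEL_FRAG}${OS_TAG}${PARTNUM}"
echo "$LABEL"          # 9 characters, easy to type at a Grub prompt
# Apply it to an ext4 filesystem (assumed device name):
#   tune2fs -L "$LABEL" /dev/nvme0n1p2
```

    ext4 labels are limited to 16 bytes, so short fragments like these fit comfortably.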
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to Hans on Mon Dec 2 21:20:01 2024
    Hi,

    [ Beware not making clear that you mean FILESYSTEM labels and UUIDs
    in this thread. It's been a week since we've had massive
    misunderstanding of what filesystem UUIDs are and every mention of
    UUID or LABEL without that context risks invoking a very confused
    person who is prepared to write 100 emails on the subject. ]

    On Mon, Dec 02, 2024 at 08:20:23PM +0100, Hans wrote:
    Yes, I read about labels in other Debian threads. What is the advantage of labels over UUIDs?

    Filesystem labels are easier for humans to read than filesystem UUIDs.

    I always thought labels can be easily changed, and then at
    boot Linux would mount some other partition with the same label.

    I don't really understand your second part, but it is as easy to change a filesystem label as it is to change a filesystem UUID.

    But it would be rather difficult to create a partition with the same UUID as an existing one (but a different size and content), except by cloning, of course.

    It's easy to set a specific filesystem UUID so if you really want to you
    can easily set a new filesystem to have the same UUID as an existing filesystem. Nothing will warn you or stop you. However since it is so
    unnatural to type, perhaps it is less easy to do so *accidentally*.

    I think the distinction would be that it isn't a usual procedure to
    ever *set* a filesystem UUID since they are normally *generated*, whereas
    it is quite common to set a filesystem LABEL.
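
    For ext4, setting a specific (or fresh) filesystem UUID is a one-liner with tune2fs; a sketch, with the device name as an assumption:

```shell
# Generate a valid random UUID from the kernel's interface, then
# (commented out) apply it to a filesystem.
NEW_UUID=$(cat /proc/sys/kernel/random/uuid)
echo "$NEW_UUID"
# Apply to an assumed device:
#   tune2fs -U "$NEW_UUID" /dev/nvme0n1p2
#   tune2fs -U random /dev/nvme0n1p2    # or let tune2fs pick one itself
```

    This is exactly the kind of deliberate step needed to give a new filesystem the UUID of an old one; nothing does it by accident.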

    Using labels seems rather insecure, in my opinion.

    I don't understand why they would be insecure, unless you meant
    "dangerous" and even then, I can only understand it from the point of
    view of it being easier to accidentally set more than one the same.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Hans@21:1/5 to All on Mon Dec 2 21:50:01 2024
    Am Montag, 2. Dezember 2024, 21:18:05 CET schrieb Andy Smith:
    Hi,

    [ Beware not making clear that you mean FILESYSTEM labels and UUIDs
    in this thread. It's been a week since we've had massive
    misunderstanding of what filesystem UUIDs are and every mention of
    UUID or LABEL without that context risks invoking a very confused
    person who is prepared to write 100 emails on the subject. ]

    Hi Andy,

    maybe I understood something incorrectly (because I am German), and my
    meaning of "label" is not your meaning of "label".

    What I understand as a label is the name I give a partition. For example, in gparted I can give a partition any label I want. My
    Windows partition can get a label like "windows", "win11", "shitty_windows" or whatever, and my data partition may be labelled "space1".

    Is that what we are talking about? If yes, then I believe it might be
    easy (or possible with some effort) to plug in a USB drive with a special label, which would then be booted, as the label of the USB drive can be found in /etc/fstab.

    But it would be much more difficult to create a USB drive with the same UUID as one found in /etc/fstab.

    That was my point, but maybe, as said before, we are talking about different kinds of labels.

    Have fun!

    Hans

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to Hans on Mon Dec 2 22:30:01 2024
    Hi,

    On Mon, Dec 02, 2024 at 09:47:05PM +0100, Hans wrote:
    What I understand as a label is the name I give a partition. For example,
    in gparted I can give a partition any label I want. My Windows partition can get a label like "windows", "win11", "shitty_windows" or
    whatever, and my data partition may be labelled "space1".

    Yeah, so, already we are off in the weeds. 🙁 But in that case I'm
    glad I said something!

    Lots of things can have UUIDs and lots of things can have LABELs. Sadly
    when we start to talk about storage a number of those things are
    involved so people get confused easily about which one is being talked
    about.

    I think you're talking about PARTLABELs, which parted refers to as
    "partition names". Those are held inside the GPT and refer to each
    partition independent of the contents of that partition. So you could
    nuke the contents of the partition and it would still show as having
    that PARTLABEL.

    Filesystems can also have labels. They are like filesystem UUIDs: if you destroyed the filesystem you would destroy its label. In fstab you can
    refer to them with LABEL= instead of UUID=. You can set them with a tool
    like "e2label", or at creation time ("mkfs.ext4 -L mylabel …").

    Aside from LABEL confusion there is also UUID confusion, since
    partitions in a GPT will have UUIDs as well! See /dev/disk/by-partuuid/.
    In fstab you can use these GPT features with PARTLABEL= and PARTUUID=.
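
    Side by side in /etc/fstab, the four schemes look like this (the UUID and PARTUUID values are illustrative, and the label names are invented):

```
# <file system>                              <dir>   <type> <options> <dump> <pass>
UUID=5170097f-f1f6-42d8-a2ff-8938cbdfa7be    /       ext4   defaults  0      1
LABEL=home1                                  /home   ext4   defaults  0      2
PARTUUID=b2c58878-02                         /srv    ext4   defaults  0      2
PARTLABEL=scratch                            /mnt/s  ext4   defaults  0      2
```

    The first two identify the filesystem (and survive a dd clone); the last two identify the GPT partition slot, independent of its contents.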

    We have threads here where tens of messages go by before the
    participants realise they are talking about two different kinds of LABEL
    or UUID.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From pocket@homemail.com@21:1/5 to All on Tue Dec 3 03:00:01 2024
    Sent: Monday, December 02, 2024 at 2:40 PM
    From: "Andrew M.A. Cater" <amacater@einval.com>
    To: debian-user@lists.debian.org
    Subject: Re: From SSD to NVME

    On Mon, Dec 02, 2024 at 05:49:18PM +0100, Hans wrote:
    Hi folks,

    as my old notebook died, I intend to buy a new notebook.
    The old one has got an SSD drive, the new one an NVMe.

    I want to clone the whole system 1 to 1 to the new NVME.


    It might be easier to produce a clean new install and then just rsync
    data from the SSD drive to the appropriate directories on the NVME.

    No, it is better that everything comes over all at one time.



    In my /etc/fstab I am using UUID entries instead of /dev/sdX.
    The new one would then have /dev/nvme* entries (that is clear), but since I am
    using only UUIDs, the question is:

    Will the UUID change when cloning, even if the partition sizes do not
    change at all? IMHO the UUID will not change, but I am not quite sure.


    I'm fairly sure this was brought up just about at the end of last month.

    It depends on whether you created a partition table, partitions and filesystems on the drive.

    I create the drive layout on the new drive, then rsync the old drive to the new drive.
    Then I fix up the PARTUUID entries in /etc/fstab and the boot loader.
    If I am using Arch Linux or my own custom-built OS, I have a blank /etc/fstab and /etc/hosts:

    cat /etc/fstab
    # Static information about the filesystems.
    # See fstab(5) for details.

    # <file system> <dir> <type> <options> <dump> <pass>

    cat /etc/hosts
    # Static table lookup for hostnames.
    # See hosts(5) for details.

    [alarm@alarm ~]$ blkid
    /dev/nvme0n1p1: LABEL_FATBOOT="bootfs" LABEL="bootfs" UUID="5A88-04BC" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="b2c58878-01"
    /dev/nvme0n1p2: LABEL="rootfs" UUID="5170097f-f1f6-42d8-a2ff-8938cbdfa7be" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="b2c58878-02"


    When cloning from SSD to SSD this works, but I have no experience cloning from SSD to NVMe.


    Hoping to keep partition sizes etc. identical across drives is hard so it does seem easier to just copy data from one drive to the other.

    dd is your friend

    https://www.howtoforge.com/linux-dd-command-clone-disk-practical-example/
    https://thelinuxcode.com/clone-disk-using-dd-linux/
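
    A minimal version of such a dd clone (device names are assumptions; double-check with lsblk first, since dd overwrites the target without asking):

```shell
SRC=/dev/sda       # old SATA SSD (assumption)
DST=/dev/nvme0n1   # new NVMe drive (assumption)
# Build and print the command for review instead of executing it:
CMD="dd if=$SRC of=$DST bs=4M status=progress conv=fsync"
echo "$CMD"
```

    bs=4M speeds up the copy, status=progress shows throughput, and conv=fsync makes dd flush the target before exiting.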

    $ ls -l /dev/nvme0*
    crw------- 1 root root 245, 0 Dec 2 16:14 /dev/nvme0
    brw-rw---- 1 root disk 259, 0 Dec 2 16:14 /dev/nvme0n1
    brw-rw---- 1 root disk 259, 1 Dec 2 16:14 /dev/nvme0n1p1
    brw-rw---- 1 root disk 259, 2 Dec 2 16:14 /dev/nvme0n1p2

    https://wiki.archlinux.org/title/Solid_state_drive/NVMe
    https://www.linuxoperatingsystem.net/nvme-command-line-in-linux-a-deep-guide-for-beginners-and-advanced-users/
    https://superuser.com/questions/1449499/why-does-linux-list-nvme-drives-as-dev-nvme0-instead-of-dev-sda

    --
    Hindi madali ang maging ako

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Christensen@21:1/5 to Hans on Tue Dec 3 05:50:01 2024
    On 12/2/24 08:49, Hans wrote:
    Hi folks,

    as my old notebook died, I intend to buy a new notebook.
    The old one has got an SSD drive, the new one an NVMe.

    I want to clone the whole system 1 to 1 to the new NVMe.

    In my /etc/fstab I am using UUID entries instead of /dev/sdX.
    The new one would then have /dev/nvme* entries (that is clear), but since I am using only UUIDs, the question is:

    Will the UUID change when cloning, even if the partition sizes do not change at all? IMHO the UUID will not change, but I am not quite sure.

    When cloning from SSD to SSD this works, but I have no experience cloning from SSD to NVMe.

    Thanks for some quick feedback.

    Best

    Hans


    On 12/2/24 08:59, Hans wrote:
    <snip>
    I mean clone bit by bit. The software I am using is "Clonezilla",
    which depends on partclone and dd.

    <snip>
    No, not rsync. That would be an option, but only if the above method fails (i.e. the target drive is smaller than the source drive).


    On 12/2/24 11:14, Hans wrote:
    Thank you all for your response.

    Just to explain: I have only "standard" partitions: one each for /boot, /, /usr, /var and /home. Most of them are LUKS encrypted.

    I have done this cloning often over the years. My Debian is rather old (meaning the first install was years ago, but it has of course been upgraded), and over the years I cloned it from a mechanical hard drive to an SSD, then to a bigger SSD, and so on.

    This worked well and without any issues, using Clonezilla and resizing intelligently with gparted and resize2fs.

    At first it was a change from /dev/hdaX to /dev/sdaX; this was easily done, until I changed to UUIDs. Even with those, the cloning worked perfectly without any flaws.

    But /dev/hda and /dev/sda are very similar, except for the naming scheme.

    But I have never used NVMe drives before and (shame on me!) do not know much about them. If NVMe drives are just super-fast SSDs, then it will be easy; but if NVMe is completely alien hardware, then I might get into trouble (nothing that cannot be fixed!).

    So I asked here; maybe someone has already done the same thing I intend to do and could give me some clues.

    In the next days I will get my new notebook and will report on my success.

    Maybe it will be helpful for other people, too.


    On 12/2/24 11:20, Hans wrote:
    Yes, I read about labels in other Debian threads. What is the advantage of labels over UUIDs? I always thought labels can be easily changed, and then at boot Linux would mount some other partition with the same label.

    But it would be rather difficult to create a partition with the same UUID as an existing one (but a different size and content), except by cloning, of course.

    Using labels seems rather insecure, in my opinion.


    I suspect your old laptop uses BIOS firmware, the SSD is SATA and uses
    MBR partitioning, and the Debian instance on the old SSD contains
    bootloaders for BIOS.


    New laptops are going to use UEFI firmware and the NVMe SSD is going to
    use GPT partitioning. Some new firmware have a "compatibility" mode
    that allows them to boot old-style BIOS/MBR disks; your new laptop may
    or may not have this feature. If it does, a USB-SATA adapter cable may
    allow you to boot your old laptop SSD on your new laptop (which could
    solve any immediate needs that you have).


    That said, I think your best option is to remove the SATA SSD from your
    old laptop and get a USB-SATA adapter cable. When the new laptop comes
    in, use Clonezilla to back up the old laptop SATA SSD and to back up the
    new NVMe SSD. Then reset the firmware to factory defaults, boot the
    Debian installer, do a secure erase of the NVMe SSD, and do a fresh
    install of Debian onto the NVMe SSD. This approach has the best chance
    of giving you a Debian installation that is compatible with your new
    laptop and that is performant. Once Debian is running on the NVMe SSD,
    use the USB-SATA adapter to connect the old SSD and copy over the files
    you want (via a GUI file manager, rsync(1), or whatever).


    David

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From tomas@tuxteam.de@21:1/5 to Andy Smith on Tue Dec 3 06:40:01 2024
    On Mon, Dec 02, 2024 at 09:22:05PM +0000, Andy Smith wrote:
    Hi,

    On Mon, Dec 02, 2024 at 09:47:05PM +0100, Hans wrote:
    That, what i understand as label is the name, I give a partition. For example,
    in gparted, I can give a partition a label like I want. For example, my Windows partition can get a label like "windows", "win11", "shitty_windows" or
    whatever, or my datapartition maybe labelled "space1".

    Yeah, so, already we are off in the weeds. 🙁 But in that case I'm
    glad I said something!

    "There are several levels of labels" (TM)

    ;-)

    (and of course, of UUIDs and things)

    Cheers
    --
    t


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andrew M.A. Cater@21:1/5 to pocket@homemail.com on Tue Dec 3 10:00:01 2024
    On Tue, Dec 03, 2024 at 02:55:07AM +0100, pocket@homemail.com wrote:


    It might be easier to produce a clean new install and then just rsync
    data from the SSD drive to the appropriate directories on the NVME.

    No it is better that everything comes over all at one time


    As someone else has put it elsewhere in the thread: new laptop means
    new drivers, potentially moving from legacy MBR to UEFI ... easier in
    many ways to put a clean install of Debian on from new media to start
    with (also wiping out whatever was there before if it came preinstalled
    with Windows or whatever).


    I'm fairly sure this was brought up just about at the end of last month.

    It depends upon if you created a partition table, partitions and filesystems on the drive.

    I create the drive layout on the drive then rsync the old drive to the new drive.
    Then I fixup the PARTUUID in the /etc/fstab and boot loader.
    If I am using Archlinux or my own custom build os I have a blank /etc/fstab and /etc/hosts
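    The fix-up step described above can be sketched as a plain text substitution on the copied fstab. The UUID values in the usage line are invented placeholders; on a real system both would be read off blkid(8), and the boot loader configuration needs the same treatment:

```shell
#!/bin/sh
# Sketch: after an rsync-style copy, rewrite /etc/fstab on the target
# so it references the new drive's filesystem UUID. Old and new values
# come from blkid(8) output; nothing here is auto-detected.
set -eu

fix_fstab() {
    fstab=$1; old=$2; new=$3
    # Replace every occurrence of the old UUID in place (GNU sed -i).
    sed -i "s/$old/$new/g" "$fstab"
}
```

    For example `fix_fstab /mnt/target/etc/fstab aaaa-1111 bbbb-2222` (placeholder UUIDs). A dd/Clonezilla clone skips this step entirely, since the filesystem UUIDs are copied along with everything else.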

    There's more than one way to do it: if you absolutely know what partition
    sizes you want, maybe - LVM and one partition is a fairly sensible starting point because partitions will grow and shrink, for example.

    cat /etc/fstab
    # Static information about the filesystems.
    # See fstab(5) for details.

    # <file system> <dir> <type> <options> <dump> <pass>

    cat /etc/hosts
    # Static table lookup for hostnames.
    # See hosts(5) for details.

    [alarm@alarm ~]$ blkid
    /dev/nvme0n1p1: LABEL_FATBOOT="bootfs" LABEL="bootfs" UUID="5A88-04BC" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="b2c58878-01"
    /dev/nvme0n1p2: LABEL="rootfs" UUID="5170097f-f1f6-42d8-a2ff-8938cbdfa7be" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="b2c58878-02"


    Hoping to keep partition sizes etc. identical across drives is hard so it does seem easier to just copy data from one drive to the other.

    dd is your friend

    https://www.howtoforge.com/linux-dd-command-clone-disk-practical-example/
    https://thelinuxcode.com/clone-disk-using-dd-linux/


    dd is your friend if you know _exactly_ what you are doing :)
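    For reference, the bit-for-bit step itself is short. The demo below runs it on two scratch files standing in for the drives, since with real device nodes a swapped if=/of= destroys the source:

```shell
#!/bin/sh
# Sketch of a bit-for-bit clone with dd. On real hardware src/dst would
# be device nodes such as /dev/sda and /dev/nvme0n1 (double-check them:
# dd overwrites the target without asking). Demonstrated on files here.
set -eu

clone_disk() {
    src=$1; dst=$2
    # bs=1M for throughput; conv=fsync flushes the target on completion.
    dd if="$src" of="$dst" bs=1M conv=fsync status=none
}
```

    Because the copy includes all filesystem metadata, the UUIDs on the target come out identical to the source, which is why a dd or Clonezilla clone needs no /etc/fstab edits.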



    As ever, the right way is what works for your requirements: sometimes
    people need something straightforward to get them started. Making
    work for yourself at the outset needs to be justified by saving time
    later on, perhaps.

    All the very best, as ever,

    Andy
    (amacater@debian.org)

    --
    Hindi madali ang maging ako


  • From pocket@homemail.com@21:1/5 to All on Tue Dec 3 12:10:01 2024
    Sent: Tuesday, December 03, 2024 at 3:52 AM
    From: "Andrew M.A. Cater" <amacater@einval.com>
    To: debian-user@lists.debian.org
    Subject: Re: From SSD to NVME

    On Tue, Dec 03, 2024 at 02:55:07AM +0100, pocket@homemail.com wrote:


    It might be easier to produce a clean new install and then just rsync data from the SSD drive to the appropriate directories on the NVME.

    No it is better that everything comes over all at one time


    As someone else has put it elsewhere in the thread: new laptop means
    new drivers, potentially moving from legacy MBR to UEFI ... easier in
    many ways to put a clean install of Debian on from new media to start
    with (also wiping out whatever was there before if it came preinstalled
    with Windows or whatever).

    Little to none of that is relevant. The "drivers" are part of the kernel; it is not 1995 anymore.
    I go from MBR to GPT to UEFI all the time.

    Is it nice to think that debian still has the microsoft mindset?

    A "clean" install is not really required on a modern linux system.
    Linux is not microsoft windows.

    When people state the above it really just shows they don't understand Linux.



    I'm fairly sure this was brought up just about at the end of last month.

    It depends upon if you created a partition table, partitions and filesystems on the drive.

    I create the drive layout on the drive then rsync the old drive to the new drive.
    Then I fixup the PARTUUID in the /etc/fstab and boot loader.
    If I am using Archlinux or my own custom build os I have a blank /etc/fstab and /etc/hosts

    There's more than one way to do it: if you absolutely know what partition sizes you want, maybe - LVM and one partition is a fairly sensible starting point because partitions will grow and shrink, for example.

    Nonsense, using/building distributions and running Linux since 1995, partitions don't grow and shrink.
    They are static. That is an outdated concept.

    I have "cloned" my last 15 installs from a USB drive to another drive, starting from an image on a USB drive so old I don't recall its age, to a new sdcard/HDD/SSD/NVME drive, then updating with the package manager, and never had an issue. Debian is
    the only distribution that I have used that has a problem doing that.

    I have on occasion installed from a package manager (not dpkg/apt) to a systemd nspawn container then copied that to a new drive properly prepared of course. Put the drive into a new machine and boot and done. It just works.

    It is not rocket science nor brain surgery and it is not hard at all to do.

    The only time it seems to be an issue is when using debian.

    Using Archlinux for instance you can (and I have) use an install image years old and simply do an update and you are done. It is all up to date. Rolling releases is how it is done.

    Knowing how Linux boots and works is a help. I don't know every concept behind linux but I do know how to build a system from scratch (building all the packages myself as in cross compiling from AMD64 to ARM) and then get it to boot. Tedious but not
    really difficult/hard.

    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot
    drwxr-xr-x 17 root root 3980 Dec 2 10:38 dev
    drwxr-xr-x 52 root root 4096 Dec 2 20:58 etc
    drwxr-xr-x 3 root root 4096 Aug 19 03:43 home
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 lib -> usr/lib
    drwx------ 2 root root 16384 May 15 2024 lost+found
    drwxr-xr-x 2 root root 4096 Sep 14 12:01 mnt
    drwxr-xr-x 2 root root 4096 Apr 7 2024 opt
    dr-xr-xr-x 247 root root 0 Dec 31 1969 proc
    drwxr-x--- 6 root root 4096 Dec 2 13:06 root
    drwxr-xr-x 22 root root 640 Dec 2 10:38 run
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 sbin -> usr/bin
    drwxr-xr-x 36 root alarm 4096 Nov 29 12:42 srv
    dr-xr-xr-x 12 root root 0 Dec 31 1969 sys
    drwxrwxrwt 13 root root 260 Dec 3 00:00 tmp
    drwxr-xr-x 8 root root 4096 Dec 2 20:58 usr
    drwxr-xr-x 13 root root 4096 Nov 30 17:21 var

    Notice sbin is a symlink to /usr/bin


    cat /etc/fstab
    # Static information about the filesystems.
    # See fstab(5) for details.

    # <file system> <dir> <type> <options> <dump> <pass>

    cat /etc/hosts
    # Static table lookup for hostnames.
    # See hosts(5) for details.

    [alarm@alarm ~]$ blkid
    /dev/nvme0n1p1: LABEL_FATBOOT="bootfs" LABEL="bootfs" UUID="5A88-04BC" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="b2c58878-01"
    /dev/nvme0n1p2: LABEL="rootfs" UUID="5170097f-f1f6-42d8-a2ff-8938cbdfa7be" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="b2c58878-02"


    Hoping to keep partition sizes etc. identical across drives is hard so it does seem easier to just copy data from one drive to the other.

    dd is your friend

    https://www.howtoforge.com/linux-dd-command-clone-disk-practical-example/
    https://thelinuxcode.com/clone-disk-using-dd-linux/


    dd is your friend if you know _exactly_ what you are doing :)

    No, you just need to pay attention and follow directions; no voodoo required.




    As ever, the right way is what works for your requirements: sometimes
    people need something straightforward to get them started. Making
    work for yourself at the outset needs to be justified by saving time
    later on, perhaps.

    Your standard disclaimer.

    A "new" install is a new install. How Linux boots and runs is well known.
    DDG is your friend.



    All the very best, as ever,

    Not really


    Andy
    (amacater@debian.org)


    --
    Hindi madali ang maging ako

  • From tomas@tuxteam.de@21:1/5 to pocket@homemail.com on Tue Dec 3 12:40:02 2024
    On Tue, Dec 03, 2024 at 12:01:15PM +0100, pocket@homemail.com wrote:

    [...]

    When people state the above it really just shows they don't understand Linux.

    I'd guess Andrew Cater understands Linux better than you and
    me taken together, but hey. He's been contributing to Debian
    since (at least) late 1990s.

    Disagreeing is OK. Disparaging is... risky.

    Cheers
    --
    t


  • From Greg Wooledge@21:1/5 to pocket@homemail.com on Tue Dec 3 13:20:01 2024
    On Tue, Dec 03, 2024 at 12:01:15 +0100, pocket@homemail.com wrote:
    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot
    drwxr-xr-x 17 root root 3980 Dec 2 10:38 dev
    drwxr-xr-x 52 root root 4096 Dec 2 20:58 etc
    drwxr-xr-x 3 root root 4096 Aug 19 03:43 home
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 lib -> usr/lib
    drwx------ 2 root root 16384 May 15 2024 lost+found
    drwxr-xr-x 2 root root 4096 Sep 14 12:01 mnt
    drwxr-xr-x 2 root root 4096 Apr 7 2024 opt
    dr-xr-xr-x 247 root root 0 Dec 31 1969 proc
    drwxr-x--- 6 root root 4096 Dec 2 13:06 root
    drwxr-xr-x 22 root root 640 Dec 2 10:38 run
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 sbin -> usr/bin
    drwxr-xr-x 36 root alarm 4096 Nov 29 12:42 srv
    dr-xr-xr-x 12 root root 0 Dec 31 1969 sys
    drwxrwxrwt 13 root root 260 Dec 3 00:00 tmp
    drwxr-xr-x 8 root root 4096 Dec 2 20:58 usr
    drwxr-xr-x 13 root root 4096 Nov 30 17:21 var

    Notice sbin is a symlink to /usr/bin

    That's not how Debian 12 has it.

    hobbit:~$ ls -ld /sbin /bin
    lrwxrwxrwx 1 root root 7 Feb 17 2024 /bin -> usr/bin/
    lrwxrwxrwx 1 root root 8 Feb 17 2024 /sbin -> usr/sbin/

    If Trixie has done an "sbin merge", it's news to me.
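    A quick way to check which merge any given system has is to resolve the links directly (a generic sketch; the output differs per distribution, which is exactly the point of the comparison above):

```shell
#!/bin/sh
# Print where /bin and /sbin lead: the symlink target if merged,
# or a note that the path is still a real directory.
set -eu

for d in /bin /sbin; do
    if [ -L "$d" ]; then
        printf '%s -> %s\n' "$d" "$(readlink "$d")"
    else
        printf '%s is a real directory\n' "$d"
    fi
done
```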

  • From Nicolas George@21:1/5 to All on Tue Dec 3 14:00:02 2024
    pocket@homemail.com (12024-12-03):
    Why hasn't debian done so?

    Because polluting the completion namespace with commands useful once in
    a blue moon for administrators is a stupid idea.

    Regards,

    --
    Nicolas George

  • From pocket@homemail.com@21:1/5 to All on Tue Dec 3 13:40:01 2024
    Sent: Tuesday, December 03, 2024 at 7:15 AM
    From: "Greg Wooledge" <greg@wooledge.org>
    To: debian-user@lists.debian.org
    Subject: Re: From SSD to NVME

    On Tue, Dec 03, 2024 at 12:01:15 +0100, pocket@homemail.com wrote:
    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot
    drwxr-xr-x 17 root root 3980 Dec 2 10:38 dev
    drwxr-xr-x 52 root root 4096 Dec 2 20:58 etc
    drwxr-xr-x 3 root root 4096 Aug 19 03:43 home
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 lib -> usr/lib
    drwx------ 2 root root 16384 May 15 2024 lost+found
    drwxr-xr-x 2 root root 4096 Sep 14 12:01 mnt
    drwxr-xr-x 2 root root 4096 Apr 7 2024 opt
    dr-xr-xr-x 247 root root 0 Dec 31 1969 proc
    drwxr-x--- 6 root root 4096 Dec 2 13:06 root
    drwxr-xr-x 22 root root 640 Dec 2 10:38 run
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 sbin -> usr/bin
    drwxr-xr-x 36 root alarm 4096 Nov 29 12:42 srv
    dr-xr-xr-x 12 root root 0 Dec 31 1969 sys
    drwxrwxrwt 13 root root 260 Dec 3 00:00 tmp
    drwxr-xr-x 8 root root 4096 Dec 2 20:58 usr
    drwxr-xr-x 13 root root 4096 Nov 30 17:21 var

    Notice sbin is a symlink to /usr/bin

    That's not how Debian 12 has it.

    hobbit:~$ ls -ld /sbin /bin
    lrwxrwxrwx 1 root root 7 Feb 17 2024 /bin -> usr/bin/
    lrwxrwxrwx 1 root root 8 Feb 17 2024 /sbin -> usr/sbin/

    If Trixie has done an "sbin merge", it's news to me.

    I get it.

    The above was from an Arch linux server, I do the same on my custom distribution/OS.

    Why hasn't debian done so?

    /sbin and /usr/sbin should be sym linked to /usr/bin.
    In fact on a debian system I make it so........

    --
    Hindi madali ang maging ako

  • From pocket@homemail.com@21:1/5 to All on Tue Dec 3 14:10:01 2024
    Sent: Tuesday, December 03, 2024 at 7:50 AM
    From: "Nicolas George" <george@nsup.org>
    To: debian-user@lists.debian.org
    Subject: Re: From SSD to NVME

    pocket@homemail.com (12024-12-03):
    Why hasn't debian done so?

    Because polluting the completion namespace with commands useful once in
    a blue moon for administrators is a stupid idea.

    What namespace would that be? It's just a filesystem.

    How so?

    It doesn't pollute anything.

    Nothing prevents a user from running anything that is in /sbin or /usr/sbin, except permissions.

    I am sure the script kiddies won't find any binaries in /sbin /usr/sbin, as they are off limits to a "normal user"..........................

  • From Nicolas George@21:1/5 to All on Tue Dec 3 14:30:01 2024
    pocket@homemail.com (12024-12-03):
    What namespace would that be

    I just said it: the namespace for completion.

    --
    Nicolas George

  • From pocket@homemail.com@21:1/5 to All on Tue Dec 3 15:30:01 2024
    Sent: Tuesday, December 03, 2024 at 8:22 AM
    From: "Nicolas George" <george@nsup.org>
    To: debian-user@lists.debian.org
    Subject: Re: From SSD to NVME

    pocket@homemail.com (12024-12-03):
    What namespace would that be

    I just said it: the namespace for completion.

    --
    Nicolas George



    [alarm@alarm ~]$ pacman -Q|grep bash
    bash 5.2.037-1

    dpkg -l|grep bash
    ii bash 5.2.15-2+b7 arm64 GNU Bourne Again SHell

    How did that happen?

  • From Felix Miata@21:1/5 to All on Tue Dec 3 15:30:01 2024
    pocket@homemail.com composed on 2024-12-03 12:01 (UTC+0100):

    As someone else has put it elsewhere in the thread: new laptop means
    new drivers, potentially moving from legacy MBR to UEFI ... easier in
    many ways to put a clean install of Debian on from new media to start
    with (also wiping out whatever was there before if it came preinstalled
    with Windows or whatever).

    None to little of that is relevant. The "drivers" are part of the kernel

    Sort of:
    # inxi -Sd
    System:
    Host: ab250 Kernel: 6.1.0-25-amd64 arch: x86_64 bits: 64
    Console: pty pts/0 Distro: Debian GNU/Linux 12 (bookworm)
    Drives:
    Local Storage: total: 476.94 GiB used: 60.56 GiB (12.7%)
    ID-1: /dev/nvme0n1 vendor: Patriot model: M.2 P300 512GB size: 476.94 GiB

    # lsinitramfs /boot/initrd.img-6.1.0-25-amd64 | egrep 'nvme|ata|ahci|piix'
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme-core.ko
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme.ko
    usr/lib/udev/ata_id
    usr/bin/fatattr
    #
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

  • From Felix Miata@21:1/5 to All on Tue Dec 3 15:50:01 2024
    Greg Wooledge composed on 2024-12-03 07:15 (UTC-0500):

    On Tue, Dec 03, 2024 at 12:01:15 +0100, pocket wrote:

    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot
    drwxr-xr-x 17 root root 3980 Dec 2 10:38 dev
    drwxr-xr-x 52 root root 4096 Dec 2 20:58 etc
    drwxr-xr-x 3 root root 4096 Aug 19 03:43 home
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 lib -> usr/lib
    drwx------ 2 root root 16384 May 15 2024 lost+found
    drwxr-xr-x 2 root root 4096 Sep 14 12:01 mnt
    drwxr-xr-x 2 root root 4096 Apr 7 2024 opt
    dr-xr-xr-x 247 root root 0 Dec 31 1969 proc
    drwxr-x--- 6 root root 4096 Dec 2 13:06 root
    drwxr-xr-x 22 root root 640 Dec 2 10:38 run
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 sbin -> usr/bin
    drwxr-xr-x 36 root alarm 4096 Nov 29 12:42 srv
    dr-xr-xr-x 12 root root 0 Dec 31 1969 sys
    drwxrwxrwt 13 root root 260 Dec 3 00:00 tmp
    drwxr-xr-x 8 root root 4096 Dec 2 20:58 usr
    drwxr-xr-x 13 root root 4096 Nov 30 17:21 var

    Notice sbin is a symlink to /usr/bin

    That's not how Debian 12 has it.

    hobbit:~$ ls -ld /sbin /bin
    lrwxrwxrwx 1 root root 7 Feb 17 2024 /bin -> usr/bin/
    lrwxrwxrwx 1 root root 8 Feb 17 2024 /sbin -> usr/sbin/

    If Trixie has done an "sbin merge", it's news to me.

    Bookworm looks to be one of the last of the Mohicans:
    # grep RETT /etc/os-release
    PRETTY_NAME="Debian GNU/Linux trixie/sid"
    # ls -gGh /
    total 186K
    lrwxrwxrwx 1 7 Oct 16 2022 bin -> usr/bin
    drwxr-xr-x 4 5.0K Dec 2 18:29 boot
    drwxr-xr-x 18 3.8K Dec 3 09:25 dev
    drwxr-xr-x 110 10K Dec 2 18:28 etc
    drwxr-xr-x 14 4.0K Dec 1 20:35 home
    lrwxrwxrwx 1 29 Dec 2 18:28 initrd.img -> boot/initrd.img-6.11.10-amd64
    lrwxrwxrwx 1 29 Dec 2 18:28 initrd.img.old -> boot/initrd.img-6.10.12-amd64
    lrwxrwxrwx 1 7 Oct 16 2022 lib -> usr/lib
    lrwxrwxrwx 1 9 Oct 16 2022 lib64 -> usr/lib64
    drwx------ 2 12K Jun 24 2018 lost+found
    drwxr-xr-x 4 1.0K Feb 13 2020 media
    drwxr-xr-x 2 1.0K Jun 24 2018 mnt
    drwxr-xr-x 3 1.0K Jun 24 2018 opt
    dr-xr-xr-x 246 0 Dec 3 04:25 proc
    drwx------ 17 5.0K Dec 3 09:26 root
    drwxr-xr-x 22 680 Dec 3 09:25 run
    lrwxrwxrwx 1 8 Oct 16 2022 sbin -> usr/sbin
    drwxr-xr-x 2 1.0K Jun 24 2018 srv
    dr-xr-xr-x 13 0 Dec 3 04:25 sys
    drwxrwxrwt 9 180 Dec 3 09:26 tmp
    drwxr-xr-x 12 1.0K Sep 21 2023 usr
    drwxr-xr-x 11 1.0K Dec 16 2023 var
    lrwxrwxrwx 1 26 Dec 2 18:28 vmlinuz -> boot/vmlinuz-6.11.10-amd64
    lrwxrwxrwx 1 26 Dec 2 18:28 vmlinuz.old -> boot/vmlinuz-6.10.12-amd64
    #

    # grep RETT /etc/os-release
    PRETTY_NAME="Ubuntu 24.04.1 LTS"
    # ls -gGh /
    total 187K
    lrwxrwxrwx 1 7 Dec 8 2023 bin -> usr/bin
    drwxr-xr-x 2 1.0K Feb 26 2024 bin.usr-is-merged
    drwxrwxr-x 4 3.0K Dec 2 22:28 boot
    drwxr-xr-x 18 4.7K Dec 3 09:34 dev
    drwxr-xr-x 125 9.0K Dec 2 22:26 etc
    drwxr-xr-x 14 4.0K Dec 1 20:35 home
    lrwxrwxrwx 1 32 Dec 2 22:26 initrd.img -> boot/initrd.img-6.8.0-49-generic
    lrwxrwxrwx 1 32 Dec 2 22:26 initrd.img.old -> boot/initrd.img-6.8.0-45-generic
    drwxr-xr-x 2 1.0K Jan 22 2019 kicc
    lrwxrwxrwx 1 7 Dec 8 2023 lib -> usr/lib
    lrwxrwxrwx 1 9 Dec 8 2023 lib64 -> usr/lib64
    drwxr-xr-x 2 1.0K Feb 26 2024 lib.usr-is-merged
    drwx------ 2 12K Jun 25 2018 lost+found
    drwxr-xr-x 2 1.0K Jun 26 2018 media
    drwxr-xr-x 2 1.0K Jun 26 2018 mnt
    drwxr-xr-x 3 1.0K Jun 26 2018 opt
    dr-xr-xr-x 260 0 Dec 3 04:34 proc
    drwxrwxr-x 6 136K Dec 2 22:31 pub
    drwx------ 19 4.0K Dec 3 09:35 root
    drwxr-xr-x 22 740 Dec 3 09:35 run
    lrwxrwxrwx 1 8 Dec 8 2023 sbin -> usr/sbin
    drwxr-xr-x 2 1.0K Feb 2 2024 sbin.usr-is-merged
    drwxr-xr-x 2 1.0K Jun 26 2018 srv
    dr-xr-xr-x 13 0 Dec 3 04:34 sys
    drwxrwxrwt 8 1.0K Dec 3 09:34 tmp
    drwxr-xr-x 12 1.0K Dec 8 2023 usr
    drwxr-xr-x 11 1.0K Mar 20 2024 var
    lrwxrwxrwx 1 29 Dec 2 22:26 vmlinuz -> boot/vmlinuz-6.8.0-49-generic
    lrwxrwxrwx 1 29 Dec 2 22:26 vmlinuz.old -> boot/vmlinuz-6.8.0-45-generic
    #

    # grep RETT /etc/os-release
    PRETTY_NAME="KDE neon 6.2"
    root@ab250:~# ls -gGh /
    total 187K
    lrwxrwxrwx 1 7 Jun 2 2024 bin -> usr/bin
    drwxr-xr-x 2 1.0K Mar 31 2024 bin.usr-is-merged
    drwxrwxr-x 3 1.0K Oct 10 15:11 boot
    drwxr-xr-x 19 4.7K Dec 3 09:37 dev
    drwxr-xr-x 135 10K Dec 2 19:01 etc
    drwxr-xr-x 14 4.0K Dec 1 20:35 home
    lrwxrwxrwx 1 32 Oct 10 15:09 initrd.img -> boot/initrd.img-6.8.0-45-generic
    lrwxrwxrwx 1 32 Oct 10 15:09 initrd.img.old -> boot/initrd.img-6.8.0-38-generic
    drwxr-xr-x 2 1.0K Jan 22 2019 kicc
    lrwxrwxrwx 1 7 Jun 2 2024 lib -> usr/lib
    lrwxrwxrwx 1 9 Jun 2 2024 lib32 -> usr/lib32
    lrwxrwxrwx 1 9 Jun 2 2024 lib64 -> usr/lib64
    drwxr-xr-x 2 1.0K Apr 7 2024 lib.usr-is-merged
    lrwxrwxrwx 1 10 Jun 2 2024 libx32 -> usr/libx32
    drwx------ 2 12K Jun 25 2018 lost+found
    drwxr-xr-x 2 1.0K Jun 26 2018 media
    drwxr-xr-x 2 1.0K Jun 26 2018 mnt
    drwxr-xr-x 3 1.0K Jun 26 2018 opt
    dr-xr-xr-x 259 0 Dec 3 04:37 proc
    drwx------ 18 4.0K Dec 3 09:37 root
    drwxr-xr-x 27 880 Dec 3 09:37 run
    lrwxrwxrwx 1 8 Jun 2 2024 sbin -> usr/sbin
    drwxr-xr-x 2 1.0K Aug 20 2022 sbin.usr-is-merged
    drwxr-xr-x 2 1.0K Jun 26 2018 srv
    dr-xr-xr-x 13 0 Dec 3 04:37 sys
    drwxrwxrwt 8 1.0K Dec 3 09:37 tmp
    drwxr-xr-x 14 1.0K Jun 2 2024 usr
    drwxr-xr-x 11 1.0K Jul 22 20:45 var
    lrwxrwxrwx 1 29 Oct 10 15:09 vmlinuz -> boot/vmlinuz-6.8.0-45-generic
    lrwxrwxrwx 1 29 Oct 10 15:09 vmlinuz.old -> boot/vmlinuz-6.8.0-38-generic
    drwx------ 2 1.0K Oct 10 17:51 x-large
    #

    # grep RETT /etc/os-release
    PRETTY_NAME="openSUSE Tumbleweed"
    # ls -gGh /
    total 236K
    lrwxrwxrwx 1 7 Sep 25 15:10 bin -> usr/bin
    dr-xr-xr-x 4 16K Dec 2 03:00 boot
    drwxr-xr-x 17 4.5K Dec 3 09:39 dev
    drwxr-xr-x 102 12K Dec 2 03:03 etc
    drwxr-xr-x 14 4.0K Dec 1 20:35 home
    lrwxrwxrwx 1 7 Sep 25 15:10 lib -> usr/lib
    lrwxrwxrwx 1 9 Sep 25 15:10 lib64 -> usr/lib64
    drwx------ 2 16K Jun 20 2018 lost+found
    dr-xr-xr-x 2 4.0K Jun 21 2018 mnt
    dr-xr-xr-x 3 4.0K Jun 9 2021 opt
    dr-xr-xr-x 224 0 Dec 3 04:39 proc
    drwxrwxr-x 6 136K Dec 2 22:31 pub
    drwx------ 21 12K Dec 2 03:18 root
    drwxr-xr-x 28 660 Dec 3 09:39 run
    lrwxrwxrwx 1 8 Sep 25 15:10 sbin -> usr/sbin
    dr-xr-xr-x 3 4.0K Jun 9 2021 srv
    dr-xr-xr-x 13 0 Dec 3 04:39 sys
    drwxrwxrwt 10 200 Dec 3 09:39 tmp
    drwxr-xr-x 13 4.0K Sep 25 15:10 usr
    drwxr-xr-x 10 4.0K Oct 23 20:06 var
    #

    # grep RETT /etc/os-release
    PRETTY_NAME="Fedora Linux 39 (Thirty Nine)"
    # ls -gGh /
    total 202K
    dr-xr-xr-x 2 2.0K Jul 20 2023 afs
    lrwxrwxrwx 1 7 Jul 20 2023 bin -> usr/bin
    dr-xr-xr-x 5 6.0K Oct 30 19:14 boot
    drwxr-xr-x 20 4.4K Dec 3 09:41 dev
    drwxr-xr-x 111 10K Oct 30 19:12 etc
    drwxr-xr-x 14 4.0K Dec 1 20:35 home
    lrwxrwxrwx 1 7 Jul 20 2023 lib -> usr/lib
    lrwxrwxrwx 1 9 Jul 20 2023 lib64 -> usr/lib64
    drwx------ 2 16K Jan 21 2020 lost+found
    drwxr-xr-x 2 2.0K Jul 20 2023 media
    drwxr-xr-x 2 2.0K Jul 20 2023 mnt
    drwxr-xr-x 3 2.0K Jul 20 2023 opt
    dr-xr-xr-x 224 0 Dec 3 04:41 proc
    dr-xr-x--- 18 6.0K Dec 3 09:41 root
    drwxr-xr-x 26 760 Dec 3 09:41 run
    lrwxrwxrwx 1 8 Jul 20 2023 sbin -> usr/sbin
    drwxr-xr-x 2 2.0K Jul 20 2023 srv
    dr-xr-xr-x 13 0 Dec 3 04:41 sys
    drwxrwxrwt 11 220 Dec 3 09:41 tmp
    drwxr-xr-x 12 2.0K Sep 20 2023 usr
    drwxr-xr-x 18 2.0K Sep 20 2023 var
    #
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nicolas George@21:1/5 to All on Tue Dec 3 16:00:01 2024
    pocket@homemail.com (12024-12-03):
    [alarm@alarm ~]$ pacman -Q|grep bash
    bash 5.2.037-1

    dpkg -l|grep bash
    ii bash 5.2.15-2+b7 arm64 GNU Bourne Again SHell

    How did that happen?

    I do not know, but unless you start making sense I will just stop
    answering you.

    --
    Nicolas George

  • From pocket@homemail.com@21:1/5 to All on Tue Dec 3 15:50:01 2024
    Sent: Tuesday, December 03, 2024 at 9:24 AM
    From: "Felix Miata" <mrmazda@stanis.net>
    To: pocket@homemail.com, debian-user@lists.debian.org
    Subject: Re: From SSD to NVME

    pocket@homemail.com composed on 2024-12-03 12:01 (UTC+0100):

    As someone else has put it elsewhere in the thread: new laptop means
    new drivers, potentially moving from legacy MBR to UEFI ... easier in
    many ways to put a clean install of Debian on from new media to start
    with (also wiping out whatever was there before if it came preinstalled
    with Windows or whatever).

    None to little of that is relevant. The "drivers" are part of the kernel

    Sort of:
    # inxi -Sd
    System:
    Host: ab250 Kernel: 6.1.0-25-amd64 arch: x86_64 bits: 64
    Console: pty pts/0 Distro: Debian GNU/Linux 12 (bookworm)
    Drives:
    Local Storage: total: 476.94 GiB used: 60.56 GiB (12.7%)
    ID-1: /dev/nvme0n1 vendor: Patriot model: M.2 P300 512GB size: 476.94 GiB …
    # lsinitramfs /boot/initrd.img-6.1.0-25-amd64 | egrep 'nvme|ata|ahci|piix'
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme-core.ko
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme.ko
    usr/lib/udev/ata_id
    usr/bin/fatattr
    #
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata


    pocket@pocket:~ $ lsinitramfs /boot/initrd.img-6.6.62 | grep -E 'nvme|ata|ahci|piix'
    usr/lib/modules/6.6.62/kernel/drivers/ata
    usr/lib/modules/6.6.62/kernel/drivers/ata/ahci.ko
    usr/lib/modules/6.6.62/kernel/drivers/ata/libahci.ko
    usr/lib/modules/6.6.62/kernel/drivers/ata/libata.ko
    usr/lib/modules/6.6.62/kernel/drivers/ata/sata_mv.ko
    usr/lib/modules/6.6.62/kernel/drivers/usb/storage/ums-datafab.ko
    usr/lib/udev/ata_id
    usr/bin/fatattr

    root@pockey:~# nvme list
    Node Generic SN Model Namespace Usage Format FW Rev
    --------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
    /dev/nvme0n1 /dev/ng0n1 A7SEB339046L5H Corsair MP600 CORE MINI 1 1.00 TB / 1.00 TB 512 B + 0 B ELFMC1.0

    Oh my!

    The root system is on an nvme drive...........

    On a more somber note:
    egrep is deprecated

    egrep is now grep -E

    https://itsfoss.com/deprecated-linux-commands/

    https://www.redhat.com/en/blog/deprecated-linux-command-replacements
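    The rename is purely a spelling change; both forms use POSIX extended regular expressions, as a quick check shows (the sample input lines here are invented, modelled on the module paths above):

```shell
#!/bin/sh
# 'egrep PATTERN' and 'grep -E PATTERN' select the same ERE engine;
# filter some sample module paths the way the thread's commands do.
set -eu

printf '%s\n' \
    'drivers/nvme/host/nvme.ko' \
    'drivers/ata/ahci.ko' \
    'udev/cdrom_id' \
  | grep -E 'nvme|ata|ahci|piix'
# Matches the first two lines; 'udev/cdrom_id' contains none of the
# alternatives, so it is filtered out.
```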

  • From Felix Miata@21:1/5 to All on Tue Dec 3 16:10:01 2024
    pocket composed on 2024-12-03 09:40 (UTC-0500):

    From: "Felix Miata"

    pocket composed on 2024-12-03 12:01 (UTC+0100):

    The "drivers" are part of the kernel

    Sort of:
    # inxi -Sd
    System:
    Host: ab250 Kernel: 6.1.0-25-amd64 arch: x86_64 bits: 64
    Console: pty pts/0 Distro: Debian GNU/Linux 12 (bookworm)
    Drives:
    Local Storage: total: 476.94 GiB used: 60.56 GiB (12.7%)
    ID-1: /dev/nvme0n1 vendor: Patriot model: M.2 P300 512GB size: 476.94 GiB
    …
    # lsinitramfs /boot/initrd.img-6.1.0-25-amd64 | egrep 'nvme|ata|ahci|piix'
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme-core.ko
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme.ko
    usr/lib/udev/ata_id
    usr/bin/fatattr
    #

    Note absence of *ata, ahci, piix & RAID. It's a system with NVME only. Without NVME modules, there is no booting the system using a stock Debian kernel.

    pocket@pocket:~ $ lsinitramfs /boot/initrd.img-6.6.62 | grep -E 'nvme|ata|ahci|piix'
    usr/lib/modules/6.6.62/kernel/drivers/ata
    usr/lib/modules/6.6.62/kernel/drivers/ata/ahci.ko
    usr/lib/modules/6.6.62/kernel/drivers/ata/libahci.ko
    usr/lib/modules/6.6.62/kernel/drivers/ata/libata.ko
    usr/lib/modules/6.6.62/kernel/drivers/ata/sata_mv.ko
    usr/lib/modules/6.6.62/kernel/drivers/usb/storage/ums-datafab.ko
    usr/lib/udev/ata_id
    usr/bin/fatattr

    root@pockey:~# nvme list
    Node Generic SN Model Namespace Usage Format FW Rev
    --------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
    /dev/nvme0n1 /dev/ng0n1 A7SEB339046L5H Corsair MP600 CORE MINI 1 1.00 TB / 1.00 TB 512 B + 0 B ELFMC1.0

    Oh my!

    The root system is on an nvme drive...........

    On a more somber note:
    egrep is deprecated

    egrep is now grep -E

    https://itsfoss.com/deprecated-linux-commands/

    https://www.redhat.com/en/blog/deprecated-linux-command-replacements

    IMO, whoever made that determination is a moron who knows nothing of touch typing[1]: two characters, one a top row pinkie, plus a shift to type, instead of
    a single lower case character.

    I still use egrep because it still works, more than two years after first seeing
    that deprecation message.

    [1] probably the same moron who led to the gross overuse of underscores, a (nearly always unnecessary) shifted pinkie.
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

  • From pocket@homemail.com@21:1/5 to All on Tue Dec 3 16:30:01 2024
    Sent: Tuesday, December 03, 2024 at 10:07 AM
    From: "Felix Miata" <mrmazda@stanis.net>
    To: debian-user@lists.debian.org, pocket@columbus.rr.com
    Subject: Re: From SSD to NVME

    pocket composed on 2024-12-03 09:40 (UTC-0500):

    From: "Felix Miata"

    pocket composed on 2024-12-03 12:01 (UTC+0100):

    The "drivers" are part of the kernel

    Sort of:
    # inxi -Sd
    System:
    Host: ab250 Kernel: 6.1.0-25-amd64 arch: x86_64 bits: 64
    Console: pty pts/0 Distro: Debian GNU/Linux 12 (bookworm)
    Drives:
    Local Storage: total: 476.94 GiB used: 60.56 GiB (12.7%)
    ID-1: /dev/nvme0n1 vendor: Patriot model: M.2 P300 512GB size: 476.94 GiB

    # lsinitramfs /boot/initrd.img-6.1.0-25-amd64 | egrep 'nvme|ata|ahci|piix'
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme-core.ko
    usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme.ko
    usr/lib/udev/ata_id
    usr/bin/fatattr
    #


    Note absence of *ata, ahci, piix & RAID. It's a system with NVME only. Without
    NVME modules, there is no booting the system using a stock Debian kernel.

    The system I am running this on right now has NVME only.
    Note the absence of nvme kernel modules, and it boots just fine.

    grep RETT /etc/os-release
    PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"

    It has a stock kernel as I have not built a custom kernel

    fdisk -l .......
    Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
    Disk model: Corsair MP600 CORE MINI
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xb2c58878

    Device Boot Start End Sectors Size Id Type
    /dev/nvme0n1p1 8192 1056767 1048576 512M c W95 FAT32 (LBA)
    /dev/nvme0n1p2 1056768 1953525167 1952468400 931G 83 Linux

    I don't know why it won't work for you


    pocket@pocket:~ $ lsinitramfs /boot/initrd.img-6.6.62 | grep -E 'nvme|ata|ahci|piix'
    usr/lib/modules/6.6.62/kernel/drivers/ata
    usr/lib/modules/6.6.62/kernel/drivers/ata/ahci.ko
    usr/lib/modules/6.6.62/kernel/drivers/ata/libahci.ko
    usr/lib/modules/6.6.62/kernel/drivers/ata/libata.ko
    usr/lib/modules/6.6.62/kernel/drivers/ata/sata_mv.ko
    usr/lib/modules/6.6.62/kernel/drivers/usb/storage/ums-datafab.ko
    usr/lib/udev/ata_id
    usr/bin/fatattr



  • From Greg Wooledge@21:1/5 to Felix Miata on Tue Dec 3 20:40:01 2024
    On Tue, Dec 03, 2024 at 09:44:32 -0500, Felix Miata wrote:
    Greg Wooledge composed on 2024-12-03 07:15 (UTC-0500):
    On Tue, Dec 03, 2024 at 12:01:15 +0100, pocket wrote:
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 sbin -> usr/bin

    That's not how Debian 12 has it.

    hobbit:~$ ls -ld /sbin /bin
    lrwxrwxrwx 1 root root 7 Feb 17 2024 /bin -> usr/bin/
    lrwxrwxrwx 1 root root 8 Feb 17 2024 /sbin -> usr/sbin/

    If Trixie has done an "sbin merge", it's news to me.

    Note that pocket's system has /sbin pointing to usr/bin NOT to usr/sbin.

    Bookworm looks to be one of the last of the Mohicans:

    I'm not sure what you mean by that.

    PRETTY_NAME="Debian GNU/Linux trixie/sid"
    lrwxrwxrwx 1 8 Oct 16 2022 sbin -> usr/sbin

    PRETTY_NAME="Ubuntu 24.04.1 LTS"
    lrwxrwxrwx 1 8 Dec 8 2023 sbin -> usr/sbin
    drwxr-xr-x 2 1.0K Feb 2 2024 sbin.usr-is-merged

    PRETTY_NAME="KDE neon 6.2"
    lrwxrwxrwx 1 8 Jun 2 2024 sbin -> usr/sbin
    drwxr-xr-x 2 1.0K Aug 20 2022 sbin.usr-is-merged

    PRETTY_NAME="openSUSE Tumbleweed"
    lrwxrwxrwx 1 8 Sep 25 15:10 sbin -> usr/sbin

    PRETTY_NAME="Fedora Linux 39 (Thirty Nine)"
    lrwxrwxrwx 1 8 Jul 20 2023 sbin -> usr/sbin

    All of these systems have sbin pointing to usr/sbin (the same as
    bookworm), NOT to usr/bin the way pocket's system does.

    pocket's system is the outlier here. It's the only one where there
    isn't a separate usr/sbin.

  • From Andy Smith@21:1/5 to Greg Wooledge on Tue Dec 3 20:50:01 2024
    Hi,

    On Tue, Dec 03, 2024 at 02:31:14PM -0500, Greg Wooledge wrote:
    pocket's system is the outlier here. It's the only one where there
    isn't a separate usr/sbin.

    For some reason pocket keeps telling us on a Debian list things about
    their Arch Linux system (actually).

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From Felix Miata@21:1/5 to All on Tue Dec 3 21:40:01 2024
    Andy Smith composed on 2024-12-03 19:48 (UTC):

    On Tue, Dec 03, 2024 at 14:31:14 -0500, Greg Wooledge wrote:

    pocket's system is the outlier here. It's the only one where there
    isn't a separate usr/sbin.

    For some reason pocket keeps telling us on a Debian list things about
    their Arch Linux system (actually).

    I've been trying to do too many different things at once today. I missed that (and
    more):

    pocket composed on 2024-12-03 12:01 (UTC+0100):
    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot


    What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot configuration?
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

  • From Steve McIntyre@21:1/5 to pocket@homemail.com on Tue Dec 3 22:20:01 2024
    pocket@homemail.com wrote:
    "Andrew M.A. Cater" <amacater@einval.com> wrote:

    As someone else has put it elsewhere in the thread: new laptop means
    new drivers, potentially moving from legacy MBR to UEFI ... easier in
    many ways to put a clean install of Debian on from new media to start
    with (also wiping out whatever was there before if it came preinstalled
    with Windows or whatever).

    None to little of that is relevant. The "drivers" are part of the kernel, it is
    not 1995 anymore.
    I go from MBR to GPT to UEFI all the time.

    Is it nice to think that debian still has the microsoft mindset?

    A "clean" install is not really required on a modern linux system.
    Linux is not microsoft windows.

    No, it's not. But there are several ways to go here. For a
    non-techinical user it's likely to be easier for them to understand a
    new installation and copying data. You or I might move data around
    systems with confidence, but not everybody is in the same boat.

    When people state the above it really just shows they don't understand Linux.

    There's no need to be rude here. :-(

    I'm fairly sure this was brought up just about at the end of last month.
    It depends upon if you created a partition table, partitions and filesystems
    on the drive.

    I create the drive layout on the drive then rsync the old drive to the new drive.
    Then I fixup the PARTUUID in the /etc/fstab and boot loader.
    If I am using Archlinux or my own custom build os I have a blank /etc/fstab
    and /etc/hosts
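
The PARTUUID fix-up described above can be sketched as follows; the identifiers are hard-coded stand-ins and a temp file stands in for the copied /etc/fstab, so the sketch is self-contained (in real use both values come from blkid):

```shell
# In real use the identifiers come from the old and new partitions, e.g.
#   OLD=$(blkid -s PARTUUID -o value /dev/sda2)
#   NEW=$(blkid -s PARTUUID -o value /dev/nvme0n1p2)
OLD=0001-aaaa
NEW=0002-bbbb

fstab=$(mktemp)    # stands in for the copied /etc/fstab on the new drive
echo "PARTUUID=$OLD / ext4 defaults 0 1" > "$fstab"

# Point the root entry at the new partition.
sed -i "s/PARTUUID=$OLD/PARTUUID=$NEW/" "$fstab"
cat "$fstab"       # -> PARTUUID=0002-bbbb / ext4 defaults 0 1
```

The boot loader configuration needs the same substitution wherever it embeds the old identifier.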

    There's more than one way to do it: if you absolutely know what partition
    sizes you want, maybe - LVM and one partition is a fairly sensible starting
    point because partitions will grow and shrink, for example.

    Nonsense, using/building distributions and running Linux since 1995, partitions
    don't grow and shrink.

    But filesystems and their storage needs may, however.

    <irrelevant stuff snipped>

    --
    Steve McIntyre, Cambridge, UK. steve@einval.com
    Can't keep my eyes from the circling sky,
    Tongue-tied & twisted, Just an earth-bound misfit, I...

  • From pocket@homemail.com@21:1/5 to All on Tue Dec 3 22:20:01 2024
    Sent: Tuesday, December 03, 2024 at 3:30 PM
    From: "Felix Miata" <mrmazda@stanis.net>
    To: debian-user@lists.debian.org
    Subject: Re: From SSD to NVME

    Andy Smith composed on 2024-12-03 19:48 (UTC):

    On Tue, Dec 03, 2024 at 14:31:14 -0500, Greg Wooledge wrote:

    pocket's system is the outlier here. It's the only one where there
    isn't a separate usr/sbin.

    For some reason pocket keeps telling us on a Debian list things about
    their Arch Linux system (actually).

    I've been trying to do too many different things at once today. I missed that (and
    more):

    pocket composed on 2024-12-03 12:01 (UTC+0100):
    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot


    What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot configuration?

    Mine for one.......

    The install process allows one to setup the disk layout as you like or did I miss something?

  • From Greg Wooledge@21:1/5 to pocket@homemail.com on Tue Dec 3 22:30:01 2024
    On Tue, Dec 03, 2024 at 22:13:36 +0100, pocket@homemail.com wrote:
    From: "Felix Miata" <mrmazda@stanis.net>
    What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot configuration?

    Mine for one.......

    Which version of Debian was that, when you installed?

    Or was it Arch?

    The install process allows one to setup the disk layout as you like or did I miss something?

    Yes, but I think the *implication* was that this was something the
    installer had done, either by default, or without extensive tinkering.

    In my experience, Debian does not create a FAT file system for anything
    except the EFI partition. At least, not by default.

  • From Felix Miata@21:1/5 to All on Tue Dec 3 22:40:01 2024
    pocket composed on 2024-12-03 22:13 (UTC+0100):

    pocket composed on 2024-12-03 12:01 (UTC+0100):

    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot


    What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot configuration?

    Mine for one.......

    Keeping your bootloader of choice a secret?

    The install process allows one to setup the disk layout as you like or did I miss something?

    It's yours to do as you please. I'm just not used to seeing an absence of
    symlinks in /boot/. Etch was the first place I ever found none. All my own
    Debians have some to facilitate multibooting using only one enabled
    bootloader per PC. AFAIK, one can't put symlinks in a FAT /boot/, or on
    FAT anywhere else.

  • From Greg Wooledge@21:1/5 to pocket@homemail.com on Tue Dec 3 23:10:01 2024
    On Tue, Dec 03, 2024 at 22:50:42 +0100, pocket@homemail.com wrote:
    Doesn't the manual/book suggest that you can create the partition layout and filesystem as you would like?

    Why all the double-speak and vagueness?

    Did you manually create a FAT file system, and tell the installer to
    mount that as /boot? If that's what you did, fine, so be it, but why
    are you acting like it's something *Debian* did? This tangent of this
    thread has gone on far too long as everyone keeps trying to guess what
    you're actually saying.

  • From pocket@homemail.com@21:1/5 to All on Tue Dec 3 23:00:01 2024
    Sent: Tuesday, December 03, 2024 at 4:27 PM
    From: "Greg Wooledge" <greg@wooledge.org>
    To: debian-user@lists.debian.org
    Subject: Re: From SSD to NVME

    On Tue, Dec 03, 2024 at 22:13:36 +0100, pocket@homemail.com wrote:
    From: "Felix Miata" <mrmazda@stanis.net>
    What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot configuration?

    Mine for one.......

    Which version of Debian was that, when you installed?

    Or was it Arch?

    grep RETT /etc/os-release
    PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"


    The install process allows one to setup the disk layout as you like or did I miss something?

    Yes, but I think the *implication* was that this was something the
    installer had done, either by default, or without extensive tinkering.

    In my experience, Debian does not create a FAT file system for anything except the EFI partition. At least, not by default.

    Doesn't the manual/book suggest that you can create the partition layout and filesystem as you would like?

  • From pocket@homemail.com@21:1/5 to All on Tue Dec 3 23:20:01 2024
    Sent: Tuesday, December 03, 2024 at 5:07 PM
    From: "Greg Wooledge" <greg@wooledge.org>
    To: debian-user@lists.debian.org
    Subject: Re: From SSD to NVME

    On Tue, Dec 03, 2024 at 22:50:42 +0100, pocket@homemail.com wrote:
    Doesn't the manual/book suggest that you can create the partition layout and filesystem as you would like?

    Why all the double-speak and vagueness?

    Did you manually create a FAT file system, and tell the installer to
    mount that as /boot? If that's what you did, fine, so be it, but why
    are you acting like it's something *Debian* did? This tangent of this
    thread has gone on far too long as everyone keeps trying to guess what
    you're actually saying.


    If I created the partition arrangement and filesystems then the installer continued isn't that allowing/using the installer to do its work? That part is something debian did, well I think it is/was.

  • From pocket@homemail.com@21:1/5 to All on Tue Dec 3 23:30:01 2024
    Sent: Tuesday, December 03, 2024 at 5:10 PM
    From: "Felix Miata" <mrmazda@stanis.net>
    To: debian-user@lists.debian.org
    Subject: Re: From SSD to NVME

    pocket composed on 2024-12-03 22:53 (UTC+0100):

    From: "Felix Miata" <mrmazda@stanis.net>

    pocket composed on 2024-12-03 22:13 (UTC+0100):

    pocket composed on 2024-12-03 12:01 (UTC+0100):

    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot


    What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot configuration?

    Mine for one.......

    Keeping your bootloader of choice a secret?

    Nope, a bootloader is a bootloader is a bootloader

    Except one that doesn't load any bootloader files, kernel or initrd from /boot/,
    which is what one or more others than Grub* are reputedly doing.

    I miss your point, the boot loader loads the kernel then control passes to systemd/init/bash

    https://www.baeldung.com/linux/boot-process
    https://en.wikipedia.org/wiki/Booting_process_of_Linux
    https://www.freecodecamp.org/news/the-linux-booting-process-6-steps-described-in-detail/

    From one of the links

    #boot=/dev/sda
    default=0
    timeout=5
    splashimage=(hd0,0)/boot/grub/splash.xpm.gz
    hiddenmenu
    title CentOS (2.6.18-194.el5PAE)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.18-194.el5PAE ro root=LABEL=/
    initrd /boot/initrd-2.6.18-194.el5PAE.img

    So in this example the kernel is loaded from /boot

  • From Felix Miata@21:1/5 to All on Tue Dec 3 23:20:01 2024
    pocket composed on 2024-12-03 22:53 (UTC+0100):

    From: "Felix Miata" <mrmazda@stanis.net>

    pocket composed on 2024-12-03 22:13 (UTC+0100):

    pocket composed on 2024-12-03 12:01 (UTC+0100):

    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot


    What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot configuration?

    Mine for one.......

    Keeping your bootloader of choice a secret?

    Nope, a bootloader is a bootloader is a bootloader

    Except one that doesn't load any bootloader files, kernel or initrd from /boot/,
    which is what one or more others than Grub* are reputedly doing.

  • From Felix Miata@21:1/5 to All on Tue Dec 3 23:50:02 2024
    pocket composed on 2024-12-03 23:26 (UTC+0100):

    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot


    What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot configuration?

    Mine for one.......

    Keeping your bootloader of choice a secret?

    Nope, a bootloader is a bootloader is a bootloader

    Except one that doesn't load any bootloader files, kernel or initrd from /boot/,
    which is what one or more others than Grub* are reputedly doing.

    I miss your point, the boot loader loads the kernel then control passes to systemd/init/bash

    It used to be that bootloaders always found the kernels and initrds they
    were to load in /boot/. That has been reported to be no longer the case.
    My interest was in having anyone report experience with such a bootloader
    (along with its name) which does not load kernels and/or initrds from
    /boot/. I have to wonder what, if anything, such bootloaders use /boot/
    for, and if not bootloader files, kernels and initrds, where they are
    expecting to get them from. Given the insecurity of FAT, I hope I don't
    live long enough to be forced to keep my kernels or initrds on such.

  • From Felix Miata@21:1/5 to All on Wed Dec 4 05:20:01 2024
    Timothy M Butterworth composed on 2024-12-03 20:36 (UTC-0500):

    pocket composed on 2024-12-03 12:01 (UTC+0100):
    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot


    The rest of what the above was clipped from is in: https://lists.debian.org/debian-user/2024/12/msg00120.html

    What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot
    configuration?

    /boot/efi is a fat partition. It has to be fat so the UEFI can read the files. Usually /boot is an EXT partition.

    /boot/efi/ is where the ESP normally goes, not /boot/, at least, not when using Grub2 EFI, as opposed to one of those newfangled bootloaders (e.g. systemd-boot)
    that I have yet to see live in person. That 'ls -l /' listing is pocket's root directory showing Dec 31 1969. That means there's a FAT filesystem mounted on /boot/. He hasn't shown us what if anything is mounted on /boot/efi/.

    What I expect to see with Grub2 EFI is what I see here:
    # ls -gGd /boot/
    dr-xr-xr-x 4 10240 Dec 3 11:57 /boot/ # typical mountpoint EXT4 mounted
    # ls -gGd /boot/efi/
    drwxr-xr-x 4 4096 Dec 31 1969 /boot/efi/ # typical mountpoint FAT mounted
    # mount | grep boot
    /dev/sda1 on /boot/efi type vfat…
    #

  • From pocket@homemail.com@21:1/5 to All on Wed Dec 4 13:10:01 2024
    Sent: Tuesday, December 03, 2024 at 11:18 PM
    From: "Felix Miata" <mrmazda@stanis.net>
    To: debian-user@lists.debian.org, "Timothy M Butterworth" <timothy.m.butterworth@gmail.com>
    Subject: Re: From SSD to NVME

    Timothy M Butterworth composed on 2024-12-03 20:36 (UTC-0500):

    pocket composed on 2024-12-03 12:01 (UTC+0100):
    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot


    The rest of what the above was clipped from is in: https://lists.debian.org/debian-user/2024/12/msg00120.html

    What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot
    configuration?

    /boot/efi is a fat partition. It has to be fat so the UEFI can read the files. Usually /boot is an EXT partition.

    /boot/efi/ is where the ESP normally goes, not /boot/, at least, not when using
    Grub2 EFI, as opposed to one of those newfangled bootloaders (e.g. systemd-boot)
    that I have yet to see live in person. That 'ls -l /' listing is pocket's root
    directory showing Dec 31 1969. That means there's a FAT filesystem mounted on /boot/. He hasn't shown us what if anything is mounted on /boot/efi/.

    I don't have a partition to mount at /boot/efi
    nvme drive with a msdos mbr two partitions one vfat and one ext4

    Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
    Disk model: Corsair MP600 CORE MINI
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xb2c58878

    Device Boot Start End Sectors Size Id Type
    /dev/nvme0n1p1    8192    1056767    1048576 512M  c W95 FAT32 (LBA)
    /dev/nvme0n1p2 1056768 1953525167 1952468400 931G 83 Linux


    What I expect to see with Grub2 EFI is what I see here:
    # ls -gGd /boot/
    dr-xr-xr-x 4 10240 Dec 3 11:57 /boot/ # typical mountpoint EXT4 mounted
    # ls -gGd /boot/efi/
    drwxr-xr-x 4 4096 Dec 31 1969 /boot/efi/ # typical mountpoint FAT mounted
    # mount | grep boot
    /dev/sda1 on /boot/efi type vfat…
    #

    mount
    /dev/nvme0n1p2 on / type ext4 (rw,noatime)
    /dev/nvme0n1p1 on /boot/ type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)

    That is all there is folks just two partitions on a nvme drive

  • From Joe@21:1/5 to pocket@homemail.com on Wed Dec 4 16:00:01 2024
    On Wed, 4 Dec 2024 13:00:21 +0100
    pocket@homemail.com wrote:

    Sent: Tuesday, December 03, 2024 at 11:18 PM
    From: "Felix Miata" <mrmazda@stanis.net>
    To: debian-user@lists.debian.org, "Timothy M Butterworth" <timothy.m.butterworth@gmail.com> Subject: Re: From SSD to NVME

    Timothy M Butterworth composed on 2024-12-03 20:36 (UTC-0500):

    pocket composed on 2024-12-03 12:01 (UTC+0100):
    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot


    The rest of what the above was clipped from is in: https://lists.debian.org/debian-user/2024/12/msg00120.html

    What Debian puts a FAT filesystem on /boot/? Is that a
    systemd-boot configuration?

    /boot/efi is a fat partition. It has to be fat so the UEFI can
    read the files. Usually /boot is an EXT partition.

    /boot/efi/ is where the ESP normally goes, not /boot/, at least,
    not when using Grub2 EFI, as opposed to one of those newfangled
    bootloaders (e.g. systemd-boot) that I have yet to see live in
    person. That 'ls -l /' listing is pocket's root directory showing
    Dec 31 1969. That means there's a FAT filesystem mounted on /boot/.
    He hasn't shown us what if anything is mounted on /boot/efi/.

    I don't have a partition to mount at /boot/efi
    nvme drive with a msdos mbr two partitions one vfat and one ext4

    Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
    Disk model: Corsair MP600 CORE MINI
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xb2c58878

    Device Boot Start End Sectors Size Id Type
    /dev/nvme0n1p1    8192    1056767    1048576 512M  c W95 FAT32 (LBA)
    /dev/nvme0n1p2 1056768 1953525167 1952468400 931G 83 Linux


    What I expect to see with Grub2 EFI is what I see here:
    # ls -gGd /boot/
    dr-xr-xr-x 4 10240 Dec 3 11:57 /boot/ # typical
    mountpoint EXT4 mounted # ls -gGd /boot/efi/
    drwxr-xr-x 4 4096 Dec 31 1969 /boot/efi/ # typical
    mountpoint FAT mounted # mount | grep boot
    /dev/sda1 on /boot/efi type vfat…
    #

    mount
    /dev/nvme0n1p2 on / type ext4 (rw,noatime)
    /dev/nvme0n1p1 on /boot/ type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)

    That is all there is folks just two partitions on a nvme drive


    The EFI partition (i.e. partition mounted as /boot/efi or the partition containing /boot, which contains /boot/efi) must have some variety of
    FAT filesystem, according to the EFI spec. Windows will normally use
    ntfs and Debian by default ext4, and a FAT partition has no other real
    use now than for EFI. It may be convenient to put the whole of /boot on
    FAT, but Debian will normally leave /boot in the main / partition, and
    just use FAT for /boot/efi.
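
A quick way to see which of these layouts a running system actually has is findmnt (from util-linux); a small sketch:

```shell
# Print device and filesystem type for each boot-related mount point.
# vfat on /boot itself (rather than on /boot/efi) is the two-partition
# layout discussed above; a Debian default install shows vfat only on
# /boot/efi, with /boot living inside the ext4 root.
for mp in / /boot /boot/efi; do
    findmnt -n -o TARGET,SOURCE,FSTYPE "$mp" || echo "$mp: not a separate mount"
done
```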

    --
    Joe


  • From pocket@homemail.com@21:1/5 to All on Wed Dec 4 19:00:01 2024
    Sent: Wednesday, December 04, 2024 at 9:59 AM
    From: "Joe" <joe@jretrading.com>
    To: debian-user@lists.debian.org
    Subject: Re: From SSD to NVME

    On Wed, 4 Dec 2024 13:00:21 +0100
    pocket@homemail.com wrote:

    Sent: Tuesday, December 03, 2024 at 11:18 PM
    From: "Felix Miata" <mrmazda@stanis.net>
    To: debian-user@lists.debian.org, "Timothy M Butterworth" <timothy.m.butterworth@gmail.com> Subject: Re: From SSD to NVME

    Timothy M Butterworth composed on 2024-12-03 20:36 (UTC-0500):

    pocket composed on 2024-12-03 12:01 (UTC+0100):
    [alarm@alarm ~]$ ls -l /
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
    drwxr-xr-x 3 root root 4096 Dec 31 1969 boot


    The rest of what the above was clipped from is in: https://lists.debian.org/debian-user/2024/12/msg00120.html

    What Debian puts a FAT filesystem on /boot/? Is that a
    systemd-boot configuration?

    /boot/efi is a fat partition. It has to be fat so the UEFI can
    read the files. Usually /boot is an EXT partition.

    /boot/efi/ is where the ESP normally goes, not /boot/, at least,
    not when using Grub2 EFI, as opposed to one of those newfangled bootloaders (e.g. systemd-boot) that I have yet to see live in
    person. That 'ls -l /' listing is pocket's root directory showing
    Dec 31 1969. That means there's a FAT filesystem mounted on /boot/.
    He hasn't shown us what if anything is mounted on /boot/efi/.

    I don't have a partition to mount at /boot/efi
    nvme drive with a msdos mbr two partitions one vfat and one ext4

    Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
    Disk model: Corsair MP600 CORE MINI
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xb2c58878

    Device Boot Start End Sectors Size Id Type
    /dev/nvme0n1p1    8192    1056767    1048576 512M  c W95 FAT32 (LBA)
    /dev/nvme0n1p2 1056768 1953525167 1952468400 931G 83 Linux


    What I expect to see with Grub2 EFI is what I see here:
    # ls -gGd /boot/
    dr-xr-xr-x 4 10240 Dec 3 11:57 /boot/ # typical
    mountpoint EXT4 mounted # ls -gGd /boot/efi/
    drwxr-xr-x 4 4096 Dec 31 1969 /boot/efi/ # typical
    mountpoint FAT mounted # mount | grep boot
    /dev/sda1 on /boot/efi type vfat…
    #


    mount
    /dev/nvme0n1p2 on / type ext4 (rw,noatime)
    /dev/nvme0n1p1 on /boot/ type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)

    That is all there is folks just two partitions on a nvme drive


    The EFI partition (i.e. partition mounted as /boot/efi or the partition containing /boot, which contains /boot/efi) must have some variety of
    FAT filesystem, according to the EFI spec. Windows will normally use
    ntfs and Debian by default ext4, and a FAT partition has no other real
    use now than for EFI. It may be convenient to put the whole of /boot on
    FAT, but Debian will normally leave /boot in the main / partition, and
    just use FAT for /boot/efi.


    I don't have an efi partition, as shown above MSDOS MBR and two partitions nothing else
    One VFAT partition mounted at /boot.
    One ext4 partition mounted at /

  • From Michael Stone@21:1/5 to pocket@homemail.com on Thu Dec 5 00:00:01 2024
    On Tue, Dec 03, 2024 at 04:27:37PM +0100, pocket@homemail.com wrote:
    The system I am running this on right now has NVME only.
    Note absence of nvme kernel modules and it boots just fine.

    grep RETT /etc/os-release
    PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"

    It has a stock kernel as I have not built a custom kernel

    That's not actually true, since 6.6.62 isn't a bookworm kernel version.

    pocket@pocket:~ $ lsinitramfs /boot/initrd.img-6.6.62 | grep -E 'nvme|ata|ahci|piix'
    usr/lib/modules/6.6.62/kernel/drivers/ata
    usr/lib/modules/6.6.62/kernel/drivers/ata/ahci.ko
    usr/lib/modules/6.6.62/kernel/drivers/ata/libahci.ko
    usr/lib/modules/6.6.62/kernel/drivers/ata/libata.ko
    usr/lib/modules/6.6.62/kernel/drivers/ata/sata_mv.ko
    usr/lib/modules/6.6.62/kernel/drivers/usb/storage/ums-datafab.ko
    usr/lib/udev/ata_id
    usr/bin/fatattr

    Presumably whoever compiled 6.6.62 built the nvme driver in. On an
    actual stock kernel the nvme driver would be compiled as a loadable
    module.

  • From Michael Stone@21:1/5 to Felix Miata on Thu Dec 5 00:20:01 2024
    On Mon, Dec 02, 2024 at 01:41:18PM -0500, Felix Miata wrote:
    You very likely would need to add drivers to your initrds first, else have to
    rescue boot to rebuild after:

    This is probably the result of setting MODULES=dep in
    /etc/initramfs-tools/initramfs.conf. When changing hardware I'd recommend changing that
    to MODULES=most and then running "update-initramfs -k all -u" to
    regenerate the initramfs with the additional modules. It is possible to
    fix this with a rescue image if one forgets.
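
The change suggested above amounts to one edit plus a rebuild; a self-contained sketch of the edit, operating on a temp copy rather than the real /etc/initramfs-tools/initramfs.conf:

```shell
# On a real system the file is /etc/initramfs-tools/initramfs.conf and
# the edit is followed (as root) by:  update-initramfs -u -k all
conf=$(mktemp)
echo 'MODULES=dep' > "$conf"

# Include most storage drivers instead of only those the current
# hardware needs, so the initrd survives a move to different hardware.
sed -i 's/^MODULES=dep$/MODULES=most/' "$conf"
grep '^MODULES=' "$conf"    # -> MODULES=most
```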

    On Mon, Dec 02, 2024 at 03:41:43PM -0300, Bruno Schneider wrote:
    On a side note, last time I tried to install Debian on NVME, it
    wouldn't even find the storage device. I hope this has improved since
    then.

    I've installed debian on a lot of machines with a lot of NVMe devices,
    and never had an issue. The hardware is pretty standardized, and the
    only thing I can think of which might cause an issue would be something
    like an HMB drive with an older linux that predates support for HMB, or
    a PCIe topology problem (unlikely in consumer hardware). In general I
    would expect normal NVMe to just work. I can think of additional failure
    modes causing inability to boot on older hardware, but the kernel should
    still see the drive.

    On Mon, Dec 02, 2024 at 08:14:35PM +0100, Hans wrote:
    But I never used NVME drives before and know (shame on me!) not much about it.
    If NVME are only super fast SSD's, then it will be easy, but if NVME are a
    complete alien hardware, then I might come in trouble (Nothing, that can not
    be fixed!).

    Apart from the need for the nvme driver to be in the initrd (just as
    you'd need an ata, scsi, etc module for those devices) it should be
    possible to migrate fairly easily. NVMe works just like SATA SSD from
    the partition level up (i.e., the stuff you'd dd). One somewhat
    different thing is the concept of NVMe namespaces: your drive will be /dev/nvme0, but you'll probably be using /dev/nvme0n1 except for device management. Partitions then look like /dev/nvme0n1p1. It's unlikely that
    you'd be creating/using additional namespaces apart from the first
    (default) one.
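
The controller / namespace / partition naming described above can be illustrated with a tiny parser; `parse_nvme_name` is a hypothetical helper written for this sketch, not part of any real tool:

```shell
# parse_nvme_name NAME: split an nvmeXnYpZ block-device name into its
# controller, namespace and (optional) partition numbers.
parse_nvme_name() {
    echo "$1" | sed -E 's/^nvme([0-9]+)n([0-9]+)(p([0-9]+))?$/controller=\1 namespace=\2 partition=\4/'
}

parse_nvme_name nvme0n1      # -> controller=0 namespace=1 partition=
parse_nvme_name nvme0n1p1    # -> controller=0 namespace=1 partition=1
```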

  • From gene heskett@21:1/5 to Greg Wooledge on Thu Dec 5 01:10:01 2024
    On 12/3/24 14:32, Greg Wooledge wrote:
    On Tue, Dec 03, 2024 at 09:44:32 -0500, Felix Miata wrote:
    Greg Wooledge composed on 2024-12-03 07:15 (UTC-0500):
    On Tue, Dec 03, 2024 at 12:01:15 +0100, pocket wrote:
    lrwxrwxrwx 1 root root 7 Nov 25 19:15 sbin -> usr/bin

    That's not how Debian 12 has it.

    hobbit:~$ ls -ld /sbin /bin
    lrwxrwxrwx 1 root root 7 Feb 17 2024 /bin -> usr/bin/
    lrwxrwxrwx 1 root root 8 Feb 17 2024 /sbin -> usr/sbin/

    If Trixie has done an "sbin merge", it's news to me.

    Note that pocket's system has /sbin pointing to usr/bin NOT to usr/sbin.

    Bookworm looks to be one of the last of the Mohicans:

    I'm not sure what you mean by that.

    That, Greg, refers to a less than pleasant time in 'merican history when
    the only good American indian was a dead one. The Mohicans were a
    smaller tribe in the northeastern US. I am not proud of that period in
    our history. But it was 100+ years before I was born. And the battle
    continues in some circles yet. Read "by the fire we carry" by Rebecca
    Nagle for the latest news on that. All 5 tribes moved to Oklahoma won in SCOTUS, making 49% of Oklahoma into reservation and autonomous land, but
    at last check, the Oki guvner refuses to honor it.

    PRETTY_NAME="Debian GNU/Linux trixie/sid"
    lrwxrwxrwx 1 8 Oct 16 2022 sbin -> usr/sbin

    PRETTY_NAME="Ubuntu 24.04.1 LTS"
    lrwxrwxrwx 1 8 Dec 8 2023 sbin -> usr/sbin
    drwxr-xr-x 2 1.0K Feb 2 2024 sbin.usr-is-merged

    PRETTY_NAME="KDE neon 6.2"
    lrwxrwxrwx 1 8 Jun 2 2024 sbin -> usr/sbin
    drwxr-xr-x 2 1.0K Aug 20 2022 sbin.usr-is-merged

    PRETTY_NAME="openSUSE Tumbleweed"
    lrwxrwxrwx 1 8 Sep 25 15:10 sbin -> usr/sbin

    PRETTY_NAME="Fedora Linux 39 (Thirty Nine)"
    lrwxrwxrwx 1 8 Jul 20 2023 sbin -> usr/sbin

    All of these systems have sbin pointing to usr/sbin (the same as
    bookworm), NOT to usr/bin the way pocket's system does.

    pocket's system is the outlier here. It's the only one where there
    isn't a separate usr/sbin.



    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Greg Wooledge@21:1/5 to gene heskett on Thu Dec 5 01:40:01 2024
    On Wed, Dec 04, 2024 at 19:06:40 -0500, gene heskett wrote:
    On 12/3/24 14:32, Greg Wooledge wrote:
    Note that pocket's system has /sbin pointing to usr/bin NOT to usr/sbin.

    On Tue, Dec 03, 2024 at 09:44:32 -0500, Felix Miata wrote:

    Bookworm looks to be one of the last of the Mohicans:

    I'm not sure what you mean by that.

    That, Greg, refers to a less than pleasant time in 'merican history when the only good American indian was a dead one. The Mohicans were [...]

    I'm not talking about the historical reference. I'm talking about
    the assertion that bookworm is an outlier.

    Bookworm is NOT an outlier here. It's just like all the others. It
    has a separate sbin, NOT a subsumed sbin.

    See this part?

    PRETTY_NAME="Debian GNU/Linux trixie/sid"
    lrwxrwxrwx 1 8 Oct 16 2022 sbin -> usr/sbin

    PRETTY_NAME="Ubuntu 24.04.1 LTS"
    lrwxrwxrwx 1 8 Dec 8 2023 sbin -> usr/sbin
    drwxr-xr-x 2 1.0K Feb 2 2024 sbin.usr-is-merged

    PRETTY_NAME="KDE neon 6.2"
    lrwxrwxrwx 1 8 Jun 2 2024 sbin -> usr/sbin
    drwxr-xr-x 2 1.0K Aug 20 2022 sbin.usr-is-merged

    PRETTY_NAME="openSUSE Tumbleweed"
    lrwxrwxrwx 1 8 Sep 25 15:10 sbin -> usr/sbin

    PRETTY_NAME="Fedora Linux 39 (Thirty Nine)"
    lrwxrwxrwx 1 8 Jul 20 2023 sbin -> usr/sbin

    All of these systems have sbin pointing to usr/sbin (the same as
    bookworm), NOT to usr/bin the way pocket's system does.

    Pocket's (Arch??) system is the outlier. Not bookworm.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to Greg Wooledge on Thu Dec 5 03:00:01 2024
    On 12/4/24 19:39, Greg Wooledge wrote:
    On Wed, Dec 04, 2024 at 19:06:40 -0500, gene heskett wrote:
    On 12/3/24 14:32, Greg Wooledge wrote:
    Note that pocket's system has /sbin pointing to usr/bin NOT to usr/sbin.

    On Tue, Dec 03, 2024 at 09:44:32 -0500, Felix Miata wrote:

    Bookworm looks to be one of the last of the Mohicans:

    I'm not sure what you mean by that.

    That, Greg, refers to a less than pleasant time in 'merican history when the only good American indian was a dead one. The Mohicans were [...]

    I'm not talking about the historical reference. I'm talking about
    the assertion that bookworm is an outlier.

    Also true, I was clarifying the use of
    Mohicans for those not fam with `merican history, and could have been
    confused. I'm with you at least 90% of the time.

    Bookworm is NOT an outlier here. It's just like all the others. It
    has a separate sbin, NOT a subsumed sbin.

    See this part?

    PRETTY_NAME="Debian GNU/Linux trixie/sid"
    lrwxrwxrwx 1 8 Oct 16 2022 sbin -> usr/sbin

    PRETTY_NAME="Ubuntu 24.04.1 LTS"
    lrwxrwxrwx 1 8 Dec 8 2023 sbin -> usr/sbin
    drwxr-xr-x 2 1.0K Feb 2 2024 sbin.usr-is-merged

    PRETTY_NAME="KDE neon 6.2"
    lrwxrwxrwx 1 8 Jun 2 2024 sbin -> usr/sbin
    drwxr-xr-x 2 1.0K Aug 20 2022 sbin.usr-is-merged

    PRETTY_NAME="openSUSE Tumbleweed"
    lrwxrwxrwx 1 8 Sep 25 15:10 sbin -> usr/sbin

    PRETTY_NAME="Fedora Linux 39 (Thirty Nine)"
    lrwxrwxrwx 1 8 Jul 20 2023 sbin -> usr/sbin

    All of these systems have sbin pointing to usr/sbin (the same as
    bookworm), NOT to usr/bin the way pocket's system does.

    Pocket's (Arch??) system is the outlier. Not bookworm.



    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Stone@21:1/5 to eben@gmx.us on Thu Dec 5 16:10:01 2024
    On Thu, Dec 05, 2024 at 09:42:08AM -0500, eben@gmx.us wrote:
    Is it different when you boot from an nvme drive? I have what I was told
    was one and it appears as /dev/sdb or /dev/sda depending how the OS feels
    that day. I didn't buy it new, it was given to me, so I may have been
    misinformed. It's a thing that looks like a SIMM, and when it's plugged in
    the motherboard disables one of the SATA ports, which is unfortunate.

    That is a SATA SSD, not an NVMe. The same physical form factor (M.2)
    supports either, but a particular drive will be one or the other. The
    SATA drive letters can change based on things like which drive starts up
    faster or what removable devices are plugged in, which is why using
    UUIDs or somesuch is preferred over using the device name.
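
    For illustration, the two styles side by side as /etc/fstab entries (the
    UUID below is invented for the example; a real one comes from blkid):

```
# device-name form -- breaks if sda and sdb swap order at boot:
#   /dev/sda1  /  ext4  errors=remount-ro  0  1
# UUID form -- stable no matter how the kernel enumerates the drives:
UUID=2c533ad4-1111-2222-3333-444455556666  /  ext4  errors=remount-ro  0  1
```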

    (SATA and NVMe are both SSDs, but one accesses the storage via a SATA
    controller and the other appears directly on a PCIe bus. They're
    functionally equivalent in a consumer context, but SATA reached the end
    of the road performance-wise in 2009 while PCIe continues to scale up;
    SATA maxes out at 600MB/s, while PCIe is currently at 4000MB/s per lane,
    with NVMe drives typically using as many as 4 lanes [16000MB/s]. Latency
    is also significantly lower for PCIe. For many [most?] consumer
    applications the differences will not be noticeable.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From eben@gmx.us@21:1/5 to Michael Stone on Thu Dec 5 15:50:01 2024
    On 12/4/24 18:18, Michael Stone wrote:

    One somewhat different thing is the
    concept of NVMe namespaces: your drive will be /dev/nvme0, but you'll probably be using /dev/nvme0n1 except for device management. Partitions then look like /dev/nvme0n1p1.

    Is it different when you boot from an nvme drive? I have what I was told
    was one and it appears as /dev/sdb or /dev/sda depending how the OS feels
    that day. I didn't buy it new, it was given to me, so I may have been misinformed. It's a thing that looks like a SIMM, and when it's plugged in
    the motherboard disables one of the SATA ports, which is unfortunate.

    eben@cerberus:~$ lsb_release --description
    No LSB modules are available.
    Description: Debian GNU/Linux 12 (bookworm)

    eben@cerberus:~$ uname -r
    6.1.0-27-amd64

    eben@cerberus:~$ sudo hdparm -i /dev/sdb

    /dev/sdb:

    Model=TOSHIBA KSG60ZMV256G M.2 2280 256GB, FwRev=ABDA4102, SerialNo=584B8018K5SP
    Config={ Fixed }
    RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
    BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=off
    CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=500118192
    IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
    PIO modes: pio0 pio3 pio4
    DMA modes: mdma0 mdma1 mdma2
    UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5
    AdvancedPM=no WriteCache=enabled
    Drive conforms to: Unspecified: ATA/ATAPI-3,4,5,6,7

    * signifies the current active mode

    eben@cerberus:~$ sudo hdparm -t /dev/sdb

    /dev/sdb:
    Timing buffered disk reads: 886 MB in 3.01 seconds = 294.77 MB/sec

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Stone@21:1/5 to Bruno Schneider on Thu Dec 5 16:50:01 2024
    On Mon, Dec 02, 2024 at 03:41:43PM -0300, Bruno Schneider wrote:
    I would recommend changing from UUID to labels. Doing so, all you need
    to worry is that the new partitions have the same labels as the old
    ones.
    https://wiki.debian.org/fstab#Labels

    I personally prefer UUIDs because the odds of an existing drive from a
    different system having a conflicting UUID when you put it in another
    system are near zero, while the odds that another drive would have
    something like LABEL=root are very high. The installer adds UUIDs by
    default so most people will never need to decide or even think about
    this. In practical terms, in most cases, it doesn't matter which you
    use. If you use something like LVM it really doesn't matter as you'll
    just use the LV name (and in practice, this is what I do on any system
    that's not just one big partition). Arguing over one vs the other is
    pointless as there's no obvious right answer so much as there is
    personal preference; the only wrong answer is using the device name. :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From eben@gmx.us@21:1/5 to Michael Stone on Thu Dec 5 17:00:01 2024
    On 12/5/24 09:59, Michael Stone wrote:
    On Thu, Dec 05, 2024 at 09:42:08AM -0500, eben@gmx.us wrote:
    Is it different when you boot from an nvme drive? I have what I was
    told was one and it appears as /dev/sdb or /dev/sda depending how the
    OS feels that day. I didn't buy it new, it was given to me, so I may
    have been misinformed. It's a thing that looks like a SIMM, and when
    it's plugged in the motherboard disables one of the SATA ports, which
    is unfortunate.

    That is a SATA SSD, not an NVMe.

    Interesting, thanks. Apparently either it was misrepresented to me, or I misremembered. That explains some stuff.

    The SATA drive letters can change based on things like which drive starts
    up faster or what removeable devices are plugged in, which is why using
    UUIDs or somesuch is preferred over using the device name.

    And that probably explains why it's always sdb under a rescue thumb drive, because that environment doesn't automount _anything_.

    SATA maxes out at 600MB/s, while PCIe is currently at 4000MB/s per lane,
    with NVMe drives typically using as many as 4 lanes [16000MB/s].

    How do I tell how many lanes a given drive uses (preferably before purchase)?

    For many [most?] consumer applications the differences will not be noticable.)

    Yeah, I probably wouldn't be able to tell. It's just geek points. I was thinking it might matter when xferring gigabyte+ files to the media server,
    but then the bottleneck would either be the CPU encrypting the SSH data, or
    the network itself.

    Is one kind more long-lived than the other?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Monnier@21:1/5 to All on Thu Dec 5 17:20:01 2024
    That is a SATA SSD, not an NVMe.
    Interesting, thanks. Apparently either it was misrepresented to me, or I misremembered. That explains some stuff.

    The switch from SATA to the NVMe interface/protocol happened basically
    at the same time as the switch from the 2.5" (and mini-pcie) to the M.2
    format, so it's a common mistake to consider that for an SSD, "M.2 =>
    NVMe" (the implication is currently true in the other direction, tho,
    AFAIK).

    For many [most?] consumer applications the differences will not be
    noticable.)

    That's my experience as well.

    Is one kind more long-lived than the other?

    As a general rule, no, tho you could argue that by virtue of imposing
    slower writes, the SATA interface can lead to a longer lifetime just
    because it takes longer to reach the "TBW" limit. 🙂
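
    The arithmetic behind that joke, with invented but plausible numbers (a
    600 TBW endurance rating written nonstop at each interface's rough peak
    rate):

```shell
# 600 TBW in MB, divided by a sustained write rate in MB/s, gives the
# seconds of continuous writing needed to reach the endurance rating.
tbw_mb=$((600 * 1000 * 1000))   # 600 TB expressed in MB
sata=600                        # MB/s, the SATA ceiling
nvme=7000                       # MB/s, a fast PCIe v4 drive
echo "SATA: $((tbw_mb / sata / 3600)) hours of nonstop writing"
echo "NVMe: $((tbw_mb / nvme / 3600)) hours of nonstop writing"
```

    Nobody writes flat-out around the clock, of course, which is why this is
    a joke rather than a buying criterion.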


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Ritter@21:1/5 to Stefan Monnier on Thu Dec 5 18:00:04 2024
    Stefan Monnier wrote:
    That is a SATA SSD, not an NVMe.
    Interesting, thanks. Apparently either it was misrepresented to me, or I misremembered. That explains some stuff.

    The switch from SATA to the NVMe interface/protocol happened basically
    at the same time as the switch from the 2.5" (and mini-pcie) to the M.2 format, so it's a common mistake to consider that for an SSD, "M.2 =>
    NVMe" (the implication is currently true in the other direction, tho,
    AFAIK).

    Not at all. We have many servers with U.2 and U.3 format disks,
    which look like classic 2.5" SSDs but use NVMe PCIe connections.

    I suspect there are few desktops (mostly 'workstation' class
    machines) and no laptops using U.2.

    -dsr-

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Felix Miata@21:1/5 to All on Thu Dec 5 18:30:02 2024
    Michael Stone composed on 2024-12-05 10:42 (UTC-0500):

    https://wiki.debian.org/fstab#Labels

    I personally prefer UUIDs because the odds of an existing drive from a different system having a conflicting UUID when you put it in another
    system is near zero while the odds that another drive would have
    something like LABEL=root is very high.

    Clearly, because it's a seriously inept volume LABEL selection. Among the following are some better, yet easy enough to remember and type, examples:
    # egrep -i 'deb11|deb 11|seye|bull|debian11|debian 11' *L*txt | grep ├─ | wc -l
    26
    # egrep -i 'deb11|deb 11|seye|bull|debian11|debian 11' *L*txt | grep ├─
    a-865L10.txt:├─sda28 ext4 SS25deb11 cb7dac29-…
    ab250L26.txt:├─nvme0n1p14 ext4 pt3p14deb11 889fea98-…
    ab560L10.txt:├─nvme0n1p14 ext4 tm8p14deb11 78980253-…
    ab85mL14.txt:├─sda17 ext4 tg1p17deb11 53725495-…
    asa88L08.txt:├─sda14 ext4 tvgp14deb11 2c533ad4-…
    big31L52.txt:├─sda28 ext4 p61bullseye 8718ac45-…
    big41L51.txt:├─sda25 ext4 i256bullseye 274996ea-…
    fi965L26.txt:├─sda17 ext4 w71bullseye c3b75320-…
    g5easL34.txt:├─sda11 ext4 m25p11deb11 8cb07113-…
    ga88xL01.txt:├─sda14 ext4 tvgp14deb11 2c533ad4-…
    gb970L12.txt:├─sda24 ext4 gs5p24deb11 f890a134-…
    gx270L14.txt:├─sda31 ext4 debian11 7b4a7828-…
    gx27bL23.txt:├─sda33 ext4 33deb11 d25ab64a-…
    gx27cL20.txt:├─sda32 ext3 deb11p32 307f2bcb-…
    gx280L27.txt:├─sda21 ext4 21deb11 908f51ef-…
    gx28bL21.txt:├─sda24 ext3 s16d-deb11 8711d07c-…
    gx320L15.txt:├─sda34 ext4 sbyd-deb11 bfc2b8a0-…
    gx62bL32.txt:├─sda35 ext4 t87p35deb11 7b6de942-…
    gx780L30.txt:├─sda25 ext4 25deb11 79dcb3f6-…
    gx78bL12.txt:├─sda16 ext4 p256p16deb11 6c2f8ee8-…
    hp750L05.txt:├─sda14 ext4 st20deb11 32f05d14-…
    hp945L15.txt:├─sda15 ext4 h8sbullseye 2f1ef2a2-…
    m7ncdL29.txt:├─sda44 ext3 H16Adeb11 7c5fd253-…
    mcp61L19.txt:├─sdb21 ext4 debian11 367e0348-…
    msi85L11.txt:├─sda10 ext4 sp25p10deb11 d8be9f22-…
    #

    The *L*txt files are automatically generated partitioner[1] logs with
    both parted -l and lsblk -f output appended, which I use for keeping
    track of what's installed where here. Strings like pt3, tm8, m25 & sbyd
    above are extractions from disk model and/or serial numbers.

    [1] http://www.dfsee.com/
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Monnier@21:1/5 to All on Thu Dec 5 18:20:01 2024
    "M.2 => NVMe" (the implication is currently true in the other
    direction, tho, AFAIK).
    Not at all. We have many servers with U.2 and U.3 format disks,
    which look like classic 2.5" SSDs but use NVMe PCIe connections.

    Aha! Thanks for setting me straight!


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris Green@21:1/5 to Dan Ritter on Thu Dec 5 18:50:01 2024
    Dan Ritter <dsr@randomstring.org> wrote:
    Stefan Monnier wrote:
    That is a SATA SSD, not an NVMe.
    Interesting, thanks. Apparently either it was misrepresented to me, or I misremembered. That explains some stuff.

    The switch from SATA to the NVMe interface/protocol happened basically
    at the same time as the switch from the 2.5" (and mini-pcie) to the M.2 format, so it's a common mistake to consider that for an SSD, "M.2 =>
    NVMe" (the implication is currently true in the other direction, tho, AFAIK).

    Not at all. We have many servers with U.2 and U.3 format disks,
    which look like classic 2.5" SSDs but use NVMe PCIe connections.

    I suspect there are few desktops (mostly 'workstation' class
    machines) and no laptops using U.2.

    As I understand it the slots in the M2 SSD connector can tell whether
    it's SATA or NVMe or both. I have an M2 SSD which I believe will work
    either with a SATA connection or with NVMe, and it has two slots in
    its connector.

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Stone@21:1/5 to eben@gmx.us on Thu Dec 5 19:10:01 2024
    On Thu, Dec 05, 2024 at 10:55:48AM -0500, eben@gmx.us wrote:
    How do I tell how many lanes a given drive uses (preferably before purchase)?

    It would be buried in the technical docs. I've only seen 4x drives (but
    I'm sure there may be some cheaper drives with fewer). On the
    motherboard side it's common to see 2 lanes in some slots for the simple
    reason that there are a limited number of lanes from the CPU--most
    people would rather have a slower-connected drive than none at all.
    Having 2 lanes may not even be a limitation: 2 PCIe v3 lanes are the
    same speed as 4 v2 lanes. The bandwidth to the drive is rarely a
    bottleneck, especially at the desktop level. For best results plug your
    drive into the motherboard slot with the largest number of the highest
    version lanes. A lower version drive can be used in a higher version
    slot with no penalty, and a higher version drive can be used in a lower
    version slot but will run each lane at the lower speed and will have
    half the theoretical performance, or less.

    E.g.: my motherboard has something like 4x v5 + 4x v4 + 2x v4 + 4x v3.
    Let's say I have 2 v4 drives and 1 v3 drive. If I put one v4 drive in
    the 4x v5 slot, one in the 4x v4 slot, and the v3 drive in the 4x v3
    slot, all the drives will operate at their peak efficiency. If I put a
    4x v4 drive in the 2x v4 or 4x v3 slot, it will operate at the same
    lower level (half the peak bandwidth). Also, if I put the v3 drive in
    the 2x v4 slot it will only be able to use half of its bandwidth,
    because it will only run at 2x v3 (as it is a v3 drive). Bottom line,
    it's worth checking the motherboard documentation if you have multiple
    M.2 slots, but only because it costs nothing to do so.
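
    The lane arithmetic above can be sanity-checked with approximate per-lane
    rates (roughly 500/1000/2000/4000 MB/s for PCIe v2/v3/v4/v5; real figures
    are a bit lower after encoding overhead):

```shell
# approximate per-lane PCIe throughput in MB/s, by PCIe version
v2=500; v3=1000; v4=2000; v5=4000
echo "4x v4 slot: $((4 * v4)) MB/s"   # a 4-lane v4 drive at full speed
echo "2x v4 slot: $((2 * v4)) MB/s"   # same drive, half the lanes
echo "2x v3 = $((2 * v3)) MB/s, 4x v2 = $((4 * v2)) MB/s"   # equal, as noted
```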

    Yeah, I probably wouldn't be able to tell. It's just geek points. I was
    thinking it might matter when xferring gigabyte+ files to the media server,
    but then the bottleneck would either be the CPU encrypting the SSH data, or
    the network itself.

    Yes. Also not many drives can sustain a multi-gigabyte write rate
    anyway, and if you're just talking bursts most situations won't
    differentiate between moving 200MB in .1s vs 1s as the write is
    generally buffered by the OS. So where the peak speed matters on the
    desktop is mostly in very large reads with no writing, which just don't
    happen much. Basically game startup, but the game itself is probably not written to depend on 16GB/s because most people wouldn't be able to run
    it. Once you're beyond the 600MB/s of SATA into any NVMe you've hit the
    point of diminishing returns.

    Is one kind more long-lived than the other?

    Not due specifically to the interface. At the same price point you'll
    probably have similar longevity, though sata drives are moving in the
    direction of less bang for the buck because there aren't many new ones
    being developed and the sales volume is going NVMe.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Stone@21:1/5 to Felix Miata on Thu Dec 5 19:20:01 2024
    On Thu, Dec 05, 2024 at 12:24:36PM -0500, Felix Miata wrote:
    Clearly, because it's a seriously inept volume LABEL selection. Among the
    following are some better, yet easy enough to remember and type, examples:
    # egrep -i 'deb11|deb 11|seye|bull|debian11|debian 11' *L*txt | grep ├─ | wc -l
    26
    # egrep -i 'deb11|deb 11|seye|bull|debian11|debian 11' *L*txt | grep ├─
    a-865L10.txt:├─sda28 ext4 SS25deb11 cb7dac29-…
    ab250L26.txt:├─nvme0n1p14 ext4 pt3p14deb11 889fea98-…
    [snip]

    Never have I felt any need or desire to do anything like that. If I did,
    it would be on an LVM, not on dozens of partitions.

    The *L*txt files are automatically generated partitioner[1] logs with
    both parted -l and lsblk -f output appended, which I use for keeping
    track of what's installed where here. Strings like pt3, tm8, m25 & sbyd
    above are extractions from disk model and/or serial numbers.

    Perhaps we can agree to disagree on what's easy. IMO, your labels are
    basically as opaque as a UUID, even if systemic, but with the
    disadvantage of needing more effort to generate. :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Hans@21:1/5 to All on Thu Dec 5 20:30:01 2024
    Hi folks,

    as promised I send you my experiences with cloning to NVME.

    So, today I got my new notebook. As I never used UEFI, I disabled UEFI in BIOS (my first mistake!), then cloned everything to the new drive.

    First reboot worked well, no problems. But then I realized that if you want NVME mode, you MUST use native UEFI in BIOS settings.

    However, doing so, neither Debian nor Windows will boot. Of course: there is
    no EFI partition on my hard drive, as I never needed one (until now).

    Now I am struggling with the drive, as I want NVME mode of course, because it
    is faster. And of course, I do not want to reinstall everything!

    I saw some documentation on how to get EFI onto the drive, but it looks like you need a separate partition with FAT to put EFI on, right?

    However, I also saw the possibility of putting EFI on my separate /boot partition.

    What can I do? I would like to keep the existing partitions. However, I could shrink them. At the moment, my drive looks like this:

    primary partition Windows-boot ntfs
    primary partition Windows ntfs
    primary partition /boot /dev/sda3 ext4
    extended partition /dev/sda4
    logical partition /dev/sda5 swap
    logical partition /dev/sda6 / ext4
    logical partition /dev/sda7 encrypted home
    logical partition /dev/sda8 encrypted usr
    logical partition /dev/sda9 encrypted var
    logical partition /dev/sda10 encrypted data

    So I could shrink some partitions and create a new logical one.

    Another option would be to delete the "swap" partition and make a new "EFI" partition.

    What do you think, might be the best way?

    Some better ideas?

    Thanks for reading this.

    Best regards

    Hans

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Felix Miata@21:1/5 to All on Thu Dec 5 20:20:01 2024
    Michael Stone composed on 2024-12-05 13:13 (UTC-0500):

    On Thu, Dec 05, 2024 at 12:24:36PM -0500, Felix Miata wrote:

    Clearly, because it's a seriously inept volume LABEL selection. Among the
    following are some better, yet easy enough to remember and type, examples:
    # egrep -i 'deb11|deb 11|seye|bull|debian11|debian 11' *L*txt | grep ├─ | wc -l
    26
    # egrep -i 'deb11|deb 11|seye|bull|debian11|debian 11' *L*txt | grep ├─
    a-865L10.txt:├─sda28 ext4 SS25deb11 cb7dac29-…
    ab250L26.txt:├─nvme0n1p14 ext4 pt3p14deb11 889fea98-…
    [snip]

    Never have I felt any need or desire to do anything like that. If I did,
    it would be on an LVM, not on dozens of partitions

    I have more than 40 PCs with well in excess of a dozen installed distros, each on
    a partition, readily cloned as element of backup system or seeding a new PC. I've
    never imagined any remotely simple way cloning (from outside an involved OS) could
    work with LVM employed.

    The *L*txt files are automatically generated partitioner[1] logs with
    both parted -l and lsblk -f output appended, which I use for keeping
    track of what's installed where here. Strings like pt3, tm8, m25 & sbyd
    above are extractions from disk model and/or serial numbers.

    Perhaps we can agree to disagree on what's easy. IMO, your labels are
    basically as opaque as a UUID, even if systemic, but with the
    disadvantage of needing more effort to generate. :-)

    Generate too, big deal. I get cross-eyed looking at them. LOL

    8 or 13 characters I can remember, recognize and type within a 40 entry custom.cfg
    or 40_custom, among other places, such as
    # wc -l /etc/fstab
    155 /etc/fstab
    #

    I'm annoyed constantly in help forums, where scrolling is required, or wrapping occurs, because of 36 character UUID string pollution functioning as yet another
    "personally identifiable information"[1] data element for the data scrapers.

    [1] https://en.wikipedia.org/wiki/Personal_identifier
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Monnier@21:1/5 to All on Thu Dec 5 20:50:01 2024
    As I understand it the slots in the M2 SSD connector can tell whether
    it's SATA or NVMe or both. I have an M2 SSD which I believe will work
    either with a SATA connection or with NVMe, and it has two slots in
    its connector.

    IIUC the M.2 slot into which you insert the SSD can support either SATA,
    or NVMe, or both (depending on the slot), but I have not yet seen any
    M.2 SSD drive which works with both SATA and NVMe.


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Stone@21:1/5 to Chris Green on Thu Dec 5 21:00:01 2024
    On Thu, Dec 05, 2024 at 05:22:50PM +0000, Chris Green wrote:
    As I understand it the slots in the M2 SSD connector can tell whether
    it's SATA or NVMe or both. I have an M2 SSD which I believe will work
    either with a SATA connection or with NVMe, and it has two slots in
    its connector.

    The M.2 drive will be either NVMe or SATA, I've never heard of one that
    does both (would add cost for no benefit over two simpler cards). The
    M.2 slot can support one or the other or both. SATA drives will have two notches (B+M key), NVMe is usually one (M key), but there are two-notch
    drives (B+M key) intended to be usable in M.2 slots actually intended
    for low bandwidth network cards (B key slot without SATA support). The
    M.2 keying situation is generally a bit of a mess and there's no
    guarantee that a drive that physically fits into a slot will actually
    work, while there are configurations which logically work but need a
    physical converter. I assume this is because people did things not
    originally expected by the spec to allow more flexible use of slots, but
    the result is that the keying makes things more confusing rather than simplifying anything.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Stone@21:1/5 to Felix Miata on Thu Dec 5 21:00:01 2024
    On Thu, Dec 05, 2024 at 02:15:13PM -0500, Felix Miata wrote:
    I have more than 40 PCs with well in excess of a dozen installed distros, each on
    a partition,

    You have a unique set of requirements. Probably that has little
    relevance to basically anyone else.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andrew M.A. Cater@21:1/5 to Hans on Thu Dec 5 21:10:01 2024
    On Thu, Dec 05, 2024 at 08:24:05PM +0100, Hans wrote:
    Hi folks,

    as promised I send you my experiences with cloning to NVME.

    So, today I got my new notebook. As I never used UEFI, I disabled UEFI in BIOS
    (my first mistake!), then cloned everything to the new drive.

    First reboot worked well, no problems. But then I realized that if you want NVME mode, you MUST use native UEFI in BIOS settings.

    However, doing so, neither Debian nor Windows will boot. Of course: There is no EFI partition on my harddrive, as I never needed one (still).


    If you still have the drive you cloned from set it aside.

    If you need dual boot, set UEFI up as the mode to boot into in firmware.

    Use the Microsoft tools to create a Windows .iso file

    Install Windows from a .iso file. Use Windows drive tools to shrink Windows
    on the drive to make some space.

    Then use something like gparted to move the Windows partition to the end of the drive.

    Install Debian on the first half of the drive: allow os-prober to find
    the Windows partition.

    Then you've got dual boot. This is the routine I've been through a couple
    of times with a refurbished laptop where the vendor has installed Windows
    in legacy MBR mode.

    Then copy your data across from the drive you set aside. It's a huge pain
    but it's actually worth it IMHO.

    All the very best, as ever,

    Andy
    (amacater@debian.org)
    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Stone@21:1/5 to Hans on Thu Dec 5 21:20:01 2024
    On Thu, Dec 05, 2024 at 08:24:05PM +0100, Hans wrote:
    What can I do? I would like to keep the existing partitions. However, I could
    shrink them. At the moment, my drive looks like this:

    primary partition Windows-boot ntfs
    primary partition Windows ntfs
    primary partition /boot /dev/sda3 ext4
    extended partition /dev/sda4
    logical partition /dev/sda5 swap
    logical partition /dev/sda6 / ext4
    logical partition /dev/sda7 encrypted home
    logical partition /dev/sda8 encrypted usr
    logical partition /dev/sda9 encrypted var
    logical partition /dev/sda10 encrypted data

    You can't just dd the disks if you have an old-style DOS partition
    table; you need to create a GPT partition table on the new drive, then
    dd the individual partitions. I'm unaware of a tool that would automate
    this, though one may exist. You don't need a separate /boot for the
    scenario above, and you could turn partition 3 into the EFI partition
    (moving the stuff currently in /boot into /boot on the / drive). This is
    technically straightforward, but there are a lot of fiddly bits, and I
    have no idea whether Windows would still work. If you had just Linux and
    a couple of partitions it would be much easier.
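    A dry-run sketch of that partition-by-partition route (the device
    names, partition sizes, and the `run` wrapper are all illustrative;
    `run` only prints each command, so drop it to execute for real):

    ```shell
    #!/bin/sh
    # Dry-run sketch: new GPT on the NVMe drive, then dd each filesystem
    # across. "run" only prints the commands it is given.
    run() { printf '%s\n' "$*"; }

    OLD=/dev/sda        # old SSD with the MBR partition table (example)
    NEW=/dev/nvme0n1    # new NVMe drive that will get GPT (example)

    run sgdisk --zap-all "$NEW"                      # wipe any old tables
    run sgdisk -n1:0:+512M -t1:EF00 -c1:ESP "$NEW"   # EFI system partition
    run sgdisk -n2:0:0     -t2:8300 -c2:root "$NEW"  # rest for Linux (example)

    # Copy one filesystem at a time; the filesystem UUID travels with the
    # bits, so UUID-based /etc/fstab entries keep working.
    run dd if="${OLD}6" of="${NEW}p2" bs=4M conv=fsync status=progress
    run blkid "${NEW}p2"                             # confirm the UUID
    ```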

    Honestly, you're in partition hell and I'd start over with fewer
    partitions. (Though it is possible to keep this structure.) Maybe use a
    Windows disk-migration tool to copy the Windows stuff to the new drive,
    then create a new empty encrypted / plus /boot plus EFI, sync the
    current / to it, then sync the other partitions into the new /. There
    are several ways to solve this problem, but none automated or simple.
    When you first talked about migrating I kind of assumed a typical
    install with a small number of Linux partitions, not this. :-)

  • From Hans@21:1/5 to All on Thu Dec 5 21:30:01 2024
    Aargh! I just discovered that the seller did not send the notebook as ordered. I
    ordered it with an NVMe drive and he sent it with a SATA SSD (I checked the SSD,
    and yes, it has TWO notches, where an NVMe drive should have one).

    Tomorrow I will contact the seller and maybe return the notebook.

    However, I will keep you informed!

    There were a lot of very good hints from you in the last days. This helped very much, especially for understanding.

    I cannot say thank you enough for it.

    We will see, what happens the next days.

    Good, that I have a backup. Backup is always a good thing, always.

    Best

    Hans

  • From eben@gmx.us@21:1/5 to Michael Stone on Thu Dec 5 22:10:01 2024
    On 12/5/24 13:07, Michael Stone wrote:
    On Thu, Dec 05, 2024 at 10:55:48AM -0500, eben@gmx.us wrote:
    How do I tell how many lanes a given drive uses (preferably before purchase)?

    It would be buried in the technical docs. I've only seen 4x drives (but I'm sure there may be some cheaper drives with fewer). On the motherboard side it's common to see 2 lanes in some slots for the simple reason that there
    are a limited number of lanes from the CPU--most people would rather have a slower-connected drive than none at all.
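    For what it's worth, on a Linux box you can read the link width an
    already-installed NVMe drive actually negotiated from sysfs (a sketch;
    the PCI address 0000:01:00.0 is an example, find yours with
    `readlink /sys/class/nvme/nvme0/device`):

    ```shell
    #!/bin/sh
    # Print negotiated vs. maximum PCIe link width/speed for one device.
    # The PCI address below is an example; substitute your drive's.
    dev=/sys/bus/pci/devices/0000:01:00.0
    for f in current_link_width max_link_width current_link_speed max_link_speed; do
        if [ -r "$dev/$f" ]; then
            printf '%s: %s\n' "$f" "$(cat "$dev/$f")"
        fi
    done
    # Alternatively: lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
    ```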

    To find out if the motherboard imposed any limitations, I checked the
    manual. I found these tables, which I can't see the implications of:

    M2D_32G M.2 connector

    +-------------+---------+---------+---------+---------+---------+---------+
    | Type of SSD | SATA3_0 | SATA3_1 | SATA3_2 | SATA3_3 | SATA3_4 | SATA3_5 |
    |             |   SATA_Express    |   SATA_Express    |         -         |
    +-------------+---------+---------+---------+---------+---------+---------+
    | SATA SSD    |   OK    |   OK    |   OK    |    X    |   OK    |   OK    |
    |             |        OK         |        OK         |         -         |
    +-------------+---------+---------+---------+---------+---------+---------+
    | PCIe x4 SSD |    X    |    X    |    X    |    X    |   OK    |   OK    |
    |             |     OK (note)     |         X         |         -         |
    +-------------+---------+---------+---------+---------+---------+---------+
    | PCIe x2 SSD |   OK    |   OK    |    X    |    X    |   OK    |   OK    |
    |             |        OK         |         X         |         -         |
    +-------------+---------+---------+---------+---------+---------+---------+

    Note: The PCIe x4 SSD runs at x2 speed.

    M2A_32G M.2 connector

    +-------------+---------+---------+---------+---------+---------+---------+
    | Type of SSD | SATA3_0 | SATA3_1 | SATA3_2 | SATA3_3 | SATA3_4 | SATA3_5 |
    |             |   SATA_Express    |   SATA_Express    |         -         |
    +-------------+---------+---------+---------+---------+---------+---------+
    | SATA SSD    |    X    |   OK    |   OK    |    X    |   OK    |   OK    |
    |             |        OK         |        OK         |         -         |
    +-------------+---------+---------+---------+---------+---------+---------+
    | PCIe x4 SSD |   OK    |   OK    |   OK    |   OK    |   OK    |   OK    |
    |             |        OK         |        OK         |         -         |
    +-------------+---------+---------+---------+---------+---------+---------+
    | PCIe x2 SSD |   OK    |   OK    |   OK    |   OK    |   OK    |   OK    |
    |             |        OK         |        OK         |         -         |
    +-------------+---------+---------+---------+---------+---------+---------+

    Yes, the tables were in that order. Not sure why. In the book "OK" and "X" were a checkmark and a times-X respectively, but they're hard to type.

    In each table, the even-numbered rows were darker grey, so I guess they go together. It gets confusing when they try to re-use the table's structure
    for (mostly) unrelated data. I don't know why they didn't just make the
    tables two columns wider and half as tall. On a side note, what are SATA Express ports good for? They're narrower than standard SATA ports.

    Anyhow it looks like M2A_32G is more capable in general, but there are weird restrictions everywhere. Also it looks like there's a way to assign what's
    in the M.2 slot to another SATA port, and I need to find out how that's
    done, if I should acquire another M.2 drive.

    E.g.: my motherboard has something like 4x v5 + 4x v4 + 2x v4 + 4x v3. Let's
    say I have 2 v4 drives and 1 v3 drive. If I put one v4 drive in the 4x v5
    slot, one in the 4x v4 slot, and the v3 drive in the 4x v3 slot, all the
    drives will operate at their peak efficiency. If I put a 4x v4 drive in the
    2x v4 or 4x v3 slot, it will operate at the same lower level (half the peak
    bandwidth). Also, if I put the v3 drive in the 2x v4 slot it will only be
    able to use half of its bandwidth, because it will only run at 2x v3 (as it
    is a v3 drive). Bottom line: it's worth checking the motherboard
    documentation if you have multiple M.2 slots, but only because it costs
    nothing to do so.

    Man, I need to play with some better gear. This is almost entirely academic.

    Is one kind more long-lived than the other?

    Not due specifically to the interface. At the same price point you'll probably have similar longevity, though sata drives are moving in the direction of less bang for the buck because there aren't many new ones being developed and the sales volume is going NVMe.

    Right, I've already had to go from 1T spinning-rust drives to 2T, not
    because I was running out of room, but because the selection of 1T drives
    was so paltry. Generally I don't mind things being slow (I'm used to
    dealing with slow computers), but money is in short supply and cool hardware costs money. So I'm more of a trailing-edge consumer.

  • From pocket@homemail.com@21:1/5 to All on Thu Dec 5 22:20:01 2024
    Sent: Thursday, December 05, 2024 at 2:24 PM
    From: "Hans" <hans.ullrich@loop.de>
    To: debian-user@lists.debian.org
    Subject: Re: From SSD to NVME

    Hi folks,

    as promised, I'm sending you my experiences with cloning to NVMe.

    So, today I got my new notebook. As I never used UEFI, I disabled UEFI in the BIOS
    (my first mistake!), then cloned everything to the new drive.

    First reboot worked well, no problems. But then I realized that if you want NVME mode, you MUST use native UEFI in the BIOS settings.

    However, doing so, neither Debian nor Windows will boot. Of course: there is no EFI partition on my hard drive, as I never needed one (until now).

    Now I am wrestling with the drive, as I want NVME mode of course, because it is faster. And of course, I do not want to reinstall everything!

    I saw some documentation on how to get EFI onto the drive, but it looks like you
    need a separate FAT partition for EFI, right?

    However, I also saw the possibility of putting EFI on my separate /boot partition.

    What can I do? I would like to keep the existing partitions. However, I could
    shrink them. At the moment, my drive looks like this:

    primary partition Windows-boot ntfs
    primary partition Windows ntfs
    primary partition /boot /dev/sda3 ext4
    extended partition /dev/sda4
    logical partition /dev/sda5 swap
    logical partition /dev/sda6 / ext4
    logical partition /dev/sda7 encrypted home
    logical partition /dev/sda8 encrypted usr
    logical partition /dev/sda9 encrypted var
    logical partition /dev/sda10 encrypted data

    So I could shrink some partitions and create a new logical one.

    Another option would be to delete the "swap" partition and make a new "EFI" partition.

    What do you think, might be the best way?

    Some better ideas?

    Thanks for reading this.

    Best regards

    Hans


    Well, it looks like you have a big job ahead of you.

    Shrinking the swap partition is the easy way to get some room as you can create a swap file to take its place.
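    That replacement could look roughly like this (a dry-run sketch: the
    `run` wrapper only prints the privileged commands, and the size and
    path are examples):

    ```shell
    #!/bin/sh
    # Dry-run sketch: replace a swap partition with a swap file.
    run() { printf '%s\n' "$*"; }

    SWAPFILE=/swapfile   # example path on the root filesystem

    run fallocate -l 4G "$SWAPFILE"   # on some filesystems use dd instead
    run chmod 600 "$SWAPFILE"
    run mkswap "$SWAPFILE"
    run swapon "$SWAPFILE"
    # Then replace the old swap UUID line in /etc/fstab with:
    #   /swapfile none swap sw 0 0
    ```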

    The real issue is that the EFI partition, if I recall correctly, has to be a primary partition.

    Can you go to GPT instead of an MS-DOS MBR?

  • From David Wright@21:1/5 to Andrew M.A. Cater on Thu Dec 5 22:20:01 2024
    On Thu 05 Dec 2024 at 20:01:29 (+0000), Andrew M.A. Cater wrote:

    Use the Microsoft tools to create a Windows .iso file

    Install Windows from a .iso file. Use Windows drive tools to shrink Windows on the drive to make some space.

    Then use something like gparted to move the Windows to the end of the drive.

    Do you mean specifically the end of the drive,
    or just at one end or the other? Reasoning?

    Cheers,
    David.

  • From Felix Miata@21:1/5 to All on Thu Dec 5 22:20:01 2024
    Michael Stone composed on 2024-12-05 14:50 (UTC-0500):

    On Thu, Dec 05, 2024 at 02:15:13PM -0500, Felix Miata wrote:

    I have more than 40 PCs with well in excess of a dozen installed distros, each on
    a partition,

    You have a unique set of requirements. Probably that has little
    relevance to basically anyone else.

    I hear identifying hardware problems using VMs is rather problematic - no such obstacle here. :)

    Michael Stone composed on 2024-12-05 15:12 (UTC-0500):

    You can't just dd the disks if you have an old style dos partition
    table, you need to create a GPT partition table on the new drive, then
    dd the individual partitions. I'm unaware of a tool that would automate
    this, though one may exist.

    At least one does. I provided a URL to the one I use, for some definition of
    "automated", upthread @2024-12-05 12:24 (UTC-0500) in reply to your post 102
    minutes earlier. :)
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

  • From David Wright@21:1/5 to Hans on Thu Dec 5 22:20:01 2024
    On Thu 05 Dec 2024 at 20:24:05 (+0100), Hans wrote:

    So, today I got my new notebook. As I never used UEFI, I disabled UEFI in the BIOS
    (my first mistake!), then cloned everything to the new drive.

    Why did you stick with MBR partitioning rather than GPT?

    Now I am wrestling with the drive, as I want NVME mode of course, because it is faster. And of course, I do not want to reinstall everything!

    It might be a good thing that you are starting over.

    I saw some documentation on how to get EFI onto the drive, but it looks like you
    need a separate FAT partition for EFI, right?

    However, I also saw the possibility of putting EFI on my separate /boot partition.

    With an ESP, you wouldn't need a separate /boot partition.
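    Repurposing that separate /boot partition as the ESP could look
    roughly like this dry-run sketch (partition numbers follow Hans's
    layout but are illustrative, the `run` wrapper only prints each
    command, and grub-efi-amd64 assumes an amd64 install):

    ```shell
    #!/bin/sh
    # Dry-run sketch: turn the old /boot partition into an EFI System
    # Partition and reinstall GRUB for UEFI booting.
    run() { printf '%s\n' "$*"; }

    ESP=/dev/nvme0n1p3   # stands in for the old /dev/sda3 /boot partition

    # First move the old /boot contents onto the root filesystem, then:
    run sgdisk -t3:EF00 /dev/nvme0n1   # mark partition 3 as an ESP (GPT)
    run mkfs.vfat -F32 "$ESP"          # the ESP must be FAT
    run mount "$ESP" /boot/efi
    run apt install grub-efi-amd64
    run grub-install --target=x86_64-efi --efi-directory=/boot/efi
    run update-grub
    ```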

    What can I do? I would like to keep the existing partitions. However, I could
    shrink them. At the moment, my drive looks like this:

    primary partition Windows-boot ntfs
    primary partition Windows ntfs
    primary partition /boot /dev/sda3 ext4
    extended partition /dev/sda4
    logical partition /dev/sda5 swap
    logical partition /dev/sda6 / ext4
    logical partition /dev/sda7 encrypted home
    logical partition /dev/sda8 encrypted usr
    logical partition /dev/sda9 encrypted var
    logical partition /dev/sda10 encrypted data

    Why do you want so many separate partitions on a notebook?

    What do you think, might be the best way?

    Some better ideas?

    I see that Andrew posted a summary of how you could proceed.
    Others might do the same. But there were so many pitfalls
    in your first attempt, would it not be sensible for you to
    post, in a little more detail, how you intend to build the next
    machine, invite criticism, and then refine your plan, rather
    than just going for the "big reveal" in a few days time.

    Personally, my biggest worry would be dealing with Windows.
    But I don't know what resources you have for that.

    Cheers,
    David.

  • From Felix Miata@21:1/5 to All on Thu Dec 5 22:30:01 2024
    pocket composed on 2024-12-05 22:17 (UTC+0100):

    The real issue is that the EFI partition, if I recall correctly, has to be a primary partition.

    The ESP filesystem must be on a GPT partition. GPT is compatible with legacy/BIOS
    booting, but not the other way around. It exists because UEFI requires it.
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

  • From Nicolas George@21:1/5 to All on Thu Dec 5 22:30:01 2024
    Felix Miata (12024-12-05):
    The ESP filesystem must be on a GPT partition.

    Not always.

    Regards,

    --
    Nicolas George

  • From Felix Miata@21:1/5 to All on Thu Dec 5 22:40:01 2024
    Nicolas George composed on 2024-12-05 22:28 (UTC+0100):

    Felix Miata:

    The ESP filesystem must be on a GPT partition.

    Not always.

    Where else is possible?
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

  • From Michael Stone@21:1/5 to Felix Miata on Thu Dec 5 23:00:01 2024
    On Thu, Dec 05, 2024 at 04:16:53PM -0500, Felix Miata wrote:
    At least one does. I provided a URL to the one I use, for some definition of
    "automated", upthread @2024-12-05 12:24 (UTC-0500) in reply to your post 102
    minutes earlier. :)

    Automated means something along the lines of "make this MBR disk into a
    working GPT disk at the push of a button". I don't see that your tool
    does that, but I'm also not interested in digging through the
    documentation in any detail.

  • From Andrew M.A. Cater@21:1/5 to David Wright on Thu Dec 5 23:10:01 2024
    On Thu, Dec 05, 2024 at 03:15:36PM -0600, David Wright wrote:
    On Thu 05 Dec 2024 at 20:01:29 (+0000), Andrew M.A. Cater wrote:

    Use the Microsoft tools to create a Windows .iso file

    Install Windows from a .iso file. Use Windows drive tools to shrink Windows on the drive to make some space.

    Then use something like gparted to move the Windows to the end of the drive.

    Do you mean specifically the end of the drive,
    or just at one end or the other? Reasoning?


    1. Install Windows to the whole drive - it's what Windows does :)
    2. Use gparted to move Windows (maybe apart from the EFI partition) to the
    end of the drive - move the blank space to the front of the drive after
    the EFI partition.
    3. Install Debian in the blank space.

    You might be able to do it all with one EFI partition. I think I found it
    easier to put Windows on first, because it's fussy, and then install Debian,
    but I may have done it both ways round in the past. Installing Windows
    second is definitely harder, if I recall correctly.

    Andy
    (amacater@debian.org)

    Cheers,
    David.


  • From Nicolas George@21:1/5 to All on Thu Dec 5 23:10:02 2024
    Felix Miata (12024-12-05):
    Where else is possible?

    Depends on the firmware, of course. If you try to put a GPT on the drive
    of a Lenovo Miix 3-1030, it will not boot.

    Regards,

    --
    Nicolas George

  • From Michael Stone@21:1/5 to Andrew M.A. Cater on Thu Dec 5 23:40:01 2024
    On Thu, Dec 05, 2024 at 10:03:52PM +0000, Andrew M.A. Cater wrote:
    2. Use gparted to move Windows (maybe apart from the EFI partition) to the
    end of the drive - move the blank space to the front of the drive after
    the EFI partition.

    I don't understand this step: why are you moving Windows? Linux doesn't
    care where it is on the disk, so you should be able to just shrink the
    Windows partition and continue from there.

    You might be able to do it all with one EFI partition.

    You generally can have multiple boot entries, each with its own path to
    a bootloader (.efi file) on the EFI partition. In some cases a (buggy)
    system will only boot from /EFI/BOOT/BOOTX64.EFI, but that's rare these
    days.
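    For example, the firmware's entries can be listed and added with
    efibootmgr (the listing is read-only; the creation line is print-only
    here, and the disk, partition number, and loader path are
    illustrative):

    ```shell
    #!/bin/sh
    # List current UEFI boot entries, then show how one more would be
    # added. "run" only prints the command it is given.
    run() { printf '%s\n' "$*"; }

    efibootmgr -v 2>/dev/null || true   # read-only: show existing entries

    # Each OS points at its own .efi file on the shared ESP, e.g.:
    run efibootmgr -c -d /dev/nvme0n1 -p 1 -L debian -l '\EFI\debian\grubx64.efi'
    ```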

  • From Michael Stone@21:1/5 to eben@gmx.us on Thu Dec 5 23:30:02 2024
    On Thu, Dec 05, 2024 at 04:06:17PM -0500, eben@gmx.us wrote:
    To find out if the motherboard imposed any limitations, I checked the
    manual. I found these tables, which I can't see the implications of:

    M2D_32G M.2 connector
    [table snipped]

    Ah, SATA express (SATAe). That's a dead standard that never actually got implemented in a drive (as far as I know) but was included on
    motherboards for some time before it was clear that M.2 won and SATAe
    was a dead end. SATAe had the ability to use two SATA ports and an
    additional connector to provide two PCIe lanes for a drive, so a single
    connector would have attached to one of the little ports as well as the
    two SATA ports beside it. Certain SATA channels on these motherboards
    were shared between the SATA ports and the M.2 SATA pins, and PCIe lanes
    were shared between some of the M.2 PCIe pins and the SATAe PCIe pins
    *and* some of the SATA controllers. The table is trying to explain which
    combinations won't work. E.g., if you use a SATA M.2 drive in M2D_32G
    you can't also attach a SATA drive to SATA3_3 or a SATAe drive to
    SATA3_2/3 (not that such a drive exists); if you use a PCIe x4 SSD in
    M2D_32G you can't use SATA3_0/1/2/3; and if you use the SATAe associated
    with SATA3_0/1 you drop the speed of M2D_32G to x2 instead of x4.

    You can ignore the dark lines because SATAe doesn't exist. If you only
    plug SATA disks into SATA3_4/5 you can do anything with the two M.2
    connectors. If you want more than two SATA disks you can put an NVMe
    drive into M2A_32G and nothing in M2D_32G and use any SATA port.
    Or you can use SATA3_0/1 with two NVMe drives but that will drop M2D_32G
    NVMe to x2 speed. If you use SATA3_0 you can't put a SATA M.2 drive into M2A_32G and if you use SATA3_3 you can't put a SATA M.2 drive into
    M2D_32G. Simple, right?

    :-D

    Most desktop motherboards have some sort of limitations/sharing like this because
    there are only so many PCIe lanes from the CPU, but they vary in how
    well they communicate the information.

  • From Charles Curley@21:1/5 to Andrew M.A. Cater on Thu Dec 5 23:40:01 2024
    On Thu, 5 Dec 2024 22:03:52 +0000
    "Andrew M.A. Cater" <amacater@einval.com> wrote:

    2. Use gparted to move Windows (maybe apart from the EFI partition)
    to the end of the drive - move the blank space to the front of the
    drive after the EFI partition.

    OK, my curiosity is up. Why make a point of moving the Windows
    partition to the end of the drive?

    --
    Does anybody read signatures any more?

    https://charlescurley.com
    https://charlescurley.com/blog/

  • From Felix Miata@21:1/5 to All on Fri Dec 6 00:00:02 2024
    Michael Stone composed on 2024-12-05 16:51 (UTC-0500):

    On Thu, Dec 05, 2024 at 16:16:53 -0500, Felix Miata wrote:

    At least one does. I provided a URL to the one I use, for some definition of
    "automated", upthread @2024-12-05 12:24 (UTC-0500) in reply to your post 102
    minutes earlier. :)

    Automated means something along the lines of "make this MBR disk into a working GPT disk at the push of a button". I don't see that your tool

    It's not "my" tool. I've only been a user of it, to the exclusion of all other partitioning tools, for over two decades.

    does that, but I'm also not interested in digging through the
    documentation in any detail.

    Here ya go (Pastebinit on Bookworm refuses to accept images):
    https://paste.opensuse.org/a556a79e8015
    (expires in 7 days; shows the DFSee menu open for "Convert an MBR disk to GPT").
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

  • From eben@gmx.us@21:1/5 to Michael Stone on Fri Dec 6 00:10:01 2024
    On 12/5/24 17:26, Michael Stone wrote:
    On Thu, Dec 05, 2024 at 04:06:17PM -0500, eben@gmx.us wrote:
    To find out if the motherboard imposed any limitations, I checked the
    manual. I found these tables, which I can't see the implications of:

    M2D_32G M.2 connector
    [table snipped]

    Ah, SATA express (SATAe). That's a dead standard that never actually got implemented in a drive (as far as I know) but was included on
    motherboards for some time before it was clear that M.2 won and SATAe
    was a dead end.

    The table is trying to explain which combinations won't work.

    You can ignore the dark lines because SATAe doesn't exist.

    Good, that makes things simpler. Maybe I can find a way to disable it in the BIOS.

    I got a PCIe SATA card. Right now I'm using 1/4 of it for an optical drive, but if I should acquire an SSD that disables an important SATA port, the
    card may become more useful.

    Simple, right? :-D

    Yeah, I see myself doing a logic puzzle and losing quite a bit of hair if I
    add an SSD.

    Most desktop motherboards have some sort of limitations/sharing like this because there are only so many PCIe lanes from the CPU, but they vary in
    how well they communicate the information.

  • From David Wright@21:1/5 to Andrew M.A. Cater on Fri Dec 6 01:50:01 2024
    On Thu 05 Dec 2024 at 22:03:52 (+0000), Andrew M.A. Cater wrote:
    On Thu, Dec 05, 2024 at 03:15:36PM -0600, David Wright wrote:
    On Thu 05 Dec 2024 at 20:01:29 (+0000), Andrew M.A. Cater wrote:

    Use the Microsoft tools to create a Windows .iso file

    Install Windows from a .iso file. Use Windows drive tools to shrink Windows
    on the drive to make some space.

    Then use something like gparted to move the Windows to the end of the drive.

    Do you mean specifically the end of the drive,
    or just at one end or the other? Reasoning?


    1. Install Windows to the whole drive - it's what Windows does :)
    2. Use gparted to move Windows (maybe apart from the EFI partition) to the
    end of the drive - move the blank space to the front of the drive after
    the EFI partition.
    3. Install Debian in the blank space.

    Last time I installed Debian on a Windows computer, I used W's own
    Disk Manager to defragment, optimise, and shrink the main Windows
    partition, which meant it was at the /start/ of the free space.
    (W's DM for peace of mind of the system's owner.)

    I may be repeating this fairly soon, and am interested in the
    difference between W at the start and W at the end of the drive,
    particularly now that it's solid state rather than a spinning disc.

    You might be able to do it all with one EFI partition. I think I found it easier to put Windows on first - because it's fussy, then install Debian
    but I may have done it both ways round in the past. Installing Windows
    second is definitely harder if I recall correctly.

    Yes, I'm used to Microsoft first, then Debian. In the days of MSDOS,
    this was essential on some of my disks, because DOS had to choose its
    preferred disk geometry, or it wouldn't work at all.

    Cheers,
    David.

  • From Andrew M.A. Cater@21:1/5 to Charles Curley on Fri Dec 6 19:20:01 2024
    On Thu, Dec 05, 2024 at 03:32:10PM -0700, Charles Curley wrote:
    On Thu, 5 Dec 2024 22:03:52 +0000
    "Andrew M.A. Cater" <amacater@einval.com> wrote:

    2. Use gparted to move Windows (maybe apart from the EFI partition)
    to the end of the drive - move the blank space to the front of the
    drive after the EFI partition.

    OK, my curiosity is up. Why make a point of moving the Windows
    partition to the end of the drive?


    I seem to remember that it's significantly difficult to forecast the likely size you'll want for a Windows system to allow room for updates. I sized
    it at something like 70G and moved it to the end of the drive so that it
    didn't try to expand further into what Windows might regard as free space.
    (70G does allow room for a few Windows updates, all of which are larger
    than you think).

    I then used Debian to fill the blank space and re-used Microsoft's EFI partition.

    It's a while ago: I think since then I've virtualised Windows 11 on a kvm
    VM.

    All best, as ever,

    Andy
    (amacater@debian.org)



  • From Charles Curley@21:1/5 to Andrew M.A. Cater on Fri Dec 6 20:00:01 2024
    On Fri, 6 Dec 2024 18:18:27 +0000
    "Andrew M.A. Cater" <amacater@einval.com> wrote:


    OK, my curiosity is up. Why make a point of moving the Windows
    partition to the end of the drive?


    I seem to remember that it's significantly difficult to forecast the
    likely size you'll want for a Windows system to allow room for
    updates. I sized it at something like 70G and moved it to the end of
    the drive so that it didn't try to expand further into what Windows
    might regard as free space. (70G does allow room for a few Windows
    updates, all of which are larger than you think).

    Ah, that suggests that you leave room on your mass storage for later
    expansion of either Windows, Linux, or something completely unexpected.
    I don't, so I didn't consider the possibility. Thank you.


    I then used Debian to fill the blank space and re-used Microsoft's
    EFI partition.

    That's pretty much what I do.


    It's a while ago: I think since then I've virtualised Windows 11 on a
    kvm VM.

    Ah. The only reason I keep Windows around is that some computers come
    with an infestation of it. As long as I've paid for a license and have
    the mass storage to spare I'll keep it around.

    Thank you.

    --
    Does anybody read signatures any more?

    https://charlescurley.com
    https://charlescurley.com/blog/

  • From Anssi Saari@21:1/5 to Michael Stone on Wed Dec 11 09:00:01 2024
    Michael Stone <mstone@debian.org> writes:

    On Thu, Dec 05, 2024 at 10:55:48AM -0500, eben@gmx.us wrote:
    How do I tell how many lanes a given drive uses (preferably before purchase)?

    It would be buried in the technical docs. I've only seen 4x drives
    (but I'm sure there may be some cheaper drives with fewer).

    While we're on the topic of PCIe lanes and SSDs, I've been looking into
    some way of using old NVMe SSDs when they get replaced by bigger
    ones. I don't really want to have a stack of little M.2 USB boxes.

    There are some PCIe adapter boards that take two or more SSDs, but what
    isn't clear to me is whether those cards can work in the typically free x1
    PCIe slot, if the cards are x4 or x8 and the drives are x4?

    Yes. Also not many drives can sustain a multi-gigabyte write rate
    anyway...

    I have to say I was quite disappointed when I cloned a 1TB SSD to a 2TB
    one, average speed wasn't much higher than writing to an HD. I don't
    remember what the target drive was though. Since I don't intend to make
    a habit of this, no big deal, but I wonder what kind of write speed one
    could expect in a sustained write of 1TB?

  • From Dan Ritter@21:1/5 to Anssi Saari on Wed Dec 11 14:30:01 2024
    Anssi Saari wrote:
    Yes. Also not many drives can sustain a multi-gigabyte write rate
    anyway...

    I have to say I was quite disappointed when I cloned a 1TB SSD to a 2TB
    one, average speed wasn't much higher than writing to an HD. I don't
    remember what the target drive was though. Since I don't intend to make
    a habit of this, no big deal, but I wonder what kind of write speed one
    could expect in a sustained write of 1TB?

    One of the tests that servethehome.com does in reviewing SSDs is the
    write speed after cache saturation: that is, once you have sent enough gigabytes in a row, what is the ongoing write speed?

    It is not unusual for a PCIe NVMe device to manage writing the first
    few gigabytes at 4000 MB/s... and then drop to 500, 200 or even 150 MB/s
    for the long haul.

    And for many workloads, that's completely reasonable. It doesn't
    help all that much for copying large filesystems.
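    A crude way to see that drop on your own drive is to write far more
    data than the cache can absorb and let dd report the average rate (a
    sketch; the path and size are examples, and oflag=direct bypasses the
    page cache so the drive itself is measured):

    ```shell
    #!/bin/sh
    # Rough sustained-write probe: dd prints the average MB/s over the
    # whole run when it finishes.
    sustained_write() {   # usage: sustained_write <file> <size-in-MiB>
        dd if=/dev/zero of="$1" bs=1M count="$2" oflag=direct conv=fsync 2>&1
        rm -f "$1"        # clean up the probe file
    }

    # e.g. 64 GiB onto the mounted target drive, well past any SLC cache:
    # sustained_write /mnt/new/probe 65536
    ```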

    Note that spinning disks have improved transfer speeds in the last few
    years. 100-120 MB/s was all you could expect for more than a decade,
    but you can now find disks that will do 180-250 MB/s for large contiguous transfers.

    -dsr-

  • From Michael Stone@21:1/5 to Anssi Saari on Wed Dec 11 14:20:01 2024
    On Wed, Dec 11, 2024 at 09:51:01AM +0200, Anssi Saari wrote:
    Michael Stone <mstone@debian.org> writes:

    On Thu, Dec 05, 2024 at 10:55:48AM -0500, eben@gmx.us wrote:
    How do I tell how many lanes a given drive uses (preferably before purchase)?

    It would be buried in the technical docs. I've only seen 4x drives
    (but I'm sure there may be some cheaper drives with fewer).

    While we're on the topic of PCIe lanes and SSDs, I've been looking into
    some way of using old NVMe SSDs when they get replaced by bigger
    ones. I don't really want to have a stack of little m.2 USB boxes.

    There are some PCIe adapter boards that take two or more SSDs but what
    isn't clear to me is if those cards can work in the typically free x1
    PCIe slot, if the cards are x4 or x8 and drives are x4?

    As a general matter PCIe devices can/will downgrade, so e.g., if you
    plug a x16 video card into a x1 slot it will just work, but at x1 speed.

    But... The first gotcha is that many "dual m.2" boards have one sata and
    one nvme slot, and are effectively single-slot adapters if you're
    dealing with nvme drives. To have more than one nvme means one of two
    things: 1) a pcie switch chip, or 2) port bifurcation. Switch chips are
    expensive, and would probably make this exercise cost more than an old
    nvme drive is worth. Port bifurcation requires a certain number of
    physical lanes; typical would be a x8 card with x4 going to each of two
    m.2 slots: in that case, the second slot *will not* work if the adapter
    is plugged into a x1 slot. Check the documentation carefully to
    understand what each adapter does, and plan accordingly.

    To set expectations: a single nvme/pcie adapter costs a few bucks, a
    nvme+sata adapter a couple of bucks more, a dual nvme with a switch will
    cost over a hundred, and a dual bifurcated card maybe fifty to a
    hundred. There are cards that are way overpriced, but rarely are they
    underpriced--if you see a really good deal on a dual m.2, it's probably
    nvme+sata.

    Be warned: I've seen a lot of incompatibilities between various adapters
    and motherboards in this space. (Beyond the obvious issues with whether
    a motherboard supports bifurcation, the cards which have pcie switches
    are using functionality that's in the spec but not used all that much
    and not necessarily tested on a particular motherboard's
    implementation--especially consumer motherboards which aren't expected
    to use anything other than a video card.)
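    For back-of-the-envelope purposes, the bandwidth difference between
    slots is just lane count times per-lane rate. A rough calculation using
    the nominal per-lane transfer rates and the 128b/130b encoding that
    PCIe Gen3 and later use (real devices land well below these ceilings):

```python
# Nominal per-lane transfer rates in GT/s; Gen3+ use 128b/130b line encoding,
# so usable bits per transfer are 128/130 of the raw rate.
GTS = {3: 8.0, 4: 16.0, 5: 32.0}
ENCODING = 128 / 130

def pcie_gb_s(gen, lanes):
    """Approximate usable one-direction bandwidth in GB/s for a PCIe link:
    GT/s per lane, times encoding efficiency, divided by 8 bits/byte."""
    return GTS[gen] * ENCODING / 8 * lanes

for gen, lanes in [(3, 1), (3, 4), (4, 4)]:
    print(f"Gen{gen} x{lanes}: {pcie_gb_s(gen, lanes):.2f} GB/s")
# Gen3 x1: 0.98 GB/s
# Gen3 x4: 3.94 GB/s
# Gen4 x4: 7.88 GB/s
```

    Which is why a x4 NVMe drive stuffed behind a x1 link tops out at
    roughly a quarter of its rated sequential speed.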

    Yes. Also not many drives can sustain a multi-gigabyte write rate
    anyway...

    I have to say I was quite disappointed when I cloned a 1TB SSD to a 2TB
    one, average speed wasn't much higher than writing to an HD. I don't
    remember what the target drive was though. Since I don't intend to make
    a habit of this, no big deal, but I wonder what kind of write speed one
    could expect in a sustained write of 1TB?

    Depends absolutely on the drive. Assuming something fairly recent, a
    gigabyte or two per second is easy to obtain with a simple cp. (If
    you're copying large files; small files will run much slower.) A
    transfer to a hard drive will max out around 100-200MByte/s, and a sata
    SSD around 500-600MByte/s--which shouldn't be hard to exceed with an
    NVMe unless it's older/cheaper or being throttled by running in the
    wrong slot.

  • From Anssi Saari@21:1/5 to Dan Ritter on Thu Dec 12 20:20:01 2024
    Dan Ritter <dsr@randomstring.org> writes:

    One of the tests that servethehome.com does in reviewing SSDs is the
    write speed after cache saturation: that is, once you have sent enough gigabytes in a row, what is the ongoing write speed?

    Thanks, excellent info, I had no idea servethehome does that kind of
    benchmark. I dug up the drive in question: it's a Kingston NV2, bottom
    of the barrel cheap. There's a review at
    https://www.tomshardware.com/reviews/kingston-nv2-ssd/ where they say
    sustained write speed is about 240 MB/s. That's about what I got, if
    memory serves.

  • From Anssi Saari@21:1/5 to Michael Stone on Thu Dec 12 22:20:01 2024
    Michael Stone <mstone@debian.org> writes:

    As a general matter PCIe devices can/will downgrade...

    Thanks for the comprehensive reply. Indeed, those single drive adapters
    are dirt cheap so that's a "why not" buy.

    It turns out I actually have a free x4 slot and dual SSD adapters using
    ASM2812 bridges seem pretty cheap on eBay and Aliexpress, around 60
    euros and free shipping. I have no idea if they'd work on my basic
    consumer motherboard (Asrock B550 Extreme4) but I guess I'll give one of
    those a try. Quad adapters seem to be rarer and more than twice the
    price.

    I have another computer with a free x16 slot but the chipset has no
    bifurcation support so the common quad SSD boards from Asus for example
    are out of the question.
