I want to clone the whole system 1 to 1 to the new NVME.
Will the UUID change when cloning, even when the partitions are not changed in size by a single bit? IMHO the UUID will not change, but I am not quite sure.

Depends on what you mean by "clone". If you mean a bit-for-bit copy using dd or an equivalent, then you're correct. The file system UUID will be copied along with all the other bits of the old file system.
If you mean "create a new file system on the new drive, then rsync the files over", then the file system UUID will not be the same. Unless of course you specifically go out of your way to copy the UUID as well.
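[ A minimal sketch of the bit-for-bit case, assuming the old disk is /dev/sda and the new NVMe is /dev/nvme0n1; the device names are invented, so check yours with lsblk first, since dd will happily overwrite the wrong disk. ]

# raw copy of the whole old disk onto the new one
dd if=/dev/sda of=/dev/nvme0n1 bs=4M status=progress conv=fsync
# both copies of a filesystem should now report the same UUID
blkid /dev/sda2 /dev/nvme0n1p2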
Hi folks,
as my old notebook died, I intend to buy a new notebook.
The old one has got a SSD drive, the new one an NVME.
I want to clone the whole system 1 to 1 to the new NVME.
In my /etc/fstab I am using UUID entries instead of /dev/sdX.
The new one then would have /dev/nvme* as entries (that is clear), but as I am using only UUIDs, the question:
Will the UUID change when cloning, even when the partitions are not changed in size by a single bit? IMHO the UUID will not change, but I am not quite sure.
When cloning from SSD to SSD this works, but I have no experience when cloning from SSD to NVME.
Thanks for a short feedback.
Best
Hans
In my /etc/fstab I am using UUID entries instead of /dev/sdX.
The new one then would have /dev/nvme* as entries (that is clear), but as I am using only UUIDs, the question:

I would recommend changing from UUID to labels. Doing so, all you need to worry about is that the new partitions have the same labels as the old ones.
https://wiki.debian.org/fstab#Labels
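[ For ext4 that could look roughly like this; the device and label names are invented for the example, and other filesystems have their own tools (fatlabel for vfat, swaplabel for swap). ]

# label the filesystem on the new drive
e2label /dev/nvme0n1p2 rootfs
# then mount by label in /etc/fstab instead of by UUID or device name:
# LABEL=rootfs  /  ext4  errors=remount-ro  0  1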
On a side note, last time I tried to install Debian on NVME, it
wouldn't even find the storage device. I hope this has improved since
then.
In 2019 I installed debian on nvme without any problem...
If you simply clone the system from one hardware system to another, are you confident that it will work?

Yes.

I expect that the two different hardware systems would require separate sets of drivers and configurations for those drivers.

Nope, kernel knows.
Also, depending on the operating system and packages versions, you could
end up with a frankenstein system.
Will the two primary drives be the same, in terms of total hard drive capacity, partition sizes and formatted/usable capacities?
Will the UEFI partitions on each system, be compatible?
It seems to me to be making a mess.
I believe (and I am no expert, and this list will have much more knowledgeable people than me available) that it would be simpler to install the latest versions and packages of whatever you have/had on your older system, on your new system, and then create your partitions, and copy data to corresponding partitions.
What you are intending to do, reminds me of a movie that I once watched, named Pet Semetary (sic).
..
Bret Busby
Armadale
West Australia
(UTC+0800)
..............
Yes, I read in other debian threads about labels. What is the advantage of labels over UUIDs? I always thought labels can be easily changed, and then at boot linux would mount some other partition with the same label.
But it will be rather difficult to create a partition with the same UUID (but other size and content) as an existing one (except by cloning, of course).
Using labels seems to be rather insecure in my opinion.
Hi,
[ Beware not making clear that you mean FILESYSTEM labels and UUIDs
in this thread. It's been a week since we've had massive
misunderstanding of what filesystem UUIDs are and every mention of
UUID or LABEL without that context risks invoking a very confused
person who is prepared to write 100 emails on the subject. ]
That which I understand as a label is the name I give a partition. For example, in gparted I can give a partition a label like I want. For example, my Windows partition can get a label like "windows", "win11", "shitty_windows" or whatever, or my data partition may be labelled "space1".
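[ As an aside: blkid and lsblk can show both kinds of "label" side by side, which helps keep the terminology straight. LABEL and UUID are stored inside the filesystem, so gparted's "label" on an ext4 partition sets the filesystem label; PARTLABEL and PARTUUID live in the partition table, and PARTLABEL exists only on GPT disks. ]

lsblk -o NAME,FSTYPE,LABEL,PARTLABEL,UUID,PARTUUID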
Sent: Monday, December 02, 2024 at 2:40 PM
From: "Andrew M.A. Cater" <amacater@einval.com>
To: debian-user@lists.debian.org
Subject: Re: From SSD to NVME
On Mon, Dec 02, 2024 at 05:49:18PM +0100, Hans wrote:
Hi folks,
as my old notebook died, I intend to buy a new notebook.
The old one has got a SSD drive, the new one an NVME.
I want to clone the whole system 1 to 1 to the new NVME.
It might be easier to produce a clean new install and then just rsync
data from the SSD drive to the appropriate directories on the NVME.
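[ A rough sketch of that approach, assuming the freshly installed system's root is mounted at /mnt/target and the copy runs on the old system; afterwards the target's fstab and bootloader still need attention. ]

# copy everything except pseudo-filesystems and the target mount itself
rsync -aHAX --numeric-ids \
  --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*","/mnt/*"} \
  / /mnt/target/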
In my /etc/fstab I am using UUID entries instead of /dev/sdX.
The new one then would have /dev/nvme* as entries (that is clear), but if I am
using only UUID, the question:
Will the UUID change when cloning, even when the partitions are not changed in size by a single bit? IMHO the UUID will not change, but I am not quite sure.
I'm fairly sure this was brought up just about at the end of last month.
When cloning from SSD to SSD this is working, but I have no experience when cloning from SSD to NVME.
Hoping to keep partition sizes etc. identical across drives is hard so it does seem easier to just copy data from one drive to the other.
<snip>
I mean clone bit by bit. The software I am using is "Clonezilla", which depends on partclone and dd.
<snip>
No, not rsync. This would be an option, but only if the above method is failing (i.e. target drive is smaller than source drive).
Thank you all for your response.
Just to explain: I have only "standard" partitions. One for /boot, /, /usr, /var and /home. Most of them are luks encrypted.
This cloning I did often over the years. My debian is rather old (means, first install was years ago, but it was of course upgraded) and during the years I cloned it from mechanical harddrive to SSD, then to a bigger SSD and so on.
This worked well and without any issues using clonezilla, resizing with gparted and resize2fs intelligently.
Although at first it was a change from /dev/hdaX to /dev/sdaX, this was easily done until I changed to UUID. Even with this, the cloning worked well and perfectly without any flaws. But /dev/hda and /dev/sda are very similar, except of the naming scheme.
But I never used NVME drives before and know (shame on me!) not much about it. If NVME are only super fast SSDs, then it will be easy, but if NVME are a complete alien hardware, then I might come into trouble (nothing that cannot be fixed!).
So I asked here; maybe someone already did the same I intend to do and could give me some clues.
In the next days I will get my new notebook and will report on my success. Maybe it will be helpful for other people, too.
Hi,
On Mon, Dec 02, 2024 at 09:47:05PM +0100, Hans wrote:
That, what i understand as label is the name, I give a partition. For example,
in gparted, I can give a partition a label like I want. For example, my Windows partition can get a label like "windows", "win11", "shitty_windows" or
whatever, or my datapartition maybe labelled "space1".
Yeah, so, already we are off in the weeds. 🙁 But in that case I'm
glad I said something!
It might be easier to produce a clean new install and then just rsync
data from the SSD drive to the appropriate directories on the NVME.
No it is better that everything comes over all at one time
I'm fairly sure this was brought up just about at the end of last month.
It depends upon if you created a partition table, partitions and filesystems on the drive.
I create the drive layout on the drive then rsync the old drive to the new drive.
Then I fixup the PARTUUID in the /etc/fstab and boot loader.
If I am using Archlinux or my own custom build os I have a blank /etc/fstab and /etc/hosts
cat /etc/fstab
# Static information about the filesystems.
# See fstab(5) for details.
# <file system> <dir> <type> <options> <dump> <pass>
cat /etc/hosts
# Static table lookup for hostnames.
# See hosts(5) for details.
[alarm@alarm ~]$ blkid
/dev/nvme0n1p1: LABEL_FATBOOT="bootfs" LABEL="bootfs" UUID="5A88-04BC" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="b2c58878-01"
/dev/nvme0n1p2: LABEL="rootfs" UUID="5170097f-f1f6-42d8-a2ff-8938cbdfa7be" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="b2c58878-02"
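[ To illustrate the fixup step with the PARTUUIDs shown above; the mount options here are assumptions, not part of the original mail. ]

# /etc/fstab entries matching that blkid output
PARTUUID=b2c58878-01  /boot  vfat  defaults          0  2
PARTUUID=b2c58878-02  /      ext4  defaults,noatime  0  1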
Hoping to keep partition sizes etc. identical across drives is hard so it does seem easier to just copy data from one drive to the other.
dd is your friend
https://www.howtoforge.com/linux-dd-command-clone-disk-practical-example/
https://thelinuxcode.com/clone-disk-using-dd-linux/
--
Hindi madali ang maging ako
Sent: Tuesday, December 03, 2024 at 3:52 AM
From: "Andrew M.A. Cater" <amacater@einval.com>
To: debian-user@lists.debian.org
Subject: Re: From SSD to NVME
On Tue, Dec 03, 2024 at 02:55:07AM +0100, pocket@homemail.com wrote:
It might be easier to produce a clean new install and then just rsync data from the SSD drive to the appropriate directories on the NVME.
No it is better that everything comes over all at one time
As someone else has put it elsewhere in the thread: new laptop means
new drivers, potentially moving from legacy MBR to UEFI ... easier in
many ways to put a clean install of Debian on from new media to start
with (also wiping out whatever was there before if it came preinstalled
with Windows or whatever).
I'm fairly sure this was brought up just about at the end of last month.
It depends upon if you created a partition table, partitions and filesystems on the drive.
I create the drive layout on the drive then rsync the old drive to the new drive.
Then I fixup the PARTUUID in the /etc/fstab and boot loader.
If I am using Archlinux or my own custom build os I have a blank /etc/fstab and /etc/hosts
There's more than one way to do it: if you absolutely know what partition sizes you want, maybe - LVM and one partition is a fairly sensible starting point because partitions will grow and shrink, for example.
cat /etc/fstab
# Static information about the filesystems.
# See fstab(5) for details.
# <file system> <dir> <type> <options> <dump> <pass>
cat /etc/hosts
# Static table lookup for hostnames.
# See hosts(5) for details.
[alarm@alarm ~]$ blkid
/dev/nvme0n1p1: LABEL_FATBOOT="bootfs" LABEL="bootfs" UUID="5A88-04BC" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="b2c58878-01"
/dev/nvme0n1p2: LABEL="rootfs" UUID="5170097f-f1f6-42d8-a2ff-8938cbdfa7be" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="b2c58878-02"
Hoping to keep partition sizes etc. identical across drives is hard so it does seem easier to just copy data from one drive to the other.
dd is your friend
https://www.howtoforge.com/linux-dd-command-clone-disk-practical-example/ https://thelinuxcode.com/clone-disk-using-dd-linux/
dd is your friend if you know _exactly_ what you are doing :)
As ever, the right way is what works for your requirements: sometimes
people need something straightforward to get them started. Making
work for yourself at the outset needs to be justified by saving time
later on, perhaps.
All the very best, as ever,
Andy
(amacater@debian.org)
When people state the above it really just shows they don't understand Linux.
[alarm@alarm ~]$ ls -l /
lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
drwxr-xr-x 3 root root 4096 Dec 31 1969 boot
drwxr-xr-x 17 root root 3980 Dec 2 10:38 dev
drwxr-xr-x 52 root root 4096 Dec 2 20:58 etc
drwxr-xr-x 3 root root 4096 Aug 19 03:43 home
lrwxrwxrwx 1 root root 7 Nov 25 19:15 lib -> usr/lib
drwx------ 2 root root 16384 May 15 2024 lost+found
drwxr-xr-x 2 root root 4096 Sep 14 12:01 mnt
drwxr-xr-x 2 root root 4096 Apr 7 2024 opt
dr-xr-xr-x 247 root root 0 Dec 31 1969 proc
drwxr-x--- 6 root root 4096 Dec 2 13:06 root
drwxr-xr-x 22 root root 640 Dec 2 10:38 run
lrwxrwxrwx 1 root root 7 Nov 25 19:15 sbin -> usr/bin
drwxr-xr-x 36 root alarm 4096 Nov 29 12:42 srv
dr-xr-xr-x 12 root root 0 Dec 31 1969 sys
drwxrwxrwt 13 root root 260 Dec 3 00:00 tmp
drwxr-xr-x 8 root root 4096 Dec 2 20:58 usr
drwxr-xr-x 13 root root 4096 Nov 30 17:21 var
Notice sbin is a symlink to /usr/bin
Why hasn't debian done so?
Sent: Tuesday, December 03, 2024 at 7:15 AM
From: "Greg Wooledge" <greg@wooledge.org>
To: debian-user@lists.debian.org
Subject: Re: From SSD to NVME
On Tue, Dec 03, 2024 at 12:01:15 +0100, pocket@homemail.com wrote:
[alarm@alarm ~]$ ls -l /
lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
drwxr-xr-x 3 root root 4096 Dec 31 1969 boot
drwxr-xr-x 17 root root 3980 Dec 2 10:38 dev
drwxr-xr-x 52 root root 4096 Dec 2 20:58 etc
drwxr-xr-x 3 root root 4096 Aug 19 03:43 home
lrwxrwxrwx 1 root root 7 Nov 25 19:15 lib -> usr/lib
drwx------ 2 root root 16384 May 15 2024 lost+found
drwxr-xr-x 2 root root 4096 Sep 14 12:01 mnt
drwxr-xr-x 2 root root 4096 Apr 7 2024 opt
dr-xr-xr-x 247 root root 0 Dec 31 1969 proc
drwxr-x--- 6 root root 4096 Dec 2 13:06 root
drwxr-xr-x 22 root root 640 Dec 2 10:38 run
lrwxrwxrwx 1 root root 7 Nov 25 19:15 sbin -> usr/bin
drwxr-xr-x 36 root alarm 4096 Nov 29 12:42 srv
dr-xr-xr-x 12 root root 0 Dec 31 1969 sys
drwxrwxrwt 13 root root 260 Dec 3 00:00 tmp
drwxr-xr-x 8 root root 4096 Dec 2 20:58 usr
drwxr-xr-x 13 root root 4096 Nov 30 17:21 var
Notice sbin is a symlink to /usr/bin
That's not how Debian 12 has it.
hobbit:~$ ls -ld /sbin /bin
lrwxrwxrwx 1 root root 7 Feb 17 2024 /bin -> usr/bin/
lrwxrwxrwx 1 root root 8 Feb 17 2024 /sbin -> usr/sbin/
If Trixie has done an "sbin merge", it's news to me.
Sent: Tuesday, December 03, 2024 at 7:50 AM
From: "Nicolas George" <george@nsup.org>
To: debian-user@lists.debian.org
Subject: Re: From SSD to NVME
pocket@homemail.com (12024-12-03):
Why hasn't debian done so?
Because polluting the completion namespace with commands useful once in
a blue moon for administrators is a stupid idea.
What namespace would that be?
Sent: Tuesday, December 03, 2024 at 8:22 AM
From: "Nicolas George" <george@nsup.org>
To: debian-user@lists.debian.org
Subject: Re: From SSD to NVME
pocket@homemail.com (12024-12-03):
What namespace would that be?
I just said it: the namespace for completion.
--
Nicolas George
As someone else has put it elsewhere in the thread: new laptop means
new drivers, potentially moving from legacy MBR to UEFI ... easier in
many ways to put a clean install of Debian on from new media to start
with (also wiping out whatever was there before if it came preinstalled
with Windows or whatever).
None to little of that is relevant. The "drivers" are part of the kernel
On Tue, Dec 03, 2024 at 12:01:15 +0100, pocket wrote:
<snip>
Notice sbin is a symlink to /usr/bin
That's not how Debian 12 has it.
hobbit:~$ ls -ld /sbin /bin
lrwxrwxrwx 1 root root 7 Feb 17 2024 /bin -> usr/bin/
lrwxrwxrwx 1 root root 8 Feb 17 2024 /sbin -> usr/sbin/
If Trixie has done an "sbin merge", it's news to me.
[alarm@alarm ~]$ pacman -Q|grep bash
bash 5.2.037-1
dpkg -l|grep bash
ii bash 5.2.15-2+b7 arm64 GNU Bourne Again SHell
How did that happen?
Sent: Tuesday, December 03, 2024 at 9:24 AM
From: "Felix Miata" <mrmazda@stanis.net>
To: pocket@homemail.com, debian-user@lists.debian.org
Subject: Re: From SSD to NVME
pocket@homemail.com composed on 2024-12-03 12:01 (UTC+0100):
As someone else has put it elsewhere in the thread: new laptop means
new drivers, potentially moving from legacy MBR to UEFI ... easier in
many ways to put a clean install of Debian on from new media to start
with (also wiping out whatever was there before if it came preinstalled
with Windows or whatever).
None to little of that is relevant. The "drivers" are part of the kernel
Sort of:
# inxi -Sd
System:
Host: ab250 Kernel: 6.1.0-25-amd64 arch: x86_64 bits: 64
Console: pty pts/0 Distro: Debian GNU/Linux 12 (bookworm)
Drives:
Local Storage: total: 476.94 GiB used: 60.56 GiB (12.7%)
ID-1: /dev/nvme0n1 vendor: Patriot model: M.2 P300 512GB size: 476.94 GiB …
# lsinitramfs /boot/initrd.img-6.1.0-25-amd64 | egrep 'nvme|ata|ahci|piix'
usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme
usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host
usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme-core.ko
usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme.ko
usr/lib/udev/ata_id
usr/bin/fatattr
#
--
Evolution as taught in public schools is, like religion,
based on faith, not based on science.
Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata
From: "Felix Miata"
pocket composed on 2024-12-03 12:01 (UTC+0100):
The "drivers" are part of the kernel
Sort of:
# inxi -Sd
System:
Host: ab250 Kernel: 6.1.0-25-amd64 arch: x86_64 bits: 64
Console: pty pts/0 Distro: Debian GNU/Linux 12 (bookworm)
Drives:
Local Storage: total: 476.94 GiB used: 60.56 GiB (12.7%)
ID-1: /dev/nvme0n1 vendor: Patriot model: M.2 P300 512GB size: 476.94 GiB …
# lsinitramfs /boot/initrd.img-6.1.0-25-amd64 | egrep 'nvme|ata|ahci|piix'
usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme
usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host
usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme-core.ko
usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme.ko
usr/lib/udev/ata_id
usr/bin/fatattr
#
pocket@pocket:~ $ lsinitramfs /boot/initrd.img-6.6.62 | grep -E 'nvme|ata|ahci|piix'
usr/lib/modules/6.6.62/kernel/drivers/ata
usr/lib/modules/6.6.62/kernel/drivers/ata/ahci.ko
usr/lib/modules/6.6.62/kernel/drivers/ata/libahci.ko
usr/lib/modules/6.6.62/kernel/drivers/ata/libata.ko
usr/lib/modules/6.6.62/kernel/drivers/ata/sata_mv.ko
usr/lib/modules/6.6.62/kernel/drivers/usb/storage/ums-datafab.ko
usr/lib/udev/ata_id
usr/bin/fatattr
root@pockey:~# nvme list
Node Generic SN Model Namespace Usage Format FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 /dev/ng0n1 A7SEB339046L5H Corsair MP600 CORE MINI 1 1.00 TB / 1.00 TB 512 B + 0 B ELFMC1.0
Oh my!
The root system is on an nvme drive...........
On a more somber note:
egrep is deprecated
egrep is now grep -E
https://itsfoss.com/deprecated-linux-commands/
https://www.redhat.com/en/blog/deprecated-linux-command-replacements
Sent: Tuesday, December 03, 2024 at 10:07 AM
From: "Felix Miata" <mrmazda@stanis.net>
To: debian-user@lists.debian.org, pocket@columbus.rr.com
Subject: Re: From SSD to NVME
pocket composed on 2024-12-03 09:40 (UTC-0500):
From: "Felix Miata"
pocket composed on 2024-12-03 12:01 (UTC+0100):
The "drivers" are part of the kernel
Sort of:
# inxi -Sd
System:
Host: ab250 Kernel: 6.1.0-25-amd64 arch: x86_64 bits: 64
Console: pty pts/0 Distro: Debian GNU/Linux 12 (bookworm)
Drives:
Local Storage: total: 476.94 GiB used: 60.56 GiB (12.7%)
ID-1: /dev/nvme0n1 vendor: Patriot model: M.2 P300 512GB size: 476.94 GiB
…
# lsinitramfs /boot/initrd.img-6.1.0-25-amd64 | egrep 'nvme|ata|ahci|piix'
usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme
usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host
usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme-core.ko
usr/lib/modules/6.1.0-25-amd64/kernel/drivers/nvme/host/nvme.ko
usr/lib/udev/ata_id
usr/bin/fatattr
#
Note absence of *ata, ahci, piix & RAID. It's a system with NVME only. Without
NVME modules, there is no booting the system using a stock Debian kernel.
pocket@pocket:~ $ lsinitramfs /boot/initrd.img-6.6.62 | grep -E 'nvme|ata|ahci|piix'
usr/lib/modules/6.6.62/kernel/drivers/ata
usr/lib/modules/6.6.62/kernel/drivers/ata/ahci.ko
usr/lib/modules/6.6.62/kernel/drivers/ata/libahci.ko
usr/lib/modules/6.6.62/kernel/drivers/ata/libata.ko
usr/lib/modules/6.6.62/kernel/drivers/ata/sata_mv.ko
usr/lib/modules/6.6.62/kernel/drivers/usb/storage/ums-datafab.ko
usr/lib/udev/ata_id
usr/bin/fatattr
Greg Wooledge composed on 2024-12-03 07:15 (UTC-0500):
On Tue, Dec 03, 2024 at 12:01:15 +0100, pocket wrote:
lrwxrwxrwx 1 root root 7 Nov 25 19:15 sbin -> usr/bin
That's not how Debian 12 has it.
hobbit:~$ ls -ld /sbin /bin
lrwxrwxrwx 1 root root 7 Feb 17 2024 /bin -> usr/bin/
lrwxrwxrwx 1 root root 8 Feb 17 2024 /sbin -> usr/sbin/
If Trixie has done an "sbin merge", it's news to me.
Bookworm looks to be one of the last of the Mohicans:
PRETTY_NAME="Debian GNU/Linux trixie/sid"
lrwxrwxrwx 1 8 Oct 16 2022 sbin -> usr/sbin
PRETTY_NAME="Ubuntu 24.04.1 LTS"
lrwxrwxrwx 1 8 Dec 8 2023 sbin -> usr/sbin
drwxr-xr-x 2 1.0K Feb 2 2024 sbin.usr-is-merged
PRETTY_NAME="KDE neon 6.2"
lrwxrwxrwx 1 8 Jun 2 2024 sbin -> usr/sbin
drwxr-xr-x 2 1.0K Aug 20 2022 sbin.usr-is-merged
PRETTY_NAME="openSUSE Tumbleweed"
lrwxrwxrwx 1 8 Sep 25 15:10 sbin -> usr/sbin
PRETTY_NAME="Fedora Linux 39 (Thirty Nine)"
lrwxrwxrwx 1 8 Jul 20 2023 sbin -> usr/sbin
pocket's system is the outlier here. It's the only one where there
isn't a separate usr/sbin.
On Tue, Dec 03, 2024 at 14:31:14 -0500, Greg Wooledge wrote:
pocket's system is the outlier here. It's the only one where there
isn't a separate usr/sbin.
For some reason pocket keeps telling us on a Debian list things about
their Arch Linux system (actually).
[alarm@alarm ~]$ ls -l /…
lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
drwxr-xr-x 3 root root 4096 Dec 31 1969 boot
"Andrew M.A. Cater" <amacater@einval.com> wrote:
As someone else has put it elsewhere in the thread: new laptop means
new drivers, potentially moving from legacy MBR to UEFI ... easier in
many ways to put a clean install of Debian on from new media to start
with (also wiping out whatever was there before if it came preinstalled
with Windows or whatever).
None to little of that is relevant. The "drivers" are part of the kernel, it is
not 1995 anymore.
I go from MBR to GPT to UEFI all the time.
Is it nice to think that debian still has the microsoft mindset?
A "clean" install is not really required on a modern linux system.
Linux is not microsoft windows.
When people state the above it really just shows they don't understand Linux.
I'm fairly sure this was brought up just about at the end of last month.

It depends upon if you created a partition table, partitions and filesystems on the drive.
I create the drive layout on the drive then rsync the old drive to the new drive.
Then I fixup the PARTUUID in the /etc/fstab and boot loader.
If I am using Archlinux or my own custom build os I have a blank /etc/fstab and /etc/hosts.

There's more than one way to do it: if you absolutely know what partition sizes you want, maybe - LVM and one partition is a fairly sensible starting point because partitions will grow and shrink, for example.

Nonsense, using/building distributions and running Linux since 1995, partitions don't grow and shrink.
Sent: Tuesday, December 03, 2024 at 3:30 PM
From: "Felix Miata" <mrmazda@stanis.net>
To: debian-user@lists.debian.org
Subject: Re: From SSD to NVME
Andy Smith composed on 2024-12-03 19:48 (UTC):
On Tue, Dec 03, 2024 at 14:31:14 -0500, Greg Wooledge wrote:
pocket's system is the outlier here. It's the only one where there
isn't a separate usr/sbin.
For some reason pocket keeps telling us on a Debian list things about
their Arch Linux system (actually).
I've been trying to do too many different things at once today. I missed that (and
more):
pocket composed on 2024-12-03 12:01 (UTC+0100):
[alarm@alarm ~]$ ls -l /…
lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
drwxr-xr-x 3 root root 4096 Dec 31 1969 boot
What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot configuration?
From: "Felix Miata" <mrmazda@stanis.net>
What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot configuration?
Mine for one.......
The install process allows one to setup the disk layout as you like or did I miss something?
Doesn't the manual/book suggest that you can create the partition layout and filesystem as you would like?
Sent: Tuesday, December 03, 2024 at 4:27 PM
From: "Greg Wooledge" <greg@wooledge.org>
To: debian-user@lists.debian.org
Subject: Re: From SSD to NVME
On Tue, Dec 03, 2024 at 22:13:36 +0100, pocket@homemail.com wrote:
From: "Felix Miata" <mrmazda@stanis.net>
What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot configuration?
Mine for one.......
Which version of Debian was that, when you installed?
Or was it Arch?
The install process allows one to setup the disk layout as you like or did I miss something?
Yes, but I think the *implication* was that this was something the
installer had done, either by default, or without extensive tinkering.
In my experience, Debian does not create a FAT file system for anything except the EFI partition. At least, not by default.
Sent: Tuesday, December 03, 2024 at 5:07 PM
From: "Greg Wooledge" <greg@wooledge.org>
To: debian-user@lists.debian.org
Subject: Re: From SSD to NVME
On Tue, Dec 03, 2024 at 22:50:42 +0100, pocket@homemail.com wrote:
Doesn't the manual/book suggest that you can create the partition layout and filesystem as you would like?
Why all the double-speak and vagueness?
Did you manually create a FAT file system, and tell the installer to
mount that as /boot? If that's what you did, fine, so be it, but why
are you acting like it's something *Debian* did? This tangent of this
thread has gone on far too long as everyone keeps trying to guess what
you're actually saying.
Sent: Tuesday, December 03, 2024 at 5:10 PM
From: "Felix Miata" <mrmazda@stanis.net>
To: debian-user@lists.debian.org
Subject: Re: From SSD to NVME
pocket composed on 2024-12-03 22:53 (UTC+0100):
From: "Felix Miata" <mrmazda@stanis.net>
pocket composed on 2024-12-03 22:13 (UTC+0100):
pocket composed on 2024-12-03 12:01 (UTC+0100):
[alarm@alarm ~]$ ls -l /…
lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
drwxr-xr-x 3 root root 4096 Dec 31 1969 boot
What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot configuration?
Mine for one.......
Keeping your bootloader of choice a secret?
Nope, a bootloader is a bootloader is a bootloader
Except one that doesn't load any bootloader files, kernel or initrd from /boot/,
which is what one or more others than Grub* are reputedly doing.
I miss your point, the boot loader loads the kernel then control passes to systemd/init/bash
pocket composed on 2024-12-03 12:01 (UTC+0100):
[alarm@alarm ~]$ ls -l /…
lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
drwxr-xr-x 3 root root 4096 Dec 31 1969 boot
What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot
configuration?
/boot/efi is a fat partition. It has to be fat so the UEFI can read the files. Usually /boot is an EXT partition.
Sent: Tuesday, December 03, 2024 at 11:18 PM
From: "Felix Miata" <mrmazda@stanis.net>
To: debian-user@lists.debian.org, "Timothy M Butterworth" <timothy.m.butterworth@gmail.com>
Subject: Re: From SSD to NVME
Timothy M Butterworth composed on 2024-12-03 20:36 (UTC-0500):
pocket composed on 2024-12-03 12:01 (UTC+0100):
[alarm@alarm ~]$ ls -l /…
lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
drwxr-xr-x 3 root root 4096 Dec 31 1969 boot
The rest of what the above was clipped from is in: https://lists.debian.org/debian-user/2024/12/msg00120.html
What Debian puts a FAT filesystem on /boot/? Is that a systemd-boot
configuration?
/boot/efi is a fat partition. It has to be fat so the UEFI can read the files. Usually /boot is an EXT partition.
/boot/efi/ is where the ESP normally goes, not /boot/, at least, not when using
Grub2 EFI, as opposed to one of those newfangled bootloaders (e.g. systemd-boot)
that I have yet to see live in person. That 'ls -l /' listing is pocket's root
directory showing Dec 31 1969. That means there's a FAT filesystem mounted on /boot/. He hasn't shown us what if anything is mounted on /boot/efi/.
What I expect to see with Grub2 EFI is what I see here:
# ls -gGd /boot/
dr-xr-xr-x 4 10240 Dec 3 11:57 /boot/ # typical mountpoint EXT4 mounted
# ls -gGd /boot/efi/
drwxr-xr-x 4 4096 Dec 31 1969 /boot/efi/ # typical mountpoint FAT mounted
# mount | grep boot
/dev/sda1 on /boot/efi type vfat…
#
Sent: Tuesday, December 03, 2024 at 11:18 PM
From: "Felix Miata" <mrmazda@stanis.net>
To: debian-user@lists.debian.org, "Timothy M Butterworth" <timothy.m.butterworth@gmail.com>
Subject: Re: From SSD to NVME
Timothy M Butterworth composed on 2024-12-03 20:36 (UTC-0500):
pocket composed on 2024-12-03 12:01 (UTC+0100):
[alarm@alarm ~]$ ls -l /…
lrwxrwxrwx 1 root root 7 Nov 25 19:15 bin -> usr/bin
drwxr-xr-x 3 root root 4096 Dec 31 1969 boot
The rest of what the above was clipped from is in: https://lists.debian.org/debian-user/2024/12/msg00120.html
What Debian puts a FAT filesystem on /boot/? Is that a
systemd-boot configuration?
/boot/efi is a fat partition. It has to be fat so the UEFI can
read the files. Usually /boot is an EXT partition.
/boot/efi/ is where the ESP normally goes, not /boot/, at least,
not when using Grub2 EFI, as opposed to one of those newfangled
bootloaders (e.g. systemd-boot) that I have yet to see live in
person. That 'ls -l /' listing is pocket's root directory showing
Dec 31 1969. That means there's a FAT filesystem mounted on /boot/.
He hasn't shown us what if anything is mounted on /boot/efi/.
I don't have a partition to mount at /boot/efi
nvme drive with a msdos mbr, two partitions, one vfat and one ext4
Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Corsair MP600 CORE MINI
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb2c58878
Device         Boot    Start        End    Sectors  Size Id Type
/dev/nvme0n1p1           8192    1056767    1048576  512M  c W95 FAT32 (LBA)
/dev/nvme0n1p2        1056768 1953525167 1952468400  931G 83 Linux
What I expect to see with Grub2 EFI is what I see here:
# ls -gGd /boot/
dr-xr-xr-x 4 10240 Dec  3 11:57 /boot/      # typical mountpoint EXT4 mounted
# ls -gGd /boot/efi/
drwxr-xr-x 4  4096 Dec 31  1969 /boot/efi/  # typical mountpoint FAT mounted
# mount | grep boot
/dev/sda1 on /boot/efi type vfat…
#
mount
/dev/nvme0n1p2 on / type ext4 (rw,noatime)
/dev/nvme0n1p1 on /boot/ type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)
That is all there is folks just two partitions on a nvme drive
Sent: Wednesday, December 04, 2024 at 9:59 AM
From: "Joe" <joe@jretrading.com>
To: debian-user@lists.debian.org
Subject: Re: From SSD to NVME
On Wed, 4 Dec 2024 13:00:21 +0100
pocket@homemail.com wrote:
<snip>
The EFI partition (i.e. partition mounted as /boot/efi or the partition containing /boot, which contains /boot/efi) must have some variety of
FAT filesystem, according to the EFI spec. Windows will normally use
ntfs and Debian by default ext4, and a FAT partition has no other real
use now than for EFI. It may be convenient to put the whole of /boot on
FAT, but Debian will normally leave /boot in the main / partition, and
just use FAT for /boot/efi.
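[ A sketch of what adding an ESP to an existing install typically involves, assuming a partition of type "EFI System" has already been carved out, here called /dev/nvme0n1p1; details vary per system. ]

mkfs.vfat -F 32 /dev/nvme0n1p1    # the ESP must be FAT
mkdir -p /boot/efi
mount /dev/nvme0n1p1 /boot/efi    # and add a matching /etc/fstab entry
apt install grub-efi-amd64        # switch from BIOS grub to EFI grub
grub-install --target=x86_64-efi --efi-directory=/boot/efi
update-grub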
The system I am running this on right now has NVME only.
Note absence of nvme kernel modules and it boots just fine.
grep RETT /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
It has a stock kernel as I have not built a custom kernel
pocket@pocket:~ $ lsinitramfs /boot/initrd.img-6.6.62 | grep -E 'nvme|ata|ahci|piix'
usr/lib/modules/6.6.62/kernel/drivers/ata
usr/lib/modules/6.6.62/kernel/drivers/ata/ahci.ko
usr/lib/modules/6.6.62/kernel/drivers/ata/libahci.ko
usr/lib/modules/6.6.62/kernel/drivers/ata/libata.ko
usr/lib/modules/6.6.62/kernel/drivers/ata/sata_mv.ko
usr/lib/modules/6.6.62/kernel/drivers/usb/storage/ums-datafab.ko
usr/lib/udev/ata_id
usr/bin/fatattr
You very likely would need to add drivers to your initrds first, else have to rescue boot to rebuild after:

On a side note, last time I tried to install Debian on NVME, it wouldn't even find the storage device. I hope this has improved since then.

But I never used NVME drives before and know (shame on me!) not much about it. If NVME are only super fast SSDs, then it will be easy, but if NVME are a complete alien hardware, then I might come into trouble (nothing that cannot be fixed!).
On Tue, Dec 03, 2024 at 09:44:32 -0500, Felix Miata wrote:
Greg Wooledge composed on 2024-12-03 07:15 (UTC-0500):
On Tue, Dec 03, 2024 at 12:01:15 +0100, pocket wrote:
lrwxrwxrwx 1 root root 7 Nov 25 19:15 sbin -> usr/bin
That's not how Debian 12 has it.
hobbit:~$ ls -ld /sbin /bin
lrwxrwxrwx 1 root root 7 Feb 17 2024 /bin -> usr/bin/
lrwxrwxrwx 1 root root 8 Feb 17 2024 /sbin -> usr/sbin/
If Trixie has done an "sbin merge", it's news to me.
Note that pocket's system has /sbin pointing to usr/bin NOT to usr/sbin.
Bookworm looks to be one of the last of the Mohicans:
I'm not sure what you mean by that.
PRETTY_NAME="Debian GNU/Linux trixie/sid"
lrwxrwxrwx 1 8 Oct 16 2022 sbin -> usr/sbin
PRETTY_NAME="Ubuntu 24.04.1 LTS"
lrwxrwxrwx 1 8 Dec 8 2023 sbin -> usr/sbin
drwxr-xr-x 2 1.0K Feb 2 2024 sbin.usr-is-merged
PRETTY_NAME="KDE neon 6.2"
lrwxrwxrwx 1 8 Jun 2 2024 sbin -> usr/sbin
drwxr-xr-x 2 1.0K Aug 20 2022 sbin.usr-is-merged
PRETTY_NAME="openSUSE Tumbleweed"
lrwxrwxrwx 1 8 Sep 25 15:10 sbin -> usr/sbin
PRETTY_NAME="Fedora Linux 39 (Thirty Nine)"
lrwxrwxrwx 1 8 Jul 20 2023 sbin -> usr/sbin
All of these systems have sbin pointing to usr/sbin (the same as
bookworm), NOT to usr/bin the way pocket's system does.
pocket's system is the outlier here. It's the only one where there
isn't a separate usr/sbin.
On 12/3/24 14:32, Greg Wooledge wrote:
Note that pocket's system has /sbin pointing to usr/bin NOT to usr/sbin.
On Tue, Dec 03, 2024 at 09:44:32 -0500, Felix Miata wrote:
Bookworm looks to be one of the last of the Mohicans:
I'm not sure what you mean by that.
That, Greg, refers to a less than pleasant time in 'merican history when the only good American indian was a dead one. The Mohicans were [...]
PRETTY_NAME="Debian GNU/Linux trixie/sid"
lrwxrwxrwx 1 8 Oct 16 2022 sbin -> usr/sbin
PRETTY_NAME="Ubuntu 24.04.1 LTS"
lrwxrwxrwx 1 8 Dec 8 2023 sbin -> usr/sbin
drwxr-xr-x 2 1.0K Feb 2 2024 sbin.usr-is-merged
PRETTY_NAME="KDE neon 6.2"
lrwxrwxrwx 1 8 Jun 2 2024 sbin -> usr/sbin
drwxr-xr-x 2 1.0K Aug 20 2022 sbin.usr-is-merged
PRETTY_NAME="openSUSE Tumbleweed"
lrwxrwxrwx 1 8 Sep 25 15:10 sbin -> usr/sbin
PRETTY_NAME="Fedora Linux 39 (Thirty Nine)"
lrwxrwxrwx 1 8 Jul 20 2023 sbin -> usr/sbin
All of these systems have sbin pointing to usr/sbin (the same as
bookworm), NOT to usr/bin the way pocket's system does.
On Wed, Dec 04, 2024 at 19:06:40 -0500, gene heskett wrote:
On 12/3/24 14:32, Greg Wooledge wrote:
Note that pocket's system has /sbin pointing to usr/bin NOT to usr/sbin.
On Tue, Dec 03, 2024 at 09:44:32 -0500, Felix Miata wrote:
Bookworm looks to be one of the last of the Mohicans:
I'm not sure what you mean by that.
That, Greg, refers to a less than pleasant time in 'merican history when the only good American indian was a dead one. The Mohicans were [...]
I'm not talking about the historical reference. I'm talking about
the assertion that bookworm is an outlier.
Bookworm is NOT an outlier here. It's just like all the others. It
has a separate sbin, NOT a subsumed sbin.
See this part?
PRETTY_NAME="Debian GNU/Linux trixie/sid"
lrwxrwxrwx 1 8 Oct 16 2022 sbin -> usr/sbin
PRETTY_NAME="Ubuntu 24.04.1 LTS"
lrwxrwxrwx 1 8 Dec 8 2023 sbin -> usr/sbin
drwxr-xr-x 2 1.0K Feb 2 2024 sbin.usr-is-merged
PRETTY_NAME="KDE neon 6.2"
lrwxrwxrwx 1 8 Jun 2 2024 sbin -> usr/sbin
drwxr-xr-x 2 1.0K Aug 20 2022 sbin.usr-is-merged
PRETTY_NAME="openSUSE Tumbleweed"
lrwxrwxrwx 1 8 Sep 25 15:10 sbin -> usr/sbin
PRETTY_NAME="Fedora Linux 39 (Thirty Nine)"
lrwxrwxrwx 1 8 Jul 20 2023 sbin -> usr/sbin
All of these systems have sbin pointing to usr/sbin (the same as
bookworm), NOT to usr/bin the way pocket's system does.
Pocket's (Arch??) system is the outlier. Not bookworm.
Is it different when you boot from an nvme drive? I have what I was told was one and it appears as /dev/sdb or /dev/sda depending how the OS feels that day. I didn't buy it new, it was given to me, so I may have been misinformed. It's a thing that looks like a SIMM, and when it's plugged in the motherboard disables one of the SATA ports, which is unfortunate.
One somewhat different thing is the
concept of NVMe namespaces: your drive will be /dev/nvme0, but you'll probably be using /dev/nvme0n1 except for device management. Partitions then look like /dev/nvme0n1p1.
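[ Illustrative output for a typical single-namespace consumer drive: ]

$ lsblk -o NAME,TYPE,SIZE,MOUNTPOINTS /dev/nvme0n1
NAME        TYPE  SIZE MOUNTPOINTS
nvme0n1     disk  477G
├─nvme0n1p1 part  512M /boot/efi
└─nvme0n1p2 part  476G /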
On Thu, Dec 05, 2024 at 09:42:08AM -0500, eben@gmx.us wrote:
Is it different when you boot from an nvme drive? I have what I was
told was one and it appears as /dev/sdb or /dev/sda depending how the
OS feels that day. I didn't buy it new, it was given to me, so I may
have been misinformed. It's a thing that looks like a SIMM, and when
it's plugged in the motherboard disables one of the SATA ports, which
is unfortunate.
That is a SATA SSD, not an NVMe.
The SATA drive letters can change based on things like which drive starts
up faster or what removeable devices are plugged in, which is why using
UUIDs or somesuch is preferred over using the device name.
SATA maxes out at 600MB/s, while PCIe is currently at 4000MB/s per lane, with NVMe drives typically using as many as 4 lanes [16000MB/s].
(For many [most?] consumer applications the differences will not be noticeable.)
That is a SATA SSD, not an NVMe.

Interesting, thanks. Apparently either it was misrepresented to me, or I misremembered. That explains some stuff.

(For many [most?] consumer applications the differences will not be noticeable.)

Is one kind more long-lived than the other?
That is a SATA SSD, not an NVMe.

Interesting, thanks. Apparently either it was misrepresented to me, or I misremembered. That explains some stuff.
The switch from SATA to the NVMe interface/protocol happened basically
at the same time as the switch from the 2.5" (and mini-pcie) to the M.2 format, so it's a common mistake to consider that for an SSD, "M.2 =>
NVMe" (the implication is currently true in the other direction, tho,
AFAIK).
https://wiki.debian.org/fstab#Labels
I personally prefer UUIDs because the odds of an existing drive from a different system having a conflicting UUID when you put it in another system are near zero, while the odds that another drive would have something like LABEL=root are very high.
"M.2 => NVMe" (the implication is currently true in the otherNot at all. We have many servers with U.2 and U.3 format disks,
direction, tho, AFAIK).
which look like classic 2.5" SSDs but use NVMe PCIe connections.
Stefan Monnier wrote:
That is a SATA SSD, not an NVMe.
Interesting, thanks. Apparently either it was misrepresented to me, or I misremembered. That explains some stuff.
The switch from SATA to the NVMe interface/protocol happened basically
at the same time as the switch from the 2.5" (and mini-pcie) to the M.2 format, so it's a common mistake to consider that for an SSD, "M.2 =>
NVMe" (the implication is currently true in the other direction, tho, AFAIK).
Not at all. We have many servers with U.2 and U.3 format disks,
which look like classic 2.5" SSDs but use NVMe PCIe connections.
I suspect there are few desktops (mostly 'workstation' class
machines) and no laptops using U.2.
How do I tell how many lanes a given drive uses (preferably before purchase)?
Yeah, I probably wouldn't be able to tell. It's just geek points. I was thinking it might matter when xferring gigabyte+ files to the media server, but then the bottleneck would either be the CPU encrypting the SSH data, or the network itself.
Is one kind more long-lived than the other?
Clearly, because it's a seriously inept volume LABEL selection. Among the following are some better, yet easy enough to remember and type, examples:
[snip]
# egrep -i 'deb11|deb 11|seye|bull|debian11|debian 11' *L*txt | grep ├─ | wc -l
26
# egrep -i 'deb11|deb 11|seye|bull|debian11|debian 11' *L*txt | grep ├─
a-865L10.txt:├─sda28 ext4 SS25deb11 cb7dac29-…
ab250L26.txt:├─nvme0n1p14 ext4 pt3p14deb11 889fea98-…

The *L*txt files are automatically generated partitioner[1] logs with both parted -l and lsblk -f output appended, which I use for keeping track of what's installed where here. Strings like pt3, tm8, m25 & sbyd above are extractions from disk model and/or serial numbers.
On Thu, Dec 05, 2024 at 12:24:36PM -0500, Felix Miata wrote:
Clearly, because it's a seriously inept volume LABEL selection. Among the following are some better, yet easy enough to remember and type, examples:
[snip]

Never have I felt any need or desire to do anything like that. If I did, it would be on an LVM, not on dozens of partitions.

The *L*txt files are automatically generated partitioner[1] logs with both parted -l and lsblk -f output appended, which I use for keeping track of what's installed where here. Strings like pt3, tm8, m25 & sbyd above are extractions from disk model and/or serial numbers.

Perhaps we can agree to disagree on what's easy. IMO, your labels are basically as opaque as a UUID, even if systemic, but with the disadvantage of needing more effort to generate. :-)
As I understand it the slots in the M2 SSD connector can tell whether
it's SATA or NVMe or both. I have an M2 SSD which I believe will work
either with a SATA connection or with NVMe, and it has two slots in
its connector.
I have more than 40 PCs with well in excess of a dozen installed distros, each on
a partition,
Hi folks,
as promised I send you my experiences with cloning to NVME.
So, today I got my new notebook. As I never used UEFI, I disabled UEFI in BIOS (my first mistake!), then cloned everything to the new drive.
First reboot worked well, no problems. But then I realized that if you want NVME mode, you MUST use native UEFI in BIOS settings.
However, doing so, neither Debian nor Windows will boot. Of course: there is no EFI partition on my harddrive, as I never needed one (until now).
Now I am hassling with the drive, as I want NVME mode of course, because it is faster. And of course, I do not want to reinstall everything!
I saw some documentation on how to get EFI on the drive, but it looks like you need a separate partition with FAT to get EFI on, right?
However, I also saw the possibility to get EFI on my separate /boot partition.
What can I do? I would like to keep the existing partitions. However, I could shrink them. At the moment, my drive looks like this:
primary partition Windows-boot ntfs
primary partition Windows ntfs
primary partition /boot /dev/sda3 ext4
extended partition /dev/sda4
logical partition /dev/sda5 swap
logical partition /dev/sda6 / ext4
logical partition /dev/sda7 encrypted home
logical partition /dev/sda8 encrypted usr
logical partition /dev/sda9 encrypted var
logical partition /dev/sda10 encrypted data
So I could shrink some partitions and create a new logical one.
Another option would be to delete the "swap" partition and make a new "EFI" partition.
What do you think might be the best way?
Some better ideas?
Thanks for reading this.
Best regards
Hans
On Thu, Dec 05, 2024 at 10:55:48AM -0500, eben@gmx.us wrote:
How do I tell how many lanes a given drive uses (preferably before purchase)?
It would be buried in the technical docs. I've only seen 4x drives (but I'm sure there may be some cheaper drives with fewer). On the motherboard side it's common to see 2 lanes in some slots for the simple reason that there
are a limited number of lanes from the CPU--most people would rather have a slower-connected drive than none at all.
E.g.: my motherboard has something like 4x v5 + 4x v4 + 2x v4 + 4x v3. Let's say I have 2 v4 drives and 1 v3 drive. If I put one v4 drive in the 4x v5 slot, one in the 4x v4 slot, and the v3 drive in the 4x v3 slot, all the drives will operate at their peak efficiency.

If I put a 4x v4 drive in the 2x v4 or 4x v3 slot, it will operate at the same lower level (half the peak bandwidth). Also, if I put the v3 drive in the 2x v4 slot it will only be able to use half of its bandwidth, because it will only run at 2x v3 (as it is a v3 drive).

Bottom line, it's worth checking the motherboard documentation if you have multiple M.2 slots, but only because it costs nothing to do so.
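[ On a running system you can at least check what link a drive actually negotiated; the class filter assumes a reasonably recent pciutils. ]

# 0108 is the PCI class code for NVMe controllers
sudo lspci -vv -d ::0108 | grep -E 'LnkCap:|LnkSta:'
# LnkCap is what the device supports, LnkSta what it is actually running at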
Is one kind more long-lived than the other?
Not due specifically to the interface. At the same price point you'll probably have similar longevity, though sata drives are moving in the direction of less bang for the buck because there aren't many new ones being developed and the sales volume is going NVMe.
Sent: Thursday, December 05, 2024 at 2:24 PM
From: "Hans" <hans.ullrich@loop.de>
To: debian-user@lists.debian.org
Subject: Re: From SSD to NVME
Use the Microsoft tools to create a Windows .iso file
Install Windows from a .iso file. Use Windows drive tools to shrink Windows on the drive to make some space.
Then use something like gparted to move the Windows to the end of the drive.
On Thu, Dec 05, 2024 at 02:15:13PM -0500, Felix Miata wrote:
I have more than 40 PCs with well in excess of a dozen installed distros, each on
a partition,
You have a unique set of requirements. Probably that has little
relevance to basically anyone else.
You can't just dd the disks if you have an old style dos partition table; you need to create a GPT partition table on the new drive, then dd the individual partitions. I'm unaware of a tool that would automate this, though one may exist.
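[ A rough sketch of that manual route with sgdisk from the gdisk package; partition numbers, sizes and type codes are purely illustrative, and this wipes the target drive. ]

sgdisk --zap-all /dev/nvme0n1                 # start with a fresh GPT
sgdisk -n 1:0:+512M -t 1:ef00 /dev/nvme0n1    # ESP (no MBR counterpart)
sgdisk -n 2:0:+1G   -t 2:8300 /dev/nvme0n1    # /boot
sgdisk -n 3:0:0     -t 3:8300 /dev/nvme0n1    # rest of the disk
# then copy each old partition's contents, e.g.:
dd if=/dev/sda3 of=/dev/nvme0n1p2 bs=4M conv=fsync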
What do you think, might be the best way?
Some better ideas?
The real issue is that the EFI partition, if I recall correctly, has to be a primary partition.
The ESP filesystem must be on a GPT partition.
Felix Miata:
The ESP filesystem must be on a GPT partition.
Not always.
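If Hans frees up space (say, by sacrificing the swap partition), turning it into a working ESP on Debian is short. A sketch, assuming the freed partition ends up as /dev/nvme0n1p5 and the disk is already GPT (all names here are illustrative):

  sudo sgdisk -t 5:ef00 /dev/nvme0n1     # mark it as an EFI system partition
  sudo mkfs.vfat -F 32 /dev/nvme0n1p5    # the ESP must be FAT
  sudo mkdir -p /boot/efi
  sudo mount /dev/nvme0n1p5 /boot/efi
  sudo apt install grub-efi-amd64
  sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi
  sudo update-grub

Plus an fstab entry so /boot/efi is mounted at boot. If the system can't currently boot, the same commands work from a live USB inside a chroot.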
At least one does. I provided a URL to the one I use, for some definition of "automated", upthread @2024-12-05 12:24 (UTC-0500) in reply to your post 102 minutes earlier. :)
On Thu 05 Dec 2024 at 20:01:29 (+0000), Andrew M.A. Cater wrote:
Use the Microsoft tools to create a Windows .iso file
Install Windows from a .iso file. Use Windows drive tools to shrink Windows on the drive to make some space.
Then use something like gparted to move Windows to the end of the drive.
Do you mean specifically the end of the drive,
or just at one end or the other? Reasoning?
Cheers,
David.
Where else would it be possible?
2. Use gparted to move Windows (maybe apart from the EFI partition) to the
end of the drive - move the blank space to the front of the drive after
the EFI partition.
You might be able to do it all with one EFI partition.
To find out if the motherboard imposed any limitations, I checked the
manual. I found these tables, which I can't see the implications of:
M2D_32G M.2 connector
+-------------+---------+---------+---------+---------+---------+---------+
|\ Connector  | SATA3_0 | SATA3_1 | SATA3_2 | SATA3_3 | SATA3_4 | SATA3_5 |
+ \-----------+---------+---------+---------+---------+---------+---------+
| Type of SSD |    SATA_Express   |    SATA_Express   |         -         |
+-------------+-------------------+-------------------+-------------------+
On Thu, Dec 05, 2024 at 16:16:53 -0500, Felix Miata wrote:
At least one does. I provided a URL to the one I use, for some definition of "automated", upthread @2024-12-05 12:24 (UTC-0500) in reply to your post 102 minutes earlier. :)
Automated means something along the lines of "make this MBR disk into a working GPT disk at the push of a button". I don't see that your tool
does that, but I'm also not interested in digging through the
documentation in any detail.
On Thu, Dec 05, 2024 at 04:06:17PM -0500, eben@gmx.us wrote:
To find out if the motherboard imposed any limitations, I checked the
manual. I found these tables, which I can't see the implications of:
M2D_32G M.2 connector
+-------------+---------+---------+---------+---------+---------+---------+
|\ Connector  | SATA3_0 | SATA3_1 | SATA3_2 | SATA3_3 | SATA3_4 | SATA3_5 |
+ \-----------+---------+---------+---------+---------+---------+---------+
| Type of SSD |    SATA_Express   |    SATA_Express   |         -         |
+-------------+-------------------+-------------------+-------------------+
Ah, SATA Express (SATAe). That's a dead standard that never actually got implemented in a drive (as far as I know) but was included on
motherboards for some time before it was clear that M.2 had won and SATAe
was a dead end.
The table is trying to explain which combinations won't work.
You can ignore the dark lines because SATAe doesn't exist.
Simple, right? :-D
Most desktop motherboards have some sort of limitations/sharing like this because there are only so many PCIe lanes from the CPU, but they vary in
how well they communicate the information.
On Thu, Dec 05, 2024 at 03:15:36PM -0600, David Wright wrote:
On Thu 05 Dec 2024 at 20:01:29 (+0000), Andrew M.A. Cater wrote:
Use the Microsoft tools to create a Windows .iso file
Install Windows from a .iso file. Use Windows drive tools to shrink Windows
on the drive to make some space.
Then use something like gparted to move Windows to the end of the drive.
Do you mean specifically the end of the drive,
or just at one end or the other? Reasoning?
1. Install Windows to the whole drive - it's what Windows does :)
2. Use gparted to move Windows (maybe apart from the EFI partition) to the
end of the drive - move the blank space to the front of the drive after
the EFI partition.
3. Install Debian in the blank space.
You might be able to do it all with one EFI partition. I think I found it easier to put Windows on first - because it's fussy - and then install Debian,
but I may have done it both ways round in the past. Installing Windows
second is definitely harder, if I recall correctly.
On Thu, 5 Dec 2024 22:03:52 +0000
"Andrew M.A. Cater" <amacater@einval.com> wrote:
2. Use gparted to move Windows (maybe apart from the EFI partition)
to the end of the drive - move the blank space to the front of the
drive after the EFI partition.
OK, my curiosity is up. Why make a point of moving the Windows
partition to the end of the drive?
--
Does anybody read signatures any more?
https://charlescurley.com
https://charlescurley.com/blog/
OK, my curiosity is up. Why make a point of moving the Windows
partition to the end of the drive?
I seem to remember that it's significantly difficult to forecast the
likely size you'll want for a Windows system to allow room for
updates. I sized it at something like 70G and moved it to the end of
the drive so that it didn't try to expand further into what Windows
might regard as free space. (70G does allow room for a few Windows
updates, all of which are larger than you think).
I then used Debian to fill the blank space and re-used Microsoft's
EFI partition.
It's a while ago: I think since then I've virtualised Windows 11 in a
KVM VM.
On Thu, Dec 05, 2024 at 10:55:48AM -0500, eben@gmx.us wrote:
How do I tell how many lanes a given drive uses (preferably before purchase)?
It would be buried in the technical docs. I've only seen 4x drives
(but I'm sure there may be some cheaper drives with fewer).
Yes. Also not many drives can sustain a multi-gigabyte write rate
anyway...
Yes. Also not many drives can sustain a multi-gigabyte write rate
anyway...
I have to say I was quite disappointed when I cloned a 1TB SSD to a 2TB
one, average speed wasn't much higher than writing to an HD. I don't
remember what the target drive was though. Since I don't intend to make
a habit of this, no big deal, but I wonder what kind of write speed one
could expect in a sustained write of 1TB?
Michael Stone <mstone@debian.org> writes:
On Thu, Dec 05, 2024 at 10:55:48AM -0500, eben@gmx.us wrote:
How do I tell how many lanes a given drive uses (preferably before purchase)?
It would be buried in the technical docs. I've only seen 4x drives
(but I'm sure there may be some cheaper drives with fewer).
While we're on the topic of PCIe lanes and SSDs, I've been looking into
some way of using old NVMe SSDs when they get replaced by bigger
ones. I don't really want to have a stack of little M.2 USB boxes.
There are some PCIe adapter boards that take two or more SSDs, but what
isn't clear to me is whether those cards can work in the typically free x1
PCIe slot, if the cards are x4 or x8 and the drives are x4?
Yes. Also not many drives can sustain a multi-gigabyte write rate
anyway...
I have to say I was quite disappointed when I cloned a 1TB SSD to a 2TB
one, average speed wasn't much higher than writing to an HD. I don't
remember what the target drive was though. Since I don't intend to make
a habit of this, no big deal, but I wonder what kind of write speed one
could expect in a sustained write of 1TB?
One of the tests that servethehome.com does in reviewing SSDs is the
write speed after cache saturation: that is, once you have sent enough gigabytes in a row, what is the ongoing write speed?
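You can measure that yourself with fio; a sketch, assuming a scratch file on the filesystem under test (pointing fio at a raw device instead would destroy its contents):

  fio --name=sustained --filename=/mnt/test/fio.tmp --rw=write \
      --bs=1M --size=100G --direct=1 --ioengine=libaio --iodepth=32

Watch the bandwidth figure over the run: on drives with a small SLC cache it typically drops sharply once the cache fills, which is exactly the effect that kind of review measures.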
As a general matter PCIe devices can/will downgrade...