I recently got some SSDs, and decided to use one of them (a 256G model) to boot from. I want the change to be undetectable, in that from a user perspective, nothing seems different, just faster.
I currently have a 2T HD, partitioned with GPT but booting by MBR. Yes, that's probably weird. When I installed Debian I was unaware that the installer would only install grub to boot using the method that the
installer booted. My BIOS/firmware will boot using either method, but defaults to MBR if both methods work. You can force it to use UEFI on a one-time basis. I want the SSD to boot using UEFI. Is that possible, and
if so, what's the best method to go about it?
My ideas are:
1. dd / onto the SSD, then modify it to boot UEFI. This sounds hard.
2. Install Debian (the same version I run) onto the SSD, then modify
/etc and whatever else so stuff works. This sounds error-prone.
3. Wait until I upgrade to Trixie, then let the installer hash it out.
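Before deciding, it may help to confirm which way the current system actually booted; a minimal check using the standard sysfs path (nothing system-specific assumed):

```shell
# If the kernel was booted via UEFI, the firmware exposes this directory;
# under legacy/MBR ("BIOS") boot it does not exist.
if [ -d /sys/firmware/efi ]; then
    echo "Booted via UEFI"
else
    echo "Booted via legacy BIOS/MBR"
fi
```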
No idea what sdd is:
Might it be an SD card/CF card slot with no media inserted? One of mine
<snip>
I would go for the first option (dd)
Detlef Vollmann (HE12025-07-31):
I would go for the first option (dd)
That will not work, at least not without a lot more work on the details. It would work to copy the whole system to a larger drive, but for this setup it is a terrible idea.
On 7/31/25 14:53, Andy Smith wrote:
1. From single user mode or a live environment (even the "rescue" mode
from the Debian install medium) rsync the contents of /, /boot and
/usr to the single sda1 partition.
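That first step might look roughly like this; the mountpoint /mnt/target is a placeholder, and you would run it from single-user mode or a live environment as suggested:

```shell
# Sketch of copying the running system onto the SSD's single partition.
# /mnt/target is a hypothetical mountpoint for /dev/sda1; adjust to taste.
mount /dev/sda1 /mnt/target

# -a: archive mode (permissions, times, owners, symlinks); -H: hard links;
# -A/-X: ACLs and extended attributes; -x: do not cross filesystem boundaries.
# Add -n first for a dry run.
rsync -aHAXx / /mnt/target/
rsync -aHAXx /boot/ /mnt/target/boot/
rsync -aHAXx /usr/ /mnt/target/usr/
```

The -x flag matters here because /, /boot and /usr are separate filesystems; without it the first rsync would descend into everything mounted under /.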
Why do you want to switch to UEFI?
<snip>
eben@cerberus:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 238.5G 0 disk
└─sda1 8:1 0 238.5G 0 part
sdb 8:16 0 1.8T 0 disk
├─sdb1 8:17 0 953M 0 part /boot
├─sdb2 8:18 0 2G 0 part /
├─sdb3 8:19 0 20G 0 part /usr
├─sdb5 8:21 0 953M 0 part
├─sdb6 8:22 0 300G 0 part
├─sdb7 8:23 0 30G 0 part /misc/export
├─sdb8 8:24 0 130G 0 part /misc/media
├─sdb9 8:25 0 165G 0 part /misc/mp3
├─sdb10 8:26 0 74G 0 part /misc/torrent
├─sdb11 8:27 0 9G 0 part /home
├─sdb12 8:28 0 75G 0 part /misc/scratch
└─sdb13 8:29 0 48G 0 part [SWAP]
sdc 8:32 0 238.5G 0 disk
├─sdc1 8:33 0 5.1G 0 part /var/cache
└─sdc2 8:34 0 182.7G 0 part /misc/iso
sdd 8:48 1 0B 0 disk
sr0 11:0 1 7.5G 0 rom
0
eben@cerberus:~$ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 16132328 0 16132328 0% /dev
tmpfs 3229464 1796 3227668 1% /run
/dev/sdb2 2047208 802856 1123116 42% /
/dev/sdb3 20557912 8146532 11439652 42% /usr
tmpfs 16147316 71380 16075936 1% /dev/shm
tmpfs 5120 16 5104 1% /run/lock
/dev/sdc1 5157164 1373336 3501616 29% /var/cache
/dev/sdc2 187459092 79418636 98445320 45% /misc/iso
/dev/sdb7            30786644  19419460    9777988  67% /misc/export
/dev/sdb1              941740    132468     744096  16% /boot
/dev/sdb8           133589828 122712680    4045076  97% /misc/media
/dev/sdb12           76832012  43023296   29860172  60% /misc/scratch
/dev/sdb9           169191044 156127788    4396124  98% /misc/mp3
/dev/sdb10           75799884  46825720   25078052  66% /misc/torrent
/dev/sdb11            9278492   7747788    1042472  89% /home
tmpfs                 3229460      2484    3226976   1% /run/user/1000
nascent:/nfs/Media 1918708224 774040384 1125174848  41% /mnt/nascent-Media
0
eben@cerberus:~$
sda is the new SSD. sdb is my HD. sdc is another SSD. nascent is a
NAS. No idea what sdd is:
eben@cerberus:~$ sudo fdisk -l /dev/sdd
fdisk: cannot open /dev/sdd: No medium found
<snip>
I would:
1. Backup the computer and the NAS.
2. Move as much data as possible from /dev/sdb HDD to the NAS. Leave home directory login, profile, desktop environment, app configuration/profile, etc. files local to the HDD. Empty trash, clean caches, remove scratch files, etc.
3. Run zerofree(8) on all of the HDD file systems.
4. Take a compressed image of the HDD.
5. Disconnect HDD and /dev/sdc SSD.
6. Boot computer into Setup and restore settings to factory defaults.
7. Boot manufacturer diagnostic or live Debian instance, and secure erase /dev/sda SSD.
8. Install Debian on /dev/sda SSD.
9. Reconnect HDD and /dev/sdc SSD. Restore system configuration and required data.
10. Take an image of /dev/sda SSD.
11. Backup the computer and the NAS.
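Steps 3 and 4 might be sketched like this; the partition list and the output path are assumptions, and zerofree(8) requires the filesystems to be unmounted (or mounted read-only):

```shell
# Zero the free blocks on each unmounted ext2/3/4 filesystem so that the
# subsequent image compresses well -- zerofree refuses a read-write mount.
for part in /dev/sdb1 /dev/sdb2 /dev/sdb3; do
    zerofree -v "$part"
done

# Take a compressed image of the whole drive.
dd if=/dev/sdb bs=1M status=progress | gzip -c > /mnt/backup/sdb.img.gz
```

Zeroed free space is what makes this worthwhile: runs of zeros compress to almost nothing, so the image is closer to used-data size than to disk size.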
On 7/31/25 17:31, David Christensen wrote:
On 7/31/25 10:18, Eben King wrote:
I recently got some SSDs, and decided to use one of them (a 256G
model) to boot from. I want the change to be undetectable, in that
from a user perspective, nothing seems different, just faster.
I would:
<snip>
9. ... Restore system configuration and required data.
That is the bit I'm not sure how to do so the change is mostly
undetectable.
Hi,
On Thu, Jul 31, 2025 at 02:31:44PM -0700, David Christensen wrote:
<snip>
When OP asks how to add a new SSD to their system and move their boot
drive to it, it seems really excessive that you advise moving off
hundreds of gigabytes of data, physically removing two other unrelated
drives and then doing a complete reinstall.
I guess we could all go to OP's home, rip everything out and rebuild it
in our own desired way.
Surely if they are wanting to reinstall Debian they wouldn't need to ask
any of this and could just do it, UEFI boot and all, without needing to
be told to back up and restore?
Thanks,
Andy
I once heard a speaker who worked as a Linux system administrator on Wall Street state: "The key is disaster preparedness and disaster recovery."
David Christensen (HE12025-08-01):
I once heard a speaker who worked as a Linux system administrator on Wall
Street state:
Maybe do not give advice tailored for somebody who is in charge of dozens of computers, with plenty of spare disks at hand, to somebody who apparently is just an individual who wants to upgrade their computer cleanly but without too much work?
Regards,
On the other hand, /var on the same filesystem as the rest of / is not a
good idea.
On 8/1/25 04:08, David Christensen wrote:
The key is disaster preparedness and disaster recovery.
<snip>
I do back up my entire drive weekly. NAS too. However, I recently looked around for another 2T drive to implement two-level backups and didn't have one. Can't afford one right now either.
Learn how to write shell scripts (or Perl, Python, etc.), so that you
can automate repetitive tasks and get consistent results.
I write shell scripts (sh, occasionally bash) for that purpose.
Do not be afraid to spend money on a spare computer, spare parts, and
bigger HDD's.
It's not that I'm afraid, it's that I'd rather keep the car, keep us
fed, etc. than do that. Not everyone has enough disposable income to do things the recommended way.
Keep records of anything and everything that matters.
I have a log file where I note important stuff.
Do you want to mount /root r-o? /etc? I think not.
For what the OP is up to, mounting the old file systems (on the HD)
until he is satisfied he has everything working right is probably a
good idea.
You typically want to mount / as read-only while mounting /var as read-write; some people want to mount other filesystems read-only as well.
Charles Curley (HE12025-08-01):
Do you want to mount /root r-o? /etc? I think not.
Separating the things that move a lot and the things that are stable
is still a good idea.
For what the OP is up to, mounting the old file systems (on the HD)
until he is satisfied he has everything working right is probably a
good idea.
No, it is an excellent occasion to rework the partitioning; wasting it is not a good idea.
Also, the partitions will probably not fit the new disk exactly, leaving a useless clump with an awkward size at the end.
It is a much better idea to create new partitions, choosing the partition layout and sizes in the light of past use of the system, mount them with their intended structure and copy the contents, letting Linux worry about the split.
And even better to do it with LVM volumes rather than partitions.
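For the record, the LVM variant might look like this; the volume group name, LV names and sizes are made up for illustration:

```shell
# Turn an SSD partition into an LVM physical volume, then carve logical
# volumes out of it instead of fixed partitions.  All names hypothetical.
pvcreate /dev/sda2
vgcreate vg0 /dev/sda2
lvcreate -L 2G  -n root vg0
lvcreate -L 20G -n usr  vg0
lvcreate -L 9G  -n home vg0
mkfs.ext4 /dev/vg0/root

# Logical volumes can later be grown in place (lvextend --resizefs)
# without caring where anything physically sits on the disk.
```

This is what "letting Linux worry about the split" buys you: resizing becomes an lvextend rather than a repartition-and-copy exercise.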
Separating the things that move a lot and the things that are stable is still a good idea.
Right. With /root r-o, you never get your shell history saved. And
things do change from time to time under /etc.
No, it is an excellent occasion to rework the partitioning; wasting it is not a good idea.
Agree. I don't think the ideas are mutually exclusive.
To give the appearance of having copied everything over, he can
use symlinks.
Agree, except I'm not sure what you mean by "letting Linux worry about
the split".
Again, agree. OP didn't indicate that any of his partitions were LVs,
so I didn't suggest it.
None are currently. If volume management becomes a big deal, I will implement it immediately after a backup. Currently I don't resize partitions often enough to make it worthwhile.
I have made these partitions:
Device Start End Sectors Size Type
/dev/sda1 2048 1953792 1951745 953M Linux filesystem
/dev/sda2 1955840 6150144 4194305 2G Linux filesystem
/dev/sda3 6152192 48095232 41943041 20G Linux filesystem
/dev/sda4 48097280 48195583 98304 48M Linux swap
/dev/sda5 48195584 67069952 18874369 9G Linux filesystem
/dev/sda6 67072000 69023744 1951745 953M BIOS boot
Grub installs without error on that drive, but drops me into grub's
command line when I boot from it. Then when I do
boot (hd0,gptN)
for N in 1 or 2 (/boot or /root) it tells me that I have to pick a
kernel first.
I have these files:
eben@cerberus:~$ sudo mount -o ro /dev/sda2 /mnt/temp
eben@cerberus:~$ sudo mount -o ro /dev/sda1 /mnt/temp/boot
eben@cerberus:~$ ls -l /mnt/temp/vm*
0 lrwxrwxrwx 1 root root 27 May 25 13:01 /mnt/temp/vmlinuz -> boot/vmlinuz-6.1.0-37-amd64
0 lrwxrwxrwx 1 root root 27 May 25 13:01 /mnt/temp/vmlinuz.old -> boot/vmlinuz-6.1.0-35-amd64
eben@cerberus:~$ ls -l /mnt/temp/boot/v*
7.9M -rw-r--r-- 1 root root 7.9M Apr 10 15:32 /mnt/temp/boot/vmlinuz-6.1.0-33-amd64
7.9M -rw-r--r-- 1 root root 7.9M Apr 25 15:51 /mnt/temp/boot/vmlinuz-6.1.0-34-amd64
7.9M -rw-r--r-- 1 root root 7.9M May 7 11:10 /mnt/temp/boot/vmlinuz-6.1.0-35-amd64
7.9M -rw-r--r-- 1 root root 7.9M May 22 14:32 /mnt/temp/boot/vmlinuz-6.1.0-37-amd64
What does it want from me?
The HD (currently sdc) boots fine. The BIOS (or whatever) doesn't offer
it as a boot device, but I can do F12 at POST = "select boot device",
pick it, and it works.
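What GRUB wants at that prompt is a kernel and initrd loaded before `boot` is given. A minimal hand-boot sketch, assuming /boot is (hd0,gpt1), / is the second partition, and that a matching initrd.img-6.1.0-37-amd64 sits next to the kernel (since /boot is its own filesystem, the files are at its root):

```
grub> set root=(hd0,gpt1)
grub> linux /vmlinuz-6.1.0-37-amd64 root=/dev/sda2
grub> initrd /initrd.img-6.1.0-37-amd64
grub> boot
```

If that boots, the usual culprit is that grub-install wrote the boot code but GRUB cannot find its grub.cfg; re-running update-grub and grub-install from the booted system normally sorts that out. Note also that the root= device name may differ once the machine boots from the SSD alone.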