hello all, some questions at last... it's been a while. :)
I was able to get some SSD replacements and want to add them
to my existing setup, but in previous years I recall that there
was some recommendation to leave some part of the SSD unallocated
and not formatted as part of a file system so any parts that
failed as bad blocks or wore out could be allocated from these
unused areas.
I do like having multiple backups on the different SSDs just
in case one of them decides to fail on me. No signs of any
troubles so far.
On Jul 10, 2025, songbird wrote:
hello all, some questions at last... it's been a while. :)
I was able to get some SSD replacements and want to add them
to my existing setup, but in previous years I recall that there
was some recommendation to leave some part of the SSD unallocated
and not formatted as part of a file system so any parts that
failed as bad blocks or wore out could be allocated from these
unused areas.
The SSD has this onboard (and, IIRC, always has) -- for example, a 1T
drive may have 100G of extra blocks you cannot allocate.
I do like having multiple backups on the different SSDs just
in case one of them decides to fail on me. No signs of any
troubles so far.
Long-term (powered-off) backup is actually better on spinning rust, as
SSDs are somewhat more susceptible to bit-rot when powered off.
in previous years I recall that there was some recommendation to leave
some part of the SSD unallocated and not formatted as part of a file
system so any parts that failed as bad blocks or wore out could be
allocated from these unused areas.
When trying to see what current recommendations are for setting
up SSDs I see no mentions of this at all? Has this changed?
Pretty much my current plan for one of the SSDs would be
to put a small EFI partition on it (as I notice the current ones I have
hardly have anything on them even if they were allocated to be 1G)
so that I can copy my current setup to that but not waste the
space. The existing ones use 5M or even much less so perhaps 50M
will be enough, allowing for future expansion?
The rest of the new drive will just be one large partition.
The 2nd new SSD will be for consolidating my backups (that are on
a smaller SSD at the moment plus also on an external drive that is
not used frequently - I don't trust it as it has been knocked off
the table but until it gives up entirely it is a backup that can't
be messed with as it is not mounted or powered on often).
I don't use the discard options on the mounts or filesystems
and also don't run fstrim automatically, I will eventually set
this up to run monthly.
Dan Purgert wrote:
Long-term (powered-off) backup is actually better on spinning rust, as
SSDs are somewhat more susceptible to bit-rot when powered off.
these will always be regularly powered on even if they are
not mounted or used. the external drive i have is a normal
spinning rust drive.
hello all, some questions at last... it's been a while. :)
I was able to get some SSD replacements and want to add them
to my existing setup,
but in previous years I recall that there
was some recommendation to leave some part of the SSD unallocated
and not formatted as part of a file system so any parts that
failed as bad blocks or wore out could be allocated from these
unused areas.
When trying to see what current recommendations are for setting
up SSDs I see no mentions of this at all? Has this changed? I've
been trying to get caught up and seeing nothing specific to EXT4
or Linux for SSDs.
I don't do encryption or raid.
Pretty much my current plan for one of the SSDs would be
to put a small EFI partition on it (as I notice the current ones I have
hardly have anything on them even if they were allocated to be 1G)
so that I can copy my current setup to that but not waste the
space. The existing ones use 5M or even much less so perhaps 50M
will be enough, allowing for future expansion? The rest of the
new drive will just be one large partition. It is not a heavily
used machine or setup but I do need more space for working on
the website and picture archives.
I do have both Grub and Refind installed (for some Grub updates
it will change my initial efi boot order so I have a script setup
to change it back when needed). Refind works and does exactly
what I want.
Because I do run Debian testing most of the time I also plan
on keeping my other partition where it boots stable going. I've
only needed it a few times but I like having it there. I'll
leave this on the smaller SSD along with the swap partition
(which is not frequently or heavily used).
The 2nd new SSD will be for consolidating my backups (that are on
a smaller SSD at the moment plus also on an external drive that is
not used frequently - I don't trust it as it has been knocked off
the table but until it gives up entirely it is a backup that can't
be messed with as it is not mounted or powered on often).
I don't use the discard options on the mounts or filesystems
and also don't run fstrim automatically, I will eventually set
this up to run monthly. I ran it recently for the first time
after several years of use of the existing SSDs. I've not noticed
any decline in the existing SSD speeds, etc. at all but I'm also
not running too much that is demanding for performance.
I do like having multiple backups on the different SSDs just
in case one of them decides to fail on me. No signs of any
troubles so far.
songbird
Hi,...
On Thu, Jul 10, 2025 at 07:07:03AM -0400, songbird wrote:
...When trying to see what current recommendations are for setting
up SSDs I see no mentions of this at all? Has this changed?
Just don't worry about it unless you have an unusually heavy write load.
(Just do "smartcl -A /dev/blah" to see all the attributes without the
JSON output I used just to make it presentable in this email.)
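For anyone wanting a concrete example, something along these lines shows
the wear-related attributes (the device name is only a placeholder, and
attribute names vary between vendors and between SATA and NVMe):

  # SATA/SAS device: print the raw attribute table
  smartctl -A /dev/sda

  # NVMe device: the health/endurance log is the equivalent view
  smartctl -A /dev/nvme0

On most consumer SSDs the interesting lines are the ones named something
like "Wear_Leveling_Count", "Percent_Lifetime_Remain" or "Percentage Used".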
The recommended size of an EFI System Partition (ESP) is up for debate
and is not related to what kind of drive you put it on:
https://wiki.debian.org/UEFI#EFI_System_Partition_.28ESP.29_recommended_size
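If it helps to see it spelled out, a rough sketch of that kind of layout
with parted and mkfs follows; the device name and the 512M ESP size are
only placeholders, not a recommendation from the wiki page:

  # WARNING: destructive -- /dev/sdX stands in for the new, empty drive
  parted --script /dev/sdX \
      mklabel gpt \
      mkpart ESP fat32 1MiB 513MiB \
      set 1 esp on \
      mkpart data ext4 513MiB 100%

  mkfs.vfat -F 32 /dev/sdX1    # the EFI System Partition
  mkfs.ext4 /dev/sdX2          # the single large data partition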
The rest of the new drive will just be one large partition.
RAID is worth it so as not to have to stop working to reinstall from
backups.
The 2nd new SSD will be for consolidating my backups (that are on
a smaller SSD at the moment plus also on an external drive that is
not used frequently - I don't trust it as it has been knocked off
the table but until it gives up entirely it is a backup that can't
be messed with as it is not mounted or powered on often).
SSDs have no moving parts, so they withstand sudden impacts a lot better than
HDDs do. It's probably fine.
I don't use the discard options on the mounts or filesystems
and also don't run fstrim automatically, I will eventually set
this up to run monthly.
fstrim has run by default on all Debian installs for years, so you must have
gone out of your way to disable this. Why?
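For the record, checking (and, if wanted, slowing down) the stock timer
looks roughly like this; the monthly schedule is only a sketch of what
songbird described, not a recommendation:

  # is periodic TRIM enabled, and when did it last run / when is the next run?
  systemctl status fstrim.timer
  systemctl list-timers fstrim.timer

  # turn it back on if it was disabled
  systemctl enable --now fstrim.timer

  # optional drop-in to change the default weekly schedule to monthly:
  #   systemctl edit fstrim.timer
  # then in the override file:
  #   [Timer]
  #   OnCalendar=
  #   OnCalendar=monthly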
On 2025-07-12 15:19, rhkramer@gmail.com wrote:
Why do you recommend that? Are you assuming the SSDs songbird got are used, or do you recommend that even for new SSDs -- if so, why?
Not the OP, but you never know what's on the disks. It wouldn't be the first time new disks contain unwanted "presents" straight from the factory.
On Sat, Jul 12, 2025 at 12:14 PM <rhkramer@gmail.com> wrote:
On Thursday, July 10, 2025 10:41:18 PM David Christensen wrote:
On 7/10/25 04:07, songbird wrote:[...]
Be sure to do a secure erase before you put the SSD's into service:
https://en.wikipedia.org/wiki/Secure_Erase#Secure_erase
Why do you recommend that? Are you assuming the SSDs songbird got are used, or do you recommend that even for new SSDs -- if so, why?
From <https://www.zdnet.com/article/malware-found-on-new-hard-drives/>:
... Practice "safe sectors" and scan, or preferably wipe, all drives
before bringing them into the ecosystem. Don't assume that a drive is
going to be blank and malware free. Trust no one. Same goes for USB
flash drives - you never know what's been installed on them.
On Thursday, July 10, 2025 10:41:18 PM David Christensen wrote:
On 7/10/25 04:07, songbird wrote:
I was able to get some SSD replacements and want to add them
to my existing setup,
Be sure to do a secure erase before you put the SSD's into service:
https://en.wikipedia.org/wiki/Secure_Erase#Secure_erase
Why do you recommend that? Are you assuming the SSDs songbird got are used, or do you recommend that even for new SSDs -- if so, why?
Thanks!
On Sat, Jul 12, 2025 at 3:12 PM <tomas@tuxteam.de> wrote:
On Sat, Jul 12, 2025 at 01:03:23PM -0400, Jeffrey Walton wrote:
On Sat, Jul 12, 2025 at 12:14 PM <rhkramer@gmail.com> wrote:
On Thursday, July 10, 2025 10:41:18 PM David Christensen wrote:
On 7/10/25 04:07, songbird wrote:[...]
Be sure to do a secure erase before you put the SSD's into service:
https://en.wikipedia.org/wiki/Secure_Erase#Secure_erase
Why do you recommend that? Are you assuming the SSDs songbird got are used, or do you recommend that even for new SSDs -- if so, why?
From <https://www.zdnet.com/article/malware-found-on-new-hard-drives/>:
... Practice "safe sectors" and scan, or preferably wipe, all drives
before bringing them into the ecosystem. Don't assume that a drive is
going to be blank and malware free. Trust no one. Same goes for USB
flash drives - you never know what's been installed on them.
I have a hard time imagining how malware on a disk can do
anything once you've put new file systems on it.
Of course, if you mount their file systems unchanged...
I suspect it is a bigger problem on Windows, which most malware is
written for and where drives get automounted on insertion: <https://en.wikipedia.org/wiki/2008_malware_infection_of_the_United_States_Department_of_Defense>.
But I don't think it is limited to Windows. I recall a recent thread
about maliciously corrupted filesystems affecting Linux: <https://www.openwall.com/lists/oss-security/2025/06/03/2>. The kernel
developers would not fix it because they said users should not mount a corrupt filesystem. Ubuntu had to create and apply patches because it
automounts filesystems for users.
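Mount options are no defence against kernel parser bugs, but as a small
sketch, an untrusted drive can at least be mounted by hand with
restrictive options instead of letting a desktop automount it (device and
mount point are placeholders):

  mkdir -p /mnt/untrusted
  # read-only, and ignore device nodes, setuid binaries and executables
  mount -o ro,nodev,nosuid,noexec /dev/sdX1 /mnt/untrusted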
On 11/07/2025 09:41, David Christensen wrote:
AIUI SSD over-provisioning combined with setting the discard flag in
fstab(5) provides maximum performance for write intensive workloads.
Is it better than fstrim.timer mentioned in this thread?
Some years ago there was a warning on the <https://wiki.archlinux.org/title/Solid_state_drive/NVMe>
page that Intel did not recommend continuous TRIM aka discard. Currently there are some words against discard in <https://wiki.archlinux.org/title/Solid_state_drive#Continuous_TRIM>.
Do you have more details on this story, what was behind Intel suggestion
and what has changed since that time?
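For comparison, the two approaches look roughly like this in practice
(the UUID is a placeholder); continuous TRIM is the mount option,
periodic TRIM is the timer plus an occasional manual run:

  # continuous TRIM: the filesystem issues discards as blocks are freed
  # /etc/fstab
  UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults,discard  0  2

  # periodic TRIM: no discard option; a timer batches the work instead
  systemctl enable --now fstrim.timer
  fstrim --verbose --all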
As to secure erase, I have seen comments claiming that sometimes it
helps to recover performance that has degraded after some period of usage. I
have not tried to collect details on whether it is related to specific models or
to low-end drives.
rhkramer@gmail.com wrote:
On Thursday, July 10, 2025 10:41:18 PM David Christensen wrote:
On 7/10/25 04:07, songbird wrote:
I was able to get some SSD replacements and want to add them
to my existing setup,
Be sure to do a secure erase before you put the SSD's into service:
https://en.wikipedia.org/wiki/Secure_Erase#Secure_erase
Why do you recommend that? Are you assuming the SSDs songbird got are used, >> or do you recommend that even for new SSDs -- if so, why?
beyond that, what assurances do you have, with the behind-the-scenes
management going on inside the drive, that any attempts at
wiping it completely are actually happening?
aside from the original manufacturer hopefully not putting
backdoors and ET Phone Home sorts of hooks?
i pretty much have always assumed that a new disk drive when
it gets a new partition table and file systems created on it
will be destroyed enough. sometimes i have written random
data on new disks but i have no illusion that this has been
perfect as i know some people who have been able to get a lot
of information from disks that have been somewhat scrubbed
as long as they weren't outright destroyed and the metals
recycled.
songbird
A few suggestions on getting rid of garbage (yours, some hacker's, or Microsoft's):
https://www.tomshardware.com/how-to/secure-erase-ssd-or-hard-drive
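For reference, the usual command-line routes look something like the
following; device names and the temporary password are placeholders, and
on SATA the drive must not be in the "frozen" state (a suspend/resume
cycle often clears that):

  # SATA: ATA Secure Erase via hdparm
  hdparm -I /dev/sdX | grep -A8 Security     # supported? frozen?
  hdparm --user-master u --security-set-pass tmppass /dev/sdX
  hdparm --user-master u --security-erase tmppass /dev/sdX

  # NVMe: format with secure-erase setting 1 (user data erase)
  nvme format /dev/nvme0n1 --ses=1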
Yes, things get very bad when bad people control the SSD firmware. I
can only hope the firmware in my SSD's is legitimate, and updates are cryptographically signed.
When using d-i to initialize a physical volume for encryption, I have
seen the option to fill the volume with random bytes. AIUI 'discard'
and 'trim' would gradually defeat such security-by-obfuscation as blocks
are erased, but it does make sense if the incremental security gain is justified. I don't do it to my SSD's because I want to save their erase cycles.
Please clarify "somewhat scrubbed".
I would expect a new SSD to be securely erased by the factory, but would check this assumption (and do an informal sequential read benchmark):
2025-07-12 12:13:02 root@laalaa ~
# time dd if=/dev/sdb bs=1M | hexdump -C
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
|................|
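A shorter check that reads to the same conclusion is to compare the
device against /dev/zero; if the whole drive reads as zeros, cmp simply
reports hitting end of file on the device (device name is a placeholder):

  cmp /dev/zero /dev/sdX
  # all zeros -> cmp reports EOF on /dev/sdX
  # otherwise -> it prints the offset of the first non-zero byte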
David Christensen wrote:
...
I would expect a new SSD to be securely erased by the factory, but would
check this assumption (and do an informal sequential read benchmark):
2025-07-12 12:13:02 root@laalaa ~
# time dd if=/dev/sdb bs=1M | hexdump -C
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
|................|
that one showed all zeroes for the entire SSD.
songbird
`dd if=/dev/zero bs=1M /dev/sdX`
On 7/13/25 13:23, David Christensen wrote:...
`dd if=/dev/zero bs=1M /dev/sdX`
I apologize -- that command is wrong, in more than one way. Here is a console session from when I zeroed a 1 TB HDD:
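As a minimal sketch of the corrected form (device name is a placeholder;
the key fix is writing the target with of= rather than as a bare
argument):

  dd if=/dev/zero of=/dev/sdX bs=1M status=progress
  sync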
Is it better than fstrim.timer mentioned in this thread?
Some years ago there was a warning on the
<https://wiki.archlinux.org/title/Solid_state_drive/NVMe>
page that Intel did not recommend continuous TRIM aka discard. Currently
there are some words against discard in
<https://wiki.archlinux.org/title/Solid_state_drive#Continuous_TRIM>.
Do you have more details on this story, what was behind Intel
suggestion and what has changed since that time?
I just set 'ssd' in fstab options and leave it.
From my reading (which I unfortunately can't cite and could be wrong),
when SSD was a newish technology, file and operating systems had to play
catch-up.
To my understanding, that's mostly been done, so the default SSD options
ought to be sufficient.
Like, I had to tell my Windows clients years ago not to defragment their
SSD, because it would cause unnecessary writes.
Microsoft has since updated their defragment command not to do that (and
they've further renamed it as 'optimise') so now I can tell my clients
not to worry about it.
I understand that Linux is much the same, if not a little ahead of the curve.
My understanding is that setting constant fstrim and/or discard just hogs
bandwidth, as the firmware busily clears writeable memory instead of
servicing normal reads and writes.
As to secure erase, I have seen comments claiming that sometimes it
helps to recover performance that has degraded after some period of usage.
I have not tried to collect details on whether it is related to specific
models or to low-end drives.
I don't understand that logic. I've been told to believe that SSDs have
finite write cycles, so you don't want to change a 1 to a 0 if you'll
only have to change it back to a 1 later.
I've also read that the firmware tries to optimise writes, so it
abstracts access to the hardware.
Therefore, where you can tell a HDD to "fetch sector X of platter Y,"
there's no analogue in an SSD.
The file system just says "fetch me address Z," and the firmware figures
out which cell of which chip by magic.
Furthermore, I understand that telling an SSD to write a terabyte of
zeros to the drive may not actually write a terabyte of addresses, as the
firmware decides where (and whether) those writes actually land.
I understand that the only way to wipe an SSD securely is to re-encrypt
the drive, as the SSD will then take the protective measures itself.
But, please, I encourage the people here who are smarter than I am to
correct any of my factual inaccuracies.
Today's SSDs, even consumer brands, have much higher endurance, and this
sort of advice is quite complicated and consumer-hostile, so you don't
see it any more.
I have a fairly inexpensive Crucial 2TB that I do all my living on.
It was a Samsung but their customer service was horrific so I'm
boycotting them now.
It runs at about 80% full.
Aside from disabling swap, I use my computer without considering writes.
I regularly download, delete, change my mind, download the same thing
again, write a script that outputs to a file, then run the same command again.
I even had BOINC running on it until recently.
Nevertheless, I check the SMART readout every so often.
It's down to about 96% health after about 2-3 years of regular use.
At this rate, I'll probably have to replace the drive in about 40 years.
So I'm not losing sleep, and neither should you.
On 2025-07-12, Andy Smith <andy@strugglers.net> wrote:
But for brand new devices I don't care what was on it before.
You can construct a hypothetical situation where:
1. I buy a new storage device but am unwittingly given a refurb one
(that has had its diagnostic attributes erased to maintain the
illusion that it is new).
2. For some reason law enforcement seize my computer, scan the storage
and find something illegal that was on it already in unused space.
https://www.mdpi.com/2076-3417/12/12/5928?utm_source=chatgpt.com
https://www.reddit.com/r/privacy/comments/181241a/amazon_sold_me_a_drive_it_came_with_data_on_it/?utm_source=chatgpt.com
https://indiandefencereview.com/a-man-bought-a-new-hard-drive-but-upon-plugging-it-in-he-discovered-800gb-of-files-worth-thousands/?utm_source=chatgpt.com