• SSD partitioning and allocations

    From songbird@21:1/5 to All on Thu Jul 10 13:10:01 2025
    hello all, some questions at last... it's been a while. :)

    I was able to get some SSD replacements and want to add them
    to my existing setup, but in previous years I recall that there
    was some recommendation to leave some part of the SSD unallocated
    and not formatted as part of a file system so any parts that
    failed as bad blocks or wore out could be allocated from these
    unused areas.

    When trying to see what current recommendations are for setting
    up SSDs I see no mentions of this at all? Has this changed? I've
    been trying to get caught up and seeing nothing specific to EXT4
    or Linux for SSDs.

    I don't do encryption or raid.

    Pretty much my current plan for one of the SSDs would be
    to put a small EFI partition on it (as I notice the current ones
    I have hardly have anything on them even though they were
    allocated to be 1G), so that I can copy my current setup to it
    without wasting space. The existing ones use 5M or even much
    less, so perhaps 50M will be enough, allowing for future
    expansion? The rest of the
    new drive will just be one large partition. It is not a heavily
    used machine or setup but I do need more space for working on
    the website and picture archives.

    I do have both Grub and Refind installed (for some Grub updates
    it will change my initial efi boot order so I have a script setup
    to change it back when needed). Refind works and does exactly
    what I want.
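    A minimal dry-run sketch of such a restore script (the entry numbers
    here are hypothetical; list the real ones with plain 'efibootmgr'
    first, and remove the 'echo' only once they are verified):

```shell
# Dry-run sketch of a boot-order restore script. The entry numbers
# below are hypothetical; run plain 'efibootmgr' to see the real ones.
# Remove the 'echo' only once the numbers are verified.
REFIND=0001   # rEFInd's boot entry (assumed)
GRUB=0000     # GRUB's boot entry (assumed)
echo efibootmgr -o "$REFIND,$GRUB"
```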

    Because I do run Debian testing most of the time I also plan
    on keeping my other partition where it boots stable going. I've
    only needed it a few times but I like having it there. I'll
    leave this on the smaller SSD along with the swap partition
    (which is not frequently or heavily used).

    The 2nd new SSD will be for consolidating my backups (that are on
    a smaller SSD at the moment plus also on an external drive that is
    not used frequently - I don't trust it as it has been knocked off
    the table but until it gives up entirely it is a backup that can't
    be messed with as it is not mounted or powered on often).

    I don't use the discard options on the mounts or filesystems
    and also don't run fstrim automatically, I will eventually set
    this up to run monthly. I ran it recently for the first time
    after several years of use of the existing SSDs. I've not noticed
    any decline in the existing SSD speeds, etc. at all but I'm also
    not running too much that is demanding for performance.

    I do like having multiple backups on the different SSDs just
    in case one of them decides to fail on me. No signs of any
    troubles so far.


    songbird

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Purgert@21:1/5 to songbird on Thu Jul 10 13:40:01 2025
    On Jul 10, 2025, songbird wrote:
    hello all, some questions at last... it's been a while. :)

    I was able to get some SSD replacements and want to add them
    to my existing setup, but in previous years I recall that there
    was some recommendation to leave some part of the SSD unallocated
    and not formatted as part of a file system so any parts that
    failed as bad blocks or wore out could be allocated from these
    unused areas.

    The SSD has this onboard (and, IIRC, always has) -- for example, a 1T
    drive may have 100G of extra blocks you cannot allocate.

    I do like having multiple backups on the different SSDs just
    in case one of them decides to fail on me. No signs of any
    troubles so far.

    Longterm (powered-off) backup is actually better on spinning rust; as
    SSD are somewhat more susceptible to bit-rot when powered off.


    --
    |_|O|_|
    |_|_|O| Github: https://github.com/dpurgert
    |O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860


  • From songbird@21:1/5 to Dan Purgert on Thu Jul 10 16:00:01 2025
    Dan Purgert wrote:
    On Jul 10, 2025, songbird wrote:
    hello all, some questions at last... it's been a while. :)

    I was able to get some SSD replacements and want to add them
    to my existing setup, but in previous years I recall that there
    was some recommendation to leave some part of the SSD unallocated
    and not formatted as part of a file system so any parts that
    failed as bad blocks or wore out could be allocated from these
    unused areas.

    The SSD has this onboard (and, IIRC, always has) -- for example, a 1T
    drive may have 100G of extra blocks you cannot allocate.

    ah!


    I do like having multiple backups on the different SSDs just
    in case one of them decides to fail on me. No signs of any
    troubles so far.

    Longterm (powered-off) backup is actually better on spinning rust; as
    SSD are somewhat more susceptible to bit-rot when powered off.

    these will always be regularly powered on even if they are
    not mounted or used. the external drive i have is a normal
    spinning rust drive.


    songbird

  • From Andy Smith@21:1/5 to songbird on Thu Jul 10 17:00:01 2025
    Hi,

    On Thu, Jul 10, 2025 at 07:07:03AM -0400, songbird wrote:
    in previous years I recall that there was some recommendation to leave
    some part of the SSD unallocated and not formatted as part of a file
    system so any parts that failed as bad blocks or wore out could be
    allocated from these unused areas.

    The purpose of this was not to reserve space for "bad" blocks as such¹
    but to increase the amount of space available for wear levelling: a
    particular flash device has a rating for the volume of writes it can
    endure in its lifetime, so if you only ever used half the capacity of
    the device you could expect that volume to be roughly doubled.

    (It won't be exactly like that because each design will have a varying
    amount of spare capacity hidden from the user for this purpose anyway.)

    This was common advice years ago when flash endurance was relatively low
    and incidences of people wearing out their SSDs were commonplace.

    When trying to see what current recommendations are for setting
    up SSDs I see no mentions of this at all? Has this changed?

    Today's SSDs, even consumer brands, have much higher endurance, and this
    sort of advice is quite complicated and consumer-hostile, so you don't
    see it any more.

    Just don't worry about it unless you have an unusually heavy write load.
    If you do then make sure to take a look at the published endurance
    figures for the particular drive. They will be quoted either as
    "Terabytes Written" (TBW) over the drive's lifetime, or as "Drive
    Writes Per Day" (DWPD); e.g. if the drive is 1TB and it is rated for
    0.5 DWPD over three years then that is about 0.5 * 1TB * 365 * 3 =
    ~548 TBW.

    You can get figures on how much you've written to flash using SMART or
    nvme-cli, often even from conventional HDDs these days. You can use
    blktrace to measure it on an ongoing, realtime basis.

    For example this is one of the Samsung 850 EVO SSDs that is in my
    desktop computer, which I use almost every day:

    $ sudo smartctl -j -A /dev/sda | \
    jq '.ata_smart_attributes.table[] |
    select(.name=="Power_On_Hours"
    or .name=="Wear_Leveling_Count"
    or .name=="Total_LBAs_Written") |
    .name, .value, .raw.value'
    "Power_On_Hours"
    86
    66088
    "Wear_Leveling_Count"
    91
    176
    "Total_LBAs_Written"
    99
    120847986783
    $ units '66088 hours' 'years'
    * 7.5392895

    So this drive has been powered on for over 7½ years and still has 91%
    write endurance remaining. An LBA on this drive is 512 bytes, so it's
    written…

    $ echo "scale=3; 120847986783 * 512 / 1024 / 1024 / 1024 / 1024" | bc
    56.274

    …TiB.

    (Just do "smartctl -A /dev/blah" to see all the attributes without the
    JSON output I used just to make it presentable in this email.)

    Pretty much my current plan for one of the SSDs would be
    to put a small EFI partition on it (as I notice the current ones
    I have hardly have anything on them even though they were
    allocated to be 1G), so that I can copy my current setup to it
    without wasting space. The existing ones use 5M or even much
    less, so perhaps 50M will be enough, allowing for future expansion?

    The recommended size of an EFI System Partition (ESP) is up for debate
    and is not related to what kind of drive you put it on:

    https://wiki.debian.org/UEFI#EFI_System_Partition_.28ESP.29_recommended_size

    The rest of the new drive will just be one large partition.

    RAID is worth it so as not to have to stop working to reinstall from
    backups.

    The 2nd new SSD will be for consolidating my backups (that are on
    a smaller SSD at the moment plus also on an external drive that is
    not used frequently - I don't trust it as it has been knocked off
    the table but until it gives up entirely it is a backup that can't
    be messed with as it is not mounted or powered on often).

    SSDs have no moving parts so withstand sudden impacts a lot better than
    HDDs do. It's probably fine.

    I don't use the discard options on the mounts or filesystems
    and also don't run fstrim automatically, I will eventually set
    this up to run monthly.

    fstrim has run by default on all Debian installs for years, so you must
    have gone out of your way to disable this. Why?

    $ systemctl status fstrim.timer
    ● fstrim.timer - Discard unused blocks once a week
    Loaded: loaded (/lib/systemd/system/fstrim.timer; enabled; preset: enabled)
    Active: active (waiting) since Fri 2025-06-13 00:38:26 BST; 3 weeks 6 days ago
    Trigger: Mon 2025-07-14 00:22:50 BST; 3 days left
    Triggers: ● fstrim.service
    Docs: man:fstrim

    Thanks,
    Andy

    ¹ It is perhaps slightly philosophical whether a memory cell is "bad"
    because it has worn out through doing an amount of writes that it was
    expected to do. In practice you can't write to it any more, but I
    don't consider that a fault. Without more context, if someone calls a
    cell bad then I think of it as becoming unexpectedly faulty.

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From Andy Smith@21:1/5 to songbird on Thu Jul 10 17:10:01 2025
    Hi,

    On Thu, Jul 10, 2025 at 08:58:10AM -0400, songbird wrote:
    Dan Purgert wrote:
    Longterm (powered-off) backup is actually better on spinning rust; as
    SSD are somewhat more susceptible to bit-rot when powered off.

    these will always be regularly powered on even if they are
    not mounted or used. the external drive i have is a normal
    spinning rust drive.

    It's on the order of 6 months or more of sitting unpowered on a shelf at
    the moment for the types of NAND memory used in SSDs, by the way.

    I don't think this aspect will improve as the focus is on getting higher densities of storage, e.g. right now you can buy 122TB NVMe drives (for
    about $12,500 USD each).

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From David Christensen@21:1/5 to songbird on Fri Jul 11 04:50:01 2025
    On 7/10/25 04:07, songbird wrote:
    hello all, some questions at last... it's been a while. :)

    I was able to get some SSD replacements and want to add them
    to my existing setup,


    Be sure to do a secure erase before you put the SSD's into service:

    https://en.wikipedia.org/wiki/Secure_Erase#Secure_erase


    but in previous years I recall that there
    was some recommendation to leave some part of the SSD unallocated
    and not formatted as part of a file system so any parts that
    failed as bad blocks or wore out could be allocated from these
    unused areas.

    When trying to see what current recommendations are for setting
    up SSDs I see no mentions of this at all? Has this changed? I've
    been trying to get caught up and seeing nothing specific to EXT4
    or Linux for SSDs.


    AIUI SSD over-provisioning combined with setting the discard flag in
    fstab(5) provides maximum performance for write intensive workloads. If
    you need that, I would suggest a dedicated SSD. For a typical graphical desktop workload, I would not worry about over-provisioning; allocate
    SSD space as needed.
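    For reference, continuous TRIM is the 'discard' mount option in
    fstab(5); a minimal sketch of such an entry, printed rather than
    installed (the UUID and mount point are placeholders):

```shell
# Print an example fstab(5) line with the 'discard' mount option.
# The UUID and mount point are placeholders, not a real configuration.
cat <<'EOF'
UUID=0000-placeholder  /home  ext4  defaults,discard  0  2
EOF
```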


    I don't do encryption or raid.

    Pretty much my current plan for one of the SSDs would be
    to put a small EFI partition on it (as I notice the current ones
    I have hardly have anything on them even though they were
    allocated to be 1G), so that I can copy my current setup to it
    without wasting space. The existing ones use 5M or even much
    less, so perhaps 50M will be enough, allowing for future
    expansion? The rest of the
    new drive will just be one large partition. It is not a heavily
    used machine or setup but I do need more space for working on
    the website and picture archives.

    I do have both Grub and Refind installed (for some Grub updates
    it will change my initial efi boot order so I have a script setup
    to change it back when needed). Refind works and does exactly
    what I want.

    Because I do run Debian testing most of the time I also plan
    on keeping my other partition where it boots stable going. I've
    only needed it a few times but I like having it there. I'll
    leave this on the smaller SSD along with the swap partition
    (which is not frequently or heavily used).


    I agree that 1 GB for the ESP seems like overkill, but I would rather
    err too large than too small.


    Rather than fighting the complexities of dual-boot/multi-boot, I prefer
    to install drive racks in my computers and to put each OS instance on
    its own drive. This reduces the complexity to BIOS/UEFI Setup settings.


    The 2nd new SSD will be for consolidating my backups (that are on
    a smaller SSD at the moment plus also on an external drive that is
    not used frequently - I don't trust it as it has been knocked off
    the table but until it gives up entirely it is a backup that can't
    be messed with as it is not mounted or powered on often).


    I would use the small SSD for Debian and the second new SSD for data.
    See below for backups.


    I don't use the discard options on the mounts or filesystems
    and also don't run fstrim automatically, I will eventually set
    this up to run monthly. I ran it recently for the first time
    after several years of use of the existing SSDs. I've not noticed
    any decline in the existing SSD speeds, etc. at all but I'm also
    not running too much that is demanding for performance.


    I run fstrim(8) monthly on my Debian SSD's before taking an image.
    Zeroes compress nicely and the drive will have plenty of erased blocks
    for the next month.


    I do like having multiple backups on the different SSDs just
    in case one of them decides to fail on me. No signs of any
    troubles so far.


    songbird


    For a single desktop, I would connect the external drive when making
    backups, disconnect it when not, and store it near-site. It would be
    best to buy another external drive and implement near-site/ off-site
    rotation.


    A related subject to consider is archiving -- burning backups to
    write-once discs periodically -- CD-R, DVD-R, BD-R, DL, XL, etc..


    David

  • From songbird@21:1/5 to Andy Smith on Fri Jul 11 19:30:02 2025
    Andy Smith wrote:
    Hi,

    On Thu, Jul 10, 2025 at 07:07:03AM -0400, songbird wrote:
    ...
    When trying to see what current recommendations are for setting
    up SSDs I see no mentions of this at all? Has this changed?
    ...
    Just don't worry about it unless you have an unusually heavy write load.

    it isn't much IMO. i do certain things to keep my
    system tidy.


    ...
    (Just do "smartcl -A /dev/blah" to see all the attributes without the
    JSON output I used just to make it presentable in this email.)

    things look ok with my existing SSDs.


    ...efi...
    The recommended size of an EFI System Partition (ESP) is up for debate
    and is not related to what kind of drive you put it on:

    https://wiki.debian.org/UEFI#EFI_System_Partition_.28ESP.29_recommended_size

    ah ok, tks.


    The rest of the new drive will just be one large partition.

    RAID is worth it so as not to have to stop working to reinstall from
    backups.

    i have the stable partition on a different device so i'm
    not expecting any downtime if i really need to get back to
    a reliable state.


    The 2nd new SSD will be for consolidating my backups (that are on
    a smaller SSD at the moment plus also on an external drive that is
    not used frequently - I don't trust it as it has been knocked off
    the table but until it gives up entirely it is a backup that can't
    be messed with as it is not mounted or powered on often).

    SSDs have no moving parts so withstand sudden impacts a lot better than
    HDDs do. It's probably fine.

    the external drive is the spinning rust kind so yes i am
    a bit more worried about it, plus there's some kind of loose
    rattling part inside (so i won't be shaking it any more if i
    can help it).


    I don't use the discard options on the mounts or filesystems
    and also don't run fstrim automatically, I will eventually set
    this up to run monthly.

    fstrim has run by default on all Debian installs for years, so you must
    have gone out of your way to disable this. Why?

    because i didn't want it to run. i do not want automatic
    much of anything so i turn them off when i can find them.
    there's quite a long list now of stuff i hated when i found
    out about it and turned it off and a few i'd like to turn
    off but can't without uninstalling most of my system. :(

    as i said i will probably eventually turn it back on for
    a once a month run, but that should be plenty for my use
    case.
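    For when that time comes, a sketch of the systemd way to do it,
    assuming the unit names on current Debian; the drop-in contents are
    printed here rather than installed:

```shell
# Print the drop-in that switches fstrim.timer from weekly to monthly.
# On a real system, put these lines in place via 'systemctl edit
# fstrim.timer', then 'systemctl enable --now fstrim.timer'.
# The empty OnCalendar= line clears the stock weekly setting first.
cat <<'EOF'
[Timer]
OnCalendar=
OnCalendar=monthly
EOF
```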

    thanks,


    songbird

  • From Me@21:1/5 to rhkramer@gmail.com on Sat Jul 12 16:40:01 2025
    On 2025-07-12 15:19, rhkramer@gmail.com wrote:

    Why do you recommend that? Are you assuming the SSDs songbird got are
    used, or do you recommend that even for new SSDs -- if so, why?
    Not the OP, but you never know what's on the disks. It wouldn't be the
    first time new disks contained unwanted "presents" straight from the
    factory. And old disks can contain all sorts of questionable stuff. You
    really don't want to know what I have found over the years when
    inspecting storage devices. Not only in the form of malware and such;
    people can sometimes have the weirdest preferences that I really don't
    need to see (but had to, to do my job). Luckily I don't have to anymore.

    Grx HdV

  • From Andy Smith@21:1/5 to All on Sat Jul 12 17:00:01 2025
    Hi,

    On Sat, Jul 12, 2025 at 03:38:11PM +0200, Me wrote:
    On 2025-07-12 15:19, rhkramer@gmail.com wrote:

    Why do you recommend that? Are you assuming the SSDs songbird got are
    used, or do you recommend that even for new SSDs -- if so, why?
    Not the OP, but you never know what's on the disks. It wouldn't be the
    first time new disks contained unwanted "presents" straight from the
    factory.

    But for brand new devices I don't care what was on it before.

    You can construct a hypothetical situation where:

    1. I buy a new storage device but am unwittingly given a refurb one
    (that has had its diagnostic attributes erased to maintain the
    illusion that it is new).
    2. For some reason law enforcement seize my computer, scan the storage
    and find something illegal that was on it already in unused space.

    I personally don't regard that a possibility worth worrying about, but
    okay for anyone that does, yes they would want to secure erase their
    storage. For NVMe they would want to be sure to select Secure Erase
    Setting 1 or 2.

    https://manpages.debian.org/bookworm/nvme-cli/nvme-format.1.en.html
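    A dry-run sketch of that (the device path is a placeholder, and the
    command is echoed rather than executed, since 'nvme format' destroys
    all data on the target):

```shell
# DANGER: 'nvme format' irreversibly erases the drive. Echoed here as
# a dry run; the device path is a placeholder. --ses=1 requests a user
# data erase, --ses=2 a cryptographic erase.
dev=/dev/nvme0n1
echo nvme format "$dev" --ses=1
```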

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From tomas@tuxteam.de@21:1/5 to Jeffrey Walton on Sat Jul 12 21:20:01 2025
    On Sat, Jul 12, 2025 at 01:03:23PM -0400, Jeffrey Walton wrote:
    On Sat, Jul 12, 2025 at 12:14 PM <rhkramer@gmail.com> wrote:

    On Thursday, July 10, 2025 10:41:18 PM David Christensen wrote:

    On 7/10/25 04:07, songbird wrote:
    [...]
    Be sure to do a secure erase before you put the SSD's into service:

    https://en.wikipedia.org/wiki/Secure_Erase#Secure_erase

    Why do you recommend that? Are you assuming the SSDs songbird got are used, or do you recommend that even for new SSDs -- if so, why?

    From <https://www.zdnet.com/article/malware-found-on-new-hard-drives/>:

    ... Practice "safe sectors" and scan, or preferably wipe, all drives
    before bringing them into the ecosystem. Don't assume that a drive is
    going to be blank and malware-free. Trust no one. Same goes for USB
    flash drives - you never know what's been installed on them.

    I have a hard time imagining how malware on a disk can do
    anything once you've put new file systems on it.

    Of course, if you mount their file systems unchanged...

    Cheers
    --
    t


  • From David Christensen@21:1/5 to rhkramer@gmail.com on Sat Jul 12 21:30:02 2025
    On 7/12/25 06:19, rhkramer@gmail.com wrote:
    On Thursday, July 10, 2025 10:41:18 PM David Christensen wrote:
    On 7/10/25 04:07, songbird wrote:
    I was able to get some SSD replacements and want to add them

    to my existing setup,

    Be sure to do a secure erase before you put the SSD's into service:

    https://en.wikipedia.org/wiki/Secure_Erase#Secure_erase

    Why do you recommend that? Are you assuming the SSDs songbird got are used, or do you recommend that even for new SSDs -- if so, why?

    Thanks!


    1. Remove any and all data, for security and legal/liability reasons.
    Only secure erase can erase factory over-provisioned and hidden blocks.

    2. Provide a starting condition for maximum write performance (with
    subsequent fstab(5) 'discard' and/or fstrim(8)).


    I would expect a new SSD to be securely erased by the factory, but would
    check this assumption (and do an informal sequential read benchmark):

    2025-07-12 12:13:02 root@laalaa ~
    # time dd if=/dev/sdb bs=1M | hexdump -C
    00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
    *
    df99e6000
    57241+1 records in
    57241+1 records out
    60022480896 bytes (60 GB, 56 GiB) copied, 204.361 s, 294 MB/s

    real 3m24.366s
    user 1m25.872s
    sys 1m17.036s
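    A quicker way to make the same all-zeros check is to let cmp stop at
    the first nonzero byte instead of dumping everything; a sketch using
    an image file as a stand-in for the device (point it at the real block
    device, e.g. /dev/sdb, instead):

```shell
# Check that a device (here an image file standing in for /dev/sdb)
# reads back as all zeros. cmp stops at the first differing byte.
dd if=/dev/zero of=disk.img bs=1M count=4 2>/dev/null   # stand-in "drive"
size=$(stat -c %s disk.img)                             # length to compare
if head -c "$size" /dev/zero | cmp -s - disk.img; then
    echo "all zeros"
else
    echo "nonzero data found"
fi
```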


    David

  • From songbird@21:1/5 to rhkramer@gmail.com on Sun Jul 13 07:00:01 2025
    rhkramer@gmail.com wrote:
    On Thursday, July 10, 2025 10:41:18 PM David Christensen wrote:
    On 7/10/25 04:07, songbird wrote:
    I was able to get some SSD replacements and want to add them

    to my existing setup,

    Be sure to do a secure erase before you put the SSD's into service:

    https://en.wikipedia.org/wiki/Secure_Erase#Secure_erase

    Why do you recommend that? Are you assuming the SSDs songbird got are used, or do you recommend that even for new SSDs -- if so, why?

    Thanks!

    beyond that, what assurance do you have, with the behind-the-scenes
    management going on in the drive, that any attempts at wiping it
    completely are actually happening?

    aside from the original manufacturer hopefully not putting
    backdoors and ET Phone Home sorts of hooks?

    i pretty much have always assumed that a new disk drive when
    it gets a new partition table and file systems created on it
    will be destroyed enough. sometimes i have written random
    data on new disks but i have no illusion that this has been
    perfect as i know some people who have been able to get a lot
    of information from disks that have been somewhat scrubbed
    as long as they weren't outright destroyed and the metals
    recycled.


    songbird

  • From tomas@tuxteam.de@21:1/5 to Jeffrey Walton on Sun Jul 13 08:10:01 2025
    On Sat, Jul 12, 2025 at 05:10:13PM -0400, Jeffrey Walton wrote:
    On Sat, Jul 12, 2025 at 3:12 PM <tomas@tuxteam.de> wrote:

    On Sat, Jul 12, 2025 at 01:03:23PM -0400, Jeffrey Walton wrote:
    On Sat, Jul 12, 2025 at 12:14 PM <rhkramer@gmail.com> wrote:

    On Thursday, July 10, 2025 10:41:18 PM David Christensen wrote:

    On 7/10/25 04:07, songbird wrote:
    [...]
    Be sure to do a secure erase before you put the SSD's into service:

    https://en.wikipedia.org/wiki/Secure_Erase#Secure_erase

    Why do you recommend that? Are you assuming the SSDs songbird got are used, or do you recommend that even for new SSDs -- if so, why?

    From <https://www.zdnet.com/article/malware-found-on-new-hard-drives/>:

    ... Practice "safe sectors" and scan, or preferably wipe, all drives
    before bringing them into the ecosystem. Don't assume that a drive is
    going to be blank and malware-free. Trust no one. Same goes for USB
    flash drives - you never know what's been installed on them.

    I have a hard time imagining how malware on a disk can do
    anything once you've put new file systems on it.

    Of course, if you mount their file systems unchanged...

    I suspect it is a bigger problem on Windows, which most malware is
    written for and where drives get automounted on insertion: <https://en.wikipedia.org/wiki/2008_malware_infection_of_the_United_States_Department_of_Defense>.

    See above: *if* the first thing you do is to make a new file
    system, the malware data will still be there, in the free
    blocks, but your (second-rate) operating system won't be able
    to access it.

    But I don't think it is limited to Windows. I recall a recent thread
    about maliciously corrupt filesystems affecting Linux:
    <https://www.openwall.com/lists/oss-security/2025/06/03/2>. The kernel
    developers would not fix it because they said users should not mount a
    corrupt filesystem. Ubuntu had to create and apply patches because it
    automounts filesystems for users.

    Again: irrelevant for a freshly made file system.

    Now if your operating system tries to mount everything it is presented
    with, that's another problem (mine doesn't).

    Cheers
    --
    t


  • From ghe2001@21:1/5 to All on Sun Jul 13 09:40:01 2025

    A few suggestions on getting rid of garbage (yours, some hacker's, or Microsoft's):

    https://www.tomshardware.com/how-to/secure-erase-ssd-or-hard-drive

    --
    Glenn English

  • From David Christensen@21:1/5 to Max Nikulin on Sun Jul 13 09:20:01 2025
    On 7/12/25 20:33, Max Nikulin wrote:
    On 11/07/2025 09:41, David Christensen wrote:
    AIUI SSD over-provisioning combined with setting the discard flag in
    fstab(5) provides maximum performance for write intensive workloads.

    Is it better than fstrim.timer mentioned in this thread?

    Some years ago there was a warning on the <https://wiki.archlinux.org/title/Solid_state_drive/NVMe>
    page that Intel did not recommend continuous TRIM aka discard. Currently there are some words against discard in <https://wiki.archlinux.org/title/Solid_state_drive#Continuous_TRIM>.

    Do you have more details on this story, what was behind Intel suggestion
    and what has changed since that time?

    As to secure erase, I have seen comments claiming that it sometimes
    helps to recover performance that has degraded after some period of
    usage. I have not tried to collect details on whether it is related to
    specific models or to low-end drives.


    I seem to recall enabling 'discard' at some point in the past and not
    noticing any difference. But I was not, and still am not, doing write intensive workloads.


    I do recall reading articles that warned against 'discard' (10 years
    ago?). I am not sure if I stopped using 'discard' because of those,
    because I reinstalled Debian, the Debian installer left out 'discard',
    and I never enabled it, or both. AFAICT I do not need 'discard', so I
    leave it disabled.


    I was not aware of systemd fstrim.timer. STFW it looks like it is
    enabled on my daily driver:

    2025-07-12 23:56:19 root@laalaa ~
    # cat /etc/debian_version ; uname -a
    11.11
    Linux laalaa 5.10.0-35-amd64 #1 SMP Debian 5.10.237-1 (2025-05-19)
    x86_64 GNU/Linux

    2025-07-13 00:01:43 root@laalaa ~
    # locate fstrim.timer
    /etc/systemd/system/timers.target.wants/fstrim.timer
    /usr/lib/systemd/system/fstrim.timer
    /var/lib/systemd/deb-systemd-helper-enabled/fstrim.timer.dsh-also
    /var/lib/systemd/deb-systemd-helper-enabled/timers.target.wants/fstrim.timer
    /var/lib/systemd/timers/stamp-fstrim.timer

    2025-07-13 00:01:45 root@laalaa ~
    # systemctl status fstrim.timer
    * fstrim.timer - Discard unused blocks once a week
    Loaded: loaded (/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
    Active: active (waiting) since Sat 2025-07-12 21:55:50 PDT; 2h 5min ago
    Trigger: Mon 2025-07-14 00:04:12 PDT; 24h left
    Triggers: * fstrim.service
    Docs: man:fstrim

    Jul 12 21:55:50 laalaa systemd[1]: Started Discard unused blocks once a
    week.


    SSD's are complex devices with many features. I expect the only way to
    know which of 'discard', fstrim.timer, or running fstrim(8) is "better"
    is to define your metrics, devise benchmarks for your actual workloads,
    and collect the data.
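
    As a starting point, a crude sequential-write micro-benchmark can be
    sketched with stock tools; the file name and size are arbitrary, and a
    real comparison of 'discard' vs. fstrim would need your actual
    workloads:

```shell
# Crude sequential-write timing sketch: write 64 MiB of zeros to a temp
# file with an fsync at the end, report the bytes written, clean up.
# Wrap the dd line in `time` to compare runs.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 conv=fsync status=none
wc -c < "$f"      # 67108864 bytes if the write completed
rm -f "$f"
```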


    David

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Christensen@21:1/5 to songbird on Sun Jul 13 10:00:02 2025
    On 7/12/25 21:46, songbird wrote:
    rhkramer@gmail.com wrote:
    On Thursday, July 10, 2025 10:41:18 PM David Christensen wrote:
    On 7/10/25 04:07, songbird wrote:
    I was able to get some SSD replacements and want to add them
    to my existing setup,

    Be sure to do a secure erase before you put the SSD's into service:

    https://en.wikipedia.org/wiki/Secure_Erase#Secure_erase

    Why do you recommend that? Are you assuming the SSDs songbird got are
    used, or do you recommend that even for new SSDs -- if so, why?

    beyond that, what assurance do you have, given the behind-the-scenes
    management the drive does, that any attempt at wiping it completely
    actually happens?

    aside from the original manufacturer hopefully not putting
    backdoors and ET Phone Home sorts of hooks?

    i pretty much have always assumed that a new disk drive when
    it gets a new partition table and file systems created on it
    will be destroyed enough. sometimes i have written random
    data on new disks but i have no illusion that this has been
    perfect as i know some people who have been able to get a lot
    of information from disks that have been somewhat scrubbed
    as long as they weren't outright destroyed and the metals
    recycled.


    songbird


    Yes, things get very bad when bad people control the SSD firmware. I
    can only hope the firmware in my SSD's is legitimate, and updates are cryptographically signed.


    When using d-i to initialize a physical volume for encryption, I have
    seen the option to fill the volume with random bytes. AIUI 'discard'
    and 'trim' would gradually defeat such security-by-obfuscation as blocks
    are erased, but it does make sense if the incremental security gain is justified. I don't do it to my SSD's because I want to save their erase cycles.


    Please clarify "somewhat scrubbed".


    David

  • From songbird@21:1/5 to All on Sun Jul 13 17:50:01 2025
    ghe2001 wrote:

    A few suggestions on getting rid of garbage (yours, some hacker's, or Microsoft's):

    https://www.tomshardware.com/how-to/secure-erase-ssd-or-hard-drive

    all good to know thanks.


    songbird

  • From songbird@21:1/5 to David Christensen on Sun Jul 13 17:50:01 2025
    David Christensen wrote:
    ...
    Yes, things get very bad when bad people control the SSD firmware. I
    can only hope the firmware in my SSD's is legitimate, and updates are cryptographically signed.


    When using d-i to initialize a physical volume for encryption, I have
    seen the option to fill the volume with random bytes. AIUI 'discard'
    and 'trim' would gradually defeat such security-by-obfuscation as blocks
    are erased, but it does make sense if the incremental security gain is justified. I don't do it to my SSD's because I want to save their erase cycles.


    Please clarify "somewhat scrubbed".

    in older days, when devices were larger and you had onsite
    engineers who knew what they were doing, they could change
    the head's position to pick up residual magnetic patterns
    from the disk.

    SSDs don't have a floating head position to fiddle with,
    but they do have bad blocks and overprovisioning that
    could perhaps leak information that wasn't intended to
    be available, but i admit i'm not at all current on any
    of this.

    i don't plan on doing anything other than creating the
    new partition table and file systems and i'm not doing
    anything on this machine i consider top security or needing
    zero after writing or any other kind of random or encrypted
    overhead. i rarely run programs i download other than the
    ones i've written or ones supplied by Debian packages.

    Samsung doesn't have a utility for Linux so i don't have
    any current tools figured out yet for anything other than
    what i've used before (fdisk, parted, dd).

    i'm still assuming that if i want to zero a disk (or
    parts of it) or to write random info i can use dd on
    the entire device (and /dev/zero and/or /dev/random or
    similar).

    out of habit i normally use sync to make sure buffers
    are written and not waiting via the io cache before i do
    anything that might affect the file systems or backups.
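
    A sketch of that zero-and-sync approach, demonstrated on a scratch
    file rather than a real device; the temp file stands in for a
    /dev/sdX target, and everything here is illustrative:

```shell
# Zero-fill a target, then flush buffered writes; the target here is a
# temp file so the sketch is safe to run as-is (substitute a real device
# only after triple-checking the name).
target=$(mktemp)
dd if=/dev/zero of="$target" bs=4k count=8 conv=fsync status=none
sync   # flush the io cache, per the habit described above
# verify: no nonzero hex byte anywhere in the file
od -An -v -tx1 "$target" | grep -q '[1-9a-f]' || echo "all zero"
rm -f "$target"
```

    For random fill, /dev/urandom can replace /dev/zero (slower, and the
    verification step no longer applies).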


    songbird

  • From songbird@21:1/5 to David Christensen on Sun Jul 13 18:40:01 2025
    David Christensen wrote:
    ...
    I would expect a new SSD to be securely erased by the factory, but would check this assumption (and do an informal sequential read benchmark):

    2025-07-12 12:13:02 root@laalaa ~
    # time dd if=/dev/sdb bs=1M | hexdump -C
    00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

    that one showed all zeroes for the entire SSD.


    songbird

  • From David Christensen@21:1/5 to songbird on Sun Jul 13 22:30:01 2025
    On 7/13/25 09:29, songbird wrote:
    David Christensen wrote:
    ...
    I would expect a new SSD to be securely erased by the factory, but would
    check this assumption (and do an informal sequential read benchmark):

    2025-07-12 12:13:02 root@laalaa ~
    # time dd if=/dev/sdb bs=1M | hexdump -C
    00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

    that one showed all zeroes for the entire SSD.


    songbird


    Yes.


    David

  • From David Christensen@21:1/5 to David Christensen on Mon Jul 14 04:00:01 2025
    On 7/13/25 13:23, David Christensen wrote:
    `dd if=/dev/zero bs=1M /dev/sdX`

    I apologize -- that command is wrong, in more than one way. Here is an
    console session from when I zeroed a 1 TB HDD:

    1. Find the number of sectors:

    2024-11-28 13:59:57 root@bullseye-bios ~
    # parted /dev/disk/by-id/ata-TOSHIBA_DT01ACA100_***REDACTED*** u s p free
    Model: ATA TOSHIBA DT01ACA1 (scsi)
    Disk /dev/sdc: 1953525168s
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    Disk Flags:

    Number  Start        End          Size         File system  Name                          Flags
     1      34s          32767s       32734s                    Microsoft reserved partition  msftres
     2      32768s       1953521663s  1953488896s  ntfs         Basic data partition          msftdata
            1953521664s  1953525134s  3471s                     Free Space

    2. Factor the number of sectors:

    2024-11-28 14:00:40 root@bullseye-bios ~
    # factor 1953525168
    1953525168: 2 2 2 2 3 3 3 7 547 1181

    3. Find a suitable block size and count so that the zero-fill is exact
    with no fractional blocks:

    2024-11-28 14:01:51 root@bullseye-bios ~
    # perl -e 'print 512*2*2*2*2*3*3*3*7, $/, 547*1181, $/'
    1548288
    646007

    4. Zero-fill, making sure that input blocks are full:

    2024-11-28 14:02:16 root@bullseye-bios ~
    # time dd bs=1548288 count=646007 if=/dev/zero of=/dev/disk/by-id/ata-TOSHIBA_DT01ACA100_***REDACTED*** status=progress iflag=fullblock
    1000177016832 bytes (1.0 TB, 931 GiB) copied, 6414 s, 156 MB/s
    646007+0 records in
    646007+0 records out
    1000204886016 bytes (1.0 TB, 932 GiB) copied, 6433.63 s, 155 MB/s

    real 107m13.633s
    user 0m0.000s
    sys 6m36.420s
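
    The arithmetic in steps 2-3 can be sanity-checked before running dd:
    the chosen bs and count must multiply out to exactly sectors * 512
    bytes, so no fractional block is left at the end of the disk.

```shell
# Verify that bs * count covers the whole 1 TB HDD exactly.
sectors=1953525168
bs=1548288     # 512 * 2*2*2*2 * 3*3*3 * 7
count=646007   # 547 * 1181
echo $(( bs * count ))     # bytes dd will write
echo $(( sectors * 512 ))  # bytes on the disk
[ $(( bs * count )) -eq $(( sectors * 512 )) ] && echo "exact fit"
```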


    David

  • From songbird@21:1/5 to David Christensen on Mon Jul 14 12:40:01 2025
    David Christensen wrote:
    On 7/13/25 13:23, David Christensen wrote:
    `dd if=/dev/zero bs=1M /dev/sdX`

    I apologize -- that command is wrong, in more than one way. Here is an console session from when I zeroed a 1 TB HDD:
    ...

    i didn't zero mine out since it was already zeroed. but
    thank you for the effort as maybe someone else will see
    your post. :)

    at the moment i'm waiting for some new cables as several
    of mine are bent at the end and i can't plug in all that i
    need.


    songbird

  • From Borden@21:1/5 to All on Mon Jul 14 20:40:01 2025
    My unsolicited, unprofessional, free advice:

    > Is it better than fstrim.timer mentioned in this thread?
    >
    > Some years ago there was a warning on the
    > <https://wiki.archlinux.org/title/Solid_state_drive/NVMe>
    > page that Intel did not recommend continuous TRIM aka discard.
    > Currently there are some words against discard in
    > <https://wiki.archlinux.org/title/Solid_state_drive#Continuous_TRIM>.
    >
    > Do you have more details on this story, what was behind Intel's
    > suggestion and what has changed since that time?

    I just set 'ssd' in fstab options and leave it. From my reading (which
    I unfortunately can't cite and could be wrong), when SSD was a newish
    technology, file and operating systems had to play catchup. To my
    understanding, that's mostly been done, so the default SSD options
    ought to be sufficient.

    Like, I had to tell my Windows clients years ago not to defragment
    their SSD, because it would cause unnecessary writes. Microsoft has
    since updated their defragment command not to do that (and they've
    further renamed it as 'optimise'), so now I can tell my clients to
    optimise however often they like. I understand that Linux is much the
    same, if not a little ahead of the curve.

    My understanding is that setting constant fstrim and/or discard just
    hogs bandwidth, as the firmware busily clears writeable memory instead
    of doing what you want it to do.

    > As to secure erase, I have seen comments claiming that sometimes it
    > helps to recover performance degraded after some period of usage. I
    > have not tried to collect details if it is related to specific models
    > or to low end drives.

    I don't understand that logic. I've been told to believe that SSDs
    have finite write cycles, so you don't want to change a 1 to a 0 if
    you'll only have to change it back to a 1 again when you save
    something. I've also read that the firmware tries to optimise writes,
    so abstracts access to the hardware. Therefore, where you can tell a
    HDD to "fetch sector X of platter Y," there's no analogue in an SSD.
    The file system just says "fetch me address Z," and the firmware
    figures out which cell of which chip by magic.

    Furthermore, I understand that telling an SSD to write a terabyte of
    zeros to the drive may not actually write a terabyte of addresses, as
    the SSD firmware will try to economise writes to reduce wear. I
    understand that the only way to wipe an SSD securely is to re-encrypt
    the drive, as the SSD will then take the protective measures to make
    the data inaccessible without a password. But, please, I encourage the
    people here who are smarter than I am to correct any of my factual
    inaccuracies.

    > Today's SSDs, even consumer brands, have much higher endurance, and
    > this sort of advice is quite complicated and consumer-hostile, so you
    > don't see it any more.

    I have a fairly inexpensive Crucial 2TB that I do all my living on. It
    was a Samsung but their customer service was horrific so I'm boycotting
    them now. It runs at about 80% full. Aside from disabling swap, I use
    my computer without considering writes. I regularly download, delete,
    change my mind, download the same thing again, write a script that
    outputs to a file, then run the same command dozens of times as I
    debug it, and so on. I even had BOINC running on it until recently.
    Nevertheless, I check the SMART readout every so often. It's down to
    about 96% health after about 2-3 years of regular use. At this rate,
    I'll probably have to replace the drive in about 40 years. So I'm not
    losing sleep, and neither should you.

  • From David Christensen@21:1/5 to Borden on Mon Jul 14 22:30:01 2025
    Somehow, the newlines in your post got lost (?).


    On 7/14/25 11:12, Borden wrote:

    2025-07-14 11:37:17 dpchrist@laalaa ~
    $ perl -pe 's/>/\n>>/g' foo | perl -pe 's/\. ?/.\n> /g'
    My unsolicited, unprofessional, free advice:
    Is it better than fstrim.timer mentioned in this thread?

    Some years ago there was a warning on the
    <https://wiki.archlinux.org/title/Solid_state_drive/NVMe>

    page that Intel did not recommend continuous TRIM aka discard.
    Currently there are some words against discard in
    <https://wiki.archlinux.org/title/Solid_state_drive#Continuous_TRIM>.



    Do you have more details on this story, what was behind Intel
    suggestion and what has changed since that time?

    I just set 'ssd' in fstab options and leave it.
    From my reading (which I unfortunately can't cite and could be wrong)
    when SSD was a newish technology, file and operating systems had to play catchup.
    To my understanding, that's mostly been done, so the default SSD
    options ought to be sufficient.
    Like, I had to tell my Windows clients years ago not to defragment
    their SSD, because it would cause unnecessary writes.
    Microsoft has since updated their defragment command not to do that
    (and they've further renamed it as 'optimise') so now I can tell my
    clients to optimise however often they like.
    I understand that Linux is much the same, if not a little ahead of
    the curve.
    My understanding is that setting constant fstrim and/or discard just
    hogs bandwidth, as the firmware busily clears writeable memory instead
    of doing what you want it to do.


    I expect that depends upon the drive hardware and firmware; notably
    parallelism -- can the drive controller tell the hardware to read one
    block, write another, and erase a third, all in parallel? Two in
    parallel? Or, one at a time (sequential)? Assuming operations are
    queued, what are the priority policies? STFW "ssd architecture"
    produces some interesting articles:

    https://html.duckduckgo.com/html?q=ssd%20architecture


    As to secure erase, I have seen comments claiming that sometimes it
    helps to recover performance degraded after some period of usage. I
    have not tried to collect details if it is related to specific
    models or to low end drives.


    I have some old Samsung UM410 16 GB SSD's. AFAICT they do not support
    discard or trim.


    I don't understand that logic.
    I've been told to believe that SSDs have finite write cycles, so you
    don't want to change a 1 to a 0 if you'll only have to change it back to
    a 1 again when you save something.
    I've also read that the firmware tries to optimise writes, so
    abstracts access to the hardware.
    Therefore, where you can tell a HDD to "fetch sector X of platter Y,"
    there's no analogue in an SSD.
    The file system just says "fetch me address Z," and the firmware
    figures out which cell of which chip by magic.
    Furthermore, I understand that telling an SSD to write a terabyte of
    zeros to the drive may not actually write a terabyte of addresses, as
    the SSD firmware will try to economise writes to reduce wear.
    I understand that the only way to wipe an SSD securely is to
    re-encrypt the drive, as the SSD will then take the protective measures
    to make the data inaccessible without a password.
    But, please, I encourage the people here who are smarter than I am to
    correct any of my factual inaccuracies.

    Today's SSDs, even consumer brands, have much higher endurance, and
    this sort of advice is quite complicated and consumer-hostile, so you
    don't see it any more.
    I have a fairly inexpensive Crucial 2TB that I do all my living on.
    It was a Samsung but their customer service was horrific so I'm
    boycotting them now.
    It runs at about 80% full.
    Aside from disabling swap, I use my computer without considering writes.


    It has been a while, but I found that disabling swap with Linux caused
    the desktop to fall apart under memory pressure as processes were killed
    off. So, now I allocate a 1 GB swap partition.


    I regularly download, delete, change my mind, download the same thing
    again, write a script that outputs to a file, then run the same command
    dozens of times as I debug it, and so on.
    I even had BOINC running on it until recently.
    Nevertheless, I check the SMART readout every so often.
    It's down to about 96% health after about 2-3 years of regular use.
    At this rate, I'll probably have to replace the drive in about 40 years.
    So I'm not losing sleep, and neither should you.


    The consensus appears to be that continuous erasure (e.g. fstab(5)
    "discard" option) is undesirable and that periodic erasure (e.g.
    fstrim.timer or fstrim(8)) is preferred.


    And there are plenty of other possibilities if you care to STFW,
    including the fstab(5) / mount(8) options "noatime", "nodiratime",
    "relatime", and "commit", RAM disks, sync tools, etc.
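
    Concretely, such tuning might look like the sketch below. The UUID is
    a made-up placeholder and the option choices are illustrative, not a
    recommendation for any particular drive:

```shell
# Illustrative /etc/fstab entry: relatime to limit atime writes, and no
# 'discard' option, so erasure is handled periodically instead.
#
#   UUID=abcd-1234  /  ext4  relatime,errors=remount-ro  0  1
#
# Enable weekly periodic TRIM (the unit ships with util-linux on Debian):
#   systemctl enable --now fstrim.timer
# Or trim all mounted filesystems once, by hand:
#   fstrim -av
```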


    David

  • From Andy Smith@21:1/5 to Greg on Wed Jul 16 21:30:02 2025
    Hi,

    On Wed, Jul 16, 2025 at 04:31:08PM -0000, Greg wrote:
    On 2025-07-12, Andy Smith <andy@strugglers.net> wrote:
    But for brand new devices I don't care what was on it before.

    You can construct a hypothetical situation where:

    1. I buy a new storage device but am unwittingly given a refurb one
    (that has had its diagnostic attributes erased to maintain the
    illusion that it is new).
    2. For some reason law enforcement seize my computer, scan the storage
    and find something illegal that was on it already in unused space.

    https://www.mdpi.com/2076-3417/12/12/5928?utm_source=chatgpt.com

    This one concerns USB storage bought on the TradeMe web site, which
    appears to be a marketplace a bit like eBay. I didn't thoroughly read it
    but I didn't spot anywhere that clarified whether they sought out drives described as brand new or not. The fact that they bought 17 and found
    data on 15 suggests to me that they just bought second hand storage, so
    not at all surprising.

    https://www.reddit.com/r/privacy/comments/181241a/amazon_sold_me_a_drive_it_came_with_data_on_it/?utm_source=chatgpt.com

    https://indiandefencereview.com/a-man-bought-a-new-hard-drive-but-upon-plugging-it-in-he-discovered-800gb-of-files-worth-thousands/?utm_source=chatgpt.com

    If either of the above stories are true (note they do both come from
    reddit) they read like advice to check that storage devices that you buy
    as new actually *are* new.

    If any of that happened to me then I would be getting a refund for it
    not being as described, not carrying on using it. None of it persuades
    me to secure erase new storage devices, "just in case." But if it makes
    someone happy, I suppose there are worse ways for them to spend their
    time.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting
