• distinction between disk image and disk clone

    From bilsch01@21:1/5 to All on Wed Feb 2 11:46:39 2022
    Q1: I have been using the dd copy command to make a byte-by-byte copy of my
    HDD or SSD so I can restore it if necessary. Is that byte-by-byte copy
    called an image or a clone?
    Q2: I assume that terminology extends to a byte-by-byte copy of a
    partition - I mean you would use the same word to describe a byte-by-byte
    copy of a partition, right?
    Q3: There are backup programs (e.g. Macrium) that create special files which
    the program can later use to restore a partition or disk to its previous
    working condition, but the files are quite different from a byte-by-byte
    copy. What are those files called?

    TIA Bill S.

  • From Paul@21:1/5 to All on Wed Feb 2 18:22:01 2022
    On 2/2/2022 2:46 PM, bilsch01 wrote:
    Q1: I have been using the dd copy command to make a byte-by-byte copy of my HDD or SSD so I can restore it if necessary. Is that byte-by-byte copy called an image or a clone?
    Q2: I assume that terminology extends to a byte by byte copy of a partition - I mean you would use the same word to describe a byte by byte copy of a partition, right?
    Q3: There are backup programs (macrium) that create special files which the program can later use to restore a partition or disk to its previous working condition, but the files are quite different from a byte by byte copy. What are those files called?

    TIA    Bill S.

    sudo dd if=/dev/sda of=disk.img # imaging

    sudo dd if=/dev/sda1 of=partition.img # imaging

    macrium , SDA , disk.mrimg # imaging (to a file) proprietary format
    ^^^

    macrium , disk.mrimg , SDA # image restoration
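
    The dd equivalent of that restoration is just the first copy reversed,
    something like (same file and device names as above):

    sudo dd if=disk.img of=/dev/sda # raw image restoration, writes the saved image back over the whole disk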

    *******************

    sudo dd if=/dev/sda of=/dev/sdb # clone

    macrium , SDA , SDB # clone (disk to disk)

    Paul

  • From stepore@21:1/5 to All on Wed Feb 2 19:57:44 2022
    On 02/02/2022 11:46 AM, bilsch01 wrote:
    Q1: I have been using the dd copy command to make a byte-by-byte copy of my
    HDD or SSD so I can restore it if necessary. Is that byte-by-byte copy
    called an image or a clone?

    You _may_ want to consider using something like clonezilla or just
    partclone directly instead of dd.

    And by _may_ I mean dd is great. Love it. If you like it, keep using it.

    But for some large disks or partitions it could take a long time to
    clone. dd doesn't understand filesystems or data. It copies all blocks including unused space on a disk or partition. partclone copies only
    used blocks of a disk or partition. It _could_ be much faster (depending
    on disk/partition size and data).
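
    For an ext4 partition, the save/restore pair looks roughly like this
    (swap partclone.ext4 and the device name for whatever filesystem and
    partition you actually have):

    sudo partclone.ext4 -c -s /dev/sda1 -o sda1.pimg # save only the used blocks
    sudo partclone.ext4 -r -s sda1.pimg -o /dev/sda1 # restore them later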

    If you already knew all that, please ignore.

  • From bilsch01@21:1/5 to stepore on Wed Feb 2 21:48:28 2022
    On 2/2/2022 7:57 PM, stepore wrote:
    On 02/02/2022 11:46 AM, bilsch01 wrote:
    Q1: I have been using the dd copy command to make a byte-by-byte copy of my
    HDD or SSD so I can restore it if necessary. Is that byte-by-byte copy
    called an image or a clone?

    You _may_ want to consider using something like clonezilla or just
    partclone directly instead of dd.

    And by _may_ I mean dd is great. Love it. If you like it, keep using it.

    But for some large disks or partitions it could take a long time to
    clone. dd doesn't understand filesystems or data. It copies all blocks including unused space on a disk or partition. partclone copies  only
    used blocks of a disk or partition. It _could_ be much faster (depending
    on disk/partition size and data).

    If you already knew all that, please ignore.

    I was booting the PC with a USB thumb drive containing a Linux system
    and dd copying the PC's 256 GB drive to an attached external 2TB USB
    drive. I want to get away from using Linux dd because it's so tedious. I
    want to do a backup using Windows. Now I'm interested to run Macrium
    Free from the Windows partition of the PC drive and create a clone of
    the PC drive on the 2TB external drive. But I have the belief that
    Windows changes the contents of its host partition a little bit as it
    runs, so I don't really understand how it can clone the drive unless it
    completely refrains from doing that. I'll assume it completely refrains
    from modifying the drive as it runs.

    But what good is this kind of a backup anyway if you have to run
    Windows/Macrium to restore it? What if the Windows system on the PC is
    messed up and can't run? The only way I can visualize doing this is by
    booting a Linux system on a thumb drive with the clone or image on an
    external USB drive.

    This brings me to my main question: Why does it take 2100 seconds to
    image a 256 GB (2048 Gbits) drive if:

    the port on the PC is USB 3.2 Gen 1 and
    the port on the 2TB USB is USB 3.0

    The max speed for those interfaces is about 5 Gbits/sec.

    2048/5 = 410 sec, way less than 2100 sec

    TIA. Bill S.

  • From Dan Purgert@21:1/5 to All on Thu Feb 3 11:43:25 2022
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA512

    bilsch01 wrote:
    [...]
    This brings me to my main question: Why does it take 2100 seconds to
    image a 256 GB (2048 Gbits) drive if:

    the port on the PC is USB 3.2 Gen 1 and
    the port on the 2TB USB is USB 3.0

    The max speed for those interfaces is about 5 Gbits/sec.

    2048/5 = 410 sec, way less than 2100 sec

    You don't seem to be accounting for any read/write limitations in the
    hard drives, or your internal SATA bus (assuming the internal drive is
    SATA).

    So - SATA3 -- 600MB/s throughput. Absolute minimum time to pull the
    entire HDD into RAM is 430 seconds. This would require 256 GiB of RAM available, plus additional RAM for standard O/S needs, plus write
    buffers for sending to USB. For the sake of discussion, let's just say
    we have that.
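
    (Back-of-the-envelope, in shell arithmetic:

    echo $(( 256 * 1024 / 600 )) # prints 436: roughly 430-odd seconds to read 256 GiB once at a ~600 MB/s ceiling

    which is where the 430 figure comes from.)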

    Now we're at t=430, and send the signal to USB to kick it out the door
    to our USB-connected SATA drive. Well, there goes another 430 seconds (absolute minimum) because, again our external HDD is ultimately SATA3.

    Okay, we're up to 860 seconds, absolute bare minimum. But ...

    - your laptop doesn't have 512 GiB of RAM. So we're reading in smaller
    chunks
    - your external drive has a small buffer (maybe half a gig), So we're
    writing in smaller chunks
    - your CPU isn't spending 100% of its time focused on the task, so we
    have to wait in between each cycle of reading/writing however much
    data
    - other overheads or sources of delay.

    A big source of delay with 'dd' is selecting the wrong block size.
    IIRC, it defaults to 1024 B (K?), which means lots of small
    reads/writes. Changing that to a larger size can provide for some
    decent improvements (e.g. I usually use BS=4M when imaging USB drives
    from an installer ISO). Note that changing the block size tends to have
    a bit of a curve (i.e. 16M may prove better than 4M; but 32M may
    fall somewhere in between 4 and 16).
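
    Something along these lines, where sdX is whatever the target really is
    and installer.iso stands in for the image being written:

    sudo dd if=installer.iso of=/dev/sdX bs=4M status=progress conv=fsync # bigger blocks, live rate readout, flush before exiting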



    --
    |_|O|_| Github: https://github.com/dpurgert
    |_|_|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860
    |O|O|O|

  • From Paul@21:1/5 to All on Thu Feb 3 07:15:33 2022
    On 2/3/2022 12:48 AM, bilsch01 wrote:
    On 2/2/2022 7:57 PM, stepore wrote:
    On 02/02/2022 11:46 AM, bilsch01 wrote:
    Q1: I have been using the dd copy command to make a byte-by-byte copy of my
    HDD or SSD so I can restore it if necessary. Is that byte-by-byte copy
    called an image or a clone?

    You _may_ want to consider using something like clonezilla or just partclone directly instead of dd.

    And by _may_ I mean dd is great. Love it. If you like it, keep using it.

    But for some large disks or partitions it could take a long time to clone. dd doesn't understand filesystems or data. It copies all blocks including unused space on a disk or partition. partclone copies only used blocks of a disk or partition. It _could_ be much faster (depending on disk/partition size and data).

    If you already knew all that, please ignore.

    I was booting the PC with a USB thumb drive containing a Linux system and dd copying the PC's 256 GB drive to an attached external 2TB USB drive. I want to get away from using Linux dd because it's so tedious. I want to do a backup using Windows. Now I'm interested to run Macrium Free from the Windows partition of the PC drive and create a clone of the PC drive on the 2TB external drive. But I have the belief that Windows changes the contents of its host partition a little bit as it runs, so I don't really understand how it can clone the drive unless it completely refrains from doing that. I'll assume it completely refrains from modifying the drive as it runs.

    But what good is this kind of a backup anyway if you have to run Windows/Macrium to restore it? What if the Windows system on the PC is messed up and can't run? The only way I can visualize doing this is by booting a Linux system on a thumb drive with the clone or image on an external USB drive.

    This brings me to my main question: Why does it take 2100 seconds to image a 256 GB (2048 Gbits) drive if:

    the port on the PC is USB 3.2 Gen 1 and
    the port on the 2TB USB is USB 3.0

    The max speed for those interfaces is about 5 Gbits/sec.

    2048/5 = 410 sec, way less than 2100 sec

    TIA.    Bill S.


    Macrium backs up FAT32, NTFS, ExFAT, and... EXT partitions.

    Macrium software resides on Windows C: but it also resides
    on the Macrium Emergency Boot CD. You were supposed to make
    one of those, first thing. If the Windows OS disappears in a
    flood or fire, the Macrium CD allows you to do restores as desired.

    Macrium can do "hot" backups because of the Volume Shadow Copy Service (VSS).
    An image of C: is "frozen instantly". It's a Windows function.
    And you don't even need Macrium to do that. Some people freeze
    copies of C: for their own daily usage. Volume snapshots
    rely on a ten second "quiescence" period, where the OS asks
    programs to tidy up, if tidying up is easy, and there is
    a Provider function available for such usage.

    Things that don't quiesce properly, would be Exchange Server
    or perhaps Search Indexer. Macrium does not back up the
    Search Indexer Windows.edb database, but... who cares anyway :-)
    After a restore, you really want the Search Indexer to
    re-index the disk and make a new db.

    Only a couple of things are missing from a volume snapshot,
    and usually that's not a problem.

    There are a large number of Windows Backup products, a few of which
    offer free (subset) versions. And virtually all of those use
    VSS.

    When you boot the Macrium CD, it doesn't use VSS at that time, and
    that's because C: is not "busy" when the CD is the OS. When the
    CD provides the OS, the CD system partition is called X: while
    C: is just another partition waiting to be restored.

    You can do backup, restore, clone from the CD. Just as you can
    do backup, restore, clone from the C:\Program Files copy of the
    program.

    Macrium only backs up the parts of the disk that have data.
    For a 1TB drive with 20GB of data, the .mrimg file will be about 20GB.
    If you enable compression during the backup, the .mrimg might
    end up at 16GB instead. The .mrimg compressor is a lightweight
    one, and hardly compresses worth a darn. It's probably not
    even as good as GZIP.
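
    (For comparison, a raw dd image can be piped through gzip on the way
    out, something like:

    sudo dd if=/dev/sda bs=8192 | gzip > disk.img.gz # compress the raw image as it is written

    at the cost of extra CPU time.)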

    Macrium backup speed is limited by the need to compute a
    hash over the entire backup. That is used to detect
    corruption later. If you had two NVMe drives and were
    backing up one to the other, Macrium will never get even
    remotely close to the 7GB/sec of the best NVMe drives.
    It runs more in the neighborhood of HDD speeds.
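
    You can get the same sort of corruption check by hand for a raw dd
    image, something like:

    sha256sum disk.img > disk.img.sha256 # record a checksum next to the image
    sha256sum -c disk.img.sha256 # later: verify the image still matches

    with disk.img being the raw image file from earlier in the thread.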

    Paul

  • From Paul@21:1/5 to Dan Purgert on Thu Feb 3 09:47:45 2022
    On 2/3/2022 6:43 AM, Dan Purgert wrote:
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA512

    bilsch01 wrote:
    [...]
    This brings me to my main question: Why does it take 2100 seconds to
    image a 256 GB (2048 Gbits) drive if:

    the port on the PC is USB 3.2 Gen 1 and
    the port on the 2TB USB is USB 3.0

    The max speed for those interfaces is about 5 Gbits/sec.

    2048/5 = 410 sec, way less than 2100 sec

    You don't seem to be accounting for any read/write limitations in the harddrives, or your internal SATA bus (assuming the internal drive is
    SATA)

    So - SATA3 -- 600MB/s throughput. Absolute minimum time to pull the
    entire HDD into RAM is 430 seconds. This would require 256 GiB of RAM available, plus additional RAM for standard O/S needs, plus write
    buffers for sending to USB. For the sake of discussion, let's just say
    we have that.

    Now we're at t=430, and send the signal to USB to kick it out the door
    to our USB-connected SATA drive. Well, there goes another 430 seconds (absolute minimum) because, again our external HDD is ultimately SATA3.

    Okay, we're up to 860 seconds, absolute bare minimum. But ...

    - your laptop doesn't have 512 GiB of RAM. So we're reading in smaller
    chunks
    - your external drive has a small buffer (maybe half a gig), So we're
    writing in smaller chunks
    - your CPU isn't spending 100% of its time focused on the task, so we
    have to wait in between each cycle of reading/writing however much
    data
    - other overheads or sources of delay.

    A big source of delay with 'dd' is selecting the wrong block size.
    IIRC, it defaults to 1024 B (K?), which means lots of small
    reads/writes. Changing that to a larger size can provide for some
    decent improvements (e.g. I usually use BS=4M when imaging USB drives
    from an installer ISO). Note that changing the block size tends to have
    a bit of a curve (i.e. 16M may prove better than 4M; but 32M may
    fall somewhere in between 4 and 16).


    Absolutely.

    dd without a block size specified, uses 512 bytes for BlockSize,
    and the IO rate under those conditions is interrupt limited to around
    13MB to 39MB per second.

    sudo dd if=/dev/sda of=mydisk.img # defaults to bs=512 and at best 39MB/sec on HDD

    Even a relatively small block size, on a modern drive, is
    sufficient to run it at max.

    sudo dd if=/dev/sda of=mydisk.img bs=8192 # modern HDD runs 200MB/sec, their cache really works

    On legacy drives, you can try a value like this. Older
    HDDs like a larger block size. Check that 221184 divides into
    the device size in bytes.

    sudo dd if=/dev/sda of=mydisk.img bs=221184 # old drives don't use their cache chip!
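
    A quick way to check the divisibility, assuming the device is /dev/sda:

    sudo blockdev --getsize64 /dev/sda # device size in bytes
    echo $(( $(sudo blockdev --getsize64 /dev/sda) % 221184 )) # prints 0 if 221184 divides evenly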

    One of the dd-like commands actually does test reads and adjusts
    the transfer size for the condition it finds. But the regular /bin/dd
    doesn't have that feature.

    *******

    If you put a really slow hard drive, into a USB3 enclosure,
    the resulting speed will be limited by the really slow hard drive.

    Paul

  • From Dan Purgert@21:1/5 to All on Thu Feb 3 17:12:08 2022
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA512

    bilsch01 wrote:
    On 2/3/2022 6:47 AM, Paul wrote:
    On 2/3/2022 6:43 AM, Dan Purgert wrote:
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA512

    bilsch01 wrote:
    [...]
    This brings me to my main question: Why does it take 2100 seconds to
    image a 256 GB (2048 Gbits) drive if:

    the port on the PC is USB 3.2 Gen 1 and
    the port on the 2TB USB is USB 3.0

    The max speed for those interfaces is about 5 Gbits/sec.

    2048/5 = 410 sec, way less than 2100 sec
    [...]

    A big source of delay with 'dd' is selecting the wrong block size.
    IIRC, it defaults to 1024 B (K?), which means lots of small
    reads/writes.  Changing that to a larger size can provide for some
    decent improvements (e.g. I usually use BS=4M when imaging USB drives
    from an installer ISO).  Note that changing the block size tends to have
    a bit of a curve (i.e. 16M may prove better than 4M; but 32M may
    fall somewhere in between 4 and 16).


    Absolutely.

    dd without a block size specified, uses 512 bytes for BlockSize,
    [...]
    If you put a really slow hard drive, into a USB3 enclosure,
    the resulting speed will be limited by the really slow hard drive.

    2100 sec for dd copy of 256 GB using bs=4096
    2651 sec for dd copy of 256 GB using bs=4M
    same usb interfaces both times

    Sounds like one drive or the other isn't as fast as you think it is
    then.

    Are they both ACTUALLY SSDs?
    Or is it that your "2T USB 3" drive happens to be spinning rust in a
    pretty enclosure?



    --
    |_|O|_| Github: https://github.com/dpurgert
    |_|_|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860
    |O|O|O|

  • From bilsch01@21:1/5 to Paul on Thu Feb 3 08:59:45 2022
    On 2/3/2022 6:47 AM, Paul wrote:
    On 2/3/2022 6:43 AM, Dan Purgert wrote:
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA512

    bilsch01 wrote:
    [...]
    This brings me to my main question: Why does it take 2100 seconds to
    image a 256 GB (2048 Gbits) drive if:

    the port on the PC is USB 3.2 Gen 1 and
    the port on the 2TB USB is USB 3.0

    The max speed for those interfaces is about 5 Gbits/sec.

    2048/5 = 410 sec, way less than 2100 sec

    You don't seem to be accounting for any read/write limitations in the
    harddrives, or your internal SATA bus (assuming the internal drive is
    SATA)

    So - SATA3 -- 600MB/s throughput.  Absolute minimum time to pull the
    entire HDD into RAM is 430 seconds.  This would require 256 GiB of RAM
    available, plus additional RAM for standard O/S needs, plus write
    buffers for sending to USB.  For the sake of discussion, let's just say
    we have that.

    Now we're at t=430, and send the signal to USB to kick it out the door
    to our USB-connected SATA drive.  Well, there goes another 430 seconds
    (absolute minimum) because, again our external HDD is ultimately SATA3.

    Okay, we're up to 860 seconds, absolute bare minimum.  But ...

      - your laptop doesn't have 512 GiB of RAM.  So we're reading in smaller
        chunks
      - your external drive has a small buffer (maybe half a gig), So we're
        writing in smaller chunks
      - your CPU isn't spending 100% of its time focused on the task, so we
        have to wait in between each cycle of reading/writing however much
        data
      - other overheads or sources of delay.

    A big source of delay with 'dd' is selecting the wrong block size.
    IIRC, it defaults to 1024 B (K?), which means lots of small
    reads/writes.  Changing that to a larger size can provide for some
    decent improvements (e.g. I usually use BS=4M when imaging USB drives
    from an installer ISO).  Note that changing the block size tends to have
    a bit of a curve (i.e. 16M may prove better than 4M; but 32M may
    fall somewhere in between 4 and 16).


    Absolutely.

    dd without a block size specified, uses 512 bytes for BlockSize,
    and the IO rate under those conditions is interrupt limited to around
    13MB to 39MB per second.

       sudo dd if=/dev/sda of=mydisk.img            # defaults to bs=512 and at best 39MB/sec on HDD

    Even a relatively small block size, on a modern drive, is
    sufficient to run it at max.

       sudo dd if=/dev/sda of=mydisk.img bs=8192    # modern HDD runs 200MB/sec, their cache really works

    On legacy drives, you can try a value like this. Older
    HDD like a larger block. Check that 221184 divides into
    the device size in bytes.

       sudo dd if=/dev/sda of=mydisk.img bs=221184  # old drives don't use their cache chip!

    One of the dd-like commands, actually does test reads and adjusts
    the transfer size for the condition it finds. But the regular /bin/dd
    doesn't have that feature.

    *******

    If you put a really slow hard drive, into a USB3 enclosure,
    the resulting speed will be limited by the really slow hard drive.

         Paul





    2100 sec for dd copy of 256 GB using bs=4096
    2651 sec for dd copy of 256 GB using bs=4M
    same USB interfaces both times

  • From bilsch01@21:1/5 to Dan Purgert on Thu Feb 3 09:26:30 2022
    On 2/3/2022 9:12 AM, Dan Purgert wrote:
    bilsch01 wrote:
    On 2/3/2022 6:47 AM, Paul wrote:
    On 2/3/2022 6:43 AM, Dan Purgert wrote:
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA512

    bilsch01 wrote:
    [...]
    This brings me to my main question: Why does it take 2100 seconds to
    image a 256 GB (2048 Gbits) drive if:

    the port on the PC is USB 3.2 Gen 1 and
    the port on the 2TB USB is USB 3.0

    The max speed for those interfaces is about 5 Gbits/sec.

    2048/5 = 410 sec, way less than 2100 sec
    [...]

    A big source of delay with 'dd' is selecting the wrong block size.
    IIRC, it defaults to 1024 B (K?), which means lots of small
    reads/writes. Changing that to a larger size can provide for some
    decent improvements (e.g. I usually use BS=4M when imaging USB drives
    from an installer ISO). Note that changing the block size tends to have
    a bit of a curve (i.e. 16M may prove better than 4M; but 32M may
    fall somewhere in between 4 and 16).


    Absolutely.

    dd without a block size specified, uses 512 bytes for BlockSize,
    [...]
    If you put a really slow hard drive, into a USB3 enclosure,
    the resulting speed will be limited by the really slow hard drive.

    2100 sec for dd copy of 256 GB using bs=4096
    2651 sec for dd copy of 256 GB using bs=4M
    same usb interfaces both times

    Sounds like one drive or the other isn't as fast as you think it is
    then.

    Are they both ACTUALLY SSDs?
    Or is it that your "2T USB 3" drive happens to be spinning rust in a
    pretty enclosure?


    The internal 256 GB SSD has an NVMe interface.
    The external 2TB USB drive is a spinning drive, p/n WDBKUZ0020BBK-UB.

  • From bilsch01@21:1/5 to Paul on Thu Feb 3 09:17:15 2022
    On 2/3/2022 6:47 AM, Paul wrote:
    On 2/3/2022 6:43 AM, Dan Purgert wrote:
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA512

    bilsch01 wrote:
    [...]
    This brings me to my main question: Why does it take 2100 seconds to
    image a 256 GB (2048 Gbits) drive if:

    the port on the PC is USB 3.2 Gen 1 and
    the port on the 2TB USB is USB 3.0

    The max speed for those interfaces is about 5 Gbits/sec.

    2048/5 = 410 sec, way less than 2100 sec

    You don't seem to be accounting for any read/write limitations in the
    harddrives, or your internal SATA bus (assuming the internal drive is
    SATA)

    So - SATA3 -- 600MB/s throughput.  Absolute minimum time to pull the
    entire HDD into RAM is 430 seconds.  This would require 256 GiB of RAM
    available, plus additional RAM for standard O/S needs, plus write
    buffers for sending to USB.  For the sake of discussion, let's just say
    we have that.

    Now we're at t=430, and send the signal to USB to kick it out the door
    to our USB-connected SATA drive.  Well, there goes another 430 seconds
    (absolute minimum) because, again our external HDD is ultimately SATA3.

    Okay, we're up to 860 seconds, absolute bare minimum.  But ...

      - your laptop doesn't have 512 GiB of RAM.  So we're reading in smaller
        chunks
      - your external drive has a small buffer (maybe half a gig), So we're
        writing in smaller chunks
      - your CPU isn't spending 100% of its time focused on the task, so we
        have to wait in between each cycle of reading/writing however much
        data
      - other overheads or sources of delay.

    A big source of delay with 'dd' is selecting the wrong block size.
    IIRC, it defaults to 1024 B (K?), which means lots of small
    reads/writes.  Changing that to a larger size can provide for some
    decent improvements (e.g. I usually use BS=4M when imaging USB drives
    from an installer ISO).  Note that changing the block size tends to have
    a bit of a curve (i.e. 16M may prove better than 4M; but 32M may
    fall somewhere in between 4 and 16).


    Absolutely.

    dd without a block size specified, uses 512 bytes for BlockSize,
    and the IO rate under those conditions is interrupt limited to around
    13MB to 39MB per second.

       sudo dd if=/dev/sda of=mydisk.img            # defaults to bs=512 and at best 39MB/sec on HDD

    Even a relatively small block size, on a modern drive, is
    sufficient to run it at max.

       sudo dd if=/dev/sda of=mydisk.img bs=8192    # modern HDD runs 200MB/sec, their cache really works

    On legacy drives, you can try a value like this. Older
    HDD like a larger block. Check that 221184 divides into
    the device size in bytes.

       sudo dd if=/dev/sda of=mydisk.img bs=221184  # old drives don't use their cache chip!

    One of the dd-like commands, actually does test reads and adjusts
    the transfer size for the condition it finds. But the regular /bin/dd
    doesn't have that feature.

    *******

    If you put a really slow hard drive, into a USB3 enclosure,
    the resulting speed will be limited by the really slow hard drive.

         Paul




    dd if=/dev/nvme0n1 of=/mnt/sda1/images/as020322 bs=4096 conv=notrunc,noerror
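
    The same copy with a live rate readout would be something like:

    dd if=/dev/nvme0n1 of=/mnt/sda1/images/as020322 bs=4096 conv=notrunc,noerror status=progress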

  • From Paul@21:1/5 to All on Thu Feb 3 15:08:39 2022
    On 2/3/2022 12:26 PM, bilsch01 wrote:
    On 2/3/2022 9:12 AM, Dan Purgert wrote:
    bilsch01 wrote:
    On 2/3/2022 6:47 AM, Paul wrote:
    On 2/3/2022 6:43 AM, Dan Purgert wrote:
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA512

    bilsch01 wrote:
    [...]
    This brings me to my main question: Why does it take 2100 seconds to
    image a 256 GB (2048 Gbits) drive if:

    the port on the PC is USB 3.2 Gen 1 and
    the port on the 2TB USB is USB 3.0

    The max speed for those interfaces is about 5 Gbits/sec.

    2048/5 = 410 sec, way less than 2100 sec
    [...]

    A big source of delay with 'dd' is selecting the wrong block size.
    IIRC, it defaults to 1024 B (K?), which means lots of small
    reads/writes.  Changing that to a larger size can provide for some
    decent improvements (e.g. I usually use BS=4M when imaging USB drives
    from an installer ISO).  Note that changing the block size tends to have
    a bit of a curve (i.e. 16M may prove better than 4M; but 32M may
    fall somewhere in between 4 and 16).


    Absolutely.

    dd without a block size specified, uses 512 bytes for BlockSize,
      [...]
    If you put a really slow hard drive, into a USB3 enclosure,
    the resulting speed will be limited by the really slow hard drive.

    2100 sec for dd copy of 256 GB using bs=4096
    2651 sec for dd copy of 256 GB using bs=4M
    same usb interfaces both times

    Sounds like one drive or the other isn't as fast as you think it is
    then.

    Are they both ACTUALLY SSDs?
    Or is it that your "2T USB 3" drive happens to be spinning rust in a
    pretty enclosure?


    the internal 256 GB SSD is nvme interface
    the ext 2TB USB is a spinning drive p/n WDBKUZ0020BBK-UB


    The only benchmark I could find suggests ~100MB/sec sustained
    for the EasyStore 2TB HDD-USB3.

    Both benchmarks, the 2100 second one and the 2651 second one,
    are credible reports for that drive. No hits are showing up
    from wd.com when I do searches (thank you Google).

    The benchmark info I could find was of low quality, so all
    I can conclude so far is that your external HDD is too slow to be
    showing off for us today. It's not a 200MB/sec HDD. It's a low-power
    drive that runs off bus power. It's not supposed to draw
    more than 5V @ 1A when spinning up.

    Everything looks normal. The external drive is the slouch.

    If you do this

    sudo dd if=/dev/sda of=/dev/null bs=4096

    then you should get a speed report for a drive like "sda".

    By benching the whole drive, there is no opportunity for the
    cache to screw up the benchmark. If I do short transfers like
    this, the second run will be artificially fast. The test
    transfers should be larger than system RAM size.

    sudo dd if=/dev/sda of=/dev/null bs=4096 count=200000

    For example, I have an SSD on a USB3 cable, and this is the bench.

    bullwinkle@Roomba:~$ sudo dd if=/dev/sdb of=/dev/null bs=65536
    [sudo] password for bullwinkle:
    7814181+1 records in
    7814181+1 records out
    512110190592 bytes (512 GB, 477 GiB) copied, 1220.66 s, 420 MB/s
    bullwinkle@Roomba:~$

    Looks like 65536 does not divide evenly. And that's because
    the drive capacity was defined by the manufacturer, according
    to CHS rules and not power-of-two rules. And as you would expect,
    8192 is a factor. Seems to be a typical choice on modern drives.
    The number 221184 also divides into that drive size.

    bullwinkle@Roomba:~$ factor 512110190592
    512110190592: 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 7 11 13 257
    bullwinkle@Roomba:~$

    I have some other storage devices here, with more 2's in the result :-)
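
    For a quicker spot check, hdparm's read-timing test gives a similar
    ballpark without reading the whole device, something like:

    sudo hdparm -t /dev/sda # times buffered sequential reads for a few seconds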

    *******

    Switching to a smart backup tool, that knows which sectors
    need to be backed up, could reduce the quantity of data
    to be written to the external drive. There are a ton of ways
    to do that. The Macrium CD being just one of them.
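
    One command-line way to do that for an NTFS partition is ntfsclone
    (from ntfs-3g), roughly (with /dev/sda3 standing in for the Windows
    partition):

    sudo ntfsclone --save-image -o backup.ntfsclone /dev/sda3 # copies only the used NTFS clusters
    sudo ntfsclone --restore-image --overwrite /dev/sda3 backup.ntfsclone # writes them back later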

    Paul

  • From bilsch01@21:1/5 to Paul on Thu Feb 3 20:23:53 2022
    On 2/3/22 12:08, Paul wrote:
    On 2/3/2022 12:26 PM, bilsch01 wrote:
    On 2/3/2022 9:12 AM, Dan Purgert wrote:
    bilsch01 wrote:
    On 2/3/2022 6:47 AM, Paul wrote:
    On 2/3/2022 6:43 AM, Dan Purgert wrote:
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA512

    bilsch01 wrote:
    [...]
    This brings me to my main question: Why does it take 2100 seconds to
    image a 256 GB (2048 Gbits) drive if:

    the port on the PC is USB 3.2 Gen 1 and
    the port on the 2TB USB is USB 3.0

    The max speed for those interfaces is about 5 Gbits/sec.

    2048/5 = 410 sec, way less than 2100 sec
    [...]

    A big source of delay with 'dd' is selecting the wrong block size.
    IIRC, it defaults to 1024 B (K?), which means lots of small
    reads/writes.  Changing that to a larger size can provide for some
    decent improvements (e.g. I usually use BS=4M when imaging USB drives
    from an installer ISO).  Note that changing the block size tends to have
    a bit of a curve (i.e. 16M may prove better than 4M; but 32M may
    fall somewhere in between 4 and 16).


    Absolutely.

    dd without a block size specified, uses 512 bytes for BlockSize,
      [...]
    If you put a really slow hard drive, into a USB3 enclosure,
    the resulting speed will be limited by the really slow hard drive.

    2100 sec for dd copy of 256 GB using bs=4096
    2651 sec for dd copy of 256 GB using bs=4M
    same usb interfaces both times

    Sounds like one drive or the other isn't as fast as you think it is
    then.

    Are they both ACTUALLY SSDs?
    Or is it that your "2T USB 3" drive happens to be spinning rust in a
    pretty enclosure?


    the internal 256 GB SSD is nvme interface
    the ext 2TB USB is a spinning drive p/n WDBKUZ0020BBK-UB


    The only benchmark I could find, suggests ~100MB/sec sustained
    for the EasyStore 2TB HDD-USB3.

    Both benchmarks, the 2100 second one and the 2651 second one,
    are credible reports for that drive. No hits are showing up
    from wd.com when I do searches (thank you Google).

    The benchmark info I could find, was of low quality, so all
    I can conclude so far, is your external HDD is too slow to be
    showing off for us today. It's not a 200MB/sec HDD. It's a low
    power drive that runs off bus power. It's not supposed to draw
    more than 5V @ 1A when spinning up.

    Everything looks normal. The external drive is the slouch.

    If you do this

       sudo dd if=/dev/sda of=/dev/null bs=4096

    then you should get a speed report for a drive like "sda".

    By benching the whole drive, there is no opportunity for the
    cache to screw up the benchmark. If I do short transfers like
    this, the second run will be artificially fast. The test
    transfers should be larger than system RAM size.

       sudo dd if=/dev/sda of=/dev/null bs=4096 count=200000

    For example, I have an SSD on a USB3 cable, and this is the bench.

    bullwinkle@Roomba:~$ sudo dd if=/dev/sdb of=/dev/null bs=65536
    [sudo] password for bullwinkle:
    7814181+1 records in
    7814181+1 records out
    512110190592 bytes (512 GB, 477 GiB) copied, 1220.66 s, 420 MB/s
    bullwinkle@Roomba:~$

    Looks like 65536 does not divide evenly. And that's because
    the drive capacity was defined by the manufacturer, according
    to CHS rules and not power-of-two rules. And as you would expect,
    8192 is a factor. Seems to be a typical choice on modern drives.
    The number 221184 also divides into that drive size.

    bullwinkle@Roomba:~$ factor 512110190592
    512110190592: 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 7 11 13 257
    bullwinkle@Roomba:~$

    I have some other storage devices here, with more 2's in the result :-)

    *******

    Switching to a smart backup tool, that knows which sectors
    need to be backed up, could reduce the quantity of data
    to be written to the external drive. There are a ton of ways
    to do that. The Macrium CD being just one of them.

       Paul

    For some reason when I right-click the icon for the 2TB drive, Windows
    does not offer the option to 'eject' or 'safely remove'. Weird. I always
    power it down before disconnecting it.

  • From bilsch01@21:1/5 to Paul on Thu Feb 3 20:17:54 2022
    On 2/3/22 12:08, Paul wrote:
    On 2/3/2022 12:26 PM, bilsch01 wrote:
    On 2/3/2022 9:12 AM, Dan Purgert wrote:
    bilsch01 wrote:
    On 2/3/2022 6:47 AM, Paul wrote:
    On 2/3/2022 6:43 AM, Dan Purgert wrote:
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA512

    bilsch01 wrote:
    [...]
    This brings me to my main question: Why does it take 2100 seconds to
    image a 256 GB (2048 Gbits) drive if:

    the port on the PC is USB 3.2 Gen 1 and
    the port on the 2TB USB is USB 3.0

    The max speed for those interfaces is about 5 Gbits/sec.

    2048/5 = 410 sec, way less than 2100 sec
    [...]

    A big source of delay with 'dd' is selecting the wrong block size.
    IIRC, it defaults to 1024 B (K?), which means lots of small
    reads/writes.  Changing that to a larger size can provide for some
    decent improvements (e.g. I usually use BS=4M when imaging USB drives
    from an installer ISO).  Note that changing the block size tends to have
    a bit of a curve (i.e. 16M may prove better than 4M; but 32M may
    fall somewhere in between 4 and 16).


    Absolutely.

    dd without a block size specified, uses 512 bytes for BlockSize,
      [...]
    If you put a really slow hard drive, into a USB3 enclosure,
    the resulting speed will be limited by the really slow hard drive.

    2100 sec for dd copy of 256 GB using bs=4096
    2651 sec for dd copy of 256 GB using bs=4M
    same usb interfaces both times

    Sounds like one drive or the other isn't as fast as you think it is
    then.

    Are they both ACTUALLY SSDs?
    Or is it that your "2T USB 3" drive happens to be spinning rust in a
    pretty enclosure?


    the internal 256 GB SSD is nvme interface
    the ext 2TB USB is a spinning drive p/n WDBKUZ0020BBK-UB


    The only benchmark I could find, suggests ~100MB/sec sustained
    for the EasyStore 2TB HDD-USB3.

    Both benchmarks, the 2100 second one and the 2651 second one,
    are credible reports for that drive. No hits are showing up
    from wd.com when I do searches (thank you Google).

    The benchmark info I could find, was of low quality, so all
    I can conclude so far, is your external HDD is too slow to be
    showing off for us today. It's not a 200MB/sec HDD. It's a low
    power drive that runs off bus power. It's not supposed to draw
    more than 5V @ 1A when spinning up.

    Everything looks normal. The external drive is the slouch.

    If you do this

       sudo dd if=/dev/sda of=/dev/null bs=4096

    then you should get a speed report for a drive like "sda".

    By benching the whole drive, there is no opportunity for the
    cache to screw up the benchmark. If I do short transfers like
    this, the second run will be artificially fast. The test
    transfers should be larger than system RAM size.

       sudo dd if=/dev/sda of=/dev/null bs=4096 count=200000

    For example, I have an SSD on a USB3 cable, and this is the bench.

    bullwinkle@Roomba:~$ sudo dd if=/dev/sdb of=/dev/null bs=65536
    [sudo] password for bullwinkle:
    7814181+1 records in
    7814181+1 records out
    512110190592 bytes (512 GB, 477 GiB) copied, 1220.66 s, 420 MB/s
    bullwinkle@Roomba:~$

    Looks like 65536 does not divide evenly. And that's because
    the drive capacity was defined by the manufacturer, according
    to CHS rules and not power-of-two rules. And as you would expect,
    8192 is a factor. Seems to be a typical choice on modern drives.
    The number 221184 also divides into that drive size.

    bullwinkle@Roomba:~$ factor 512110190592
    512110190592: 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 7 11 13 257
    bullwinkle@Roomba:~$

    I have some other storage devices here, with more 2's in the result :-)

    *******

    Switching to a smart backup tool, that knows which sectors
    need to be backed up, could reduce the quantity of data
    to be written to the external drive. There are a ton of ways
    to do that. The Macrium CD being just one of them.

       Paul

    Thanks for the informative post, and earlier ones too. I'm saving that benchmark test for reference. You're right, the 2TB drive benchmarks at
    108 MB/s.

    This afternoon I tried out several imaging options using Macrium. It
    backs up my 256 GB internal drive in 12 minutes. That's plenty fast for
    me, especially compared to the 35 minutes of the dd copy method. The
    clunky 2TB drive is good enough for now. I'm happy to be using Macrium
    from now on. I've had the Free version for years and never took a close
    look at it. So much easier than the dd method. dd has many useful
    applications though.

    Bill S.

  • From Paul@21:1/5 to All on Thu Feb 3 23:58:37 2022
    On 2/3/2022 11:23 PM, bilsch01 wrote:

    For some reason when I right-click the icon for the 2TB drive,
    Windows does not offer the option to 'eject' or 'safely remove'.
    Weird. I always power it down before disconnecting it.

    It might have something to do with the Removable Media Bit (RMB).
    That is a declaration by the drive, of whether it is removable
    media or not. The poster "Len" here explains it a bit.

    https://answers.microsoft.com/en-us/windows/forum/all/usb-drive-does-not-have-an-eject-option/db7c73c4-2539-4ddd-b602-33e86d28a4f2

    I don't have a wide enough selection of devices, to demonstrate
    both types. As far as I know, all of mine can be ejected
    with Safely Remove. I don't have Passports, Easystores,
    or MyBooks for reference purposes.

    One of my enclosures has the funny property that when I
    connect the SSD to it, the SSD reports it did an
    "emergency power fail" and the counter in SMART records
    that. I have other device combos, where Safely Remove works,
    the device is parked, and when it shuts down, the device
    was expecting that to happen, so there is no complaint.

    It's probably not going to cause damage to the metadata on
    the SSD, but it is still concerning if something bad
    actually is happening. And the controller chip in that
    case, is an Asmedia.

    There is yet another reason that a device cannot eject.
    That's if it has a "busy status". One thing that Macrium
    has done in the past, is use a feature called "TxF", which is a
    transactional interface for NTFS (Transactional NTFS). It's supposed
    to be the equivalent of an atomic commit.
    When you save a file to a file system, you can set it up
    in such a way, that if a single bit is twiddled (like from
    0 to 1), the file goes from "invisible" to "committed complete
    with journal", all by flipping one bit. Such schemes remove a
    lot of intermediate states, so you cannot see or suffer
    from those states.

    One person in another group, was finding he could not eject
    his backup drive. If he went into Disk Management and selected
    "Offline" from the left-hand square, then the drive could be
    ejected. But, the next time he plugged in the drive, he
    would have to go back to Disk Management and change the
    status to "Online" again, before he could use the disk.
    We never did figure out a way to improve on that workaround.

    You would think quitting Macrium, would drop the hold Macrium
    had on the drive. But Macrium may have had some service that
    was doing that, and the service continued to run after the
    application had exited.

    I don't think that matches your symptoms, but it's one of the
    few peculiar things of note for Macrium.

    Paul

  • From bilsch01@21:1/5 to Paul on Fri Feb 4 07:20:25 2022
    On 2/3/2022 8:58 PM, Paul wrote:
    On 2/3/2022 11:23 PM, bilsch01 wrote:

    For some reason when I right-click the icon for the 2TB drive, Windows
    does not offer the option to 'eject' or 'safely remove'. Weird. I always
    power it down before disconnecting it.

    It might have something to do with the Removable Media Bit (RMB).
    That is a declaration by the drive, of whether it is removable
    media or not. The poster "Len" here, explains it a bit.

    https://answers.microsoft.com/en-us/windows/forum/all/usb-drive-does-not-have-an-eject-option/db7c73c4-2539-4ddd-b602-33e86d28a4f2


    I don't have a wide enough selection of devices, to demonstrate
    both types. As far as I know, all of mine can be ejected
    with Safely Remove. I don't have Passports, Easystores,
    or MyBooks for reference purposes.

    One of my enclosures has the funny property that when I
    connect the SSD to it, the SSD reports it did an
    "emergency power fail" and the counter in SMART records
    that. I have other device combos, where Safely Remove works,
    the device is parked, and when it shuts down, the device
    was expecting that to happen, so there is no complaint.

    It's probably not going to cause damage to the metadata on
    the SSD, but it is still concerning if something bad
    actually is happening. And the controller chip in that
    case, is an Asmedia.

    There is yet another reason that a device cannot eject.
    That's if it has a "busy status". One thing that Macrium
    has done in the past, is use a feature called "TXF". Which,
    if I spelled that right, is a transactional interface for
    NTFS. It's supposed to be the equivalent of atomic commit.
    When you save a file to a file system, you can set it up
    in such a way, that if a single bit is twiddled (like from
    0 to 1), the file goes from "invisible" to "committed complete
    with journal", all by flipping one bit. Such schemes remove a
    lot of intermediate states, so you cannot see or suffer
    from those states.

    One person in another group, was finding he could not eject
    his backup drive. If he went into Disk Management and selected
    "Offline" from the left-hand square, then the drive could be
    ejected. But, the next time he plugged in the drive, he
    would have to go back to Disk Management and change the
    status to "Online" again, before he could use the disk.
    We never did figure out a way to improve on that workaround.

    You would think quitting Macrium, would drop the hold Macrium
    had on the drive. But Macrium may have had some service that
    was doing that, and the service continued to run after the
    application had exited.

    I don't think that matches your symptoms, but it's one of the
    few peculiar things of note for Macrium.

       Paul


    I needed to click 'show hidden icons' to see the icon for the drive.
    Thanks.
