Q1: I have been using the dd command to make a byte-by-byte copy of my HDD or SSD so I can restore it if necessary. Is that byte-by-byte copy called an image or a clone?
Q2: I assume the same terminology extends to a byte-by-byte copy of a partition; you would use the same word there, right?
Q3: There are backup programs (e.g. Macrium) that create special files which the program can later use to restore a partition or disk to its previous working condition, but those files are quite different from a byte-by-byte copy. What are those files called?
TIA Bill S.
On 02/02/2022 11:46 AM, bilsch01 wrote:
Q1: I have been using dd copy command to make a byte by byte copy of my
HDD or SDD so I can restore it if necessary. Is that byte by byte copy
called an image or a clone?
You _may_ want to consider using something like clonezilla or just
partclone directly instead of dd.
And by _may_ I mean dd is great. Love it. If you like it, keep using it.
But for some large disks or partitions it could take a long time to
clone. dd doesn't understand filesystems or data: it copies all blocks,
including unused space, on a disk or partition. partclone copies only
the used blocks, so it _could_ be much faster (depending on
disk/partition size and data).
If you already knew all that, please ignore.
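The "dd copies unused space too" point is easy to demonstrate with a sparse file, since dd has no idea which blocks were ever written. A minimal sketch (paths and sizes are my own examples, not from the thread):

```shell
# Create a 100 MiB sparse file: its apparent size is 100 MiB,
# but it occupies almost no blocks on disk.
truncate -s 100M /tmp/sparse.img
du -h /tmp/sparse.img         # tiny actual on-disk usage

# dd reads and writes all 100 MiB anyway, used or not.
dd if=/tmp/sparse.img of=/tmp/copy.img bs=4M
du -h /tmp/copy.img           # full 100 MiB on disk
```

A filesystem-aware tool like partclone skips the never-written blocks instead of copying zeros.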
On 2/2/2022 7:57 PM, stepore wrote:
You _may_ want to consider using something like clonezilla or just
partclone directly instead of dd.
[...]
If you already knew all that, please ignore.
I was booting the PC with a USB thumb drive containing a Linux system
and dd-copying the PC's 256 GB drive to an attached external 2TB USB
drive. I want to get away from using Linux dd because it's so tedious.
I want to do a backup using Windows. Now I'm interested to run Macrium
Free from the Windows partition of the PC drive and create a clone of
the PC drive on the 2TB external drive. But I have the belief that
Windows changes the contents of its host partition a little bit as it
runs, so I don't [...]
But what good is this kind of a backup anyway if you have to run
Windows/Macrium to restore it? What if the Windows system on the PC is
messed up and can't run? The only way I can visualize doing this is by
booting a Linux system on a thumb drive with the clone or image on an
external USB drive.
This brings me to my main question: why does it take 2100 seconds to
image a 256 GB (2048 Gbits) drive if:
the port on the PC is USB 3.2 Gen 1 and
the port on the 2TB USB is USB 3.0
The max speed for those interfaces is about 5 Gbits/sec.
2048/5 = 410 sec, way less than 2100 sec
TIA. Bill S.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
bilsch01 wrote:
[...]
This brings me to a my main question: Why does it take 2100 seconds to
image a 256 GB (2048 Gbits) drive if:
the port on the PC is USB 3.2 Gen 1 and
the port on the 2TB USB is USB 3.0
The max speed for those interfaces is about 5 Gbits/sec.
2048/5 = 410 sec, way less than 2100 sec
You don't seem to be accounting for any read/write limitations in the
hard drives, or your internal SATA bus (assuming the internal drive is
SATA).
So - SATA3 -- 600MB/s throughput. Absolute minimum time to pull the
entire HDD into RAM is 430 seconds. This would require 256 GiB of RAM available, plus additional RAM for standard O/S needs, plus write
buffers for sending to USB. For the sake of discussion, let's just say
we have that.
Now we're at t=430, and send the signal to USB to kick it out the door
to our USB-connected SATA drive. Well, there goes another 430 seconds (absolute minimum) because, again our external HDD is ultimately SATA3.
Okay, we're up to 860 seconds, absolute bare minimum. But ...
- your laptop doesn't have 512 GiB of RAM. So we're reading in smaller
chunks
- your external drive has a small buffer (maybe half a gig), so we're
writing in smaller chunks
- your CPU isn't spending 100% of its time focused on the task, so we
have to wait in between each cycle of reading/writing however much
data
- other overheads or sources of delay.
A big source of delay with 'dd' is selecting the wrong block size.
IIRC, it defaults to 1024 B (K?), which means lots of small
reads/writes. Changing that to a larger size can provide for some
decent improvements (e.g. I usually use BS=4M when imaging USB drives
from an installer ISO). Note that changing the block size tends to have
a bit of a curve (i.e. 16M may prove better than 4M; but 32M may
fall somewhere in between 4 and 16).
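For concreteness, the interface floors in the analysis above can be computed directly (idealized numbers; real drives and buses never reach them):

```shell
gb=256            # drive size, decimal GB
sata_mb_s=600     # SATA3 ceiling
usb_mb_s=625      # 5 Gbit/s USB divided by 8 bits per byte
echo "SATA read floor: $(( gb * 1000 / sata_mb_s )) s"   # -> 426 s
echo "USB write floor: $(( gb * 1000 / usb_mb_s )) s"    # -> 409 s
```

So even with the two transfers fully serialized, the bus-only minimum is in the 430 + 430 second range quoted above, before any of the listed overheads.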
On 2/3/2022 6:47 AM, Paul wrote:
[...]
dd without a block size specified, uses 512 bytes for BlockSize,
[...]
If you put a really slow hard drive, into a USB3 enclosure,
the resulting speed will be limited by the really slow hard drive.
2100 sec for dd copy of 256 GB using bs=4096
2651 sec for dd copy of 256 GB using bs=4M
same usb interfaces both times
On 2/3/2022 6:43 AM, Dan Purgert wrote:
[...]
A big source of delay with 'dd' is selecting the wrong block size.
IIRC, it defaults to 1024 B (K?), which means lots of small
reads/writes. Changing that to a larger size can provide for some
decent improvements (e.g. I usually use BS=4M when imaging USB drives
from an installer ISO). Note that changing the block size tends to have
a bit of a curve (i.e. 16M may prove better than 4M; but 32M may
fall somewhere in between 4 and 16).
Absolutely.
dd without a block size specified, uses 512 bytes for BlockSize,
and the IO rate under those conditions is interrupt limited to around
13MB to 39MB per second.
sudo dd if=/dev/sda of=mydisk.img # defaults to bs=512
and at best 39MB/sec on HDD
Even a relatively small block size, on a modern drive, is
sufficient to run it at max.
sudo dd if=/dev/sda of=mydisk.img bs=8192 # modern HDD runs 200MB/sec, their cache really works
On legacy drives, you can try a value like this. Older
HDDs like a larger block. Check that 221184 divides evenly
into the device size in bytes.
sudo dd if=/dev/sda of=mydisk.img bs=221184 # old drives don't use their cache chip!
One of the dd-like commands, actually does test reads and adjusts
the transfer size for the condition it finds. But the regular /bin/dd
doesn't have that feature.
*******
If you put a really slow hard drive, into a USB3 enclosure,
the resulting speed will be limited by the really slow hard drive.
Paul
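One way to feel the block-size effect without touching a real disk is to time the same copy at two sizes from a throwaway file (a rough sketch with my own example path and size; the page cache flatters both runs, so the per-syscall overhead, not the media, is what differs here):

```shell
# Make a 64 MiB test file.
dd if=/dev/zero of=/tmp/bstest.bin bs=1M count=64

# Same data, two block sizes: bs=512 issues ~131072 read() calls,
# bs=4M issues only 16, so per-call overhead dominates the first run.
time dd if=/tmp/bstest.bin of=/dev/null bs=512
time dd if=/tmp/bstest.bin of=/dev/null bs=4M

rm /tmp/bstest.bin
```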
bilsch01 wrote:
[...]
2100 sec for dd copy of 256 GB using bs=4096
2651 sec for dd copy of 256 GB using bs=4M
same usb interfaces both times
Sounds like one drive or the other isn't as fast as you think it is
then.
Are they both ACTUALLY SSDs?
Or is it that your "2T USB 3" drive happens to be spinning rust in a
pretty enclosure?
On 2/3/2022 9:12 AM, Dan Purgert wrote:
[...]
Are they both ACTUALLY SSDs?
Or is it that your "2T USB 3" drive happens to be spinning rust in a
pretty enclosure?
The internal 256 GB SSD is NVMe interface.
The ext 2TB USB is a spinning drive, p/n WDBKUZ0020BBK-UB.
On 2/3/2022 12:26 PM, bilsch01 wrote:
[...]
the ext 2TB USB is a spinning drive p/n WDBKUZ0020BBK-UB
The only benchmark I could find, suggests ~100MB/sec sustained
for the EasyStore 2TB HDD-USB3.
Both benchmarks, the 2100 second one and the 2651 second one,
are credible reports for that drive. No hits are showing up
from wd.com when I do searches (thank you Google).
The benchmark info I could find, was of low quality, so all
I can conclude so far, is your external HDD is too slow to be
showing off for us today. It's not a 200MB/sec HDD. It's a low
power drive that runs off bus power. It's not supposed to draw
more than 5V @ 1A when spinning up.
Everything looks normal. The external drive is the slouch.
If you do this
sudo dd if=/dev/sda of=/dev/null bs=4096
then you should get a speed report for a drive like "sda".
By benching the whole drive, there is no opportunity for the
cache to screw up the benchmark. If I do short transfers like
this, the second run will be artificially fast. The test
transfers should be larger than system RAM size.
sudo dd if=/dev/sda of=/dev/null bs=4096 count=200000
For example, I have an SSD on a USB3 cable, and this is the bench.
bullwinkle@Roomba:~$ sudo dd if=/dev/sdb of=/dev/null bs=65536
[sudo] password for bullwinkle:
7814181+1 records in
7814181+1 records out
512110190592 bytes (512 GB, 477 GiB) copied, 1220.66 s, 420 MB/s
bullwinkle@Roomba:~$
Looks like 65536 does not divide evenly. And that's because
the drive capacity was defined by the manufacturer, according
to CHS rules and not power-of-two rules. And as you would expect,
8192 is a factor. Seems to be a typical choice on modern drives.
The number 221184 also divides into that drive size.
bullwinkle@Roomba:~$ factor 512110190592
512110190592: 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 7 11 13 257
bullwinkle@Roomba:~$
I have some other storage devices here, with more 2's in the result :-)
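The divisibility claims are quick to check in the shell, using the capacity reported by the dd run above:

```shell
size=512110190592     # capacity from the dd output above
for bs in 65536 8192 221184; do
  if [ $(( size % bs )) -eq 0 ]; then
    echo "bs=$bs divides evenly"
  else
    echo "bs=$bs leaves a remainder"
  fi
done
# 65536 = 2^16 leaves a remainder (the size has only 2^13 as a factor);
# 8192 = 2^13 and 221184 = 2^13 * 27 both divide evenly.
```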
*******
Switching to a smart backup tool, that knows which sectors
need to be backed up, could reduce the quantity of data
to be written to the external drive. There are a ton of ways
to do that. The Macrium CD being just one of them.
Paul
For some reason when I right click the icon for the 2TB drive,
Windows does not offer the option to 'Eject' or 'Safely Remove'.
Weird. I always power it down before disconnecting it.
On 2/3/2022 11:23 PM, bilsch01 wrote:
For some reason when I right click the icon for the 2TB drive, Windows
does not offer the option to 'Eject' or 'Safely Remove'. Weird. I
always power it down before disconnecting it.
It might have something to do with the Removable Media Bit (RMB).
That is a declaration by the drive, of whether it is removable
media or not. The poster "Len" here, explains it a bit.
https://answers.microsoft.com/en-us/windows/forum/all/usb-drive-does-not-have-an-eject-option/db7c73c4-2539-4ddd-b602-33e86d28a4f2
I don't have a wide enough selection of devices, to demonstrate
both types. As far as I know, all of mine can be ejected
with Safely Remove. I don't have Passports, Easystores,
or MyBooks for reference purposes.
One of my enclosures has the funny property that when I
connect the SSD to it, the SSD reports it did an
"emergency power fail" and the counter in SMART records
that. I have other device combos, where Safely Remove works,
the device is parked, and when it shuts down, the device
was expecting that to happen, so there is no complaint.
It's probably not going to cause damage to the metadata on
the SSD, but it is still concerning if something bad
actually is happening. And the controller chip in that
case, is an Asmedia.
There is yet another reason that a device cannot eject.
That's if it has a "busy status". One thing that Macrium
has done in the past is use a feature called "TxF", the
transactional interface for NTFS. It's supposed to be the
equivalent of atomic commit.
When you save a file to a file system, you can set it up
in such a way, that if a single bit is twiddled (like from
0 to 1), the file goes from "invisible" to "committed complete
with journal", all by flipping one bit. Such schemes remove a
lot of intermediate states, so you cannot see or suffer
from those states.
One person in another group, was finding he could not eject
his backup drive. If he went into Disk Management and selected
"Offline" from the left-hand square, then the drive could be
ejected. But, the next time he plugged in the drive, he
would have to go back to Disk Management and change the
status to "Online" again, before he could use the disk.
We never did figure out a way to improve on that workaround.
You would think quitting Macrium, would drop the hold Macrium
had on the drive. But Macrium may have had some service that
was doing that, and the service continued to run after the
application had exited.
I don't think that matches your symptoms, but it's one of the
few peculiar things of note for Macrium.
Paul