ultred ragnusen wrote:
I'll read up s'more on ddrescue. Thanks. We should probably take this out
of the r.p.d ng because it's off topic so I'll post a separate thread.
If you have something like Ubuntu, try "gddrescue".
The fan died on my Ubuntu 16.04 laptop, but I can torrent the Ubuntu ISO
and boot off of it or even re-install VirtualBox (although it took a long
time before I had VirtualBox working on the original HDD).
I did torrent Knoppix though and burned the bootable image to a DVD,
although even with almost 2,500 peers, it still took a long while to
download.
The package name and the executable name don't have
to be the same. That's what adds to the joy of figuring it out.
I think I'll try Knoppix first, and then perhaps Recuva; your advice
to back up the data /before/ handing the desktop tower to Microsoft is good.
ddrescue is a utility that tolerates "CRC error" when reading
a disk. You can make one run after another, and the "log" file
keeps track of what sectors have not been recovered yet. Looking
at the log, you get some idea how much damage remains (in terms
of CRC errors).
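A typical two-pass run, sketched from the ddrescue documentation (the
device name is a placeholder for the failing disk; the mapfile is the
"log" file mentioned above):

```shell
# first pass: grab everything readable quickly, skip the slow scraping (-n)
sudo ddrescue -n /dev/sdX /media/sparedrive/bigdisk.bin rescue.map
# second pass: use direct access (-d) and retry the bad areas 3 times (-r3);
# the mapfile lets this run resume where the first one left off
sudo ddrescue -d -r3 /dev/sdX /media/sparedrive/bigdisk.bin rescue.map
```

You can stop and restart as many times as you like; the mapfile records
which sectors are still unrecovered.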
Personally, I don't think there is any /damage/ to the HDD; I think it's as
simple as Microsoft Windows Update screwed up, perhaps because I had
customized the heck out of the system (Winaero, Classic Shell, etc.).
ddrescue is mechanical and captures sectors. It doesn't know
or care whether the partition is NTFS or EXT4. Doesn't matter.
Thanks for the ddrescue advice. I'm not sure yet if it's a standalone
bootable tool or something that runs inside of Windows or Linux, but I'll
work it out after trying Knoppix with the bad HDD connected via the
SATA adapter to the USB port on the desktop booted into Knoppix.
Since the disk is "suspected good" at the hardware level,
you can use "dd" on it. Knoppix will have a copy. Every Linux
distro has "dd" on it. The "dd" utility does not tolerate CRC
errors the way gddrescue does, which is fine for your hard drive.
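For completeness, dd does have a conv=noerror,sync option that presses on
past read errors and pads the bad blocks with zeros, though it has none
of gddrescue's retry logic or mapfile. The padding behavior is easy to
see on an ordinary file:

```shell
# make a 10-byte input file
printf '0123456789' > input.bin
# bs=4 with conv=sync pads the final partial block out to a full 4 bytes,
# so the 10-byte input becomes a 12-byte output
dd if=input.bin of=padded.bin bs=4 conv=noerror,sync 2>/dev/null
wc -c < padded.bin
```

That zero-padding is why conv=sync should not be used when you need a
byte-exact image; it is shown here only to illustrate what the flags do.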
sudo dd if=/dev/sda of=/media/sparedrive/bigdisk.bin bs=512 count=...
That's the general format for storing every sector on a hard
drive, as a "very large file" stored on a second disk.
If you have two 1TB drives, then obviously when you make the
/media/sparedrive partition, it will be slightly smaller than
the thing you are copying.
However, sometimes filesystems support compression. You can
also chain commands together on the command line.
Adjust the arithmetic product of blocksize and count parameters,
so the entire disk is copied. Unlike gddrescue with its adaptive
transfer scheme, "dd" expects you to do the math and copy
as much or as little of the drive as you'd like. For example,
the second command here would transfer around 1.2GB or so.
sudo dd if=/dev/sda bs=512 count=12345678 | gzip -3 > /media/sparedrive/bigdisk.bin.gz
sudo dd if=/dev/sda bs=1048576 count=1234 | gzip -3 > /media/sparedrive/bigdisk.bin.gz
Since you're in Linux, you can try...
sudo fdisk /dev/sda
and get size info for the disk. Then use the factor program
to see what number makes a good fit for the blocksize "bs" parameter.
(bs * count) must equal the total size info you got.
Let's try an example. This is a disk sitting on my Test Machine.
:~$ sudo fdisk /dev/sda
Welcome to fdisk (util-linux 2.27.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): p
Disk /dev/sda: 477 GiB, 512110190592 bytes, 1000215216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x72ca3ed1
Now I type "q" to quit, and move on to the next command.
:~$ factor 512110190592
512110190592: 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 7 11 13 257
2^13 = 8192, which is a pretty small block size. Some newer
drives will run at the sustained transfer rate, even with that
small of a block size parameter. What I can do is throw in
3^3 to make it a bit larger: 2^13 * 3^3 = 221184 bytes.
Dividing 512110190592 by 221184 gives the count (2315313).
To copy my 500GB specimen, I'd use
sudo dd if=/dev/sda of=/media/sparedrive/bigdisk.bin bs=221184 count=2315313
knowing that I'm getting every sector of the source. If the
destination drive is slightly too small, I have the option
of piping the output into a compression command of some sort.
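The bs and count arithmetic above can be double-checked in the shell
before committing to a multi-hour copy:

```shell
size=512110190592   # byte count reported by fdisk
bs=221184           # 2^13 * 3^3, chosen from the factorization
# the division must be exact, or the tail of the disk is silently skipped
if [ $((size % bs)) -eq 0 ]; then
    count=$((size / bs))
    echo "count=$count"            # 2315313
    echo "check=$((bs * count))"   # 512110190592, matches fdisk exactly
fi
```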
There's possibly a p7zip-full package and a command line 7zip invocation
to achieve a higher compression ratio. But it would be slower.
There is also the pigz package, which is like gzip only it allows
more than one CPU core to be used. By comparison, the ZIP that
7ZIP does uses a single core when compressing. Some other 7ZIP
formats use multiple cores.
sudo dd if=/dev/sda bs=221184 count=2315313 | pigz -3 -p 4 > out.bin.gz
Anyway, I'm sure you'll figure out something.
To restore the disk later, it would be something like
unpigz -c out.bin.gz | sudo dd of=/dev/sda bs=221184 count=2315313
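The backup-and-restore pair can be rehearsed on a scratch file before
pointing it at a real disk. Plain gzip/gunzip is used here since pigz may
not be installed, and /dev/urandom stands in for the source drive:

```shell
# fake "disk": 64 KiB of random data
dd if=/dev/urandom of=disk.bin bs=4096 count=16 2>/dev/null
# backup: read, compress, store
dd if=disk.bin bs=4096 2>/dev/null | gzip -3 > out.bin.gz
# restore: decompress, write back out through dd
gunzip -c out.bin.gz | dd of=restored.bin bs=4096 2>/dev/null
# the restored copy must be byte-identical to the original
cmp disk.bin restored.bin && echo "round trip OK"
```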
On some platforms, you can use if=- to stand for "stdin" and
of=- for "stdout". But it's also possible the command understands
the piping situation and the "missing" portion of the command,
to mean the same thing. That's why my last command doesn't have
an input file specification.
To do something like this (i.e. not specify if= and of=),
it's going to copy stdin to stdout.
cat sample.bin | dd > destination.bin
If I wanted to be more explicit I could do this
cat sample.bin | dd if=- of=- > destination.bin
cat sample.bin | dd if=- of=destination.bin
would copy the file in chunks of 512 bytes. The pipe symbol has
a buffer which is larger than that, so the chunks are probably
of no consequence. Running dd with default bs=512 is usually
pretty slow and only does around 13MB/sec.
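The stdin/stdout behavior is easy to verify on something harmless first:

```shell
printf 'hello world' > sample.bin
# no if= given: dd reads stdin; no of= given: dd writes stdout
cat sample.bin | dd 2>/dev/null > destination.bin
# confirm the copy is byte-identical
cmp sample.bin destination.bin && echo "identical"
```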
There are also things like Clonezilla, for example for making an exact
copy of one terabyte disk onto a second terabyte disk. Sometimes you
get lucky and they're the same size. Since you have the
sudo fdisk command to report the exact size of each drive, you
can compare them before deciding what to do. I've not used
clonezilla, so cannot give a rundown on any "tricks".