
Re: dd - proper use or more suitable program



On 11/11/2016 10:45 PM, Andy Smith wrote:
Hi Richard,

On Fri, Nov 11, 2016 at 03:31:21PM -0600, Richard Owlett wrote:
How big might the logfile be when trying to recover a known flaky 300
GB drive? I've lots of space; some convenient, some not.

TL;DR: this depends on how many bad sectors you expect to find. If
the number is likely to be low then the map file should be a matter
of kilobytes in size.

Based on your example calculations I should be in good shape. Only one partition [the old c: drive] seems to be in bad shape. I've found some tutorial material that clears things up enough that I'm confident of running safely, even if not optimally.

I've never looked into this before as it's never been an issue for
me, but looking at:

     https://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html#Mapfile-structure

The header of the map file looks like:

      # Mapfile. Created by GNU ddrescue version 1.21
      # Command line: ddrescue -d -c18 /dev/fd0 fdimage mapfile
      # Start time:   2015-07-21 09:37:44
      # Current time: 2015-07-21 09:38:19
      # Copying non-tried blocks... Pass 1 (forwards)
      # current_pos  current_status
      0x00120000     ?
      #      pos        size  status

…which is 304 bytes.

After that there is one line for each contiguous range of blocks
with the same status (finished, not tried yet, failed, etc).

I am thinking that the absolute worst case, in terms of the maximum
number of lines in this file, would be if every other sector had
failed, so you'd have an alternating sequence of:

0x00000000  0x00000001  +
0x00000001  0x00000001  -
0x00000002  0x00000001  +
0x00000003  0x00000001  -

for the entire device. Each of those lines is 25 characters plus a
newline, so that's 52 bytes for every two blocks.

The default sector size in ddrescue is 512 bytes, so two blocks (one
good, one bad) cover 1024 bytes of your device.

If your device is 300 gigabytes in size (I'll assume that is SI
power-of-ten giga-, not binary power-of-two gibi-, as is common with
drive manufacturers, so 300,000,000,000 bytes) then that's
300,000,000,000 / 1,024 = 292,968,750 pairs of blocks. That times 52
bytes is 15,234,375,000 bytes, or about 14.2GiB, plus the ~304 byte
header.
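
If you want to sanity-check that arithmetic yourself, here is a
rough sketch in Python (the 26-byte line length and the 512-byte
sector size are the assumptions above, not something ddrescue
guarantees):

    #!/usr/bin/env python3
    # Worst case: every other 512-byte sector bad, so one map line
    # per sector, each line assumed to be 25 characters + newline.
    device_bytes = 300_000_000_000    # 300 GB (SI), assumed size
    sector = 512                      # ddrescue's default sector size
    line_bytes = 26                   # assumed length of one map line
    lines = device_bytes // sector    # one line per sector, worst case
    total = lines * line_bytes + 304  # plus the ~304 byte header
    print(total, "bytes =", round(total / 2**30, 2), "GiB")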

As far as I can see that is the absolute worst case; for a more
realistic scenario of a device with only a couple of bad sectors
you'd be looking at mere kilobytes of map file.

For example, if merely 1% of the sectors were bad (and I would
suggest that even that would represent a catastrophically damaged
device that you will find very difficult to extract any sense out
of) then you'd be looking at a map file describing 5,859,375 bad
sectors (out of 585,937,500 total 512-byte sectors in a
300,000,000,000 byte device). Even if every one of those bad sectors
were isolated, so that each one also split the surrounding good data
and needed roughly two map lines of its own, that's about 11.7
million lines of ~26 bytes each, so roughly 305,000,000 bytes, or
around 290MiB.
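
As a hedged sketch of that estimate (same assumptions as above; the
1% figure is purely hypothetical):

    #!/usr/bin/env python3
    # Hypothetical 1% bad case: every bad sector isolated, so each
    # one adds a bad range plus splits the good data around it.
    device_bytes = 300_000_000_000        # assumed device size
    sector = 512
    line_bytes = 26                       # assumed map line length
    total_sectors = device_bytes // sector
    bad = total_sectors // 100            # 1% bad, scattered singly
    lines = 2 * bad + 1                   # bad ranges + good ranges
    size = lines * line_bytes + 304       # plus the header
    print(bad, "bad sectors ->", lines, "lines =",
          round(size / 2**20, 1), "MiB")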

I doubt you will see 5.9 million bad sectors on your 300G drive!

Basically, whenever my destination has had noticeably more space
than the source device I haven't spared a thought for this, so I had
never worked it out before. I think the above is correct, but I look
forward to a correction from anyone who knows better.

Also do note that should you run out of space when writing the map
file, you still have the map file that has been written to date, so
you can extricate yourself from the situation and rerun ddrescue,
safe in the knowledge that it will pick up from where it got to.

If you are expecting serious numbers of bad sectors then your most
precious resource may actually be time. ddrescue tries REALLY HARD
to read a bad sector with each try potentially taking 2 or more
minutes. So on the hypothetical "1% broken" drive with 5.9 million
bad sectors, a single pass could take upwards of 10 million minutes
(19 years). And sometimes multiple passes are required to read a bad
sector.
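
To put a rough number on that (a sketch only; the 2 minutes per
attempt is an assumption about a badly failing drive, not a fixed
property of ddrescue):

    #!/usr/bin/env python3
    # One pass over the hypothetical 5.9 million bad sectors if each
    # failed read ties the drive up for about 2 minutes.
    bad_sectors = 5_859_375          # the 1% example above
    minutes_per_try = 2              # assumed; it can be much worse
    minutes = bad_sectors * minutes_per_try
    print(minutes, "minutes =", round(minutes / (60 * 24 * 365), 1),
          "years")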

Cheers,
Andy



