
Disk imaging strategy

Started by Don Y November 2, 2014
Jan Panteltje wrote:
> On a sunny day (Sun, 02 Nov 2014 12:06:10 -0700) it happened Don Y
>>> Not quite sure what you want, but I have done this a lot:
>>> start some rescue disk, plug in some USB disk.
>>> mount the partition you want, then:
>>> tar -zcvf partition_sda1_image.tgz /dev/sda1
[I hope you mean 'tar -zcvf partition_sda1_image.tgz /mnt' or something like that; otherwise, you'll get *really* good compression.]
>> The problem is creating "partition_sda1_image.tgz" *without* being concerned
>> with the underlying filesystem. So, you have no knowledge (from the filesystem
>> layer) of the "valid" contents of the volume (vs. blank/deleted content).
>
> Sure you can dd that partition, but now you really are in trouble.
> It's safer to tar a filesystem (which should NOT be currently running, else you are in trouble too);
> you can always untar it into another filesystem (ext2, ext4, reiserfs, etc.) that is compatible.
It depends.

'dd'ing the raw partition is almost guaranteed to produce a working image after unpacking. If you 'tar' a mounted file system, the operating system you run the 'tar' on must support all nuances of the file system you want to clone. Back in Win95 days, cloning (or backup/restore) a Win95 installation using 'tar' from Linux did not work, because it did not restore all required file attributes. I wouldn't expect Linux 'tar' to capture all NTFS attributes (like "compressed", ACLs, ADS) either. Copying the partition blockwise would not have any of these problems.

Here's an interesting read that also shows some of the problems of copying large amounts of data file-wise: http://lists.gnu.org/archive/html/coreutils/2014-08/msg00012.html But in this case, the user couldn't avoid it.

Stefan
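The size difference being argued here is easy to demonstrate without any real devices. This is a minimal sketch using plain files in place of a partition; all names and sizes are invented for the demo:

```shell
# Simulate a partition: 1 MiB of live file data followed by 4 MiB of
# stale "deleted" bytes that only a raw copy would drag along
mkdir -p demo/fs
yes "important data" | head -c 1048576 > demo/fs/file.txt
cat demo/fs/file.txt > demo/partition.img
head -c 4194304 /dev/urandom >> demo/partition.img
# File-level archive: sees only the live file, compresses to almost nothing
tar -czf demo/files.tgz -C demo fs
# Raw image: carries the stale bytes too, so it stays large
gzip -c demo/partition.img > demo/raw.img.gz
ls -l demo/files.tgz demo/raw.img.gz
```

The tarball ends up a few kilobytes; the raw image stays roughly the size of its incompressible "deleted" content.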
On Sun, 02 Nov 2014 20:24:17 -0700, Don Y <this@is.not.me.com> wrote:

> Hi George,
>
> On 11/2/2014 8:11 PM, George Neuner wrote:
>> On Sun, 02 Nov 2014 12:02:44 -0700, Don Y <this@is.not.me.com> wrote:
>>> If the machine can access the medium, then what do I care about the
>>> hardware interface?
>>
>> The problem with dd is that there is no actual guarantee that what you
>> read will work if written back. Even under the "raw" block devices
>> there is a lot of translation going on.
>
> dd(1) is an abbreviation for "low level access to block device".
> I'm not running a UN*X (or any other OS).
Then you have to be working directly with the drive interface because the BIOS block interface in many systems isn't able to fully address a large drive. I'm sure you're familiar with the [paraphrased] warning: "this partition may not be bootable because it starts or lies partly above XXXX cylinders". That's telling you the BIOS interface can't handle it.
>>> All you need to do is ensure that whatever you write is very compressible.
>>> Much more so than "unconstrained DEADBEEF". E.g., you could tailor your
>>> compressor to recognize the 512 byte sequence:
>>> "123234u349tuepdfjg;skjdgpa9sufwrtd....sdklfsopriujh"
>>> and replace it with a one byte "sector is empty" code (where "empty"
>>> really means "contains the aforementioned 512 byte sequence")
>>
>> And also means "doesn't need to be restored".
>
> Well, *likely* doesn't need to be restored (depends on how unique that
> string can be -- "Copyright 2014 Microsoft" would probably be a bad choice...)
Try MP4s of the top 40 rap music videos ... you'll wind up erasing everything on the drive 8-) George
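The tailored compressor discussed in this exchange can be sketched as a toy: replace each 512-byte marker sector with a one-byte "empty" code and prefix every other sector with a "data" byte. The marker string, file names, and code bytes here are all invented for the demo:

```shell
# Build a 4-sector image: data, marker, marker, data
yes EMPTY | head -c 512 > marker.bin
head -c 512 /dev/urandom > data.bin
cat data.bin marker.bin marker.bin data.bin > disk.img
# Pack: 'E' for an empty sector, 'D' plus the raw bytes otherwise
: > packed.bin
n=$(( $(wc -c < disk.img) / 512 ))
i=0
while [ "$i" -lt "$n" ]; do
    dd if=disk.img bs=512 skip=$i count=1 2>/dev/null > sec.tmp
    if cmp -s sec.tmp marker.bin; then
        printf 'E' >> packed.bin              # 1-byte "sector is empty" code
    else
        printf 'D' >> packed.bin              # 1-byte "data follows" code
        cat sec.tmp >> packed.bin
    fi
    i=$((i + 1))
done
wc -c < packed.bin    # 2*513 + 2*1 = 1028 bytes, down from 2048
```

A real implementation would also have to worry about a live sector that genuinely contains the marker, which is exactly the uniqueness concern Don raises above.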
On a sunny day (Mon, 03 Nov 2014 10:17:18 -0700) it happened Don Y
<this@is.not.me.com> wrote in <m38db2$nmp$1@speranza.aioe.org>:

>>> But, this comes at the cost of not knowing which parts (sectors) of the
>>> medium actually have "content" that must be preserved! So, you have to
>>> include every sector in your image!
>>
>> Not if you tar a filesystem (on that partition); see the script I posted.
>> So mount the partition, say:
>> mount /dev/sdd1 /mnt/sdd1
>
> mount(8) brings filesystem specific code into the environment.
> Tell me how you are going to do this WITHOUT invoking the mount command!
??? Did you read that I typed 'mount'??? Maybe you should read my other replies, where I stated you need a similar file system for 'regeneration' -- for example, file name length.
>> tar that filesystem:
>> tar -zcvf my_sdd1_backup.tgz /mnt/sdd1/*
>>
>> If the partition has no files the tgz will be very very small.
>
> Try gzip'ing /dev/sdd1 and look at the size of the resulting file!
> (i.e., /dev/sdd1 being the raw/block device without ANY knowledge
> of the filesystem it is currently supporting!)
When you use mount it will TELL you the filesystem. I have tgz'ed whole partitions of real servers, no problem: made a new partition on my laptop, untarred it there, added a grub boot entry, and it runs there too, without a kernel recompile. But OK, only in normal resolution, as the old kernel has no drivers for my super new laptop's two graphics cards. Still very useful to run all my scripts. And as a backup.
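The round trip being described can be sketched with ordinary directories standing in for mounted partitions (the paths are illustrative):

```shell
# "Source partition": a directory with a file and a symlink
mkdir -p src/etc dst
echo "config" > src/etc/app.conf
ln -s etc/app.conf src/link.conf
# Back up, then restore into a freshly created "filesystem"
tar -zcf my_backup.tgz -C src .
tar -zxf my_backup.tgz -C dst
# Links, contents, and timestamps all survive the trip
diff -r src dst && echo "restore matches"
```

This is the sense in which the target filesystem only needs to be "compatible": anything that can represent the same files, links, and attributes will do.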
>> If you want it back,
>> create any filesystem, and tar -zxvf my_sdd1_backup.tgz
>>
>> All links and timestamps are preserved too that way.
>>
>>>> The other thing I have noticed is that when copying partitions of disk
>>>> images to bluray with my LG writer, some bytes at the start seem to get
>>>> changed, could be an error, but this does not seem to happen when copying
>>>> to a filesystem.
>>>> The other thing is, in case of restore of an image to a similar device,
>>>> I found that for example 8 GB card 1 (as source) has a different _real_
>>>> size than 8 GB card 2 (same make, same type, same specified size, bought
>>>> at the same time); this could be related to bad sector managing of FLASH.
>>>
>>> Experience teaches you to always leave a bit of the medium "unused"
>>> (i.e., not present in ANY partition) to accommodate "small" variations
>>> between drives. Many megabytes on a 100G drive is "noise". Likewise,
>>> a GB on a TB drive is similarly "noise".
That only matters if you copy images. Keep a good documentation of the images or tgzs you create and you KNOW the sizes.
>>> I've found this easier than worrying about resizing partitions (which >>> typically requires the partition to have been "defragmented" beforehand >>> to ensure there is nothing present at the tail end of the partition). >> >> I think I have not 'defragmented' anything ever in my life in Linux, >> there is no need. > >Then, you can't arbitrarily shrink a "filesystem" because you don't know >where the "live data" resides on it, currently. A file could sit in >the last N sectors of the partition and you wouldn't know it. Shrinking >the partition by M>N sectors means your file gets cut off the end!
Maybe you live on another planet, but on mine I have found that disk sizes always increase, same for (SD) card sizes; I think nobody is going to copy an image to smaller disks. I have made copies that way, by dd if=/dev/sda of=my_sda_backup.img and then dd'ing it to another harddisk. The problem there is that you f*ck up the partition table too. If you do it for one partition at a time you do not have that problem, but then you have the huge file sizes, even if there is nothing of value. For small things this works OK -- for example Navigatrix on a 16 GB USB stick; I have made several copies that way. But on a 1 TB disk with 250 and 500 GB partitions it is not practical. And if you really want to clone for students, you destroy all old work on the target that way. It is then much safer to use some backup utility, as others pointed out.
>(you have to explicitly or implicitly MOVE the file to ensure it doesn't >fall past the end of the trimmed partition)
No, you must live in another world.
>When you install that disk in your machine, you are going to discover that >you can't "mount" it! I have changed the partition ID to some wacky value >that the system from which I pulled it recognizes as "Customized FFSv2 >Partition". The only thing that is really "customized" about it is this >oddball partition type identifier *and* a macro wrapper that causes each >reference to an inode to refer, instead, to "~inode".
You should not change partition IDs, not mess at that level with the system. Why do that? File systems are there to allow many files. I have stored many movies as images on DVD-R, but that is different: sectors sequential, no authoring... write on the disk what it is and how to play it back. Here is an entry from my database, just a text file basically:

814 DVD+R Verbatim inkjet printable 16x NEC 7173 the_gumball_rally.ts as image -rw-r--r-- 1 root root 3153536064 2011-04-03 18:45 the_gumball_rally.ts Burned 2.4x

That is a transport stream file (as recorded from satellite) with all PIDs relevant to that program in it, often including teletext (ceefax) for that day. You can play that with mplayer /dev/dvd. All disks are numbered, all cards and USB sticks are numbered, all is in the database.
>You, OTOH, can best hope to do something like: > dd if=/dev/raw_drive | gzip > image.gz >And, your image.gz will typically be much larger because it will not >be able to determine which is "deleted data" in the raw disk contents.
Deleted data is a filesystem-specific issue. You are confusing 2 things. Either you make an image with everything (compressed or not), or you use a filesystem and compress the current files only. Deleted or not deleted, and what is put in the sectors that are deleted, is filesystem specific. And if you do not even know what filesystem you are using, you should not be messing with disks on a computah at all. Really.
On a sunny day (Mon, 03 Nov 2014 12:11:28 -0500) it happened George Neuner
<gneuner2@comcast.net> wrote in <tfdf5a1k19hp1sahlpqrno5limksb0oguv@4ax.com>:

>On Mon, 03 Nov 2014 10:09:48 GMT, Jan Panteltje ><pNaonStpealmtje@yahoo.com> wrote: > > >>I think I have not 'defragmented' anything ever in my life in Linux, >>there is no need. > >That isn't entirely true - at least not with inode filesystems. The >n-way tree structuring and inode caching reduce the need to >defragment, but where sequential read performance is important, it >still pays to defragment. > >George
Almost all my systems use reiserfs, and most of it is full with video content for video editing and very long files (several GB each). Most is sequential; I have never ever had a speed problem, even with a < 1 GHz processor. What is funny is that I bought a key for the Raspberry Pi MPEG2 decoder, and somehow was running it at full speed from SDcard (not a high-end one), and even then it was running on that 500 GHz or so processor at > 50 fps. The raspi does HD too from SDcard. For video, the codecs and maybe the graphics card limit the speed. When I run Linux transcode, that sets the speed (ffmpeg etc). It is possible the occasional seek on a harddisk happens, but for example when using mplayer I always use -cache with some megabytes, and Linux caches everything anyway. So it really makes no difference in performance at all, in my experience.
On Mon, 03 Nov 2014 18:47:37 GMT, Jan Panteltje
<pNaonStpealmtje@yahoo.com> Gave us:

>even then it was running on that 500 GHz or so processor at > 50 fps
  ^^^^^^^
Is that all you could get it to do? Why didn't you calculate the TOE for us with it?
On a sunny day (Mon, 03 Nov 2014 18:56:05 +0100) it happened Stefan Reuther
<stefan.news@arcor.de> wrote in <m38j45.1c0.1@stefan.msgid.phost.de>:

> Jan Panteltje wrote:
>> On a sunny day (Sun, 02 Nov 2014 12:06:10 -0700) it happened Don Y
>>>> Not quite sure what you want, but I have done this a lot:
>>>> start some rescue disk, plug in some USB disk.
>>>> mount the partition you want, then:
>>>> tar -zcvf partition_sda1_image.tgz /dev/sda1
>
> [I hope you mean 'tar -zcvf partition_sda1_image.tgz /mnt' or something
> like that; otherwise, you'll get *really* good compression.]
>
>>> The problem is creating "partition_sda1_image.tgz" *without* being concerned
>>> with the underlying filesystem. So, you have no knowledge (from the filesystem
>>> layer) of the "valid" contents of the volume (vs. blank/deleted content).
>>
>> Sure you can dd that partition, but now you really are in trouble.
>> It's safer to tar a filesystem (which should NOT be currently running, else you are in trouble too);
>> you can always untar it into another filesystem (ext2, ext4, reiserfs, etc.) that is compatible.
>
> It depends.
>
> 'dd'ing the raw partition is almost guaranteed to produce a working
> image after unpacking. If you 'tar' a mounted file system, the operating
> system you run the 'tar' on must support all nuances of the file system
> you want to clone.
Of course, I think I mentioned that compatibility requirement.
>Back in Win95 days, cloning (or backup/restore) a >Win95 installation using 'tar' from Linux did not work, because it did >not restore all required file attributes. I wouldn't expect Linux 'tar' >to capture all NTFS attributes (like "compressed", ACLs, ADS) as well.
I take your word for it; I left MS software in 1998 when I found a copy of SLS Linux on a CD. I do have an old system with Win 98 in a partition. Win 98 runs, but the screen is low res: it has no driver for the newer graphics card, or I did not look hard enough. It does have a driver for my Canon flatbed scanner though, something Linux does not. It is a > 10 year old Seagate that was on 24/7; so far it still seems error free. I did copy that partition with dd to some place as backup, but hey, I still have the original Win 98 disk... if things go wrong. Not sure I would bother though...
>Copying the partition blockwise would not have all these problems.
True
>Here's an interesting read that also shows some of the problems of >copying large amounts of data file-wise: >http://lists.gnu.org/archive/html/coreutils/2014-08/msg00012.html >But in this case, the user couldn't avoid it.
That is why I tar things; for MS Windows I really do not know if that works. I burned my XP disk and made a video of it. It is available for 98 Euro; before playing it you need to glue a sticker on your PC, blah blah. Man, that XP sucked. I have heard MS has gotten worse since.
On 11/3/2014 11:39 AM, Jan Panteltje wrote:
> On a sunny day (Mon, 03 Nov 2014 10:17:18 -0700) it happened Don Y > <this@is.not.me.com> wrote in <m38db2$nmp$1@speranza.aioe.org>: > >>>> But, this comes at the cost of not knowing which parts (sectors) of >>>> the medium actually have "content" that must be preserved! So, you >>>> have to include every sector in your image! >>> >>> >>> Not if you tar a filesystem (on that partition) see the script I >>> posted. So mount partition, say: mount /dev/sdd1 /mnt/sdd1 >> >> mount(8) brings filesystem specific code into the environment. Tell me how >> you are going to do this WITHOUT invoking the mount command! > > ??? Did you read I typed 'mount'???
Yes! And you should read that I said "REGARDLESS OF THE FILESYSTEM(s) contained thereon". You've missed the very ESSENCE of my question! By typing "mount", you are relying on mount's UNDERSTANDING of the filesystem. What do you do when you type "mount" and you get the reply "Operation not supported by device" (or whatever the equivalent "kernel lacks support for the filesystem indicated by the device's contents" message is)? I.e., add a line after the "mount" in your script that begins with:

if [ "$?" -ne "0" ]; then
    echo "Gee, I can't mount that sort of filesystem! I'll handle \
this in some other way..."

and contains whatever commands you deem necessary to create that image!
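A fleshed-out version of that fallback pattern might look like the following. The device and mount-point paths are placeholders, and the tar branch needs root and a real device; the point is only the control flow Don is asking for:

```shell
#!/bin/sh
# Try a filesystem-aware backup first; fall back to a raw image when
# mount does not understand (or cannot access) the filesystem.
DEV=${1:-/dev/sdd1}       # placeholder device
MNT=${2:-/mnt/backup}     # placeholder mount point
if mount "$DEV" "$MNT" 2>/dev/null; then
    # mount understood the filesystem: archive at the file level
    tar -zcf backup.tgz -C "$MNT" .
    umount "$MNT"
else
    # unknown/unsupported filesystem: fall back to a sector-level copy
    echo "mount failed for $DEV; falling back to raw image" >&2
    dd if="$DEV" bs=1M 2>/dev/null | gzip > backup.img.gz
fi
```

Run unprivileged with the placeholder paths, the mount fails and the script takes the raw-image branch -- which is exactly the degenerate case the rest of this thread is about.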
> Maybe you should read my other replies where I stated you need s similar > file system for 'regeneration' for exampel file name length. > >>> tar that filesystem: tar -zcvf my_sdd1_backup.tgz /mnt/sdd1/* >>> >>> If the partition has no files the tgz will be very very small. >> >> Try gzip'ing /dev/sdd1 and look at the size of the resulting file! (i.e., >> /dev/sdd1 being the raw/block device without ANY knowledge of the >> filesystem it is currently supporting!) > > When you use mount it will TELL you the filesystem.
No, it won't. It will only tell you which filesystems it *recognizes* (typically by examining magic numbers). And it will only actually *do* the mount if the specific mount_<fstype> is executable on your system.

I have a laptop in my hands (actually, I have six of them). They were purchased in their current state. The partition table contains partitions having magic numbers of 0xde, 0x07, 0x0f and 0xdb. How does *your* script handle them?

Take your time replying. Make sure you look at the sources instead of just SPECULATING about how THOSE SPECIFIC FS TYPES are handled! You may be surprised. (Hint: a reasonably current Clonezilla won't recognize them!)
> I have tgz'ed whole partitions of real servers no problem, made a new > partition on my laptop, untarred it there, added a grub boot entry, and it > runs there too, without kernel recompile, but OK only in normal resolution > as the old kernel has no drivers for my super new 2 graphics card laptop. > But very useful to run all my scripts. And as backup.
That's not the question I posed: "I went skiing in Switzerland"
>>> If you want it back, create any filesystem, and tar -zxvf >>> my_sdd1_backup.tgz >>> >>> All links and timestamps sare preservd too that way. >>> >>>>> The other thing I have noticed is that when copying partitions of >>>>> disk images to bluray with my LG writer, some bytes at the start >>>>> seem to get changed, could be an error, but this does not seem to >>>>> happen when copying to a filesystem. The other thing is in case >>>>> restore of an image to a similar device, I found that for example 8 >>>>> GB card 1 (as source) has a different _real_ size than 8 GB card 2 >>>>> (same make, same type, same specified size, bought at the same time) >>>>> this could be related to bad sector managing of FLASH. >>>> >>>> Experience teaches you to always leave a bit of the medium "unused" >>>> (i.e., not present in ANY partition) to accommodate "small" >>>> variations between drives. Many megabytes on a 100G drive is "noise". >>>> Likewise, a GB on a TB drive is similarly "noise". > > That only matters if you copy images. Keep a good documentation of the > images or tgzs you create and you KNOW the sizes.
Have you missed the entire point of this thread? Even the subject line makes that pretty clear.
>>>> I've found this easier than worrying about resizing partitions (which >>>> typically requires the partition to have been "defragmented" >>>> beforehand to ensure there is nothing present at the tail end of the >>>> partition). >>> >>> I think I have not 'defragmented' anything ever in my life in Linux, >>> there is no need. >> >> Then, you can't arbitrarily shrink a "filesystem" because you don't know >> where the "live data" resides on it, currently. A file could sit in the >> last N sectors of the partition and you wouldn't know it. Shrinking the >> partition by M>N sectors means your file gets cut off the end! > > > Maybe you live on an other planet, but on mine I have found that disk sizes > always increase, same for (SD)card sizes, I think nobody is going to copy an > image to smaller disks.
Disk sizes *can* increase. Partition sizes (which is what we are concerned with; filesystems don't deal with "disks" but, rather, *partitions*) can move up or down at will. Last month, I moved a NetBSD system that I initially created as a single partition on a 12G (!) disk onto newer hardware. The 12G single partition had value when the media was only 12G -- why risk partitioning it into multiple partitions and possibly ending up with "extra space" in one partition and "not enough" in another? New disk was 640G. Foolish to treat the entire disk as a single partition. Equally foolish to treat it as a 12G partition (mounted as /) with a 628G partition (mounted as /ExtraSpace). So, *shrink* the partition to a suitable size for the new volume even though the new volume is 50 times larger!
> I have made copies that way by dd if=/dev/sda of=my_sda_backup.img and then > dd it to an other harddisk. Problem there is you f*ck up the pertition table > too. If you do it for one at the time partition only you do not have that > problem, but then you have the huge file sizes, even if there is nothing of > value. > > For small things this works OK, for example Navigatrix on a 16 GB USB stick > I have made several copies that way, But on a 1 TB disk with 250 and 500 GB > partitions it is not practical. If you really want to clone for students you > destroy all old work on the target that way. It is then much safer to use > some backup utility as others pointed out. > >> (you have to explicitly or implicitly MOVE the file to ensure it doesn't >> fall past the end of the trimmed partition) > > No, you must live in an other world.
I have a disk. It currently contains 1% of its capacity as "in use". The *one* file just happens to be located in sector 100 (out of 100 total sectors in that partition). I now want to shrink the partition to be 90 sectors. WITHOUT MOVING THE SECTOR'S CONTENTS (because you claim that is not required on YOUR planet), how do you do this?
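On ext filesystems the standard answer is that resize2fs relocates any blocks past the new end before the partition is trimmed -- i.e., the "move" Don describes happens implicitly. A runnable sketch on a file-backed image (no root needed; sizes are invented, and it assumes e2fsprogs and coreutils are installed):

```shell
# A 16 MiB "partition" backed by a plain file
dd if=/dev/zero of=disk.img bs=1M count=16 2>/dev/null
mke2fs -F -q disk.img                   # ext2; works on a file without root
e2fsck -f -y disk.img > /dev/null       # resize2fs insists on a clean check
resize2fs disk.img 8M > /dev/null 2>&1  # relocates live blocks below the new end
truncate -s 8M disk.img                 # only now is trimming the tail safe
e2fsck -f -y disk.img > /dev/null       # still consistent after the trim
```

Truncating first and resizing second -- or truncating without resizing at all, which is what "just shrink the partition" amounts to -- is precisely how a file sitting in the last sectors gets cut off.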
>> When you install that disk in your machine, you are going to discover >> that you can't "mount" it! I have changed the partition ID to some wacky >> value that the system from which I pulled it recognizes as "Customized >> FFSv2 Partition". The only thing that is really "customized" about it is >> this oddball partition type identifier *and* a macro wrapper that causes >> each reference to an inode to refer, instead, to "~inode". > > You should not change partition IDs, not mess at that level with the > system. Why do that?
Ask the appliance manufacturer why he wants to introduce a new/proprietary partition format! Do you think he is going to put a label on the outside of the box that says, "Warning! Proprietary filesystem format used. You won't be able to use <your_favorite_toolchain> to manage this media!"?
> File systems are there to allow many files. I have stored many movies as > image on DVD-R,... but that is different, sectors sequential, no > authoring... write on the disk what it is and how to play it back, here an > entry from my database, just a text file basically: 814 DVD+R Verbatim > inkjet printable 16x NEC 7173 the_gumball_rally.ts as image -rw-r--r-- 1 > root root 3153536064 2011-04-03 18:45 the_gumball_rally.ts Burned 2.4x > > That is a transport stream file (as recorded from satellite) with all PIDs > relevant to that program in it, often including teletext (ceefax) for that > day You can play that with mplayer /dev/dvd > > All disk are numbered, all cards and USB sticks are numbered, all is in the > database.
What does this have to do with my question? (Jeez, and Clifford claims I don't "read carefully"...)
>> You, OTOH, can best hope to do something like: dd if=/dev/raw_drive | gzip >> > image.gz And, your image.gz will typically be much larger because it >> will not be able to determine which is "deleted data" in the raw disk >> contents. > > Deleted data is a filesystem specific issue. You are confusing 2 things. > Either you make an image with everything (compressed or not), or you use a > filesystem and compress the current files only. > > Deleted or not deleted and what is put in the sectors that are delected is > filesystem specific. And if you do not even know what filesystem you are > using you should not be messing with disks on a computah at all. Really,.
You should probably go back and dig through the sources. Start with mount(8) so you understand the concept of how different filesystems are detected *by* mount. Use the four filesystem types mentioned above as examples so you know how mount *will* handle them! Then look through the various mount_*(8) executables for more specific issues related to each specific filesystem implementation. *Then* tell me how you're going to know which "files" are present on the media -- and where they reside! So your "image" contains them and not "deleted cruft". Until then, you're just speculating.
On 11/3/2014 10:56 AM, Stefan Reuther wrote:

>> Sure you can dd that partition, but now you really are in trouble. >> Its safer to tar a filesystem (that should NOT be currently running, else you are in trouble too), >> you can always untar it into an other filesystem (ext2, ext4, reiserfs, etc) that is compatible. > > It depends. > > 'dd'ing the raw partition is almost guaranteed to produce a working > image after unpacking. If you 'tar' a mounted file system, the operating > system you run the 'tar' on must suppot all nuances of the file system > you want to clone. Back in Win95 days, cloning (or backup/restore) a > Win95 installation using 'tar' from Linux did not work, because it did > not restore all required file attributes. I wouldn't expect Linux 'tar' > to capture all NTFS attributes (like "compressed", ACLs, ADS) as well. > Copying the partition blockwise would not have all these problems.
Exactly. But, you don't want your image to HAVE TO BE as large as the original. Esp as most disks have a fair bit of unused space. Hence the problem I posed: how do you sort out what is "unused space" from "used space" -- in a manner that allows you to ignore the actual metadata/etc. imposed by the particular filesystem implementation. E.g., I have a list of >100 different fstypes. Why even bother to sort out what they all mean and how they all perform? What's the likelihood that you will CORRECTLY (bug free) implement handlers for each of those types?? OTOH, dd | gzip preserves everything about the filesystem "ignorantly": "<shrug> Nah, I don't know what all these bits mean. I just make sure I've got EVERY LAST ONE OF THEM!" I contend that you can improve the "dd|compress" approach by putting highly compressible data into the "unused" sectors of the media. In doing so, you EFFECTIVELY remove the unused sectors from the image that you create. In much the same way that a fs-aware utility effectively notes "this sector was not copied because it was not in use"
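The effect can be simulated with two file-backed "partitions" whose free sectors differ only in what was left behind (the names and the marker string are invented for the demo):

```shell
# Partition A: free sectors still hold old random ("deleted") data
head -c 1048576 /dev/urandom > dirty.img
# Partition B: free sectors were overwritten with a compressible marker
yes FILLER | head -c 1048576 > clean.img
gzip -c dirty.img > dirty.img.gz
gzip -c clean.img > clean.img.gz
ls -l dirty.img.gz clean.img.gz   # the marker-filled image compresses to a few KB
```

Same image size in, wildly different image size out: the marker-filled "free space" costs essentially nothing in the dd|gzip archive, which is the whole argument.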
> Here's an interesting read that also shows some of the problems of > copying large amounts of data file-wise: > http://lists.gnu.org/archive/html/coreutils/2014-08/msg00012.html > But in this case, the user couldn't avoid it.
Hi George,

On 11/3/2014 10:57 AM, George Neuner wrote:

>>>> If the machine can access the medium, then what do I care about the >>>> hardware interface? >>> >>> The problem with dd is that there is no actual guarantee that what you >>> read will work if written back. Even under the "raw" block devices >>> there is a lot of translation going on. >> >> dd(1) is an abbreviation for "low level access to block device". >> I'm not running a UN*X (or any other OS). > > Then you have to be working directly with the drive interface because > the BIOS block interface in many systems isn't able to fully address a > large drive.
All the machines I've seen handle LBA48 in the BIOS. I think that goes back at least 10 years (?). Anything that old wouldn't be worth the time to install an OS! :-/ (let alone trying to find drivers, etc.) I think the partition table craps out at 2-4TB (but it's unlikely I'll have that large a disk spinning on the boot drive for anything *I* will ever use -- or encounter in my pro bono gigs!)
> I'm sure you're familiar with the [paraphrased] warning: "this > partition may not be bootable because it starts or lies partly above > XXXX cylinders". That's telling you the BIOS interface can't handle > it.
Some of my SPARCs still have the 2G limit on the bootstrap's location.
>>>> All you need to do is ensure that whatever you write is very compressible. >>>> Much more so than "unconstrained DEADBEEF". E.g., you could tailor your >>>> compressor to recognize the 512 byte sequence: >>>> "123234u349tuepdfjg;skjdgpa9sufwrtd....sdklfsopriujh" >>>> and replace it with a one byte "sector is empty" code (where "empty" >>>> really means "contains the aforementioned 512 byte sequence") >>> >>> And also means "doesn't need to be restored". >> >> Well, *likely* doesn't need to be restored (depends on how unique that >> string can be -- "Copyright 2014 Microsoft" would probably be a bad choice...) > > Try MP4s of the top 40 rap music videos ... you'll wind up erasing > everything on the drive 8-)
I've been told that causes the heads to drop onto the platters WITH EXTREME PREJUDICE! :>
Hi Dimiter,

On 11/2/2014 10:49 PM, Dimiter_Popoff wrote:
> On 02.11.2014 &#1075;. 17:25, Don Y wrote: >> I'm writing a bit of code to image disk contents REGARDLESS OF THE >> FILESYSTEM(s) contained thereon.
> since obviously there is no common solution to all filesystems
> (unless you want to copy the entire medium, which is impractical),
> your best bet is to go minimalistic about it. Recognize which
> file system this is, then find your way to allocated space and
> store it in some indexed format - such that you can subsequently
> recover it.
> On some filesystems it will be easier than on others - e.g. on DPS
> you will need to locate a file in the root directory, unitcat.syst,
> which is a bitmap of the allocated clusters; and you have to read
> logic block 0 to see how large the "device" (i.e. partition) is,
> what block size it assumes and how many blocks there are per
> cluster. On FAT it will be easier I think (no need to do root
> or any directory).
> But you can't get around this minimum I suppose. Then there are not
> that many filesystems in mass use anyway (I think George Neuner already
> said that), so the effort will not be that huge.
I'm going to GUESS that your FS is "proprietary" (not mainstream). So, a potential test for my envisioned approach! :>

First, can a user create files having arbitrary names and contents under your FS? Can he copy & rename files? E.g., could he <somehow> introduce a file having some particular contents (like "DELETEDDELETEDDELETEDDELETED...") to your FS? Then, could he replicate it many times? (copy to a different filename) Having done that until the copy failed ("No space left on device"), presumably he could delete each of them? (perhaps made simpler by creating them all in a single subdirectory/folder and then just deleting the folder AND its contents)

Could I then examine your disk AT THE SECTOR LEVEL and expect to find lots of "DELETEDDELETEDDELETED..." in sectors? In doing so, effectively know which sectors are currently "unused"? (Or, at the very least, safe to restore with "DELETEDDELETEDDELETED..." as their contents, WITHOUT actually having to store an instance of "DELETEDDELETEDDELETED..." for EACH such sector?)

[BTW, we are now below 25C. Almost feels comfortable!! :> ]
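That sector-level test is easy to sketch against a raw image. The marker, the layout, and the file names here are all made up for the demo: build a toy image where two sectors hold the marker, then scan it 512 bytes at a time:

```shell
# One marker sector and one "live" sector, assembled into a 6-sector image
yes DELETED | head -c 512 > marker.sec
head -c 512 /dev/urandom > live.sec
cat live.sec live.sec marker.sec live.sec marker.sec live.sec > disk.img
# Scan: record which sectors match the marker byte-for-byte
: > empty.list
total=$(( $(wc -c < disk.img) / 512 ))
i=0
while [ "$i" -lt "$total" ]; do
    dd if=disk.img bs=512 skip=$i count=1 2>/dev/null > sector.tmp
    cmp -s sector.tmp marker.sec && echo "$i" >> empty.list
    i=$((i + 1))
done
cat empty.list    # sectors 2 and 4 turn out to be "unused"
```

An imaging tool could then store a one-byte code for each sector listed in empty.list instead of its contents -- without ever understanding the filesystem that produced the layout.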