
Disk imaging strategy

Started by Don Y November 2, 2014
Don - using a "boot loader" approach does sound good, especially if you're 
using it often.

Hul

In comp.arch.embedded Don Y <this@is.not.me.com> wrote:
> On 11/2/2014 3:13 PM, Hul Tytus wrote:
>> With an unprotected system like MSDOS booted on a floppy or a flash disk,
>> a disk editor can copy the sectors on one partition to another. Simtel\msdos
>> has those editors, I believe, but searching for a simtel site is required.
>> The simplest procedure is to format the first half of a disk and use the
>> other half for the backup image.
>
> What I currently do is similar -- except no DOS, etc. (just write a boot
> loader that effectively does the decompress & copy without the overhead
> of a "real OS").
>
> Not using compression is highly wasteful of disk space (for the "restore
> image"). If the image is to co-reside on the medium with the live data,
> then it'd be nice not to have to "throw away" half the medium for this
> "feature"
>
> E.g., the laptops that I build for students tend to have ~160G drives
> that I can cut into a "system" partition (which I want to be able to
> restore on-demand) as well as a "data" partition (which I will leave
> to the student to maintain... if their data gets clobbered, that's
> THEIR problem; at least the machine will still be runnable after
> recovery)
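A minimal sketch of the decompress-and-copy restore Don describes, written
as hosted C with zlib for illustration only -- his actual loader runs with
no OS underneath, and the device paths and gzip image format here are
assumptions, not his setup:

/* Stream a gzip-compressed image from a recovery partition onto the
 * system partition.  Paths are hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <zlib.h>

int main(void)
{
    gzFile in = gzopen("/dev/sda2", "rb");  /* compressed "restore image" (assumed path) */
    int out = open("/dev/sda1", O_WRONLY);  /* "system" partition to rewrite (assumed path) */
    if (in == NULL || out < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    char buf[1 << 16];
    int n;
    while ((n = gzread(in, buf, sizeof buf)) > 0) {
        if (write(out, buf, (size_t)n) != n) {  /* raw copy onto the partition */
            perror("write");
            return EXIT_FAILURE;
        }
    }
    if (n < 0) {
        int err;
        fprintf(stderr, "decompress: %s\n", gzerror(in, &err));
        return EXIT_FAILURE;
    }

    gzclose(in);
    fsync(out);   /* make sure the restored image is actually on the platters */
    close(out);
    return EXIT_SUCCESS;
}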
Hi Hul,

On 11/3/2014 4:02 PM, Hul Tytus wrote:
> Don - using a "boot loader" approach does sound good, especially if you're
> using it often.
I am always leery of anything to do with PCs and their ilk. They seem to
undergo frequent fundamental changes, at times "unnecessarily" -- as if to
change just for the sake of change! <frown>

So, an approach that I *think* will work well with the machines that I have
available to me *today* may end up completely useless with the next model
year, etc.

If you're doing something "for yourself", this is a manageable risk -- you
simply decide when the added effort to "chase the newest" is worth it TO YOU.
OTOH, when you are doing something for *others*, it's really hard not to just
throw up your hands and say, "Sorry, I've got better things to do with my
time than REDO something that was already working!"
On Sun, 02 Nov 2014 22:21:59 -0800, DecadentLinuxUserNumeroUno
<DLU1@DecadentLinuxUser.org> wrote:

>On Sun, 02 Nov 2014 22:08:06 -0800, DecadentLinuxUserNumeroUno
><DLU1@DecadentLinuxUser.org> Gave us:
>
>> Hey, you could image your drive like this guy did.
>>
>> He made a video "image" of his drive. Hehehehe... BRL!
>
> Hey guys!
>
> Image THIS quarter million dollar hard drive!
>
> http://www.youtube.com/watch?v=CBjoWMA5d84
>
> I really wish I had it. Damned Aussie lucky dogs!
>
> He talks funny too. :-)
I've seen that before; unfortunately he's off on some of the details.

First, it's a 3390 module (an "HDA", which was actually two drives or
"actuators"), so it's from no earlier than 1989 (not late 70s/early 80s),
and it's not 10MB, it's about 1.89GB (assuming it's a model-1), or 3.78GB
(if it's a double density model-2), and there were additional models of
higher capacity later.

It's also not worth $250K - you could buy an entire -B28 for $275K at the
time, and that contained six double density modules (HDAs) of the type he
disassembled. You'd usually buy a "string" of three units (one -Axx and two
-Bxx units), for a total of 16 HDAs, which would set you back about $750K.
So the value is more along the lines of $50K (assuming the enclosure is
free). You'd also need a controller.

Still a cool tear-down.
On Mon, 03 Nov 2014 09:27:52 -0800, DecadentLinuxUserNumeroUno
<DLU1@DecadentLinuxUser.org> wrote:

>On Mon, 03 Nov 2014 12:11:28 -0500, George Neuner <gneuner2@comcast.net>
>Gave us:
>
>>On Mon, 03 Nov 2014 10:09:48 GMT, Jan Panteltje
>><pNaonStpealmtje@yahoo.com> wrote:
>>
>>>I think I have not 'defragmented' anything ever in my life in Linux,
>>>there is no need.
>>
>>That isn't entirely true - at least not with inode filesystems. The
>>n-way tree structuring and inode caching reduce the need to
>>defragment, but where sequential read performance is important, it
>>still pays to defragment.
>>
>>George
>
> There would be no fragmentation unless those sequentially read files
>were constantly being opened and added to, and even THOSE file writes
>are full commits, free of fragmentation on those file systems.
>
> Kind of like saying "inconceivable".
>
> "I do not think that word means what you think it means."
>
> Sequential read performance is ONLY degraded on FILE reads of
>fragmented files.
>
> So unless you are operating a database, and keep all your dynamic data
>on the same volumes as your system and static files, you would see the
>same number, even if the volume does have some fragmented files on it.
>
> But again, you speak of the file system with seeming good intimacy.
>
> But I was under the impression that this file system operates in such
>a way that fragmentation like that which occurs on a FAT type system
>never happens.
>
> You are saying EXT fs DO fragment files?
Of course it can, and it does. Unless you can imagine some way it could
always assure a contiguous allocation for a file, whether
written all at once, or in parts, it will have to fragment.

There are, of course, various strategies to reduce fragmentation, the most
basic being some additional cleverness in selecting the next disk block to
add to a file, but even some DOS/FAT systems did some of those things. In
the case of ext2/3/4, rather more aggressive anti-fragmentation strategies
are in place (most notably, consecutively created files are allocated
somewhat scattered across the disk, making it likelier that there will be
unused blocks immediately after a file when additional writes occur), but
they still break down as the disk fills up.

If you're running ext4, try an "e4defrag -v" to see just how much
fragmentation is on your volume. Or do an "fsck -nvf /dev/hda###". For any
of the ext FS's, that should report a number of non-contiguous files.

OTOH, the "need" for defragmentation, even on FAT volumes, is greatly
overstated. Yes, sometimes you do need it, but nothing like what's
generally assumed.
> I think the actual file sizes might play into one's thinking here too.
>
> Sequentially reading large scattered chunks is not that hard either.
>It is the database file that has had 50 0.5 kB commits done on it in the
>last hour that fragment a FAT drive.
>
> Unchanging files do not fragment. The "holes" between them and the
>deleted files do not pose a huge problem either. It is that ONE file
>that has so many segmented locations to string together in a single
>"read".
>
> Still... I did not know that ext fs drives fragment.
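Incidentally, the per-file numbers that e4defrag and fsck report can be had
directly from the kernel: Linux's FIEMAP ioctl tells you how many extents a
file occupies (one extent means contiguous). A minimal, Linux-specific
sketch:

/* Count a file's extents the way e4defrag does, via FIEMAP. */
#include <fcntl.h>
#include <linux/fiemap.h>
#include <linux/fs.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return EXIT_FAILURE;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    struct fiemap fm;
    memset(&fm, 0, sizeof fm);
    fm.fm_length = FIEMAP_MAX_OFFSET;  /* map the whole file */
    fm.fm_flags  = FIEMAP_FLAG_SYNC;   /* flush delayed allocation first */
    /* With fm_extent_count left at 0, the kernel just counts extents. */
    if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
        perror("FS_IOC_FIEMAP");
        return EXIT_FAILURE;
    }

    printf("%s: %u extent(s)%s\n", argv[1], fm.fm_mapped_extents,
           fm.fm_mapped_extents > 1 ? " -- fragmented" : "");
    close(fd);
    return EXIT_SUCCESS;
}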
On Mon, 03 Nov 2014 17:52:40 -0600, Robert Wessel
<robertwessel2@yahoo.com> Gave us:

> whether
> written all at once, or in parts, it will have to fragment.
Only on a FAT type system. The "previously in use" sectors of a deleted
ext2 file become immediately available, AFAIR. Things seldom "have to
fragment", and if it is a large file, those "fragments" are going to be
HUGE, and are NOT what the term was coined for.
On 2014-11-04, DecadentLinuxUserNumeroUno <DLU1@DecadentLinuxUser.org> wrote:
> On Mon, 03 Nov 2014 17:52:40 -0600, Robert Wessel
><robertwessel2@yahoo.com> Gave us:
>
>> whether
>> written all at once, or in parts, it will have to fragment.
>
> Only on a FAT type system. The "previously in use" sectors of a
> deleted ext2 file become immediately available, AFAIR. Things seldom
> "have to fragment", and if it is a large file, those "fragments" are
> going to be HUGE, and are NOT what the term was coined for.
No, on _any_ general purpose file system. It is inevitable. You seem to be
of the impression that Linux is somehow the ultimate system; I've noticed
it in a couple of threads now. It isn't, and it certainly can't do the
impossible.

The only sure ways to avoid fragmentation impose a loss of generality.
ISTR at least one of the mainframe systems required you to specify the
maximum size on file creation so the space could be set aside for it (may
have been MVS but I may be wrong, it's before my time) but that isn't
compatible with the way most software operates, expecting files to be
growable. Similarly, some WORM systems never fragment since a modified
file gets re-written in its entirety (e.g. multisession ISO9660), but the
cost in performance and storage space would again be unattractive for
general use.

FAT was particularly prone to fragmentation by modern standards thanks to
its heritage - it was designed for small floppy disks and the principal
design constraint was memory consumption. To that end the free list wasn't
sorted in any manner, which is where the bulk of the fragmentation came
from. On the other hand, if a file has to grow by a block but the block
after the end of the file is already in use, it _has_ to be fragmented -
well, you could move one or both files, but if they're 2GB a piece that
costs more than a small amount of fragmentation ever would.

Modern file systems are designed according to a range of criteria, and
minimising fragmentation is on that list, but it isn't the most important
criterion or even top of the list where performance is concerned.
Techniques such as cylinder grouping improve access time but _cause_
fragmentation to some extent. If a file is larger than a group it will be
fragmented, even if it is the only file on the disk.

None of this really matters except to the "my system is better than yours"
advocates. Obviously if that 2GB file is broken down into a million
fragments averaging 2K a piece that is a problem, and you could find
yourself in situations almost that bad with FAT. On the other hand, if
it's broken into a hundred fragments of 20MB each, no-one should really
care for most purposes.

-- 
Andrew Smallshaw
andrews@sdf.lonestar.org
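The "specify the maximum size on file creation" approach Andrew recalls
survives in weaker form on POSIX systems as posix_fallocate(), which
reserves all of a file's blocks up front and so gives the allocator its
best shot at a contiguous layout. A minimal sketch; the file name and size
are arbitrary:

/* Reserve space at creation instead of growing write-by-write. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("bigfile.dat", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* Reserve 256 MiB in one shot; on ext4 this allocates the
     * extents immediately, before any data is written. */
    int err = posix_fallocate(fd, 0, (off_t)256 << 20);
    if (err != 0) {
        fprintf(stderr, "posix_fallocate: error %d\n", err);
        return EXIT_FAILURE;
    }

    close(fd);
    return EXIT_SUCCESS;
}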
On Tue, 4 Nov 2014 01:31:50 +0000 (UTC), Andrew Smallshaw
<andrews@sdf.lonestar.org> Gave us:

>No, on _any_ general purpose file system. It is inevitable. You
>seem to be of the impression that Linux is somehow the ultimate
>system,
No, DORK BOY. Leave your Zimmerman complex up your ass, where it belongs.

I was "under the impression" that the ext series of file systems, as well
as MS' NTFS, fought such occurrences with a different operation and
management paradigm than the old, FAT type method. It has nothing to do
with a goddamned OS, or your bent perceptions of "fan bois" on the
internet. Your shaw ain't small, your fucking brainbox is.

And no, I did not need a primer on what fragmentation is. YOU seem to
need one though, as you seem to think that every file write should have
consideration for a file alteration, and that simply is not the case for
many, if not most files. Images, videos, database files, historical logs,
etc. get incremental writes done on them.

Sorry, but those fragmented files are NOT causing your system to run
slowly. They ONLY get read WHEN you open them, and even THAT task is
trivial, even if it is this hugely fragmented file you are so worried
about.
On 2014-11-04, DecadentLinuxUserNumeroUno <DLU1@DecadentLinuxUser.org> wrote:
>
> And no, I did not need a primer on what fragmentation is. YOU seem to
> need one though, as you seem to think that every file write should have
> consideration for a file alteration, and that simply is not the case for
> many, if not most files.
I might have considered being insulted if this line of argument wasn't so
comical. Name me a single file on your hard drive that was _not_ zero
bytes long on creation. Ignore directories or device nodes, whose handling
is inherently implementation-specific -- just a normal, non-special file
that started life _greater_ than zero bytes long.

If you had done _any_ programming you would have known this. I take it
you don't have a hard drive full of zero byte files, so how did they grow
to their current size?

-- 
Andrew Smallshaw
andrews@sdf.lonestar.org
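Andrew's growth argument is easy to make concrete: append to two files
alternately and a naive allocator has no choice but to interleave their
blocks. A toy sketch (file names are arbitrary; ext4's delayed allocation
and preallocation will resist this worst case to a degree, which is
exactly the mitigation Robert described earlier):

/* Grow two files in alternating 4K appends to provoke fragmentation. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int a = open("frag_a.dat", O_CREAT | O_WRONLY | O_APPEND, 0644);
    int b = open("frag_b.dat", O_CREAT | O_WRONLY | O_APPEND, 0644);
    if (a < 0 || b < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    char chunk[4096] = { 0 };
    for (int i = 0; i < 1024; i++) {
        /* Alternate the appends; the fsync()s force blocks to be
         * allocated in this interleaved order rather than batched. */
        if (write(a, chunk, sizeof chunk) < 0 ||
            write(b, chunk, sizeof chunk) < 0) {
            perror("write");
            return EXIT_FAILURE;
        }
        fsync(a);
        fsync(b);
    }

    close(a);
    close(b);
    return EXIT_SUCCESS;
}

Running the FIEMAP extent counter shown earlier on frag_a.dat afterward
shows how well (or badly) the filesystem coped.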
On Tue, 4 Nov 2014 03:35:30 +0000 (UTC), Andrew Smallshaw
<andrews@sdf.lonestar.org> Gave us:

>Name me a single file on your hard drive that was _not_ zero bytes
>long on creation. Ignore directories or device nodes, whose handling
>is inherently implementation-specific -- just a normal, non-special
>file that started life _greater_ than zero bytes long.
Oh boy! A goddamned semantical total retard too, I see. Straw man, much? Damn... now I have to laugh. BRL!!
In comp.arch.embedded Andrew Smallshaw <andrews@sdf.lonestar.org> wrote:

(snip)

> No, on _any_ general purpose file system. It is inevitable. You
> seem to be of the impression that Linux is somehow the ultimate
> system; I've noticed it in a couple of threads now. It isn't, and
> it certainly can't do the impossible. The only sure ways to avoid
> fragmentation impose a loss of generality. ISTR at least one of
> the mainframe systems required you to specify the maximum size on
> file creation so the space could be set aside for it (may have been
> MVS but I may be wrong, it's before my time) but that isn't compatible
> with the way most software operates, expecting files to be growable.
For OS/360, and I believe also MVS, you specify the initial (primary) file
size, which must be allocated in four or fewer extents (fragments). You
can also specify a secondary amount, such that the file can grow, in units
of the secondary allocation, up to 15 extents. If you specify RLSE, then
unused tracks will be deallocated on close.

If you have large disks, specify a large primary; if possible, the system
will allocate it in one extent, and then RLSE will free the unused space.
If you don't RLSE, you can later append (DISP=MOD) to use the rest of the
space.

The RT-11 file system can only have contiguous files. When you open the
first file for writing, it will start at the beginning of the largest
available region. If you open a second file, it will either start at the
beginning of the second largest region, or split the already used region
in half. Files are never fragmented, but free space can get fragmented,
and there is a process to move files such that the free space is
contiguous.

-- glen
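glen's RT-11 rule -- a new file starts at the head of the largest free
region, and files are never fragmented -- is simple enough to sketch as a
toy allocator. The structures and block numbers below are invented for
illustration, not RT-11's actual on-disk format, and only the simple
one-file-at-a-time case is handled:

/* Toy model of RT-11-style contiguous allocation. */
#include <stdio.h>

#define NREGIONS 4

struct region { unsigned start, len; };  /* free extents, in blocks */

static struct region freelist[NREGIONS] = {
    { 100, 50 }, { 300, 400 }, { 900, 120 }, { 2000, 30 },
};

/* Return the start block for a new file, carving it out of the
 * largest free region (or -1 if no region is big enough: the
 * files stay contiguous, but the free space has fragmented). */
static long alloc_contiguous(unsigned blocks)
{
    struct region *best = NULL;
    for (int i = 0; i < NREGIONS; i++)
        if (freelist[i].len >= blocks &&
            (best == NULL || freelist[i].len > best->len))
            best = &freelist[i];
    if (best == NULL)
        return -1;
    long start = best->start;
    best->start += blocks;
    best->len   -= blocks;
    return start;
}

int main(void)
{
    printf("file A at block %ld\n", alloc_contiguous(200));  /* lands at 300 */
    printf("file B at block %ld\n", alloc_contiguous(200));  /* lands at 500 */
    return 0;
}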