Reply by David R Brooks, January 29, 2007
CBFalconer wrote:
[snip]
> l0839  msgcrc
> l0851  crcfil
> l0872  crclst
> l087e  last
Yes, id2id ran just fine. I sure would like to see how DAA was used :)
Reply by maxt...@libero.it, January 29, 2007
After my post, I had to turn to another problem.
Now, I can work on it again.
I'm going to read all your suggestions: thank you all!
Max

Reply by CBFalconer, January 25, 2007
David R Brooks wrote:
> CBFalconer wrote:
>> David R Brooks wrote:
>>
... snip ...
>>> Hmm, having disassembled that code, I can't find a DAA instruction
>>> in it?  Maybe the disassembler is acting up, but it doesn't seem so.
>>> As per my other post, that code is at
>>> http://members.iinet.com.au/~daveb/buffer/ccitcrc.zip
>>
>> I can't either.  It may be a very early version, before I found the
>> high speed code.  Too bad the library had been repacked, as I
>> normally put date stamps in my LBRs.
>>
>> I may pass the source through id2id to make things more readable.
>> Your disassembler did a nice job, whose is it?  I can spot my
>> techniques in its output quite nicely.  You might want to try
>> id2id-20 also.  Available on my download page.
>
> That disassembler is one I wrote myself, many years ago.
> I could post it, if anyone's interested.
If it is written in a higher level language, so as to be usable on other systems, I would definitely like to see it.

I had a little time last night, so I started clearing up the CCITCRC disassembly.  The beginning of an IDPAIRS file follows.  I simply use "id2id <ccitcrc.mac >ccitcrc.chk" and have both ccitcrc.chk and idpairs in editor windows.  The editor (TextPad) automatically reloads an altered file, so the identify - propagate - read cycle is very quick.  You may want to continue it.  id2id-20 is available at: <http://cbfalconer.home.att.net/download/>

The file so far:

i10    fclose
l005c  fcb_dv
l005d  fcb_fn
l04cc  abort
l04cf  quit
l04da  Eclsm
l04dd  tstr
l04f6  bdos
l04e7  couta
l04fa  Edir
l0520  Efull
l0541  Eclose
l055d  Eabort
l056b  Enofil
l057e  Efopen
l058e  Efread
l05a2  msgins
l05ad  msgrmv
l05b6  msgdun
l05bb  msghlp
l07ed  msgpws
l0839  msgcrc
l0851  crcfil
l0872  crclst
l087e  last

-- 
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Reply by Robert Adsett, January 24, 2007
In article <9shfr2tvl395qbk26h7fi600g2n5uej63a@4ax.com>, Mr. C says...
> On 23 Jan 2007 11:20:56 -0800, "Arlet" <usenet+5@c-scape.nl> wrote:
>
> >*Any* 16-bit checksum will catch 99.998% (65535/65536) of random,
> >uncorrelated errors, so you might as well pick one that's easy to
> >calculate, such as simply summing/XOR'ing 16-bit words.  This has the
> >added advantage that you can quickly update your checksum when only
> >changing one variable.  Simple summation will also catch many of the
> >burst errors (it's easy to see that all bursts within a single 16-bit
> >memory word are caught).
>
> OK, I think I see your point now.  For random errors that may occur
> "far apart" in memory, making the burst length very long, even a
> checksum will be as good as a CRC-16.
>
> OK, but ... suppose I have a 16K memory and I swap data in the first
> and last locations (and let's say they are not equal, since that would
> be boring).  Now I have a very long burst length of errors, since the
> first and last locations are wrong.  Using a simple checksum will not
> catch the error, since all bytes are summed - there is no consequence
> as to their location.
And you will also be able to find, in that raw set of all the possibilities that could fit into the space, two sets that will pass a CRC check but fail a checksum.  Which one is more likely depends on the error distribution compared to the check distribution.

Consider a 2-bit check.  That gives you 4 possible check values and a 1/4 chance of any random set of bits producing a given check value.  Now consider two ways of producing those check bits:

1 - check bit one is the parity bit of all the even bits; check bit two is the parity bit of all the odd bits.
2 - the check bits are the simple checksum of all the bit pairs.

They both produce the same probability of matching a random set of bits, and it's easy enough to come up with patterns that will pass one but fail the other.

Similarly, since a simple 16-bit checksum and a 16-bit CRC both have a 1/64K chance of a random set of bits matching, if the checksum misses some errors that a CRC catches, it follows that the reverse will also be true.  If it weren't, one of the two would have fewer unique check values.  The only real way of choosing is if you can show that the pattern of expected errors is better detected by one versus the other.  Since a CRC is better at detecting short bursts of errors, and comms are often characterized by such error sources, that makes a good match.  Cryptographic hashes are a good choice when the goal is to make it especially difficult to produce a second set of bits that will produce the same check value.

To determine which one is actually better for this application, you would first need to determine what the failure patterns are likely to be.  If failures are random, then even a simple check will work.  I've not seen any investigation into the failure modes of battery-backed memory, so I don't know which might work better.  I do know that in one case I dealt with, one of the failure modes damaged both the active and backup copies of configuration data in an EEPROM, so it may not make sense to assume that failures are restricted in area.
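The 2-bit example above can be made concrete.  This is a small C sketch of my own (the bit numbering and sample bytes are my choices, not from the thread): it computes both checks over a single byte, and the usage note below exhibits a pattern that fools each one while the other catches it.

```c
#include <stdint.h>

/* Check 1: bit 0 = parity of the even-numbered bits of the byte,
   bit 1 = parity of the odd-numbered bits. */
static uint8_t parity_check(uint8_t data)
{
    uint8_t even = 0, odd = 0;
    for (int i = 0; i < 8; i += 2) {
        even ^= (data >> i) & 1;
        odd  ^= (data >> (i + 1)) & 1;
    }
    return (uint8_t)((odd << 1) | even);
}

/* Check 2: 2-bit checksum (sum mod 4) of the four bit pairs. */
static uint8_t pair_sum_check(uint8_t data)
{
    uint8_t sum = 0;
    for (int i = 0; i < 8; i += 2)
        sum += (data >> i) & 3;
    return (uint8_t)(sum & 3);
}
```

Starting from 0x03, both checks read 3.  Flipping bits 2 and 4 gives 0x17, which still passes the parity check (3) but changes the pair sum to 1; flipping bits 1, 2 and 4 gives 0x15, which keeps the pair sum at 3 but changes the parity check to 1.  Same 1/4 collision odds, different blind spots.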
That is, however, just a single data point.

Robert
-- 
Posted via a free Usenet account from http://www.teranews.com
Reply by Mr. C, January 24, 2007
On 23 Jan 2007 11:20:56 -0800, "Arlet" <usenet+5@c-scape.nl> wrote:

>*Any* 16-bit checksum will catch 99.998% (65535/65536) of random,
>uncorrelated errors, so you might as well pick one that's easy to
>calculate, such as simply summing/XOR'ing 16-bit words.  This has the
>added advantage that you can quickly update your checksum when only
>changing one variable.  Simple summation will also catch many of the
>burst errors (it's easy to see that all bursts within a single 16-bit
>memory word are caught).
OK, I think I see your point now.  For random errors that may occur "far apart" in memory, making the burst length very long, even a checksum will be as good as a CRC-16.

OK, but ... suppose I have a 16K memory and I swap data in the first and last locations (and let's say they are not equal, since that would be boring).  Now I have a very long burst length of errors, since the first and last locations are wrong.  Using a simple checksum will not catch the error, since all bytes are summed - there is no consequence as to their location.
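The swap scenario is easy to demonstrate.  A C sketch (my own, not from the thread; the four-byte buffer is just a stand-in for the 16K case, and the CRC parameters - poly 0x1021, init 0xFFFF - are the common CCITT convention): swapping two unequal bytes leaves the additive sum unchanged but changes the CRC, because the CRC folds each byte's position into the remainder.

```c
#include <stddef.h>
#include <stdint.h>

/* Plain additive checksum: byte position doesn't matter. */
static uint16_t sum16(const uint8_t *buf, size_t len)
{
    uint16_t sum = 0;
    while (len--)
        sum += *buf++;
    return sum;
}

/* Bitwise CRC-16/CCITT, polynomial 0x1021, initial value 0xFFFF. */
static uint16_t crc16_ccitt(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*buf++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```

For {1, 2, 3, 4} versus {4, 2, 3, 1}, sum16 returns the same value for both buffers while crc16_ccitt does not.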
>Depends on the size of the areas, and the chance that there's any
>correlation. For example, if you have a 32KB memory chip, divided into
>2x 16KB memory areas, and the MSB address line to your chip happens to
>be broken, the two areas will always look the same.
I guess my brain was in the mode of serial and internal memories where address and data lines are not an issue.
Reply by Francois Grieu, January 24, 2007
CBFalconer <cbfalconer@yahoo.com> wrote:

> Francois Grieu wrote:
>> Are you sure that the DAA instruction can help computing the 16-bit
>> CRC CCITT?  And if yes, how the hell?  The more I think about it,
>> the more I am puzzled, and becoming a tad incredulous.
>
> Yes.  I forget the details, but it involved the auxiliary carry
> flag on the 8080 and z80.
Could the idea be: the auxiliary carry bit is a fast way to keep a temporary copy of bit 3 (compare with 8, and the complement of bit 3 goes into H).  Later, to test H, and since there is no JR H,nn instruction, use DAA to bring H into C or Z, and branch according to that.  So DAA would not be used for its effect on an 8-bit data chunk, but "only" as a way to access the H bit.

Francois Grieu
Reply by CBFalconer, January 23, 2007
Francois Grieu wrote:
> CBFalconer <cbfalconer@yahoo.com> wrote:
>> David R Brooks wrote:
>>
>>> having disassembled that code, I can't find a DAA instruction in it?
>>> Maybe the disassembler is acting up, but it doesn't seem so.
>>> That disassembled code is at
>>>
>>> http://members.iinet.com.au/~daveb/buffer/ccitcrc.zip
>>
>> I can't either.  It may be a very early version, before I found the
>> high speed code.
>
> Are you sure that the DAA instruction can help computing the 16-bit
> CRC CCITT?  And if yes, how the hell?  The more I think about it,
> the more I am puzzled, and becoming a tad incredulous.
Yes.  I forget the details, but it involved the auxiliary carry flag on the 8080 and z80.

-- 
 <http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
 "A man who is right every time is not likely to do very much."
                       -- Francis Crick, co-discoverer of DNA
 "There is nothing more amazing than stupidity in action."
                       -- Thomas Matthews
Reply by Arlet, January 23, 2007
Mr. C wrote:
> >In the case of memories, I'd expect burst errors that invert groups of
> >bits to be rare. Instead, I would expect blocks of memory to be zeroed,
> >overwritten with the wrong data/garbage, or randomized by power loss.
> >CRC codes aren't especially suited for any of these cases.
>
> Then what would you recommend?  As David Empson mentions in this
> thread, the CRC-16 will catch 99.998% of situations where there are
> scattered errors (i.e. long bursts), worst case.  I would consider
> that to be "good enough" for me.
*Any* 16-bit checksum will catch 99.998% (65535/65536) of random, uncorrelated errors, so you might as well pick one that's easy to calculate, such as simply summing/XOR'ing 16-bit words.  This has the added advantage that you can quickly update your checksum when only changing one variable.  Simple summation will also catch many of the burst errors (it's easy to see that all bursts within a single 16-bit memory word are caught).
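The "quick update" property is worth spelling out: because the checksum is a plain sum mod 2^16, replacing one word only requires the old and new values, not a rescan of the whole block.  A minimal C sketch (my own, with illustrative names):

```c
#include <stddef.h>
#include <stdint.h>

/* Full 16-bit additive checksum over a block of words. */
static uint16_t sum16_words(const uint16_t *w, size_t n)
{
    uint16_t sum = 0;
    while (n--)
        sum += *w++;
    return sum;
}

/* Incremental update after one word changes: subtract the old value,
   add the new one.  All arithmetic is mod 2^16, so overflow is benign
   and the result matches a full recomputation. */
static uint16_t sum16_update(uint16_t sum, uint16_t old_word, uint16_t new_word)
{
    return (uint16_t)(sum - old_word + new_word);
}
```

A CRC has no equally cheap shortcut for a single changed variable: updating it in place requires multiplying the delta by a power of x (depending on the word's offset) modulo the generator polynomial.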
> Consider duplicate storage of data, say in 2 places.  Upon power-up
> the two areas could be compared for equality.  If they are not exactly
> the same, there is an error somewhere.  So if there were errors, the
> only way the errors could not be detected is if they appeared
> identically in BOTH memory areas.  I wonder what the probability of
> that would be?  Any thoughts on that?
Depends on the size of the areas, and the chance that there's any correlation. For example, if you have a 32KB memory chip, divided into 2x 16KB memory areas, and the MSB address line to your chip happens to be broken, the two areas will always look the same.
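The stuck-address-line case is easy to model.  In this C sketch (mine, not from the thread; the fault model assumes address bit 14 stuck low on a 32 KB chip), every access to the upper 16 KB half mirrors the lower half, so a duplicate-copy compare passes regardless of what the cells actually hold:

```c
#include <stdint.h>

/* Model of a 32 KB chip whose MSB address line (bit 14) is stuck low:
   every read lands in the lower 16 KB, so the upper half appears as
   an exact mirror of the lower half. */
static uint8_t read_broken(const uint8_t *chip, uint16_t addr)
{
    return chip[addr & 0x3FFF];   /* bit 14 forced to 0 by the fault */
}
```

Through read_broken, addr and addr + 0x4000 always compare equal, even when the underlying cells chip[addr] and chip[addr + 0x4000] differ.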
Reply by Francois Grieu, January 23, 2007
CBFalconer <cbfalconer@yahoo.com> wrote:

> David R Brooks wrote:
>> having disassembled that code, I can't find a DAA instruction in it?
>> Maybe the disassembler is acting up, but it doesn't seem so.
>> That disassembled code is at
>>
>> http://members.iinet.com.au/~daveb/buffer/ccitcrc.zip
>
> I can't either.  It may be a very early version, before I found the
> high speed code.
Are you sure that the DAA instruction can help computing the 16-bit CRC CCITT?  And if yes, how the hell?  The more I think about it, the more I am puzzled, and becoming a tad incredulous.

Francois Grieu
Reply by Mr. C, January 23, 2007
>In the case of memories, I'd expect burst errors that invert groups of
>bits to be rare. Instead, I would expect blocks of memory to be zeroed,
>overwritten with the wrong data/garbage, or randomized by power loss.
>CRC codes aren't especially suited for any of these cases.
Then what would you recommend?  As David Empson mentions in this thread, the CRC-16 will catch 99.998% of situations where there are scattered errors (i.e. long bursts), worst case.  I would consider that to be "good enough" for me.

Consider duplicate storage of data, say in 2 places.  Upon power-up the two areas could be compared for equality.  If they are not exactly the same, there is an error somewhere.  So if there were errors, the only way the errors could not be detected is if they appeared identically in BOTH memory areas.  I wonder what the probability of that would be?  Any thoughts on that?