EmbeddedRelated.com
Forums

books for embedded software development

Started by Alessandro Basili December 12, 2011
Hi Rob,

On 12/16/2011 5:55 PM, Rob Gaddi wrote:
> On Fri, 16 Dec 2011 15:45:58 -0700, Don Y wrote:
>
>> Where signedness is a real issue is when you try to use chars as "really
>> small ints" -- short shorts! There, it is best to define a type to
>> make this usage more visible (e.g., "small_counter") and, in that
>> typedef, you can make the signedness explicit.
>
> Isn't this why God and ISO have given us stdint.h? So that you can
> distinguish between an int8_t, a uint8_t, and a char?
I don't like exposing basic types for "special needs". If, for example, I have "user identifiers", I'd rather have a uid_t that I can map onto <whatever> rather than deciding uid's "will" fit in a uint8 -- only to discover, later, that I *really* need them to be unsigned shorts (then I have to look at each thing that could be a uid and change its declaration to be "unsigned short").

If, instead, I treat them as "uid_t"s, I just change a typedef. (I wish C was more strongly typed.)
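A minimal sketch of that idea (the name app_uid_t is hypothetical, chosen to avoid clashing with the POSIX uid_t; the point is that widening the type later means editing one line):

```c
#include <stdint.h>

/* Hypothetical abstract type. If uids later outgrow 8 bits, change   */
/* this ONE typedef to uint16_t and every declaration follows along.  */
typedef uint8_t app_uid_t;

app_uid_t next_uid(app_uid_t current)
{
    return (app_uid_t)(current + 1u);  /* cast documents the narrowing */
}
```

Note the cast back to app_uid_t: arithmetic still happens in the promoted type, so the narrowing is made explicit at the point it occurs.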
On Fri, 16 Dec 2011 21:24:08 -0600, Les Cargill
<lcargill99@comcast.com> wrote:

>Hans-Bernhard Bröker wrote:
>> On 16.12.2011 22:57, Don Y wrote:
>>
>>> Note that some implementations fail to clear that memory on startup!
>>
>> Such implementations would have to be classified as spectacularly
>> broken. You would need a pretty strong excuse for using any such
>> toolchain despite such failures.
>>
>>> I.e., "0x27" is just as likely to be an uninitialized value.
>>
>> No. It's possible, but nowhere near as likely.
>>
>> You can make some implementations not initialize the .bss region (or
>> meddle with the startup to that end), but the ones where that happens by
>> accident are certainly _far_ outnumbered by the standard-conforming ones.
>
>There is no such standard. It's a nicety of desktop dev. tools.
It's a requirement of all C implementations, hosted or freestanding. From the C99 standard:

"5.1.2 Execution environments

(1) Two execution environments are defined: freestanding and hosted. In both cases, program startup occurs when a designated C function is called by the execution environment. All objects with static storage duration shall be initialized (set to their initial values) before program startup. The manner and timing of such initialization are otherwise unspecified. (...)"

"6.7.8 Initialization (...)

(10) If an object that has automatic storage duration is not initialized explicitly, its value is indeterminate. If an object that has static storage duration is not initialized explicitly, then:

- if it has pointer type, it is initialized to a null pointer;
- if it has arithmetic type, it is initialized to (positive or unsigned) zero;
- if it is an aggregate, every member is initialized (recursively) according to these rules;
- if it is a union, the first named member is initialized (recursively) according to these rules. (...)"

Similar (actually slightly clearer) language exists in the C89 standard, but this was easier to quote.

There is no wiggle room to allow a conforming implementation to avoid initializing what is commonly called BSS storage (aka static objects without explicit initializers). On many, probably most, implementations a simple clear of the area produces the required result.

That there have been buggy implementations, or that some implementations allow you to modify the CRT startup code to avoid clearing BSS storage, is not in question, but in those cases you no longer have a "real" C environment.
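A tiny demonstration of the 6.7.8p10 guarantee: on a conforming implementation, all of the following hold before any user code runs, with no explicit initializers anywhere.

```c
#include <stddef.h>

/* Static storage duration: C99 6.7.8p10 guarantees these start at   */
/* zero / NULL even with no explicit initializer -- the "BSS must    */
/* be cleared" requirement.                                          */
static int  counter;       /* arithmetic type -> 0       */
static int *head;          /* pointer type    -> NULL    */
static int  table[4];      /* aggregate       -> all 0   */

int bss_is_clear(void)
{
    if (counter != 0 || head != NULL)
        return 0;
    for (int i = 0; i < 4; i++)
        if (table[i] != 0)
            return 0;
    return 1;
}
```

On a toolchain whose startup code skips the BSS clear, bss_is_clear() may report garbage present -- which is exactly the non-conformance being discussed.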
On Fri, 16 Dec 2011 20:55:45 -0600, Les Cargill
<lcargill99@comcast.com> wrote:

>Simon Clubley wrote:
>> On 2011-12-16, Don Y<not.to.be@seen.com> wrote:
>>>
>>> Using unsigned's for counts (can you have a negative number
>>> of items?). Using relative measurements instead of absolutes
>>> (e.g., "worksurface is 23 inches from reference; position of
>>> actuator is 3.2 inches from edge of worksurface" contrast with
>>> "worksurface is at 23.0, actuator is at 22.5 -- oops!")
>>>
>>
>> I strongly agree about the unsigned int issue. _Every_ integer I
>> declare in C is unsigned unless I actually need a signed integer.
>>
>> I find that the number of unsigned integers in my code is _vastly_
>> greater overall than the number of signed integers.
>>
>> Personally, I think C should have made unsigned integers the default.
>>
>
>There is usually an option/pragma to make unsigned the default.
Which unfortunately often breaks many standard libraries and whatnot.
On 12/17/2011 08:01 AM, Don Y wrote:

>> Isn't this why God and ISO have given us stdint.h? So that you can
>> distinguish between an int8_t, a uint8_t, and a char?
>
> I don't like exposing basic types for "special needs".
> If, for example, I have "user identifiers", I'd rather
> have a uid_t that I can map onto <whatever> rather than
> deciding uid's "will" fit in a uint8 -- only to discover,
> later, that I *really* need them to be unsigned shorts
> (then I have to look at each thing that could be a uid
> and change its declaration to be "unsigned short").
>
> If, instead, I treat them as "uid_t"s, I just change a
> typedef.
It still leaves a problem when you manipulate values. What is the resulting type of '2 * x' when x is a uid_t? And what formatting do you use to printf() a uid_t? Or how do you know that 'distance = speed * time' makes sense, given 3 abstract user-defined types?
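For the printf() question specifically, one common answer (sketched here with a hypothetical app_uid_t; the PRI_ macro name is invented) is to keep a format macro next to the typedef, in the style of <inttypes.h>, so the two can never drift apart -- or to cast to the widest type and not care:

```c
#include <inttypes.h>
#include <stdio.h>

/* Hypothetical uid type; its format macro lives beside the typedef   */
/* so changing one forces you to look at the other.                   */
typedef uint16_t app_uid_t;
#define PRI_APP_UID PRIu16

int format_uid(char *buf, size_t n, app_uid_t id)
{
    return snprintf(buf, n, "uid=%" PRI_APP_UID, id);
}

/* Alternatively, width-agnostic at a small cost:                     */
/*   printf("uid=%ju\n", (uintmax_t)id);                              */
```

This doesn't solve the '2 * x' typing problem, but it does make the printf() side mechanical.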
Hi Alessandro,

> Understood. Well, what is not actually clear to me was the choice of the
> watchdog in the first place. Our systems all have what we call a
> "loader", which is a very primitive, i.e. simple, i.e. hopefully
> reliable, program that automatically boots on reset. Once this loader is
> running, the system waits for an external telecommand to load the "main"
> program. On top of it, all the units have a hardware-decoded reset
> telecommand, i.e. we can remotely reset the unit anytime it is needed.
With 100% certainty? Can the instrument "do anything" (dangerous, costly, etc.) if it is "insane" for an indeterminate length of time? (run down the battery, cause a course change, crash into the Sun, etc.) If so, the watchdog offers the potential for reducing this "period of vulnerability".
> In this scheme I actually fail to understand the reason for a watchdog.
> Assume the system doesn't have a watchdog and it hangs. At this point we
> have to send an external command to reset the system and then (after the
> small time required to load the loader) load the main application. The
> watchdog only saves an additional external reset command; it does not
> restore full functionality, but just partial.
Understood. But how quickly after "losing its mind" can you detect that fact? I.e., do you (staff) make a conscious decision that it needs to be reset? Possibly after trying to command it to do certain things and then wondering why it hasn't complied? Or, is that done mechanically/automatically (*quickly*)?
>> One way to avoid that is to have the main loop set a permissive flag and
>> the timer interrupt test for and reset the flag and then kick the dog.
>>
>> Of course, if the bit of code in the main loop that sets the flag is
>> included in the stuck endless loop, one still ends up with a broken
>> system. A defense against that is to use multiple flags or, perhaps, a
>> multi-valued flag: set to 1 at the top of the main loop; somewhere
>> inside a must-run portion, if flag is 1 then flag is 2; possibly
>> additional if/then levels; and finally have a periodic interrupt test
>> for the terminal value. Kick the dog only if all intermediate steps have
>> occurred.
>
> I actually like the idea of having it the other way around:
>
> - in the ISR set the flag that the dog needs to be kicked.
> - in the main loop check the flag and kick the dog accordingly.
Why would the main loop need to check the flag? Just stroke the watchdog in the idle() loop.

Rich's point about setting the flag "in the background" is to ensure that the ISR doesn't unilaterally stroke the watchdog. The flag ensures that the "background" is running (i.e., if something goes wacky, it will hopefully prevent the flag from being set in a timely manner).

The complexity of your watchdog has to take into consideration how the application is *likely* to fail and the consequences of that failure. E.g., in "hostile" environments (i.e., where there are deliberate attempts to subvert your device) it can make sense to add mechanisms that detect those attempts and deliberately shut down the system (assuming this is the "safe" state).
> This should reduce the complexity of the ISR, which is only asserting a > flag, and keeps the watchdog functionality in place.
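Rich's multi-valued flag scheme can be sketched roughly as below. The hardware kick is stubbed with a counter here; on real silicon it would be a write to the watchdog reload register (that register and any magic value are device-specific assumptions).

```c
#include <stdint.h>

/* Stand-in for the hardware kick (e.g. a reload-register write). */
static unsigned kicks;
static void hw_kick_watchdog(void) { kicks++; }

/* Multi-valued permissive flag: the main loop must pass through its  */
/* checkpoints *in order* before the periodic interrupt may kick.     */
static volatile uint8_t stage;

void main_loop_top(void)      { stage = 1; }
void main_loop_midpoint(void) { if (stage == 1) stage = 2; }
void main_loop_bottom(void)   { if (stage == 2) stage = 3; }

/* Called from the periodic timer ISR. */
void timer_isr(void)
{
    if (stage == 3) {    /* every checkpoint reached since last kick */
        hw_kick_watchdog();
        stage = 0;       /* force the main loop to earn the next one */
    }
}
```

If the main loop gets stuck in some sub-loop, stage never reaches the terminal value, the ISR stops kicking, and the hardware watchdog times out -- which is the whole point: the ISR alone cannot keep a dead background alive.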
Hi Arlet,

On 12/17/2011 12:40 AM, Arlet Ottens wrote:
> On 12/17/2011 08:01 AM, Don Y wrote:
>
>>> Isn't this why God and ISO have given us stdint.h? So that you can
>>> distinguish between an int8_t, a uint8_t, and a char?
>>
>> I don't like exposing basic types for "special needs".
>> If, for example, I have "user identifiers", I'd rather
>> have a uid_t that I can map onto <whatever> rather than
>> deciding uid's "will" fit in a uint8 -- only to discover,
>> later, that I *really* need them to be unsigned shorts
>> (then I have to look at each thing that could be a uid
>> and change its declaration to be "unsigned short").
>>
>> If, instead, I treat them as "uid_t"s, I just change a
>> typedef.
>
> It still leaves a problem when you manipulate values.
Yes!
> What is the resulting type of '2 * x' when x is a uid_t ?
What is the result of 2 * x when x is a __________? I.e., if X is a uint8 but has the current value of 225?

With any type, you have to look at how the values are used in order to ensure the type of the result is appropriate to represent the value.

E.g., my arbitrary precision decimal math library configures itself to exploit the largest integer types available in the target environment. It does this by looking at the sizes of the "standard types" and determining what range of decades can fit into those types. But it also ensures there is a *larger* type that can be used to represent things like MAX+MAX+Carry (as the carry-out logic relies on this).
> And what formatting do you
> use to printf() a uid_t?
>
> Or how do you know that 'distance = speed * time' makes sense, given 3
> abstract user-defined types?
Because you know what those types resolve to, and reflect the uses to which you apply those types back to the type definition process.

E.g., I have a driver for a touchpad that implements fractional fixed-point math (10b integer, 6b fraction). I have to be keenly aware of this representation whenever I apply arithmetic operations between these data types (C++ would have offered nicer syntax, as I could redefine the arithmetic operators instead of having to bend the code accordingly).
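A sketch of the kind of 10.6 fixed-point arithmetic being described (the type name and storage choice are illustrative; the post doesn't give them). Note how the multiply has to "bend" around the representation -- widen first, then discard the extra fraction bits:

```c
#include <stdint.h>

/* Hypothetical 10.6 fixed-point type: 10 integer bits, 6 fraction    */
/* bits, held in a uint16_t.  1.0 is represented as 64.               */
typedef uint16_t fix10_6;
#define FIX_FRAC_BITS 6

static fix10_6 fix_from_int(unsigned i)
{
    return (fix10_6)(i << FIX_FRAC_BITS);
}

/* Multiply: the product of two 6-bit fractions carries 12 fraction   */
/* bits, so widen to 32 bits and shift 6 of them back out.            */
static fix10_6 fix_mul(fix10_6 a, fix10_6 b)
{
    return (fix10_6)(((uint32_t)a * b) >> FIX_FRAC_BITS);
}
```

The C operators never know any of this; every arithmetic site has to be written with the representation in mind, which is exactly the burden operator overloading would have lifted.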
On 12/17/2011 09:11 AM, Don Y wrote:

>>> I don't like exposing basic types for "special needs".
>>> If, for example, I have "user identifiers", I'd rather
>>> have a uid_t that I can map onto <whatever> rather than
>>> deciding uid's "will" fit in a uint8 -- only to discover,
>>> later, that I *really* need them to be unsigned shorts
>>> (then I have to look at each thing that could be a uid
>>> and change its declaration to be "unsigned short").
>>>
>>> If, instead, I treat them as "uid_t"s, I just change a
>>> typedef.
>>
>> It still leaves a problem when you manipulate values.
>
> Yes!
>
>> What is the resulting type of '2 * x' when x is a uid_t ?
>
> What is the result of 2 * x when x is a __________?
> I.e., if X is a uint8 but has the current value of
> 225?
>
> With any type, you have to look at how the values are used
> in order to ensure the type of the result is appropriate
> to represent the value.
Which means that you have to be aware of the underlying type when you look at the code, which kind of defeats the purpose of having user-defined types in the first place.

When x=225, and represented as an 'int', I know that 2*x = 450. If it's defined as 'uint8_t', I know it will overflow. When it's a 'uid_t', I have no idea.

It also means you can't just change the 'typedef' without carefully checking the code to see that nothing breaks.
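One wrinkle worth making explicit in the uint8_t case: the usual arithmetic conversions promote x to int, so the multiplication itself computes 450 without wrapping; the wrap to 450 mod 256 == 194 only happens when the result is stored back into the 8-bit type.

```c
#include <stdint.h>

int double_wide(uint8_t x)
{
    return 2 * x;             /* x promoted to int: full-range result */
}

uint8_t double_narrow(uint8_t x)
{
    return (uint8_t)(2 * x);  /* same computation, truncated on store */
}
```

So "it will overflow" really means "it will be truncated on assignment" -- the distinction matters when the intermediate result is used directly.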
Don Y wrote:

[%X--- Some stuff about Watchdogs --- ]

> Rich's point about setting the flag "in the background" is to
> ensure that the ISR doesn't unilaterally stroke the watchdog.
> The flag ensures that the "background" is running. (i.e.,
> if something goes wacky, it will hopefully prevent the
> flag from being set in a timely manner.
>
> The complexity of your watchdog has to take into consideration
> how the application is *likely* to fail and the consequences
> of that failure. E.g., in "hostile" environments (i.e.,
> where there are deliberate attempts to subvert your device)
> it can make sense to add mechanisms that detect those
> attempts and deliberately shut down the system (assuming
> this is the "safe" state)
[%X]

Phil Koopman's and Jack Ganssle's books both have sections on proper considerations of watchdogs. Preference is for the kick to be in a part of the code that will only be reached if everything is running properly. Never, ever in a Timer ISR.

I like to hang a Pulse Maintained Relay onto one digital output. This output gets to change state only when all the sanity checks are completed in the back end of the idle task. If there are detectable errors (CRC failures or Memory Test Failures), interrupt hogs, or hanging loops, the relay will de-energise. If there is a single component fault within the Pulse Maintained Relay circuit, the relay is de-energised. One of the relay contacts can be used to disable output power; another could kick off the reset function.

--
********************************************************************
Paul E. Bennett...............<email://Paul_E.Bennett@topmail.co.uk>
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
On 12/16/2011 09:02 AM, Alessandro Basili wrote:
> On 12/14/2011 11:30 PM, Steve B wrote:
>>
>> Interesting. This is off the topic of the thread, but I think a star
>> tracker will be quite difficult to get tuned and working after the fact.
>> Not impossible, but having the optical and mechanical calibration and
>> integration done right would be essential.
>> So I bet it would make for a very interesting and challenging task.
>>
>
> Well, until the software is ready to reliably take images, it would
> be hard to do anything you mentioned. I don't quite understand what
> you mean by mechanical calibration.
>
I was just thinking of the focal distance from the lens (or mirrors?) to the CCD, the point where the optical axis meets the CCD, and radial distortion. Also, if the focus isn't set well, you won't get good star images. I think it's pretty hard to automatically analyze star fields without knowing the above things, but since you are downloading the data and analyzing it offline, you might do OK. Good luck to you! Steve
Hi Arlet,

On 12/17/2011 1:43 AM, Arlet Ottens wrote:

>>>> If, instead, I treat them as "uid_t"s, I just change a
>>>> typedef.
>>>
>>> It still leaves a problem when you manipulate values.
>>
>> Yes!
>>
>>> What is the resulting type of '2 * x' when x is a uid_t ?
>>
>> What is the result of 2 * x when x is a __________?
>> I.e., if X is a uint8 but has the current value of
>> 225?
>>
>> With any type, you have to look at how the values are used
>> in order to ensure the type of the result is appropriate
>> to represent the value.
>
> Which means that you have to be aware of the underlying type when you
> look at the code, which kind of defeats the purpose of having
> user-defined types in the first place.
You always have to be aware of what a type's capabilities are. Even standard types. You can't perform arbitrary operations on arbitrary types and hope the results are representable in that type. Why have enums? Why not just use special constants? Why have ints instead of just using floats everywhere?
> When x=225, and represented as an 'int', I know that 2*x = 450. If it's > defined as 'uint8_t', I know it will overflow. When it's a 'uid_t', I > have no idea.
But that's only because you know what a uint8_t is. If you knew what a uid_t was, you'd have the same confidence in your answer as with that uint8.

Do you think there are no bugs related to overflow with longs? :> Do you *consciously* think about the results of every computation that you make to ensure the result, in ALL possible cases, fits? How often do you *reduce* types to get a "tighter fit" to the data? I.e., chances are, you use types that are "MORE than adequate" for the data you are representing.
> It also means you can't just change the 'typedef' without carefully > checking the code to see if nothing breaks.
You have to check your code when you replace a long with a short. The same applies.

C++ makes these sorts of things a lot nicer, syntactically. But that can make the cost of implementing the types *safely* too high. The same issue has been around forever. E.g., I've implemented 12b data (packed two per three bytes), 24b math libraries, 2048b math libraries (!) -- even a bit-wide memory subsystem, etc.
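For the curious, the "two 12-bit values per three bytes" packing could look like the sketch below. The byte layout is an assumption -- the post doesn't specify one -- but any such scheme has the same flavor: the representation leaks into every access.

```c
#include <stdint.h>

/* Assumed layout: out[0] = a[11:4],                                  */
/*                 out[1] = a[3:0]<<4 | b[11:8],                      */
/*                 out[2] = b[7:0].                                   */
/* Callers must ensure a, b < 4096.                                   */
void pack12(uint8_t out[3], uint16_t a, uint16_t b)
{
    out[0] = (uint8_t)(a >> 4);
    out[1] = (uint8_t)(((a & 0x0Fu) << 4) | ((b >> 8) & 0x0Fu));
    out[2] = (uint8_t)(b & 0xFFu);
}

void unpack12(const uint8_t in[3], uint16_t *a, uint16_t *b)
{
    *a = (uint16_t)((in[0] << 4) | (in[1] >> 4));
    *b = (uint16_t)(((in[1] & 0x0Fu) << 8) | in[2]);
}
```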