EmbeddedRelated.com

Software Reuse In Embedded code

Started by steve June 15, 2011
Hi Steve,

On 6/15/2011 7:44 PM, steve wrote:
> On Jun 15, 5:41 pm, Don Y<nowh...@here.com> wrote:
>> Hi Roberto (and Paul),
>>
>>>>> What percentage would you say your current project consists of
>>>>> software reuseable items?
>>
>>>>> By software reusable items I mean something that you acquired
>>>>> (purchased, freeware, shareware, licensed etc) which you had no
>>>>> responsibility in the development, construction, or testing. You
>>>>> reused them because the documentation was sufficient to convince you
>>>>> of its quality.
>>
>>>> Many of us would have problems recognising that as a definition of re-use.
>>>> Your definition is more like COTS* usage to me.
>>
>>> Agree.
>>
>> While I understand (both) your point, I can see the OP's intent,
>> here. How do you differentiate a third party COTS product from
>> a product that the guy who sat in your cubicle *last* year wrote!?
>>
>> A necessary prerequisite for reuse is a complete "understanding"
>> (documentation) of what that code does and is intended to do and
>> what it *won't* do. And, in which environments those claims
>> apply.
>
> But why? We don't require this strict "complete understanding" for HW
> components. Some HW datasheets are wrong, components don't work as
> stated, they are returned for failure analysis etc etc. But we deal
> with it. We build systems that work because we test at a high level.
> We still build reliable systems with all these headaches.
And there is a *cost* associated with each "deviation" we encounter between the *expected* (and/or *documented*) behavior of those "chips" and their *actual* behavior. I suspect anyone who's had a development schedule ruined because of a vendor's screwup is *really* hesitant to climb back into bed with that same vendor!
> For SW, the attitude (from the responses here and my personal
> experience) is it has to be guaranteed to work 100% or otherwise I'll
> just code it myself. And this attitude exists for 3rd party SW as well
> as for 1 year old software done in house by the guy who sat in the
> cube next to you, like you mentioned.
There's a big difference between hardware and software.

First, most folks *can't* grow their own silicon. The bar is too high to acquire that set of skills/tools/personnel. In terms of *software*, it's HARDER to get a license to sell real estate (which is one of those "professions" that is "open to everyone, regardless of skills"... kinda like "used car salesman") than it is to call yourself a "programmer". I.e., for as little as $20 you can get a 2 year old PC, a friend's old copy of Windows N-2, a "free" compiler and a "Learn C in 24 Hours" book. Now, you're a "programmer". Be honest, how many of your colleagues are formally qualified to be writing code? How many just "picked it up" without any real "education"? (At school we used to laugh at all the physics majors who ended up writing code for a living... never having taken any of the associated courseware -- but, needing a *paycheck* from *something*...)

[Granted, you still have to get someone to *hire* you... or, maybe just WRITE SHAREWARE and hope someone incorporates it into their product and sends you a royalty check?? :> ]

Second, there are lots of companies providing reliable hardware. If company A screws you, you limp through the project and then abandon company A on your next design! 30-40 years ago, it was a "necessary prerequisite" to announce multiple sources when you brought a new processor to the market. People *didn't* want to be at the mercy of a sole source supplier (pricing, availability, foundry problems, etc.). How many vendors of "reliable, ALL-PURPOSE software components" can you name?

Third, the flip side of the "low entry cost" argument applies. While it's easy for John Q Public to call himself a programmer, it's also a lot easier for Joe Professional to *write* a piece of reliable code, "in house". You're not at the mercy of some vendor to provide you with a solution (or, an *alternative*, but WORKING, solution) for your "software componentry".
Fourth, it's a LOT harder to test software than hardware. The fact that so many software bugs get *through* testing attests to this. Hardware can be "put through its paces" in a lot more controlled way (e.g., by the vendor). There's a lot less "state" that affects the performance of the hardware.

Fifth, it's a lot easier to come up with (an accepted) "general purpose" hardware solution/(sub)system than a similarly "general purpose" software solution. E.g., I was burning a DVD-ROM in Nero earlier today with files named:

  01. foo
  02. bar
  03. baz
  ...
  100. fini

In the right "file explorer" pane, these files were listed in *numerical* order. In the *left* "DVD content" pane, they were listed using a L-R alpha sort -- so, 100 appeared after 10 instead of after 99. IN THE SAME APPLICATION. Sort() is sort() is sort(), right? Obviously, there were two sorts in use and neither agreed with the other. Which is the *right* "sort" for you? Does your 3rd party library have a different notion?

I've seen file sizes reported in a Windows Explorer window and an IE window that used different notions of how to round. "Hmmm... is this 17KB file really the same as this *other* 18KB file that has the same name and timestamp?" Will the GUI you use for the first part of your application use the same conventions as the GUI you use in some *other* part? Will you ever *notice* the discrepancies?

Sixth, we (psychologically) more readily accept/embrace the *requirements* that a particular hardware solution imposes. "It needs a 16b data bus". "It only works with NAND flash." etc.
But, for software solutions, we think we can magically *kludge* something that will "adapt" what we have to what we *need* instead of *fixing* (rewriting) it: "we'll take the existing memory/buffer management software designed for a 64KB memory space and *extend* it to handle our 16MB memory space by creating lots of 64K *pools* (each managed with the old management software) and we'll just add some glue logic on top to keep track of which *pool* the buffer came from!"

Would you use a *real* printf (OR ANY LIBRARY THAT RELIED ON THE PRESENCE OF SAME) in a small PIC deployment? You'd probably be miffed to discover the printf was being used *only* in:

  for (finger = 0; finger < 10; finger++) {
      printf("This is finger #%d\n", finger);
  }

Seventh, *who* assumes responsibility for testing (and fixing!) the third party software? How keen are *you* to do that for somebody else's code? Will there be any unspoken pressure on *where* the fix gets made -- i.e., in the third party code or in an "adapter" that you develop in *your* code?

When it comes to hardware, the folks involved in testing it are usually clearly identified. And, the range of solutions they have at their disposal is easily quantified: can the board be patched or redesigned? Or, is there something fundamentally flawed in the implementation that needs a complete rethink? (Wanna bet that this results in major code rewrites when the problem is a software one... instead of abandoning the "component")
> There is a process to deal with defective COTS SW like defective HW.
>
>>>> If you really want to talk about re-use then perhaps you can look to a
>>>> better definition first and then ask the question again.
>>
>>>> By my notion of re-use I would put the percentage in my projects at
>>>> somewhere between 30% and 60% depending on the project being undertaken.
>>
>>>> *Commercial Off-the-shelf
>>
>>> Same for me - In my current project I would say 70% of the code is
>>> reused. Most of it is in common libraries developed in-house for
>>> other projects in the same product line.
>>
>> I think that is a ^^^^^^^^^^^^^^^^^^^^^^^ key factor. If your
>> projects/products are similar, then you can benefit from reusing
>> code written (designed) to solve a "very similar problem" in
>> a sister product. Because the needs *tend* to be the same and the
>> *environment* tends to be the same.
>
> Yes exactly, that is the typical "reuse" I normally encounter.
>
> Only occurs if very similar projects, same product line or small
> enhancements to a product. I don't call that reuseable code. I mean,
> what else would you do if you have to enhance a product line, throw
> everything out and start over? Any code, no matter how badly it's
> written can be used in situations like that. Call it what it really
> is, a point (custom) solution modified to be another point solution
> for a similar product.
>
> Reuseable code is code that can be used in a completely different
> product as is.
Both are examples of "reusable". It's just that it is typically easier to "fit" code from that existing solution to the "new" problem -- the problems are similar, the platforms (tend to be) similar, the design constraints similar, the personnel similar, etc.

You wouldn't, for example, want to take the memory allocator out of my "network speaker" and use it in a generic application. It's a bad fit. OTOH, a generic memory allocator would give abysmal runtime performance in my application.

IME, you have two fundamental "problems" that poke their head in your way when it comes to reuse:

- the guy (boss?) who thinks the problem can be greatly simplified by reusing existing code (while being clueless as to the actual details involved)

- the guy (gung-ho programmer?) who fails to see *any* similarity with existing solutions (NIH) and believes that only he/she can "save the day".

My approach is to reuse *designs* (which *could* result in lots of copy/paste from existing *codebases*) but fit them to the specific needs of the application. I have no desire to write yet another "sort" from scratch. I've long since forgotten the formal names for each of the various sorting techniques (bubble, shell, insert, etc.). *But*, I will know (remember) that some other project had data organized in a manner similar to "this one". And, I'll go see which sorting algorithm was used, there. And, tweak it to fit the needs of *this* application. I.e., the "engineering" gets reused and I just have to do some "tidying up" to make it work right in this new use.

"Big companies" that can afford to "specialize" particular staff can benefit from this sort of approach by having "experts" in each "application sub-domain". E.g., an OS guy, a math guy, an I/O guy, a UI guy, etc. These people (resources) can accumulate knowledge as to the costs and benefits of the various approaches that they have used over the years (or, had to *maintain* on behalf of the company).
So, they can be called upon to offer advice as to appropriate solutions (and the costs/rewards thereof) to staff making implementation decisions.
On 16/06/2011 03:50, steve wrote:
> On Jun 15, 11:20 am, David Brown<da...@westcontrol.removethisbit.com>
> wrote:
>> On 15/06/2011 16:03, steve wrote:
>>
>>> What percentage would you say your current project consists of
>>> software reuseable items?
>>
>>> By software reusable items I mean something that you acquired
>>> (purchased, freeware, shareware, licensed etc) which you had no
>>> responsibility in the development, construction, or testing. You
>>> reused them because the documentation was sufficient to convince you
>>> of its quality.
>>
>> 0 percent.
>>
>> If you use software from a third party, then /you/ are responsible for
>> testing and otherwise qualifying it for use in /your/ system.
>>
>> Getting the software parts from somewhere you consider reliable will
>> certainly /reduce/ the level of testing you need to do to be happy with
>> its quality, but it does not eliminate it.
>>
>> If you buy your chips from a reliable supplier, and they come with good
>> datasheets qualifying temperature ranges, power requirements, etc., does
>> that mean you don't have to test your boards when you use them? It
>> might mean you don't have to test over a wide temperature or voltage
>> range, but you still have to test the boards.
>>
>> The same thing applies to software components.
>
> But to qualify it you have to test the boards, not the chip. There is
> a difference between system validation and component testing; when you
> buy a chip were you involved in the gate level testing? No. Do you
> perform gate level testing of the chip when you get it? No. You treat
> the chip as a black box, completely unaware of its development cycle,
> layout history, or testing. You make no attempt to repeat the
> supplier's chip level testing and generally don't have any desire to
> know what they did (and is generally intellectual property so you
> couldn't find out if you wanted to).
>
> What you do is you plug it in your system and verify your system meets
> customer specs. Your job is to verify it performs while interacting
> with other components in the system. That is your testing
> responsibility.
>
> Same would be true with reuseable code, you test the system, not the
> inners of the reuseable code. You qualify it the same way without the
> line by line testing. So your argument isn't valid.
I get your point. But in either case, whether it be chips or software components, you do testing according to how reliable the source is, and how you want to use them compared to the vendor's testing.

With chips, you usually just need to do board-level testing - make sure the chips are working together with everything else. But sometimes you do more direct testing - it's not uncommon to include a ram test to check all the bits in a ram chip, for example. And I've used chips that we did more temperature testing on (ASICs from a small company), and for radio communication chips it's often useful to do extra testing if you are stretching them to their limits.

For software "components", you can be fairly sure you are using them in a different way than the vendor's testing - so you need more testing. Hopefully you don't need to go as deep as line-by-line testing, but that happens too.

Ultimately, if /you/ make a board or a program, it's /your/ job to make sure you test it appropriately, to a level that makes sense for the application.
On 16/06/2011 02:57, Don Y wrote:
> Hi David,
>
> On 6/15/2011 5:03 PM, David Brown wrote:
>
>> /All/ versions of Code Composter (can I borrow that name?) have a
>> serious design flaw - uninitialised statically allocated data, which
>> should be initialised to 0 according to the C standards, is left
>> uninitialised. That caused me a lot of pain before I discovered it was
>> the CCS compiler that was flawed, not my code.
>
> Huh? Can't you fix that in crt0.s?
You can add a ram-clear loop to a hook in the startup sequence. And you can also work around it by explicitly initialising data to 0. But my point is that CCS has a clear design flaw which is in direct contradiction to the C standards, and which would be trivial to fix. It is even documented in their manuals that they know this behaviour is non-standard.

The excuse, apparently, is that clearing bss can take so long on some devices (if you have enough data, of course) that you might get a watchdog timeout. And they don't want to disable the watchdog on startup. So instead of having standard behaviour as the default and offering an option to avoid clearing the bss, or doing something useful like adding a "noinitialise" pragma or attribute that you can use on big arrays, they decided to produce a broken non-standard compiler and let users find this "surprise" by trial and error.

To be fair, they don't claim to be compatible or conform to any particular C standard.
Hi David,

On 6/16/2011 12:14 AM, David Brown wrote:
> On 16/06/2011 02:57, Don Y wrote:
>> On 6/15/2011 5:03 PM, David Brown wrote:
>>
>>> /All/ versions of Code Composter (can I borrow that name?) have a
>>> serious design flaw - uninitialised statically allocated data, which
>>> should be initialised to 0 according to the C standards, is left
>>> uninitialised. That caused me a lot of pain before I discovered it was
>>> the CCS compiler that was flawed, not my code.
>>
>> Huh? Can't you fix that in crt0.s?
>
> You can add a ram-clear loop to a hook in the startup sequence. And you
> can also work around it by explicitly initialising data to 0. But my
> point is that CCS has a clear design flaw which is in direct
> contradiction to the C standards, and which would be trivial to fix. It
> is even documented in their manuals that they know this behaviour is
> non-standard.
Oh, so it's not like they just shipped a crt0.s that was "buggy". Their actions are deliberate...
> The excuse, apparently, is that clearing bss can take so long on some
> devices (if you have enough data, of course) that you might get a
> watchdog timeout. And they don't want to disable the watchdog on
So? They can't clear the watchdog *in* the BSS_init() loop?
> startup. So instead of having standard behaviour as the default and
> offering an option to avoid clearing the bss, or doing something useful
> like adding a "noinitialise" pragma or attribute that you can use on big
> arrays, they decided to produce a broken non-standard compiler and let
> users find this "surprise" by trial and error.
Sounds like they are going to a lot of trouble to "rationalize" a bad decision.
> To be fair, they don't claim to be compatible or conform to any
> particular C standard.
Ah, that's even better! (not)
On 06/16/2011 12:14 AM, David Brown wrote:
> On 16/06/2011 02:57, Don Y wrote:
>> Hi David,
>>
>> On 6/15/2011 5:03 PM, David Brown wrote:
>>
>>> /All/ versions of Code Composter (can I borrow that name?) have a
>>> serious design flaw - uninitialised statically allocated data, which
>>> should be initialised to 0 according to the C standards, is left
>>> uninitialised. That caused me a lot of pain before I discovered it was
>>> the CCS compiler that was flawed, not my code.
>>
>> Huh? Can't you fix that in crt0.s?
>
> You can add a ram-clear loop to a hook in the startup sequence. And you
> can also work around it by explicitly initialising data to 0. But my
> point is that CCS has a clear design flaw which is in direct
> contradiction to the C standards, and which would be trivial to fix. It
> is even documented in their manuals that they know this behaviour is
> non-standard.
You misspeak: that is one of many clear design flaws. 'double' is 32 bit, as well, which wreaks havoc if you happen to have your own reusable library code lying around that actually needs 'double' to conform to the ANSI standard.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
Don Y wrote:

[%X]

>>> Many of us would have problems recognising that as a definition of
>>> re-use. Your definition is more like COTS* usage to me.
>>
>> Agree.
>
> While I understand (both) your point, I can see the OP's intent,
> here. How do you differentiate a third party COTS product from
> a product that the guy who sat in your cubicle *last* year wrote!?
For my projects, the difference between a COTS product, supplied with just the basic documentation (descriptive specification, user guide and perhaps a trouble-shooting guide) and the in-house re-usable modules is vast. Our in-house developed modules are available with full source code, original design documentation, test patterns and certification. Additionally, the review notes, change documentation and full version history will be a matter of record.
> A necessary prerequisite for reuse is a complete "understanding"
> (documentation) of what that code does and is intended to do and
> what it *won't* do. And, in which environments those claims
> apply.
A sentiment with which I am in complete agreement.
>>> If you really want to talk about re-use then perhaps you can look to a
>>> better definition first and then ask the question again.
>>>
>>> By my notion of re-use I would put the percentage in my projects at
>>> somewhere between 30% and 60% depending on the project being undertaken.
>>>
>>> *Commercial Off-the-shelf
>>
>> Same for me - In my current project I would say 70% of the code is
>> reused. Most of it is in common libraries developed in-house for
>> other projects in the same product line.
>
> I think that is a ^^^^^^^^^^^^^^^^^^^^^^^ key factor. If your
> projects/products are similar, then you can benefit from reusing
> code written (designed) to solve a "very similar problem" in
> a sister product. Because the needs *tend* to be the same and the
> *environment* tends to be the same.
Not always the case but a new project should review previous work to explore for anything that may be useful. Whatever is turned up then needs to be reviewed in the light of the requirements of the new project. It is much like choosing hardware components and you need as much of a data-sheet for software components as you do for the hardware ones.
> I wouldn't, for example, feel comfortable porting code from
> a data logger that was designed to run on stripped down
> hardware to a "PC platform"...
Certainly not without a suitability review being conducted first. Some modules may be useful (with or without modification). It won't be simply ported, that's for sure.

--
********************************************************************
Paul E. Bennett...............<email://Paul_E.Bennett@topmail.co.uk>
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Hi Paul,

On 6/16/2011 12:51 PM, Paul E. Bennett wrote:

>>>> Many of us would have problems recognising that as a definition of
>>>> re-use. Your definition is more like COTS* usage to me.
>>>
>>> Agree.
>>
>> While I understand (both) your point, I can see the OP's intent,
>> here. How do you differentiate a third party COTS product from
>> a product that the guy who sat in your cubicle *last* year wrote!?
>
> For my projects, the difference between a COTS product, supplied with just
> the basic documentation (descriptive specification, user guide and perhaps a
> trouble-shooting guide) and the in-house re-usable modules is vast.
>
> Our in-house developed modules are available with full source code, original
> design documentation, test patterns and certification. Additionally, the
> review notes, change documentation and full version history will be a matter
> of record.
Exactly. And, for the inevitable "things that fall between the cracks", you can always chase down the folks involved with it (at inception or since) for clarification.

I don't want to paint the OSS (or "for purchase") folks with an overly broad brush but, take a *critical* look at the "product" they produce -- when you can do so *thoroughly*. Yeah, it *might* work. But, how comfortable would you be betting your product (or your *company*!) on it "for the long haul"? Three first-hand examples (of "many"?):

I was an early adopter of Jaluna (v1). It had many of the features I was looking for (for a project at that time). And, a *supposedly* good pedigree (Sun/Chorus). Sure, you could get a system running by following the (detailed) steps provided. You could build a custom kernel. And, some toy apps. But, once you started poking around under the hood, the proverbial fan took a direct hit!

Why does every project think they have to invent their own, completely AS(s)ININE build system? Is there some **really** **really** overwhelming reason why make(1) won't work for you? Especially with gobs of cheap disk space and CPU?? I.e., try to fudge the Jaluna build process and you will shoot yourself in *both* feet -- with a BAZOOKA! And, since it is so odd, you never are 100% confident that it really is rebuilding all the correct dependencies. So, better safe than sorry and "make all" (*I* sure don't want to spend a day chasing down a bug only to discover some module is out-of-date!)

As for the documentation: man pages were missing or contained glaring errors. (OK, the fact that you want your man(1) pages to sit in HTML format is only mildly annoying...) The code itself looked incomplete. E.g.,

  /* XXX FIXME XXX */

And, comments were sparse -- if at all. I spent a day trying to sort out why their fdisk was misbehaving (because nowhere does it tell you everything that is going on in the process).

Similarly, I got on the Inferno bandwagon early on. Actually *paid* for a commercial license before they open sourced it. Again, a great pedigree -- Bell Labs, Ritchie, Pike, etc. And, the basis for some *commercial* products! Delightful system, conceptually (though I think there are areas that need to be rethought -- and, they've already made "about faces" in some regards). Again, the documentation was abysmal. Looking through the code, you'd see things like:

  my_function()
  {
      /* trust me ... */

followed by pages of uncommented code. Or, comments that made sense *solely* to the author but were obviously not intended for the benefit of anyone who followed...

I limped through the design of a system built on this. And, was totally disappointed with the resulting performance (to be fair, I *could* easily have thrown more horsepower at it... but, I shouldn't have *had* to! It would be like using a 3GHz PC *just* so you could write your chess program in Java whereas an old klunker could run the same *algorithms* comfortably in C/Lisp/etc.) As a result, I took the concepts from that solution and re-applied them to a more conventional implementation. Better performance, easier to maintain, etc.

[Nota Bene: *design* reuse, clearly not *code* reuse (since Limbo isn't quite C)]

This brings me to yesterday... :> Re: my "cell phone/tablet camera" thread. Since there are currently applications that can "photograph" a QR barcode on a mobile device, maybe someone has already *publicly* solved this (or part of this) problem? Take a walk down Roberto's basement while he's not home... "Heh heh heh... silly man! Did he think he could *hide* from me the secret of which books to touch to gain entry??!"

Sure as heck, "ZXing" claims to do a good portion of what I want! [Yikes! 64MB download just to photo-decode barcodes???] Unpack the ZIP archive and, as to be expected, *no* documentation. "Um, excuse me, can someone please point me to 'main()'? And, if it's not *too* much trouble, can you give me a brief rundown on the algorithms that are used so I'm not just looking at a bunch of number crunching without any context...?"

This *despite* the fact that it is "included in some Nokia (?) phones". This suggests it works (at least "somewhat"). And, also makes you wonder what that vendor's standards are for software quality! (Maybe they have their own internal version that is better documented?)

So, I have to ask myself, how much time do I want to spend sorting through other people's "problems" and what's the potential *reward* for that investment? Will I sink a lot of time into something only to discover that it was a "hobbyist" effort? "Hmmm... but Nokia (?) did, so maybe it's worth the effort?" Then I remember: Chorus/Sun... Bell Labs/Pike/Ritchie... will I be better served coming up with a solution that fits *my* needs (does ZXing work if the camera is in motion? does it expect the user to preview the images? does it eat gobs of CPU as it is written in Java?) instead of belatedly discovering that it *doesn't*?

[Please note that I am not trying to disparage any of the "products" mentioned, here. I was obviously drawn to them because they *are* attractive. I wish *all* of them success! But, I also have to worry about getting product out the door... *reliable* product!]
>> A necessary prerequisite for reuse is a complete "understanding"
>> (documentation) of what that code does and is intended to do and
>> what it *won't* do. And, in which environments those claims
>> apply.
>
> A sentiment with which I am in complete agreement.
Yet, despite however pedantic and thorough you *hope* to be, it is always *embarrassing* what sorts of details you will overlook. Assumptions that you don't even think about because they are so *basic*.

E.g., I always use simple problems to explain how computers work to "lay folk". I.e., how they simply (rigidly!) follow directions (but very quickly :> ). I often use changing a car tire as an example -- because it is easily identified with and not complex in nature. I start out by describing the steps like:

- remove hub cap/wheel cover
- loosen lug nuts
- jack up car
- remove lug nuts
- remove wheel
- ... (install new wheel)

Everyone will agree with this. Until I ask, "How did you do that from the driver's seat?" (i.e., you never exited the vehicle). So, you prepend "exit the car" to the list (in detail: open door, step out, close door) -- at which time I add, "and get run over by a passing 18-wheeler!" (because you forgot to check to see if it was *clear* to exit the vehicle -- that's called a *bug*! :> )

Someone interested in the discussion will then start thinking about the problem in finer detail. And, eventually, they will realize just how many "little details" are involved in this "simple" activity. Details that you don't even consciously acknowledge when performing the task. And, would *easily* fail to mention to "a visitor from another planet" looking for information on how to do this.

The same is true with almost everything we do. "What have I implicitly *assumed* here? *That* is what's going to eventually bite me!"
>>> Same for me - In my current project I would say 70% of the code is
>>> reused. Most of it is in common libraries developed in-house for
>>> other projects in the same product line.
>>
>> I think that is a ^^^^^^^^^^^^^^^^^^^^^^^ key factor. If your
>> projects/products are similar, then you can benefit from reusing
>> code written (designed) to solve a "very similar problem" in
>> a sister product. Because the needs *tend* to be the same and the
>> *environment* tends to be the same.
>
> Not always the case but a new project should review previous work to explore
> for anything that may be useful. Whatever is turned up then needs to be
> reviewed in the light of the requirements of the new project. It is much
> like choosing hardware components and you need as much of a data-sheet for
> software components as you do for the hardware ones.
Exactly. The firm at which we employed "Standard Product" (software) was simply extending their *hardware* component base (subsystems) into the software realm. "Hey, if we can have a 3 axis motor control drive, we can also have a 3 axis *servo* controller software package! And, the two need not necessarily be tied to each other!"

In small "brain trusts", you can do this sort of review informally. People remember what particular projects used and their drawbacks so could recommend or rule them out while chatting "at the water cooler". Then, the surviving candidates could be researched in greater depth.
>> I wouldn't, for example, feel comfortable porting code from
>> a data logger that was designed to run on stripped down
>> hardware to a "PC platform"...
>
> Certainly not without a suitability review being conducted first. Some
> modules may be useful (with or without modification). It won't be simply
> ported that's for sure.
On 16/06/11 17:53, Tim Wescott wrote:
> On 06/16/2011 12:14 AM, David Brown wrote:
>> On 16/06/2011 02:57, Don Y wrote:
>>> Hi David,
>>>
>>> On 6/15/2011 5:03 PM, David Brown wrote:
>>>
>>>> /All/ versions of Code Composter (can I borrow that name?) have a
>>>> serious design flaw - uninitialised statically allocated data, which
>>>> should be initialised to 0 according to the C standards, is left
>>>> uninitialised. That caused me a lot of pain before I discovered it was
>>>> the CCS compiler that was flawed, not my code.
>>>
>>> Huh? Can't you fix that in crt0.s?
>>
>> You can add a ram-clear loop to a hook in the startup sequence. And you
>> can also work around it by explicitly initialising data to 0. But my
>> point is that CCS has a clear design flaw which is in direct
>> contradiction to the C standards, and which would be trivial to fix. It
>> is even documented in their manuals that they know this behaviour is
>> non-standard.
>
> You misspeak: that is one of many clear design flaws. 'double' is 32
> bit, as well, which wreaks havoc if you happen to have your own reusable
> library code lying around that actually needs 'double' to conform to the
> ANSI standard.
It's true that C requires 64-bit doubles (or, technically, it requires at least 10 digits of precision - which you can't get in 32 bits). However, while it would be nice to have support for full doubles, having 32-bit "doubles" is very common on smaller embedded targets and should not come as such a big surprise.

But I didn't mean to imply that the bss issue was CCS's /only/ design flaw. Amongst others are that the current version is based on a heavily modified ancient version of Eclipse, rather than on plugins for modern versions. That means you miss out on the last 5 years or so of progress in Eclipse (and there's been a lot), and that it is stuck on Windows only. But I gather that the next major version will have a more current Eclipse - at least in this case the developers know they can improve the situation.

What bugs me most about the bss issue is that they know about it, yet refuse to do anything about it.
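For reusable library code that genuinely needs a conforming `double`, one cheap defence (a sketch, not specific to any one compiler) is to check the limits in `<float.h>` so a 32-bit "double" is rejected at compile time rather than silently losing precision:

```c
#include <float.h>

/* ISO C requires double to provide at least 10 decimal digits of
 * precision (DBL_DIG >= 10).  A 32-bit "double" - really IEEE-754
 * single precision - gives only about 6, so this guard turns a
 * silent precision loss into a compile-time error. */
#if DBL_DIG < 10
#error "This library requires a conforming (at least 64-bit) double"
#endif
```

Putting such a guard in a library header means a nonconforming toolchain fails the build immediately instead of producing subtly wrong numeric results.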
In article <1f35ecfd-3769-482b-8198-7065c1c6d028@e7g2000vbw.googlegroups.com>,
steve  <bungalow_steve@yahoo.com> wrote:
<SNIP>
> know what they did (and is generally intellectual property so you
> couldn't find out if you wanted to).
I hate the term Intellectual Property (it is not property, although IP may be a legal term by now), but anyway.

If something is IP, then you can find out what it does by two means:
- inspecting the source
- reverse engineering

Because IF IT IS INTELLECTUAL PROPERTY, THAT DOESN'T MEAN IT IS CLOSED SOURCE! The whole Open Source movement and the whole Free Software movement are built on the notion that we can control open source IP.

Regarding inspecting the source and reverse engineering, both activities are done on a wide scale. It is very hard to prove those victimless crimes, as the poor dissected chip is not a legal party to file a complaint. If you come into the open with the information, that is a different matter. But you may disassemble or single-step XXXXXX.DLL and decide that it is a piece of garbage that you won't have in your project, and nobody would be the wiser.

--
Groetjes Albert
--
Albert van der Horst, UTRECHT, THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
On 19/06/2011 18:14, Albert van der Horst wrote:
> In article<1f35ecfd-3769-482b-8198-7065c1c6d028@e7g2000vbw.googlegroups.com>,
> steve<bungalow_steve@yahoo.com> wrote:
> <SNIP>
>> know what they did (and is generally intellectual property so you
>> couldn't find out if you wanted to).
>
> I hate the term Intellectual Property, (It is not property, although
> ip may be a legal term by now.) but anyway.
>
> If something is ip, then you can found out what it did by two
> means:
> - inspecting the source
> - reverse engineering
>
> Because IF IT IS INTELLECTUAL PROPERTY THAT DOESN'T MEAN IT IS
> CLOSED SOURCE! The whole Open Source movement and the whole
> Free Software movement is build on the notion
> that we can control open source ip.
>
> Regards inspecting the source and reverse engineering, both
> activaties are done one a wide scale. It is very hard to
> prove those victimless crimes, as the poor dissected chip
> is not a legal party to file a complaint.
> If you come in the open with the information, that is a
> different matter. But you may disassemble or single step
> XXXXXX.DLL and decide that it is a piece of garbage that
> you won't have in your project, and nobody would be the wiser.
>
(IANAL, and rules may vary from country to country.)

Reverse engineering and other inspection is not a crime. It might be against a EULA or other licence or contract, which makes it illegal but not a crime. Like copyright infringement (which is certainly not "piracy"), you can be sued for economic or other losses by the injured party, but it is not a crime (meaning you are prosecuted by the state, and can be jailed) unless you are economically motivated and working on a reasonably large scale. (There are other exceptions where your activities are a crime if you live in the land of Mickey Mouse laws.)

If you are doing the reverse engineering for the purposes of compatibility or interoperability, then it is in fact legal regardless of what the EULA says - most EULAs contain clauses that are not legally enforceable.
