Hi people!
Inspired by a recent thread started by @MaxMaxfield ( https://www.embeddedrelated.com/thread/11327/is-char-signed-unsigned-and-is-on-signed-defined?pw=Amxh9 ) and a post by @CustomSarge in that thread, I was encouraged to ask the following:
Recently it was announced that the already operational F-35 Fighter has 800+ software bugs, some severe.
We know that the majority of the Fighter's software was written in C and C++.
Do you think that it would be feasible to write the Fighter's software in assembly language?
If so, do you think the number of software bugs would be larger or smaller?
What are your thoughts on this matter?
It should be said that my knowledge of military systems' electronics is absolutely zero.
Thanks in advance!
If you are programming anything more complicated than an 8051 (or a processor of similar capabilities), stick with compiled languages. I have written systems in assembler for several machines and other than a learning experience, it would not be my first choice.
Compiled languages are mature enough that the base language constructs create good code, possibly better than you or I could write. The people who wrote the code generators have a better grasp of the multitude of addressing modes of the base instructions.
The only weak point that I have found in modern compiled languages is some of the extreme optimizing options. They mostly work very well, but, once in a while, you can write a statement which generates junk code. And this gets rarer all the time.
Finding bugs in assembler can be quite a chore. You absolutely need to learn to think like a computer to step through the code and see where something is wrong. Holding on to preconceived ideas of what something is supposed to do, rather than seeing what it really does, can lead to overlooking the error. DWIM (Do What I Mean) is a big danger. Keeping register usage organized can be another problem.
Stick with compilers.
In 1975 (before C and C++), I worked at a company which was a pioneer in credit card charge authorizations and everything had to be written in assembler for speed and size. We used UNIVAC machines and the assembler was Symbolic Programming for UNIVAC Real Time (SPURT). They had a system of little modules in assembler to handle each step in the authorization. We were given a specification of what the incoming data was and where it was (register pointers) and what was coming back placed at some other location. Easy enough. Compared to today's processors, it was at the 8051 level. They lost a big client. I was on that project. I lost my job and became an ex-SPURT programmer.
This is not exactly an answer to the original question, but here's a trick to use just in case you find yourself "forced" to write a function or two in assembler:
Write the function in C first, then obtain the corresponding (possibly unoptimized) assembly language equivalent from the C compiler itself. Then, tune as appropriate.
An example of a function one may choose to write in assembler is a busy-wait delay function to be used during boot, i.e., before any timers have been initialized. Given the CPU clock frequency, a delay of "T" milliseconds (or even microseconds) can be achieved by executing some number "N" of NOP instructions. Writing the loop to execute "N" NOPs is easy, but function call/return and computing "N" can be a bit trickier, so enlist the C compiler's help. (A bit of a silly example, true enough, but you get the idea.)
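To make the trick concrete, here is a minimal sketch in C (the clock frequency and cycles-per-iteration figures are assumptions, purely for illustration):

    /* delay.c -- busy-wait delay, a minimal sketch.
     * Assumed (illustrative only): a 48 MHz CPU clock and
     * roughly 4 cycles per loop iteration at -O0.
     */
    #define CPU_HZ        48000000UL  /* assumed CPU clock          */
    #define CYCLES_PER_IT 4UL         /* assumed cost per iteration */

    void delay_ms(unsigned long ms)
    {
        /* N iterations ~= (CPU_HZ / 1000 / CYCLES_PER_IT) * ms */
        volatile unsigned long n = (CPU_HZ / 1000UL / CYCLES_PER_IT) * ms;
        while (n--)
            ;  /* 'volatile' keeps the optimizer from deleting the loop */
    }

Compile with something like "gcc -S -O0 delay.c" and the compiler hands you delay.s: the call/return sequence and the loop, ready to tune by hand.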
Hi, cprovidenti!
I must confess I used this trick many years ago, but it has been 20+ years since I last wrote a line of assembly language, mostly thanks to the luck of being able to use a processor fast enough, and with sufficient memory, to fit and execute code written in a high-level language.
To be honest, sometimes I have to open the startup.asm file to add Intellectual Property protection to the code, but I copy the code snippet from someone else instead of writing my own. :-)
Thank you for your reply, dnj.
Your experience as an assembly language programmer is very enlightening.
We all know that one of the strongest points of high-level languages, and even more so of Object-Oriented Programming, is portability.
And now we can add job protection :-)
I would suggest that the jump to writing code, be it in assembler or a higher-level language, is the biggest problem in large-scale software development such as that described above. The first requirement of reliable, high-quality software is to make sure you have a precise description of the problem to be solved and the environment in which it is to operate.

Once you have the problem and operating environment precisely described, you can decompose the problem into components with precise interfaces using the principles of layering and partitioning. This allows you to construct data definitions and data flow diagrams so that you can describe, at a high level, the complete operation of the system. From this you can outline the algorithms needed (and choose efficient ones) by writing very high-level pseudo-code. A side effect of this process is that it also lets you find the similarities in data flows and algorithms, so you can see how to minimize function duplication.

This documentation allows new members of the development team to come up to speed rapidly and to take a component and write it (i.e., code it), because the interfaces, data definitions, and algorithms have been described clearly and simply. It also lets you determine where to make the changes that always arise in a complex system under development. Coding in such systems is often rapid and of unusually high quality.
Using these principles, a small team built a large-scale "fluids management system" for Aramco in 5 months that was slated to take 18 months. Each member of the team was given a book with the data definitions, data flow diagrams, and pseudo-code for the algorithms, and was assigned a part of the implementation. It took 3 people about 2 months to develop the documentation. In 8 weeks all the code was completed. It was compiled (and linked) and it worked right away. Testing found under 20 bugs in the system (mostly caused by ambiguous data definitions). The system was delivered after a month of testing. It replaced a billion-dollar system then in place.
The key point is that, too often, an imprecise understanding of the problem space (the problem and its operating environment) coupled with problem partitioning on the fly leads to a plethora of bugs. Also, the larger the team, the more important it is that the interfaces between components be well defined. This is why data definitions and data flow diagrams are so important. Finally, the creative use of system layers and partitioning uncovers functional similarities so that reusable components emerge, allowing efficiencies in implementation and operation. All this boils down to clarity of understanding leading to clarity of communication.
Most bugs occur because the problem to be solved was either not clearly defined or was misunderstood by the programmer. Another large number of bugs occurs because the software is being used in an environment not anticipated by the designer or programmer (e.g., the famous year 2000 bugs). This is often because the client subtly changes the problem space or redefines key requirements. It also occurs because the programmer does not understand the operating environment. I once found a "Write To Operator" (WTO) call in code where all the interrupts had been turned off. Since the WTO blocks waiting for the operator to respond, and since all interrupts were masked, the system simply froze.
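A minimal sketch of that anti-pattern (the function names are made up, not from any particular OS):

    /* Sketch of the freeze described above -- names hypothetical. */
    disable_all_interrupts();        /* every interrupt is now masked          */
    write_to_operator("disk full");  /* blocks until the operator responds...  */
                                     /* ...but the response is delivered via   */
                                     /* an interrupt that can never fire, so   */
                                     /* execution never reaches the next line  */
    enable_all_interrupts();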
In summary: what matters in software development is a clear understanding of the problem to be solved and the environment in which the software is to run. Describe those precisely and completely, and then use good tools to communicate this understanding to the entire team. High-quality, reliable code will emerge.
Hi, vbhunt.
Thank you for replying.
As an Embedded Engineer, I dream every day of coherent, complete, and clear specs, but my experience says that's utopia!
Some related quotes:
“Much of the essence of building a program is in fact the debugging of the specification.” - Fred Brooks
“Walking on water and developing software from a specification are easy if both are frozen.” - Edward V. Berard
Cheers!
Absolutely right. Sometimes you have to insist that the specifications be clear. The easiest way to do that is to find the ambiguity in the specs and show it to the client. It is easier to explain that time and money are lost when you have to plan to implement all the ambiguous options if you present the client with the exploding tree of implementations hiding in the ambiguity. The biggest job you have as a developer is to "debug the specification!" I've sat in numerous meetings making sure that the developers understood what the client wanted, often exposing the ambiguity and then taking it back to the client for clarification before moving into coding. It seems slow but actually saves a bunch of expensive development time.
Don't conflate freezing a specification with understanding the requirements and the operating environment. You may not be able to "freeze the stream", but you'd better understand the crossing requirement well enough to establish where to put the solid stepping stones across it. A good understanding of the problem space and operating environment allows you to find the architectural invariant principles from which you can construct stepping stones even if you cannot freeze the stream. "The battle plan survives until the first skirmish with the enemy, but the plan's principles survive and are the key to victory or failure."
Excellent Comment!
Howdy, Dilberto. My suspicions are that the overall complexity of the code precludes sufficient modularization to build assembler versions. However, IF it could be compartmentalized by levels as a hierarchy / flow chart, it stands a chance. So doing would allow isolation of proper function by module.
I'd LOVE to see how they Really write the code now... I mean do they sub-processor Everything, as I'd be tempted? Modular blocks of hardware controlling a unitary function. Within each, modular code blocks performing atomic tasks, coordinated by a supervisory routine. Then networked to the next higher level. Then again this IS government.
Oh Well <<<)))
Hi, CustomSarge!
First of all, thank you for answering.
I think that modularization can be achieved in assembly language as well as at a higher-level one. As you said, "IF it could be compartmentalized by levels as a hierarchy / flow chart, it stands a chance". I think it's a big chance.
In everything, but mainly in military affairs, one must match the strategy to be used with the resources and time available, hence the first part of my question, "would it be feasible to write the code in assembly language?"
That wasn't a technical question, but a resources-versus-constraints one. Sorry for having been unclear.
The second part of the question, about the number of software bugs, relates to the fact that, as far as I know, we have many more well-developed tools to aid the writing of good code in high-level languages than in assembly language.
Perhaps the effectiveness of these tools makes a significant difference in hunting and killing the bugs.
Besides this all, there should be other factors that could impact the metrics, but I think this is beyond the scope of this thread.
Cheers!
I thought the DoD was still using Ada, which has a type-safety architecture that makes C/C++ look almost as sloppy and lazy as Python? Anyway, anything as complicated as an F-35 is bound to have bugs in it, software and otherwise. The Space Shuttle had five IBM AP-101 computers running, and majority voting to solve software glitches. NASA used its own high-level language called HAL (no, not THAT HAL...) to program the computers. The book "To Orbit and Back Again: How the Space Shuttle Flew in Space" describes the Shuttle systems in glorious engineering detail. It has this paragraph in it:
"The development by NASA of HAL was criticized by managers in a community used to assembly language systems. The felt that it would have been better to write optimized code in assembly language rather than produce less efficient software by a high level language. To settle the controversy, NASA told two teams to race against each other to produce some test software with one team using assembly language and the other using HAL. The running times of the software written in HAL was only 10 to 15 per cent [sic] longer than its counterpart in assembly language. It was therefore decided that the system software would be written in assembly language, since it would be modified only very rarely, and the application software and the redundancy management code would be written in HAL."
A few decades ago I commonly wrote in assembly. Today, hardly ever, unless it's for something tiny like a Xilinx Picoblaze or something and I need that 10% to 15%. What's changed is the compilers...they surpassed me in machine code brilliance about 15 years ago, particularly as the processors have gotten larger and more complicated.
Here's an example of what I mean: I needed to transfer ~300KB frame buffers on an ARM from one place to another regularly. The frame buffers were 16-bit 565RGB values, and the ARM had a 32-bit architecture with a 16-bit data bus. Using the principle of get it working first, optimize it second, I wrote a simple *pDest++ = *pSrc++ style loop with pointers to 16-bit words. Slow, but worked fine. Then, knowing the ARM was much more efficient at its 32-bit word size, I used 32-bit word pointers, and saw a 25% improvement. So far, so good. I then used memcpy() from the C library just for giggles. I was flabbergasted when I saw a whopping 400% speed increase. Really, from a library function call? Naturally, I dumped the assembly listing to see what was going on. The cross-compiler and library I was using (gcc/glibc) were smart enough to know exactly what ARM I was targeting, and had optimized the memcpy() call into ARM assembly language something like this pseudo-code:
    load source ptr into cache-fill register address
    load destination ptr into cache-flush register address
    load (frame buffer size / cache line size) into count register
    while (count register--)
        use single-instruction cache-fill to burst 16 bytes from DRAM into a cache line
        use single-instruction cache-flush to burst that cache line back out
        advance source and destination pointers
In essence the code was using its knowledge of the hardware to burst 16 bytes (two bus clock cycles for address setup, then another 8 clock cycles using single-cycle DRAM burst timing to read 16 bytes over the data bus) directly into a level 1 cache line, and then immediately flushing that cache line back out to the destination address. The loop consisted of only about five processor instructions and easily resided entirely in another few cache lines--no external code fetch needed. The overall data bus utilization was about 96%, with the 4% being DRAM address setup and refresh. There was no way I was ever going to write anything faster than that.
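For reference, the progression from the story looks something like this in C (the frame size and function names are assumptions, not the original code):

    /* Sketch of the three copy variants described above.
     * FB_PIXELS is assumed: 480x320 RGB565 is ~300 KB.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define FB_PIXELS (480 * 320)

    /* 1. Get it working first: 16-bit accesses, one pixel at a time. */
    void copy16(uint16_t *dst, const uint16_t *src)
    {
        for (size_t i = 0; i < FB_PIXELS; i++)
            *dst++ = *src++;
    }

    /* 2. Optimize second: 32-bit accesses, two pixels per transfer
     *    (assumes 4-byte-aligned buffers; ~25% faster in the story). */
    void copy32(uint32_t *dst, const uint32_t *src)
    {
        for (size_t i = 0; i < FB_PIXELS / 2; i++)
            *dst++ = *src++;
    }

    /* 3. Just for giggles: the library routine that won by ~4x. */
    void copy_lib(uint16_t *dst, const uint16_t *src)
    {
        memcpy(dst, src, FB_PIXELS * sizeof(uint16_t));
    }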
It was around that time that I mostly stopped writing code in assembly.
Hi, Bob11!
Fantastic example!
When I began using ARM, specifically the ARM7 a bit more than 10 years ago and the Cortex-M0/M3 soon after, I was already a high-level-language programmer.
Using a high-level language, I realized that, no matter how bad a programmer I was, the low-cost processor selected for the project ran the software in much less time than the maximum allowed and was far from exhausting its memory.
Hence, I concluded that for me the assembly language era was definitely gone.
This doesn't invalidate the fact that many projects, because of their constraints, still must use assembly language, such as NASA's Space Shuttle and the tiny projects you mentioned.
Thanks a lot for sharing this with us.
> Do you think that it would be feasible to write the Fighter's software in assembly language?
Yes
> If so, do you think the software bugs' quantity would be larger or smaller?
Larger, unless significantly more development man-hours are spent on the project.
> What are your thoughts on this matter?
Those uncomfortable with C will argue they can do a better job in assembler, those uncomfortable with C++ will argue they can do a better job in C, and so on.
The idea that a lower level language is conducive to fewer bugs is folly. To quote (as best I recall) a very smart fellow named Tim Coe, "Anyone who has designed has designed bugs."
Programming languages don't create bugs, just as surely as spoons don't make people fat. Programmers design bugs for a variety of reasons, almost none of which are absent with a lower level language.
About the only source of bugs you do remove is unfamiliarity with the language itself. That is not a language problem, nor is it a valid argument against the language.
To focus on the language is to ignore the many other sources of bugs such as clarity of requirements and functional definition, schedule pressure, quality of verification plan, level of verification effort, and so on.
Hi, matthewbarr!
Thanks for answering.
I agree with you. Much of what you've written here falls under the "other factors that could impact the metrics" that I mentioned in my answer to CustomSarge.
For me, the best quote about software bugs is:
"I have never made a mistake writing software because I do not write software" - Bob Pease, a genius analog designer.
Feasible: Maybe.
Economical: Doubtful.
It would have more bugs, since more lines of code would be required. Note the second bullet point.
Via:
https://www.youtube.com/watch?v=tcyb1lpEHm0&t=825s
Best regards,
Matthew
Hi, MatthewEshleman.
Thank you for answering.
I got your point. If the rate of bugs per line of code is roughly constant and relatively independent of the language, then a program with more lines of code should have more bugs.
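To make the arithmetic concrete (purely illustrative numbers, not taken from the talk): at a density of 2 bugs per KLOC, a 100 KLOC C program would carry about 200 bugs, while the same program rewritten in assembly at roughly 4x the line count would carry about 800 bugs at the same density.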
But I think this is an oversimplified view of the problem.
My first question is: was assembly language considered in the survey, or just high-level languages?
Quoting an American computer scientist:
"I do not consider an assembly language (even a sophisticated one) to be a programming language. This view differs from that held by some people who maintain that anything in which programs are written is a programming language" - Jean Sammet, developer of the FORMAC programming language in 1962
Maybe those that have conducted the survey are of the same opinion.
Second, what was the scope of the projects analyzed?
The methods and techniques for writing software for the embedded world are quite different from those used for business software, and I think assembly language is rarely, if ever, used in business software.
For future code readability and maintenance, always use a compiled language.
Not, by the way, Java or C# or any other byte-code language. They are totally unnecessary and a waste of processing power.
Hi, SpiderKenny!
Thank you for answering.
:-) :-) :-) :-) :-) :-) :-) :-) :-)
Putting embedded programmers and web programmers together for dinner and having them discuss their tools and processes would be like watching Kirk and Chang at the dinner in the Star Trek VI movie.
As you're on my side, you get an up-vote :-)
The solution is not in the language; it is in the skill of the programmer. The devices that I design are hard real-time and are used in the public safety sector, so they must be reliable and are subject to NHTSA testing. I cut my teeth on assembly language, for many different processor families, and maintained that code well into the turn of the millennium. I have also developed code in C for decades and can honestly say that similar devices that I have coded in assembler and in C have maintained similar performance. The most important tool in an engineer's toolbox is his brain. We do not want that to become a lost art, and we should take the time to pass this wisdom on to the younger generation. Creating our art using high-level languages without intimate knowledge of register-level processes and bare-metal bit banging is no different than thinking that grits grow on trees.
For large projects there are many programmers, and the skill of "the" programmer becomes irrelevant because it is not one person doing the work. Modularity, good design, maintainability, testability, etc. become much more important.
Modularity, good design, maintainability, testability, etc ... are the hallmark of a good programmer, no matter the size of the task. I hope that we are not expecting good code to magically materialize from novices using a high level language, simply because there are many coders working on a project, and making skilled practitioners of the art irrelevant.
Hi Imitcham!
Thank you for answering.
I agree with you. With caveats.
"The most important tool in an engineer's toolbox is his brain"
Some quotes that corroborate this thought:
“It is not the language that makes programs appear simple. It is the programmer that make the language appear simple!” - Uncle Bob
"When the language becomes a problem, get a new programmer and the problem strangely disappears." - unknown author ( at least for me ).
Each programming language is a tool. And like the other types of tools, each one is more suitable for a determined job than others.
It's true that today we have libraries, ecosystems, and auxiliary tools that minimize the differences among the various high-level languages we can use, but I think some languages have looser rules that make them more prone to errors if the programmer is not aware of it.
And I agree that even using exclusively high-level languages, it's a must to have at least some idea of what the underlying hardware is.
And we must say to the young that grits don't grow on trees. :-)
Well!!! Being that I have worked on gov projects before, and definitely not being in the know of what the requirements are, my first position would be a question: if you're looking for a specific function and a specific output result, assembler has its uses, and it's fewer instructions and fewer MIPS than the others. But "C" code is definitely more flexible for creating parms and functions quicker than assembler, from a build perspective. Reducing the bugs is another whole kettle of tea! I would say assembler gives me a headache (major!!). Do I still use it, with known quality functions that work every time? Yes, I still do. But I combine them with "C" code by declaring the routines extern and calling them from my code.
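In case it helps, that C-calling-assembler arrangement looks something like this (the routine name and file are hypothetical):

    /* main.c -- calling a known-good assembler routine from C.
     * crc16_asm is a hypothetical hand-written routine in crc16.s,
     * linked in alongside this file.
     */
    #include <stddef.h>
    #include <stdint.h>

    extern uint16_t crc16_asm(const uint8_t *buf, size_t len);

    uint16_t frame_checksum(const uint8_t *frame, size_t n)
    {
        return crc16_asm(frame, n);  /* trusted assembler, wrapped in C */
    }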
My other main point is: how was the "C" code written? If the devs have no structured C code, this is an issue. I've seen code out there with goto's (sigh), YEP, that's where your bugs come from. Quality structured software in general contains fewer bugs, as the logic explains itself. It also allows me to see the holes in my code and what's missing, like another clock routine I should have had, and other little tidbits that really matter.
800 bugs? I really didn't want to know that, 'cause I don't want my son flying that thing. One bug is enough with that complex a set of interfaces and multitudes of kitchen-sink functions and gadgets. I truly feel sorry for the developers; I'm glad I no longer get involved with those kinds of projects. What cprovidenti said is actually a good point for reviewing "C" code: the assembler code generated by the C compiler is a good place to start.
In short: nope, don't head down the assembler path. But that's just my 1 cent; in retirement I had to cut my 2 cents by 1 cent (lol).
Hi, gillhern321!
Thanks for answering!
"I Don't want my son flying that thing." :-)
About the quality of software: the automobile industry uses a tight standard like MISRA, and even though I don't know the Defence industry, I bet it uses an even tighter set of rules and is much more careful. I doubt there is a single goto statement in the fighter's software.
I actually used the fighter's case as supporting material for this discussion.
Cheers!
Hi there. This is going to sound like a very simple answer, and one can debate it, but ultimately all high-level programming languages are compiled to assembly before the instructions become binary machine code. At scale, all of these languages write better assembly code than you, or any other human, or any other group of humans can. Thousands of developers have spent hundreds of thousands, if not millions, of hours to make it so. Just thank them, choose one or more of the high-level languages, and learn to use them well.
Hi, jeghartman!
Thank you for answering.
I partially agree with you.
No, this is not a question with a very simple answer. See others' comments.
But, depending on the scope you are working in, yes, you can choose the high-level language that is most adequate to the problem you're solving and forget about the rest. This is not always the case, though.
And I also agree that nowadays compilers can write better code than any programmer or group of programmers, but this is useful only when you have no constraints that force a very specific solution, which happily is the case for the majority of problems.
Cheers!