EmbeddedRelated.com Forums

filling remaining array elements with fixed value

Started by blisca June 12, 2014
On 6/12/2014 10:21 PM, hamilton wrote:
> On 6/12/2014 11:20 PM, hamilton wrote:
>> On 6/12/2014 3:24 AM, Wouter van Ooijen wrote:
>>>> #define TEN_FFS 0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF
>>>> const unsigned char my_array[8]={

should have been (assuming 100 [sic]):

    const unsigned char my_array[100]={

>>>>     0xA, 0xB, 0xC, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF
>>>>     TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS,

should have been:

    , TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS,

(else you have 110 initializers for a 100 element array)

>>>>     TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS
>>>> };
>>>> #undefine TEN_FFS
>>>
>>> I misread, you want 1000, not 100, but that requires only two more
>>> lines ;)
>>>
>> Don't you mean 200 more lines !!
> sorry, 20 lines

#define HUN_FFS TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS, \
    TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS

const unsigned char my_array[1000]={
    0xA, 0xB, 0xC, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
    TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS,
    TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS,
    HUN_FFS, HUN_FFS, HUN_FFS, HUN_FFS, HUN_FFS,
    HUN_FFS, HUN_FFS, HUN_FFS, HUN_FFS
};

I think you can claim this is just two ADDITIONAL lines (or even *one*
if you want to get creative!)
hamilton wrote on 13-Jun-14 7:21 AM:
> On 6/12/2014 11:20 PM, hamilton wrote:
 >> On 6/12/2014 3:24 AM, Wouter van Ooijen wrote:
 >>>> #define TEN_FFS 0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF
 >>>> const unsigned char my_array[8]={
 >>>>     0xA, 0xB, 0xC, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF
 >>>>     TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS,
 >>>>     TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS
 >>>> };
 >>>> #undefine TEN_FFS
 >>>
 >>> I misread, you want 1000, not 100, but that requires only two more
 >>> lines ;)
 >>>
 >> Don't you mean 200 more lines !!
 > sorry, 20 lines
 >

You think too linearly. Learn to think recursively.

#define TEN_FFS 0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF
#define H_FFS TEN_FFS,TEN_FFS,TEN_FFS,TEN_FFS,TEN_FFS,\
    TEN_FFS,TEN_FFS,TEN_FFS,TEN_FFS,TEN_FFS
const unsigned char my_array[8]={
      0xA, 0xB, 0xC, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
      H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS
  };
#undef TEN_FFS
#undef H_FFS

Wouter
On 13/06/14 02:09, Mark Curry wrote:
> In article <lnd9ll$nja$1@dont-email.me>,
> David Brown <david.brown@hesbynett.no> wrote:
>> On 12/06/14 22:07, Randy Yates wrote:
>>> What you say is clearly correct, but many times tools are not "stored"
>>> with the project. I don't know about others, but for myself, it's just
>>> often too much a pain to make sure I get all the components of gcc/g++
>>> (or whatever) into my version control. Although on my current project I
>>> have done just that.
>>>
>>> But I agree this is the ideal, anal way to go about things.
>>
>> I typically don't include the tools in the version control system - gcc
>> is not too bad, but trying to get something like CodeWarrior into
>> subversion would be a serious pain. But we archive downloaded tools,
>> install them in carefully named directories, and refer to those
>> directories in makefiles. And I avoid updating tools - if I need a new
>> version because of a serious bug fix, or simply to get the latest and
>> greatest at the start of a new project, I install the new version in a
>> new directory. We also make a point of avoiding tools that get locked
>> to particular computers or have other such restrictions (floating
>> licenses are a much better choice), and "archive" old development PCs.
>
> ...thread drift...
>
> We currently have a setup to do nightly builds of all our code. We've
> seriously considered, but haven't pulled the trigger yet on, also
> setting up a build on a virtual machine. This build on the virtual
> machine wouldn't happen as often, but the virtual machine snapshot
> would theoretically capture "everything". The virtual machine snapshot
> could then be checked into revision control.
>
> Sounds like overkill, but in some industries, being able to faithfully
> rebuild something 5, 10, 15+ years down the line could be useful...
It's not overkill, it's increasingly common practice. I have a few
projects for which the tools are all installed in a VirtualBox virtual
machine and run from there. The VB machine can then be snapshotted and
archived.

If the build "machine" is Linux rather than Windows, it's possible to
be even more efficient using lightweight virtual machines (chroot jail,
OpenVZ, or now Docker) - the "machine" for archiving can be just a few
hundred MB of files.
On 13/06/14 06:05, Niklas Holsti wrote:
> On 14-06-13 03:09 , Mark Curry wrote:
>> In article <lnd9ll$nja$1@dont-email.me>,
>>
>> ...thread drift...
>>
>> We currently have a setup to do nightly builds of all our code. [snip]
>> The virtual machine snapshot could then be checked into revision
>> control.
>>
>> Sounds like overkill, but in some industries, being able to faithfully
>> rebuild something 5, 10, 15+ years down the line could be useful...
>
> How sure are you that your virtual machine snapshot, taken in 2014 on
> your current PC and hypervisor, will run on your brand-new PC in the
> year 2029?
>
> As I understand them, what are called "virtual machines" on PCs only
> virtualize as little of the machine as is necessary to support multiple
> OS's on the same hardware, but are not full emulations of the PC
> processor and I/O. I have not seen any promises from hypervisor vendors
> to support 15-year-old VM snapshots on future PC architectures, which
> may be quite different.
>
> This question is of interest to me because I am working on projects
> with maintenance foreseen until the 2040's. Some people have suggested
> virtual machines as the solution for keeping the development tools
> operational so long, but I am doubtful.
I would recommend a few things here.

First, consider using raw hard disk images rather than specific formats
and containers - the tools for working with raw images will always be
around (a loopback mount in Linux is usually all you need). Most
hypervisors and virtual machines can work with that.

Secondly, aim to use KVM on Linux as your hypervisor. I haven't used it
myself - I use either VirtualBox for full emulation or OpenVZ for
lightweight emulation. But KVM can emulate a lot more than other
systems. While it is most efficient when the target and the host CPU
are the same, KVM can handle a mismatch, using QEMU as a CPU emulator
when necessary. If Intel goes bankrupt and your 2040 machine runs on
PowerPC chips, KVM will let you run your x86 virtual machine images.

Also, KVM is entirely open source. You won't have to face vendors in 20
years' time asking for old licenses for their old products - you can
archive Linux and KVM (it is in the kernel, but there are usermode
tools as well) as both source code and installable media, and rebuild
machines in the future.

And do as much as you possibly can with open source software - both on
the hosts and inside the virtual machines.

And then archive a physical machine or two as well, just to be safe :-)
On 13/06/14 04:39, Randy Yates wrote:
> Grant Edwards <invalid@invalid.invalid> writes:
>
>> On 2014-06-13, David Brown <david.brown@hesbynett.no> wrote:
>>> On 13/06/14 01:15, Hans-Bernhard Bröker wrote:
>>>> On 13.06.2014 00:20, Don Y wrote:
>>>>
>>>>> Write it in the same language that you are compiling -- that way you
>>>>> *know* you have THAT tool available wherever you happen to maintain
>>>>> the codebase (instead of having to have *two* tools).
>>>>
>>>> Except when you don't, which would tend to apply to people in this
>>>> newsgroup more than any other group.
>>>>
>>>> Just because you're already writing embedded software in C doesn't
>>>> mean you'll also have a "native" C compiler for your desktop OS
>>>> anywhere near you.
>>>
>>> If you are using Linux then you will always have a native C compiler
>>> handy.
>>
>> Nope. Plenty of distros don't install a C compiler by default.
>>
>>> Of course, you will also have Python, which is a much nicer language
>>> for this sort of scripting (it's quick to learn enough Python to
>>> write such scripts).
>>
>> But they _do_ install python by default.
>
> In this day of package managers, it's practically irrelevant. If it
> isn't installed by default, and you can type three words - "yum install
> gcc" - you're done.
On many systems, you only need /one/ word and a letter - "gcc", then answer "y" to the prompt asking if you want to install it. But Grant does have a point, which could be relevant if developers don't have root (or sudo) access to their own machines, and are given minimal installs.
On 13/06/14 02:53, Don Y wrote:
> Hi David,
>
> On 6/12/2014 5:04 PM, David Brown wrote:
>> On 13/06/14 01:15, Hans-Bernhard Bröker wrote:
>>> On 13.06.2014 00:20, Don Y wrote:
>>>
>>>> Write it in the same language that you are compiling -- that way you
>>>> *know* you have THAT tool available wherever you happen to maintain
>>>> the codebase (instead of having to have *two* tools).
>>>
>>> Except when you don't, which would tend to apply to people in this
>>> newsgroup more than any other group.
>>>
>>> Just because you're already writing embedded software in C doesn't
>>> mean you'll also have a "native" C compiler for your desktop OS
>>> anywhere near you.
>>
>> If you are using Linux then you will always have a native C compiler
>> handy. Of course, you will also have Python, which is a much nicer
>> language for this sort of scripting (it's quick to learn enough Python
>> to write such scripts).
>
> Regardless of host, the folks *maintaining* the code will be KNOWN
> to be knowledgeable in *that* (language). Not necessarily the case
> for C++, sh, perl, python, etc.
>
> There are often little differences in languages to which users are
> completely oblivious -- that can make significant differences in
> their comprehension of an algorithm expressed in a language that
> they may only *casually* know.
>
> [E.g., ARBNO() is a lazy matcher in SNOBOL. Folks coming from a
> C background (with its typical greedy matches in regex library)
> will completely misunderstand the mechanics of ARBNO and incorrectly
> emulate its function.]
>
> Early in my career, I was "too clever, by half" and relied on
> my wider experience/tool base in crafting solutions to problems.
> Mixing various tools, languages, environments to give me an
> "optimal" (in terms of development effort) solution. E.g., I
> was running SysV UNIX w/X at home in the early 80's -- while
> others were fighting with MS, funky memory models, "overlays",
> and *waiting* for (the illusion of) a "multitasking, GUI environment",
> etc.
>
> Almost all of those solutions eventually trapped me into ongoing
> support roles ("But we don't have UNIX, here!" "But Joey doesn't
> know perl!" "But I don't want to have to purchase..."). And, so,
> after-the-fact, I found myself back-porting designs to the very
> same "crippled" environments from which I had originally "cleverly"
> freed myself (lest I be stuck in an ongoing support role... "life
> is way too short to spend in support!")
>
> I'm in a similar predicament, currently: do I rely on expensive
> tools that I own and "force" others wanting to maintain my
> designs to also purchase them? Or, do I discard my tools and
> my experience with them *just* to make it LESS EXPENSIVE for
> others?
>
> Time to make some ice cream...
What you say is all true - but small Python scripts are usually very
easy to follow for beginners, once someone else has written the
original. I introduced Python to some of our other C embedded
developers that way - and none have found it challenging to maintain or
modify the generating scripts even though they had not seen Python
before. /Mastering/ Python takes time (if it is even possible to
"master" such a large programming language and library), but little
scripts to generate code like this are easy.

for i in range(32) :
    print "\t"
    for i in range(32) :
        print "0xff,",
    print

That gives you 1024 "0xff, " for cutting and pasting into the code.
Tidy up by hand to add the special values.

Any C programmer who thinks that is difficult to follow has a problem.

And it is not /that/ hard to add the special values to the Python, and
make it generate an output file directly, so that the script can be run
from make.
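For reference, the print statements above are Python 2. A Python 3 sketch of the fuller version the post describes - special leading values plus 0xFF padding, emitted as a complete C initializer that make can regenerate - might look like this (the file name my_array.c and the helper names are illustrative, not from the thread):

```python
def c_initializer(values, per_line=10):
    """Format a list of byte values as the body of a C array initializer."""
    lines = []
    for i in range(0, len(values), per_line):
        chunk = values[i:i + per_line]
        lines.append("    " + ", ".join("0x%02X" % v for v in chunk) + ",")
    return "\n".join(lines)

def make_table(special, total=1000, fill=0xFF):
    """Special leading bytes, then `fill` for all remaining elements."""
    values = list(special) + [fill] * (total - len(special))
    return ("const unsigned char my_array[%d] = {\n%s\n};\n"
            % (total, c_initializer(values)))

if __name__ == "__main__":
    # Write the generated table so make can run this script as a build step.
    with open("my_array.c", "w") as f:
        f.write(make_table([0x0A, 0x0B, 0x0C]))
```

Because the special values live in the script, nothing needs hand-tidying afterwards; changing the table size or fill byte is a one-argument change.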
On 13/06/14 03:42, Don Y wrote:
> Hi Grant,
>
> On 6/12/2014 6:19 PM, Grant Edwards wrote:
>> On 2014-06-12, Don Y <this@is.not.me.com> wrote:
>>> On 6/12/2014 3:02 PM, Randy Yates wrote:
>>>> Grant Edwards <invalid@invalid.invalid> writes:
>>>>> On 2014-06-12, Randy Yates <yates@digitalsignallabs.com> wrote:
>>>>>
>>>>>> I go along with the others who suggested that you write some C/C++
>>>>>> code to generate this code. I've done that many times and it works
>>>>>> well.
>>>>>
>>>>> I find it's usually _way_ faster to write a Python program to
>>>>> generate such things, but YMMV.
>>>>
>>>> Yeah, sure, python, perl, common lisp, scheme, erlang, c, c++, etc. -
>>>> pick yer' poison.
>>>
>>> Write it in the same language that you are compiling -- that way you
>>> *know* you have THAT tool available wherever you happen to maintain
>>> the codebase (instead of having to have *two* tools).
>>
>> I don't understand. If I'm writing in C for the '430, how does that
>> guarantee I have C for the development host?
>
> It doesn't guarantee that you have it for the host. But, neither
> does it guarantee that you have python, perl, sh, etc. for the host!
> What it *does* guarantee is that *you* will know how to write C
> for that host (more or less)! It doesn't guarantee that you will
> be able to write a perl script *if* you happened to have perl
> available to you on *that* host (e.g., none of my windows hosts
> have perl installed).
I have written embedded C for 20 years - but I would not be confident
about using C on the host for something involving a lot of string
manipulation and formatting. It's a different skill set, even though it
is still C. And I know plenty of embedded programmers who have no idea
how to make a host-run C program at all. So no, you don't have such a
guarantee.

But I can give a guarantee* that a competent embedded C programmer will
pick up the basics of Python quickly, and write string manipulation and
formatting code faster than learning to write host C code for the same
job. And the resulting scripts will be cleaner, faster to develop, and
easier to maintain.

[*] My guarantee is, of course, worth the pixels it is written on!
> [I'll ignore the pedantic case where you can often use the target
> C compiler to generate the required output -- albeit in a round-about
> manner -- as an object for the target... that you manually
> misappropriate back into your development environment!]
David Brown wrote:

> On 13/06/14 02:53, Don Y wrote:
>> Hi David,
>>
>> [snip]
>>
>> I'm in a similar predicament, currently: do I rely on expensive
>> tools that I own and "force" others wanting to maintain my
>> designs to also purchase them? Or, do I discard my tools and
>> my experience with them *just* to make it LESS EXPENSIVE for
>> others?
>>
>> Time to make some ice cream...
>
> What you say is all true - but small Python scripts are usually very
> easy to follow for beginners, once someone else has written the
> original. [snip]
>
> for i in range(32) :
>     print "\t"
>     for i in range(32) :
>         print "0xff,",
>     print
>
> That gives you 1024 "0xff, " for cutting and pasting into the code.
> Tidy up by hand to add the special values.
>
> Any C programmer who thinks that is difficult to follow has a problem.
>
> And it is not /that/ hard to add the special values to the Python, and
> make it generate an output file directly, so that the script can be run
> from make.
That might be correct. But I would also say that any C programmer who
writes code for embedded systems and is not able to write the equivalent
of your Python script in C for the host should be stopped from writing
any code.

--
Reinhardt
On 13/06/14 10:46, Reinhardt Behm wrote:
> David Brown wrote:
>
>> On 13/06/14 02:53, Don Y wrote:
>>> [snip]
>>
>> What you say is all true - but small Python scripts are usually very
>> easy to follow for beginners, once someone else has written the
>> original. [snip] Little scripts to generate code like this are easy.
>>
>> Any C programmer who thinks that is difficult to follow has a problem.
>>
>> And it is not /that/ hard to add the special values to the Python, and
>> make it generate an output file directly, so that the script can be
>> run from make.
>
> That might be correct. But I would also say that any C programmer who
> writes code for embedded systems and is not able to write the
> equivalent of your Python script in C for the host should be stopped
> from writing any code.
That was just an easy example - specifically written to look like C
(it's not the way I would normally write it in Python). When you have
to generate values, parse inputs, sort data, do more complex
formatting, etc., you appreciate the flexibility of a higher-level
language. Rather than piles of strcat, strcpy, sscanf, sprintf, along
with memory management of buffers, you can use Python to write much
shorter and simpler code. Yes, a C programmer /could/ write that stuff
in C (even if he is used to embedded C without malloc and printf) - but
it is much easier and faster in interactive Python than
code-compile-debug cycles with C.

It is up to each development group how they handle this sort of stuff,
of course. My point is merely that Python is an excellent choice of
language for small scripts and helper programs, it is easy to learn (at
least to that level), and it is almost always better suited than host C
code for such purposes.
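As an illustration of that flexibility (a made-up register map, not anything from the thread): parsing an input, sorting it, and formatting aligned output - the kind of job that needs sscanf/sprintf and buffer bookkeeping in host C - takes only a few lines of Python:

```python
# Hypothetical input: "name address" pairs, one per line.
spec = """\
UART_CTRL 0x40
TIMER_CNT 0x10
GPIO_DIR  0x24
"""

# Parse each line, sort the registers by address, and emit aligned
# C #define lines.
regs = sorted(
    (line.split() for line in spec.splitlines()),
    key=lambda r: int(r[1], 16),
)
defines = "\n".join("#define %-12s %s" % (name, addr) for name, addr in regs)
print(defines)
```

Swapping the sort key, changing the output format, or reading the spec from a file are all one-line changes - which is the point being made above.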
David Brown wrote:

> That was just an easy example - specifically written to look like C
> (it's not the way I would normally write it in Python). [snip]
>
> It is up to each development group how they handle this sort of stuff,
> of course. My point is merely that Python is an excellent choice of
> language for small scripts and helper programs, it is easy to learn (at
> least to that level), and is almost always better suited than host C
> code for such purposes.
I did not put down Python. I have not used it yet. But in comparable
situations I have also used other languages for such tasks - for
example, REXX when my host system was OS/2. Today I use C++ together
with Qt on the host, and often also on larger embedded systems, which
makes many of these things just as easy.

--
Reinhardt
