Hidden latencies and delays for a running program?
Started by ● May 25, 2014

Reply by ● May 27, 2014

On 27/05/14 17:30, Don Y wrote:
> IME, folks who do embedded work are either hardware guys who
> started writing code to prove their hardware works -- and then
> got "drafted" into *doing* the code (I know of a Fortune 500
> company that had a *technician* writing the code for a large
> embedded project "because he tinkered with software at home";
> the PHB was a self-confident BASIC programmer so he was *sure*
> he understood these issues... <frown>) but, without a formal
> software education, don't really understand how to *design*
> the software (---> buggy code);
>
> Or, they are software folks who know squat about hardware and,
> as a result, ill-equipped to understand what *can* (and does)
> go wrong and, therefore, write buggy code.

Or like me, they are someone who can't really tell the difference
between software and hardware (well, I make an exception for analogue
and RF!), and thinks that in professional life the similarities are
more profound than the differences. But then I
- started hardware 50 years ago, aged 7,
- compiled my first Algol60 program 45 years ago,
- my first asm program 44 years ago (and only later realised I'd
  created a simple clean FSM)
- my first soft-real-time system 37 years ago
- triumphantly re-invented the concept of microprogrammed processors
  36 years ago
- my first low-noise analogue/optical device 35 years ago
- my first hard-real-time system 30 years ago
- my first semi-custom IC 30 years ago
- my first OOP 29 years ago
- my first web-shop 16 years ago (and recognised SOAP/Web services
  were a heap of caca, and that REST was A Good Thing)
- my first soft real-time HA telco system 10 years ago

Long ago I came to the conclusion that the world is divided into
engineers and non-engineers, and that
- the non-engineers should be prevented from accessing keyboards
- the non-engineers should be given soldering irons in a locked room
  (where they can learn by hurting themselves only)
- engineers should rule.

Unfortunately only in China do engineers rule. (Until a couple of
years ago, the vast majority of the top echelon had trained as
engineers)

Reply by ● May 27, 2014
On 27/05/14 18:30, Don Y wrote:
> IME, folks who do embedded work are either hardware guys who
> started writing code to prove their hardware works -- and then
> got "drafted" into *doing* the code [snip]
>
> Or, they are software folks who know squat about hardware and,
> as a result, ill-equipped to understand what *can* (and does)
> go wrong and, therefore, write buggy code.

That seems a bit depressing - either they are hardware guys that
write buggy code because they don't understand software, or they are
software guys that write buggy code because they don't understand
hardware!

Personally, I'm both a (digital) hardware and software guy. Maybe
that means I don't understand anything...
Reply by ● May 27, 2014
On 27/05/14 18:39, David Brown wrote:
> On 27/05/14 18:30, Don Y wrote:
>> [snip]
>
> That seems a bit depressing - either they are hardware guys that
> write buggy code because they don't understand software, or they are
> software guys that write buggy code because they don't understand
> hardware!
>
> Personally, I'm both a (digital) hardware and software guy. Maybe
> that means I don't understand anything...

Or maybe we are minor deities?
Reply by ● May 27, 2014
On 27/05/14 20:14, Tom Gardner wrote:
> On 27/05/14 18:39, David Brown wrote:
>> [snip]
>
> Or maybe we are minor deities?

Well, my customers are always expecting miracles!
Reply by ● May 27, 2014
Hi Simon,

On 5/27/2014 10:15 AM, Simon Clubley wrote:
> On 2014-05-27, Don Y <this@is.not.me.com> wrote:
>> On 5/27/2014 5:11 AM, Simon Clubley wrote:
>>>
>>> StarterWare is not an OS; it's a support library for bare metal
>>> programming.
>>
>> Then, presumably, it is pretty "thin"? Should be relatively easy
>> to see what it *is* doing and figure out what it *should* be doing?
>
> With the TI datasheet in your hand, it's very easy to see what is
> going on.

That was the intent of my innuendo. If you're keen on *knowing*
these sorts of things, it's not hard to tease approximate answers
from published documents.

> This is the same example code we were talking about recently which
> TI had placed under export control and which I later found on
> GitHub (_after_ finding out the MMU answers the hard way. :-))

Ah, OK.

>>> IIRC, those enhanced Linux speeds involve writing the GPIO lines
>>> directly via memory mapped I/O rather than through a driver call
>>> for each I/O manipulation.
>>
>> And, are they from user-land *through* an intermediary?
>
> I'm not 100% sure because I don't use Linux to directly manipulate
> GPIO lines; if using Linux, I tend to use a dedicated frontend MCU
> to get the realtime guarantees.

Understood.

> However, AIUI under Linux you use mmap to map in the GPIO registers
> and then manipulate them directly.

[A minimal C sketch of that mmap approach appears after this post.]

>>> PS: As a good natured comment, I wonder if I should start applying
>>> for embedded jobs. :-) Sometimes, I think that as a hobbyist I seem
>>> to know more about this world than those paid to do it for a
>>> living. :-)
>
> Major oops here. That _should_ say "...more about this world than
> *some* *of* those paid to do it for a living." I'm NOT trying to
> claim I know more about this stuff than the professional c.a.e
> regulars around here. :-)

Your meaning was understood. (at least by *me* :> )

>> IME, folks who do embedded work are either hardware guys who
>> started writing code to prove their hardware works -- and then
>> got "drafted" into *doing* the code [snip]
>>
>> Or, they are software folks who know squat about hardware and,
>> as a result, ill-equipped to understand what *can* (and does)
>> go wrong and, therefore, write buggy code.
>
> I came to the embedded world as a software person, but I also design
> and build my own circuits (although they are veroboard based :-)) so
> I have developed some understanding of the hardware side of things.

Then you're different than many (most?). I recall a close friend
asking me for some help in specifying an ADC (*board*!) for use in
his thesis. I inquired as to what he was trying to do:

"I want to attach a knob to the device so you can turn the knob to
adjust simulation rates instead of having to type in numbers; it
seems like a more intuitive interface. I could just read the ADC
and use *that* value to set the speed at which I run the simulation"

"Um, why don't you just use a little RC oscillator -- running at
some nominally convenient frequency -- that you can use to trigger
a LOW PRIORITY interrupt (i.e., if you *miss* the IRQ, you don't
really care!) and measure the time between interrupts?"

Software mindset (his) vs. hardware mindset (mine).
I came from a hardware background (wanting to design processors) but,
too late, realized that the CS curriculum (a subset of the EE
curriculum) would primarily expose me to *software*.

[The *other* EE options wouldn't give me the background I wanted,
either! So, I had to aggressively pursue the courses that I wanted
to get the background that I felt I needed for *my* goals]

As a result, my hardware designs (logic) look a lot like software
and my software designs look a lot like *hardware*. And, it also
explains why I have so few problems dealing with parallelism,
pointers, etc. -- I "see" the hardware involved and think nothing
of it.

In hindsight, I hit the sweet spot with my education and experience.
Most of the folks that I know who do hardware have very boring jobs.
And, those who do software end up feeling like the tail being wagged
by the dog (being *handed* some hardware and told -- by a hardware
guy who "doesn't understand" :> -- that they can make it do what is
needed in the product)

[I write the code for the hardware I design so I make all the
tradeoffs to *my* advantage -- regardless of which hat I'm
wearing! :> ]

> I'm much stronger on the digital side of things than the
> analogue/analog side of things however.

Ditto. Though I find myself even shedding much of those activities
as projects get more complex -- complexity invariably *only* possible
to provide in software. The *application* is where all the fun lies,
not the hardware or software!
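As an illustration of the mmap() approach Simon mentions above, here
is a minimal user-land sketch in C. The physical base address and
register offsets are placeholders, not values for any particular
SoC -- the real ones come from the datasheet:

/* Sketch: user-land GPIO access via /dev/mem + mmap().
 * GPIO_BASE and the register offsets are PLACEHOLDERS -- take the
 * real values from your SoC's datasheet. Needs permission to open
 * /dev/mem (typically root). */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define GPIO_BASE 0x48000000u     /* placeholder physical address */
#define GPIO_SET  (0x94 / 4)      /* placeholder "set" reg offset */
#define GPIO_CLR  (0x90 / 4)      /* placeholder "clear" offset   */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    /* map one page of the GPIO block into our address space */
    volatile uint32_t *gpio = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, GPIO_BASE);
    if (gpio == MAP_FAILED) { perror("mmap"); return 1; }

    for (;;) {                    /* toggle pin 5 as fast as we can */
        gpio[GPIO_SET] = 1u << 5;
        gpio[GPIO_CLR] = 1u << 5;
    }
    /* unreachable; munmap()/close() belong on a real exit path */
}

Note what this does *not* buy you: the kernel can still preempt the
loop at any point, which is exactly the hidden-latency problem the
thread subject asks about -- hence Simon's preference for a dedicated
frontend MCU when realtime guarantees matter.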
Reply by ● May 28, 2014
On Sunday, May 25, 2014 4:25:48 PM UTC-4, Don Y wrote:
> Hi jb,
>
> On 5/25/2014 11:44 AM, haiticare2011@gmail.com wrote:
>> I've been a SW developer, but one question I've never addressed is:
>> What OS latencies and CPU delays are there in a compiled, running
>> program? Is there any simple way to minimize them?
>
> That, of course, depends on the choice of processor ("CPU delays")
> and the choice/characteristics of the OS you are using (if any).
>
> CPU's often include instruction pipelines, I/D caches, and
> (instruction) scheduling algorithms that can cause what you *think*
> is happening (i.e., by examining the assembly language code that
> is actually executing) to differ from what is *actually* happening
> (i.e., by examining the CPU's *state*, dynamically).
>
> Add a second (or fourth) core and things get even messier!
>
> OS's range from *nothing* (e.g., running your code in a big loop)
> to those with virtual memory subsystems, and dynamic scheduling
> algorithms, preemption, resource reservations, deadline handlers,
> etc.
>
> Of course, if it's *your* hardware (and OS choice), you can opt to
> bypass all of those mechanisms by *carefully* designing your
> "system" to run at the highest hardware priority available. In
> essence, claiming the CPU for your exclusive use.
>
>> I am thinking of a simple c code program that reads data off a pci
>> card and then writes it to memory like a PCIe SSD drive. I
>> understand there will be various hardware latencies and delays in
>> the data input.
>
> Again, that depends on the choice of processor and the actual code
> that gets executed (recall, what you *write* can be rewritten by an
> aggressive compiler so you need to look at what the actual
> instruction stream will be). You can, of course, mix and match your
> tools to the tasks best suited. E.g., if there are timing
> constraints and relationships that must be observed in accessing
> the PCI card, code that in ASM. If the OS already knows how to
> *talk* to the SSD (assuming you are using a supported file system
> and not just writing to the raw device), then just pass the results
> of the ASM routine to a higher level routine that allows the OS to
> do the actual write.
>
> Of course, you have to be sure your *average* throughput meets the
> needs of the data source. Often, that means an elastic store,
> somewhere, so your ASM routine can *always* be invoked to get the
> next batch of data even if the OS hasn't caught up with the *last*
> batch of data. Make this store easily resizable and then measure
> to see just how much gets consumed (max) in your worst case
> scenario.
>
> [Hint, if you are using a COTS OS, you probably will never be able
> to get *published* data to allow you to make these computations
> a priori. And, if the OS will support a variety of unconstrained
> *other* applications, all bets are off -- unless you can constrain
> them to suit your requirements!]
>
>> But what if the assembler program is executing? Does the OS "butt
>> in" and context switch/multi-task during execution of a continuous
>> compiled program? If so, how does one shut that off?
>
> Again, depends on the OS and how you've installed your "program".
> E.g., if you have ensured that your code always runs at highest
> privilege, then the OS waits for *you* (which could bodge other
> applications that are expecting the OS to "be fair").
> If, OTOH, you are just a userland application, then your code
> could "pause" for INDEFINITE periods of time: milliseconds to
> *days* (exaggeration).
>
> All the "writing in ASM" buys you is the ability to see what the
> sequence of opcodes available to the CPU will be. Writing in a
> HLL hides that detail from you (though you can often tell your
> compiler to show it to you) *and* limits your ability to make
> arbitrary changes to that sequence (because the compiler has
> liberties to alter what you've told it -- in "compatible ways").
>
>> I've read about this somewhere, but never paid attention to it.
>
> Much effort goes into system designs to *free* people from
> having to think about these sorts of details. But, when you
> are dealing with hardware, there are often other constraints that
> force you to work around/through those abstractions.
>
> Typically (i.e., even in a custom OS/MTOS/RTOS) a high(er) priority
> task deals with events that have timeliness constraints. E.g.,
> fetching packets off a network interface (if you "miss" one, it
> either is lost forever *or* you have to request/wait for its
> retransmission -- a loss of efficiency... especially if you are
> likely to miss *that* one, too!).
>
> The data acquired (or *delivered* -- when pumping a data sink), is
> then buffered and a lower priority (though this might still be a
> relatively high priority... based on the overall needs of the
> system) task removes data from that buffer and "consumes" it.
>
> Note that this *adds* latency to the overall task. And, allows
> that latency to exhibit a greater degree of variability (based
> on how much of the elastic store gets consumed -- or not -- over
> the course of execution). So, if you expect a close temporal
> relationship between "input" and "output", you have to address
> this with other mechanisms (e.g., if you wanted something to
> happen AS SOON AS -- or, some predictable, constant time
> thereafter -- an input event was detected, the variability in
> this approach is directly reflected in that "output")
>
> Of course, if it can't be consumed as fast as it is sourced, then
> your system is too slow for the task you've set for it!
>
> "Why not just do the output in the same high priority task as the
> input?"
>
> What if the SSD (in your case) is not *ready* for more input at the
> *moment* your new input comes along? Perhaps the SSD is doing
> internal housekeeping? Do you twiddle your thumbs in that HIGH
> PRIORITY task *waiting* for it to be ready? How long can you
> twiddle before your *next* input comes along AND GETS *MISSED*?
>
> OS's (particularly full-fledged RTOS's) can provide varying degrees
> of support to remove some of the details of this task management.
> E.g., it may provide support for shared circular buffers. Or, allow
> buffers to be dynamically m-mapped to recipient tasks (to eliminate
> bcopy()'s). Signaling between the producer and consumer can be
> *part* of the OS (instead of forcing you to spin-wait on a flag).
> Deadline handlers can be created (by you) that the OS can then
> invoke *if* the associated task fails to meet its agreed upon
> deadline (e.g., what happens if you *can't* get back to look at
> the PCI card before the next data arrives? or, if you can't pull
> the data out of the buffer before the buffer *fills*/overflows?
> Do you *break*? Or, do you gracefully recover?)
> Best piece of advice: figure out how *not* to have timing
> constraints on your task. And, if unavoidable, figure out how best
> to handle their violation: "hard" constraints can be handled
> easiest -- you simply stop working on them once you're "late"!
> ("Sorry, the ship has already sailed!"). "Soft" requires far more
> thought and effort -- it assumes there is still *value* to
> achieving the goal -- albeit *late*. ("But, if you charter a
> speedboat, you could probably catch up to that ship and arrange to
> board her AT SEA -- or in the next port. Yeah, that's a more
> expensive proposition but that's what happens when you miss your
> deadline!").
>
> Any more *specific* answer requires far more specifics about your
> execution environment (processor, hardware involved, choice of OS,
> etc.)
>
> HTH,
> --don

Thanks, Don. What an exposition! It looks like a buffer approach is
going to be best. Is there a preferred way to share memory? Could a
SATA SSD be somehow accessed by two CPUs?
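For concreteness: the "elastic store" Don describes above is usually
a single-producer/single-consumer ring buffer. Here is a rough C
sketch, assuming one producer (an ISR, say), one consumer, and a
power-of-two size -- an illustration of the idea, not a drop-in
implementation:

/* Single-producer/single-consumer ring buffer ("elastic store").
 * Sketch only: assumes exactly one producer and one consumer, and a
 * power-of-two size so the indices can wrap with a mask. */
#include <stdint.h>
#include <stdatomic.h>

#define BUF_SIZE 1024u            /* must be a power of two */

static uint8_t     buf[BUF_SIZE];
static atomic_uint head;          /* written only by producer */
static atomic_uint tail;          /* written only by consumer */

/* Called from the high-priority side; returns 0 if the buffer is
 * full (i.e., the consumer fell behind -- your overrun policy
 * goes here). */
int produce(uint8_t byte)
{
    unsigned h = atomic_load(&head);
    unsigned t = atomic_load(&tail);
    if (h - t == BUF_SIZE)
        return 0;                 /* full: drop, count, or recover */
    buf[h & (BUF_SIZE - 1u)] = byte;
    atomic_store(&head, h + 1u);  /* publish after the data write  */
    return 1;
}

/* Called from the lower-priority consumer; returns -1 if empty. */
int consume(void)
{
    unsigned t = atomic_load(&tail);
    if (atomic_load(&head) == t)
        return -1;                /* empty */
    uint8_t byte = buf[t & (BUF_SIZE - 1u)];
    atomic_store(&tail, t + 1u);
    return byte;
}

Instrumenting the peak head-tail distance over a worst-case run tells
you how big the store actually needs to be -- the measurement Don
recommends above.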
Reply by ● May 28, 2014
On 28/05/14 07:38, haiticare2011@gmail.com wrote:
> On Sunday, May 25, 2014 4:25:48 PM UTC-4, Don Y wrote:
>> HTH,
>>
>> --don
>
> Thanks, Don. What an exposition! It looks like a buffer approach is
> going to be best. Is there a preferred way to share memory? Could a
> SATA SSD be somehow accessed by two CPUs?

I think someone already asked you to learn to snip, and I know I have
asked you to stop using the abominable google groups - get a
newsreader, and get a newsserver.

Forget the SSD. You haven't a clue what you need or want - but I can
guarantee 100% without a doubt that you will never get anything
working with SATA or SSD's except as normal disks attached to a PC
card running a full OS (such as Linux). No experienced developer
would ever consider bare-metal access to an SSD, and the level of
ignorance shown by your questions here beggars belief.

Please give up on this project. You are not qualified for it. The
problem is not that you don't know all the issues - that is normal at
the start of a project. Your problems are:

1. You have no idea what the project should do.
2. You have no idea of the real requirements.
3. You do not understand that 1 and 2 are problems.
4. You do not understand any of the hardware involved in possible
   implementations.
5. You do not understand any of the software involved.
6. You do not understand that 4 and 5 are problems.
7. You are fixed on certain ideas for a solution, though they have no
   connection with reality.
8. You repeatedly ask the same questions in different ways, and
   politely thank people for their answers even though you do not
   read what they write, do not understand the little you read, and
   do not take advantage of the good advice you get.

Tell your boss that you cannot do the project alone - you need help,
from someone who is able to plan and organise, who can find the right
people to talk to, and get the information needed and get the work
done.

I am not telling you this to be cruel or unkind - I am telling you
this because it will save you and everyone around you considerable
pain and wasted time and money if you change direction now, rather
than in a year's time when you have still got nowhere.

David
Reply by ● May 28, 2014
In article <8d5fe974-1555-4103-89c0-2fa6f3ba7deb@googlegroups.com>,
haiticare2011@gmail.com says...

[...BIG SNIP...]
>
> Actually, the failure of the ARM community to achieve any serious IO
> is embarrassingly apparent and does not require any bureaucratic
> structure to see it.
> The GPIO data rate was coaxed into the MHz range, but with great
> difficulty.
> It is natively in the low kHz range.
> General material is offered, which evaporates under scrutiny...

Complete nonsense. On Raspberry Pi, some OLD benchmarks for a system
using ARM and GPIO and Linux, from userland and lower, are on this
page:

http://codeandlife.com/2012/07/03/benchmarking-raspberry-pi-gpio-speed/

Some GPIO libraries have attained some high speeds, and one has made
an application that talks to a GPIO daemon and does a multi-channel
logic analyzer with display from multiple GPIO inputs. See

http://abyz.co.uk/rpi/pigpio/index.html

and piscope on

http://abyz.co.uk/rpi/pigpio/piscope.html

So this proves to me you have done NO research or even anything
useful. You are just asking questions until someone either
a) gives an answer agreeing with your 'solution' which is not a
   solution
b) says they will give you the complete works for FREE
c) points you at a pre-made solution that matches your changing
   specification and probably has to be FREE, which obviously needs
   to have a TPI - Tele-Pathy Interface

--
Paul Carpenter | paul@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/> PC Services
<http://www.pcserviceselectronics.co.uk/pi/> Raspberry Pi Add-ons
<http://www.pcserviceselectronics.co.uk/fonts/> Timing Diagram Font
<http://www.badweb.org.uk/> For those web sites you hate
Reply by ● May 28, 2014
On 5/27/2014 10:38 PM, haiticare2011@gmail.com wrote:

[my original lengthy reply elided]

> Thanks, Don. What an exposition!

Please edit any included text to remove anything not germane to your
followup comments. *Not* doing so is rude -- it says "My time is SO
much more important than YOURS... I can't be bothered to take the
time to trim my response appropriately but *you* (all) should be
willing to spend YOUR time scrolling through it LOOKING for any
comments I may have chosen to embed within it."

When you are counting on the benevolence of others to HELP YOU with
your query, it's sort of silly to ANNOY them in the process!

> It looks like a buffer approach is going
> to be best. Is there a preferred way to share memory?

You are only sharing memory between two different parts of the same
program (presumably, a single CPU). The issue will be ensuring that
those two different pieces of code don't BOTH try to access the same
parts of memory concurrently (or nearly so) given that they will
*probably* be invoked asynchronously wrt each other (e.g., the
producer most likely an ISR).

Keywords you may find helpful: mutex, critical region, atomic
operation, FIFO management, synchronization, etc.

This is *old* technology. You should be able to find examples of
code that does exactly this (as it is encountered in damn near every
application that talks to the outside world).

> Could a SATA SSD be
> somehow accessed by two CPUs?

You don't need two CPUs -- that just makes things more complicated:
then you have to physically share access to a single peripheral
along with any state defining each CPU's past/current accesses to
it.

Do yourself a favor: write a little piece of code that takes keys
from the (PC?) keyboard and copies them to a contiguous area in
memory (e.g., a "buffer" of some nontrivial size). When the buffer
is full, print out its contents. I.e., if the buffer is big enough
for 26 characters and you type the keys 'a' through 'z', it should
print "abcdefghijklmnopqrstuvwxyz" AFTER you have typed the final
'z' (which "filled" the buffer). Then, terminate the program.

*BUT*, as you are copying each keystroke detected, invoke sleep(2)
-- or something equivalent -- to ensure the program is "stuck" in
the copying act for a noticeable amount of time (e.g., 2 seconds)
before being able to check for the next keystroke. In effect, the
PROGRAM will only be looking for a keystroke every ~2 seconds.

Then, start the program and type the letters of the alphabet,
slowly, pausing a *few* seconds between each. Verify the output is
as expected once you type the final 'z'.

Repeat the exercise but, this time, type one character every ONE
*second*. Once you get to 'z', start typing '*' until the program
spits out its result.

Repeat this, yet again, typing two characters every second (this
should be easy to do, physically). Then, again, at *three* per
second... etc.

Look at your results. Think about what you *thought* was happening
(when you wrote the code). Think about what *must*, instead, be
happening. Don't GUESS -- invest some of YOUR precious time
experimenting.
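For what it's worth, a minimal sketch of that exercise, assuming a
POSIX-ish system (the terminal driver's line buffering will colour
the results -- which is rather the point):

/* Don's exercise, sketched: copy keystrokes into a buffer, but
 * sleep 2 s per key so the program only "looks" for input every
 * ~2 seconds. Where do the keys you type during the sleep go? */
#include <stdio.h>
#include <unistd.h>

#define BUF_LEN 26

int main(void)
{
    char buffer[BUF_LEN + 1];
    int n = 0, c;

    while (n < BUF_LEN && (c = getchar()) != EOF) {
        buffer[n++] = (char)c;    /* "copying the keystroke"      */
        sleep(2);                 /* ...stuck here for ~2 seconds */
    }
    buffer[n] = '\0';
    printf("%s\n", buffer);       /* dump the buffer when "full"  */
    return 0;
}

The keystrokes typed while the program sleeps don't vanish: the tty
driver is quietly buffering them for you -- until *its* buffer fills.
Which is the lesson: there is always an elastic store somewhere, and
something has to size it.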