EmbeddedRelated.com
Forums

Common name for a "Task Loop"

Started by Tim Wescott June 24, 2016
On 6/28/2016 3:30 PM, Rob Gaddi wrote:
> Don Y wrote:
>
>> On 6/28/2016 2:09 PM, Rob Gaddi wrote:
>>>> Said another way, is "state" the equivalent of (and only means of
>>>> representing) the "virtual program counter" for 'A'?
>>>
>>> Yup.  Each invocation of A runs through to completion without blocking;
>>> if it's waiting for I/O then you're in
>>>
>>>     case SPI_WAIT:
>>>         if (!spi_finished) break;
>>
>> So, if you were bit-banging the SPI interface, you'd have to
>> keep the 'substate' (of "SPI_WAIT") in a "spi_clock_counter"
>> (so you could issue *a* clock then relinquish the processor for
>> "yet another big loop iteration" while remaining in the SPI_WAIT
>> state) *or* issue the individual clocks "in-line" and ensure they
>> actually make it onto the I/O pins sequentially with the
>> required time between them (determined by the device).
>
> Although on a small enough processor that I wouldn't consider even a
> cooperative OS, you generally bit-bang SPI by flipping the port pins as
> fast as you can and finding that it's just not all that fast.  No reason
> to multitask in the middle of it.
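Fleshed out, the run-to-completion task quoted above might look like this in C. This is a minimal sketch: the flag names follow the quoted fragment, but the hardware-start call and the way the flags get set are assumptions, not any particular driver's API.

```c
#include <stdbool.h>

/* A minimal run-to-completion task: each call does a bounded amount of
   work and returns (the "yield").  The flags would really be set by an
   ISR or by polling hardware; here they are plain variables.          */
static bool spi_start_requested;   /* someone wants a transfer         */
static bool spi_finished;          /* hardware says transfer complete  */

enum spi_state { SPI_IDLE, SPI_BUSY, SPI_DONE };
static enum spi_state state = SPI_IDLE;

void spi_task(void)                /* called once per big-loop pass    */
{
    switch (state) {
    case SPI_IDLE:
        if (!spi_start_requested)
            break;                 /* nothing to do this pass          */
        /* start_hw_transfer();      kick the hardware (hypothetical)  */
        state = SPI_BUSY;
        break;
    case SPI_BUSY:
        if (!spi_finished)
            break;                 /* still waiting: returning = yield */
        state = SPI_DONE;
        break;
    case SPI_DONE:
        /* consume the result here, then rearm for the next request    */
        spi_start_requested = false;
        state = SPI_IDLE;
        break;
    }
}
```

Each pass through the big loop advances the task by at most one state; the `state` variable is exactly the "virtual program counter" being discussed.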
Pick something that isn't "on the same timescale" as an opcode fetch (e.g., debouncing keys on a keypad).
>> [Of course, this doesn't make sense for an SPI interface; OTOH, it can
>> be helpful for implementing "STUCK KEY" timers for keypads ALONGSIDE
>> the "DEBOUNCE" timer!  In the real world, this seems to happen
>> quite a lot: starting a motor and waiting for it to hit a limit
>> switch -- yet KNOWING that it can't run "forever" unless the switch
>> or the mechanism are broken!]
>
> And at that point you've got a non-preemptive OS where you explicitly
> yield the processor in each task.  It's fine as far as it goes, but
> you've just introduced all the issues of managing multiple stacks,
> having a scheduler that keeps track of "tasks", etc.  If you've got the
> RAM for it, great, but I wouldn't want to try it on 1KB.
But you don't have to "manage multiple stacks", "have a scheduler", etc.!
Those things depend heavily on the implementation language you choose for
your product.  I.e., if your product is small, then the gains of an HLL
are relatively insignificant AND any "hidden costs" are likely to catch
you unprepared.

[The LORAN plotter mentioned elsewhere in this thread was coded in ASM,
as HLLs simply weren't available in the i4004/i8080 days.]

Apologies for the all caps, but the tools available 40 years ago weren't
very tolerant!  :-/

Note this is NOT running in an ISR.  Why should it?  If we can guarantee
from the design of the system (i.e., how quickly you can get around the
"loop") that it will be revisited OFTEN/frequently, why waste a foreground
resource (ISR) on something as silly as scanning and debouncing keys??

BigLoop:
        Call    TMHNDL          ; system timers service
        ...
        Call    KYHNDL          ; service keypad
        ...
        ret

Main:   Call    BigLoop         ; run the machine
        Call    Otherstuff      ; support diagnostics, performance counters, etc.
        Jump    Main

KYHNDL: YIELD                   ;BEGIN KEYPAD IDLE STATE

        CALL    KYPDCK          ;CHECK IF ANY KEY CLOSURES
        RET     Z               ;NO KEY CLOSURES HAVE BEEN DETECTED

        LD      E,A             ;COLUMN DATA OF ROW DETECTED AS FIRST KEY CLOSURE
        DEC     A               ;SET BITS TO THE RIGHT OF THE RIGHTMOST COLUMN DETECTED
        AND     E               ;ANY COLUMNS DETECTED TO THE LEFT OF RIGHTMOST COLUMN?
        JR      NZ,TWOKEY       ;TWO KEY CLOSURES DETECTED IN THE SAME ROW

        DEC     B               ;SET Z FLAG IFF FIRST KEY WAS DETECTED IN LAST ROW!
        LD      D,B             ;ROW NUMBER OF FIRST CLOSURE [0, NMKYRW-1]
        CALL    NZ,KYPDLP       ;CHECK IF CLOSURE IN ANOTHER ROW IF NOT ALREADY IN LAST
        JR      NZ,TWOKEY       ;TWO CLOSURES DETECTED IN DIFFERENT ROWS

        LD      A,D             ;RETRIEVE ROW NUMBER OF ONLY CLOSURE [0, NMKYRW-1]
        LD      B,NMKYRW        ;NUMBER OF ROWS IN KEYPAD
        SUB     B               ;R.A = ROW NUMBER OF DETECTED KEY - NMKYRW
ONKYLP: ADD     A,B             ;ACCUMULATE KEYNUMBER AS (COL*NMKYRW+ROW)
        RR      E               ;ADVANCE TO NEXT COLUMN TO THE LEFT
        JR      NC,ONKYLP       ;NOT THIS COLUMN, TRY AGAIN.
        INC     A               ;FORM KEYNUMBER AS (COL*NMKYRW+ROW)+1 [1,NMKYCL*NMKYRW]
        CALL    PUTKEY          ;SEND CHARACTER TO KEYPAD BUFFER

;keypad buffer ultimately feeds anything that is interested in "keystrokes"
;so, keystrokes can be buffered to accommodate variations in how quickly
;PREVIOUS keystrokes are processed/consumed

TWOKEY: LOAD    STUCK_KEY_TIMER 15 SEC  ;SETUP STUCK KEY TIMER AND YIELD

        CALL    KYPDCK          ;CHECK IF ANY KEY STILL DOWN
        JR      Z,NOTKEY        ;ALL KEYS RELEASED

        WAIT    STUCK_KEY_TIMER ;A KEY IS DOWN, WAIT ON STUCK TIMER

        LD      A,STKKEY        ;FLAG INDICATING KEY STUCK
        CALL    PUTKEY          ;SIGNAL STUCK KEY ERROR

        JR      TWOKEY          ;WAIT FOR STUCK KEY CONDITION TO GO AWAY

NOTKEY: LOAD    DEBOUNCE_TIMER 50 MSEC  ;SETUP DEBOUNCE TIMER AND YIELD

        CALL    KYPDCK          ;CHECK IF STILL NO KEY CLOSURE
        JR      NZ,TWOKEY       ;KEY HAS BOUNCED CLOSED

        WAIT    DEBOUNCE_TIMER  ;NO CLOSURE, WAIT ON TIMER

        JR      KYHNDL          ;TIMER HAS TIMED OUT INDICATING KEYPAD NOW IDLE

[The keypad hardware exists in memory at the address KEYBRD -- which is of
the form YYxx (hex).  The xx are individual bits, one per row, that drive
the row selects of the keypad; reading YY01 would read any key closures in
row 0 of the keypad, YY02 for row 1, YY04 for row 2, YYFF for ANY row --
by enabling all rows simultaneously!]
KYPDCK: LD      HL,KEYBRD+ALLROW        ;POINTER TO ALL ROWS SELECT
        LD      BC,NMKYRW*256+ALLCOL    ;NUMBER OF ROWS AND MASK
        LD      A,(HL)                  ;CHECK IF ANY KEY CLOSURES IN ANY ROW
        AND     C                       ;IGNORE UNUSED COLUMNS OF KEYPAD
        RET     Z                       ;NO KEY CLOSURES DETECTED ANYWHERE

        INC     L                       ;FORCE A SINGLE BIT FOR A SINGLE ROW TO BE SELECTED
KYPDLP: RR      L                       ;SHIFT SELECT BIT TO GET THE NEXT ROW SELECTED
        LD      A,(HL)                  ;READ THE COLUMN DATA FOR THIS ROW
        AND     C                       ;IGNORE UNUSED COLUMNS OF KEYPAD
        RET     NZ                      ;A KEY CLOSURE HAS BEEN DETECTED IN THIS SELECTED ROW
        DJNZ    KYPDLP                  ;NOT DONE SCANNING ALL ROWS
        RET                             ;ALL ROWS SCANNED BUT I COULD HAVE SWORN I SAW A KEY

Note that there is no need to save any "state" in any of these algorithms
beyond the state that is embodied in the Program Counter and the timers
(which are managed by TMHNDL).

[I'd much rather have these capabilities/services than an HLL when coding
small (~16KB ROM) projects.]
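For readers who don't speak Z80, a rough C analogue of the KYPDCK scan follows. It is a sketch only: the simulated latch array stands in for the memory-mapped KEYBRD hardware (real code would read volatile locations), and the 4x4 geometry and constants are invented.

```c
#include <stdint.h>

/* Invented geometry: 4 rows x 4 columns, active-high closure bits.  */
#define NMKYRW 4
#define ALLROW 0x0F                    /* select every row at once    */
#define ALLCOL 0x0F                    /* mask off unused column bits */

/* Simulated keypad latches standing in for the memory-mapped KEYBRD
   locations; bit n of keypad_rows[r] set = key (row r, col n) down. */
static uint8_t keypad_rows[NMKYRW];

static uint8_t keypad_read(uint8_t row_select)
{
    uint8_t cols = 0;
    for (int r = 0; r < NMKYRW; r++)
        if (row_select & (1u << r))
            cols |= keypad_rows[r];    /* selected rows OR onto columns */
    return cols;
}

/* Analogue of KYPDCK: returns the column bits of the first row with a
   closure (0 if the keypad is idle); *row gets that row's index.    */
uint8_t keypad_check(int *row)
{
    if ((keypad_read(ALLROW) & ALLCOL) == 0)
        return 0;                      /* no closures anywhere        */
    for (int r = 0; r < NMKYRW; r++) {
        uint8_t cols = keypad_read(1u << r) & ALLCOL;
        if (cols) {
            *row = r;
            return cols;
        }
    }
    return 0;  /* saw a key on the all-rows read, gone now (bounce)  */
}
```

Like the assembly, it reads all rows at once first (the cheap "anything at all?" test) and only walks the rows one by one when that says yes.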
On 29.6.2016 г. 01:20, Hans-Bernhard Bröker wrote:
> Am 29.06.2016 um 00:00 schrieb Dimiter_Popoff:
> .....
>> but having a true - preemptive allowing cooperative operation- scheduler
>> costs very little in both machine resources and effort to put together
>> so I see no point in trying to avoid it.
>
> Well, it does increase resource usage in at least one aspect, which can
> quickly become significant: memory, to store the state of currently
> preempted / waiting tasks.
Yes of course, but things are not that bad.  I did it some 25 years ago on
an HC11 with 512 bytes of RAM....  Of course there can be a situation where
the stacks of the 5-6 tasks are exactly what the project needs, but that is
a bit extreme.  I think the registers cost 11 bytes to stack (it has been
many years), then providing some stack to each task etc.; well, I'd say 20%
of the 512 bytes go on multitasking.  I am not sure the loop approach will
use that much less to really be worth it.
> And because you'll have to swap between task
> states, including call stack and CPU register contents, that usually
> means the scheduler itself can't be written in the high-level language
> of choice.
This is of course true, I just have never had the problem.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
On 6/28/2016 4:06 PM, Don Y wrote:

> TWOKEY: LOAD STUCK_KEY_TIMER 15 SEC  ;SETUP STUCK KEY TIMER AND YIELD
>
>         CALL KYPDCK                  ;CHECK IF ANY KEY STILL DOWN
>         JR   Z,NOTKEY                ;ALL KEYS RELEASED
>
>         WAIT STUCK_KEY_TIMER         ;A KEY IS DOWN, WAIT ON STUCK TIMER
It will only be obvious to someone carefully inspecting the code
(unfamiliar with the primitives being used) that the LOAD has redefined
the entry point for this task to be immediately AFTER the LOAD *and* has
yielded the CPU to the next "task" in the "loop".  I.e., the next time
around the loop, CALL KYPDCK will execute in this task's "timeslot".

If the key (ANY key!) is still depressed, then the WAIT will execute.
When the key is released, then we debounce the trailing edge of the key
closure (i.e., require the keypad to remain "idle" for a certain amount
of time before we actually return to the "IDLE" state).

[Note that this code is shared between multiple key closures and a
*single* key closure -- the difference being whether the single key
closure was signaled to the rest of the system, while the multiple key
closure wouldn't be!]

If the STUCK_KEY_TIMER has timed out, then WAIT will be a NOOP.
Otherwise, it will return to the "big loop".  As such, WHILE the key is
(stuck) depressed, this task will just keep executing these three
statements (CALL, JR, WAIT).  If the key remains stuck too long, then
the code advances to signal the "key stuck" event (let something else
decide what that means).
>         LD   A,STKKEY                ;FLAG INDICATING KEY STUCK
>         CALL PUTKEY                  ;SIGNAL STUCK KEY ERROR
>
>         JR   TWOKEY                  ;WAIT FOR STUCK KEY CONDITION TO GO AWAY
>
> NOTKEY: LOAD DEBOUNCE_TIMER 50 MSEC  ;SETUP DEBOUNCE TIMER AND YIELD
I.e., there's a lot of logic in this tiny state machine and no "state" variable to examine. The (real) "program counter" encodes the state of the FSM.
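To make the contrast concrete, here is a sketch of the same stuck-key / debounce tail written the other way, with an explicit state variable instead of the program counter. The state names, event names, and tick counts are all invented for illustration, assuming the task is called once per loop pass at roughly a 1 ms period.

```c
#include <stdbool.h>

/* The LOAD/WAIT version keeps its state in the program counter; with
   an explicit state variable the same logic looks like this.        */
enum key_state { KEY_DOWN, KEY_RELEASED };
enum key_event { EV_NONE, EV_STUCK, EV_IDLE };

#define STUCK_TICKS    15000           /* ~15 s at a 1 ms loop period */
#define DEBOUNCE_TICKS 50              /* ~50 ms                      */

static enum key_state kstate = KEY_DOWN;
static unsigned ticks = STUCK_TICKS;

/* Called once per loop pass with the current raw keypad reading.    */
enum key_event key_tail_task(bool key_down)
{
    switch (kstate) {
    case KEY_DOWN:
        if (!key_down) {               /* released: debounce the edge */
            kstate = KEY_RELEASED;
            ticks = DEBOUNCE_TICKS;
            return EV_NONE;
        }
        if (ticks && --ticks == 0)
            return EV_STUCK;           /* held too long: signal once  */
        return EV_NONE;
    case KEY_RELEASED:
        if (key_down) {                /* bounced closed: rearm timer */
            kstate = KEY_DOWN;
            ticks = STUCK_TICKS;
            return EV_NONE;
        }
        if (--ticks == 0)
            return EV_IDLE;            /* keypad quiet long enough    */
        return EV_NONE;
    }
    return EV_NONE;
}
```

Every piece of "where was I?" that the assembly version gets for free from the PC has to be carried here in `kstate` and `ticks`, which is exactly the trade being discussed.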
On 29.6.2016 г. 01:21, Rob Gaddi wrote:
> Dimiter_Popoff wrote:
>
>> On 28.6.2016 г. 17:40, Tim Wescott wrote:
>>> On Tue, 28 Jun 2016 06:20:43 +0300, Dimiter_Popoff wrote:
>>>
>>>> On 28.6.2016 г. 05:54, Simon Clubley wrote:
>>>>>
>>>> .....
>>>>> I still think that's a polling loop because you are reaching out to the
>>>>> sensor and asking it for its current value.
>>>>
>>>> "Polling" in programming means "as opposed to interrupt handling".
>>>> How is this opposed to "interrupt handling"?
>>>>
>>>> What you suggest to be polling is simply a loop of calls to subroutines.
>>>> A "call loop" is what describes it - although it does not matter a lot
>>>> what word is used as long as it is not an already widely accepted one
>>>> like "polling" which you want to redefine.  Nothing wrong with that of
>>>> course - as long as you don't have to communicate with other people
>>>> using your redefined term.
>>>>
>>>> So much talk about so little :-).  Although Tim's topic idea worked,
>>>> produced quite a discussion.
>>>
>>> Well, I was hoping for a name of a design pattern, and I'm still not
>>> happy about my choices.  So from that perspective it's a wash.
>>>
>>> "Super loop" seems closest, but it seems to more capture the notion of
>>> _always_ executing A, B, C, rather than executing only those bits that
>>> are ready.
>>>
>>
>> I encounter the "super loop" term for the first time here, but why not.
>> Although I see nothing "super" about it, to me this is still an endless
>> loop of calls to subroutines either of which may opt to do something
>> or to just return if it has nothing to do.
>> But like I said earlier I have almost never used this approach, it
>> smells of oversimplifying - thus costing more than a decent scheduler
>> does.  Implement loops, state machines etc. as you like within tasks
>> but having a true - preemptive allowing cooperative operation- scheduler
>> costs very little in both machine resources and effort to put together
>> so I see no point in trying to avoid it.
>>
>> Dimiter
>>
>> ------------------------------------------------------
>> Dimiter Popoff, TGI             http://www.tgi-sci.com
>> ------------------------------------------------------
>> http://www.flickr.com/photos/didi_tgi/
>
> I think you're vastly underestimating the advantages of non-preemption.
> You get rid entirely of the need for mutexes and resource locking; all
> your accesses become atomic because you can't be interrupted.
Well, maybe, but my way of thinking has incorporated all the
multitask/interrupt implications for decades; frankly it was so from the
start, this is how my mind works.

Then I would say I am pretty practical about preemption; it does not have
to be done in any state.  Say, if the machine is in supervisor state it
will not be brutally preempted by the "force out" interrupt; it will exit
just as it returns from the current system call it is in (being in
supervisor state).  Then preemption does not really have to occur a lot,
if at all; it is just a plan B most if not all of the time.  I have a
"hog" command under DPS which tests that; it does a "bra *" :).  Starting
multiple instances at different priorities allows some testing, but I
very rarely use it.
> Now, once you start adding interrupts in it can get hairier, because
> you've just reintroduced preemption.  But for projects where you have
> serious constraints on code, RAM, or power, something that doesn't need
> a scheduler is a real win.
Yes, I just don't see a lot of cases where interrupts won't be needed.
Then if the programming effort is so little that it can be done without
them, of course I just would not put much thought into it and would do it
without my schedulers :).  But I have not had anything like that for
decades...

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
On 29.6.2016 г. 01:39, Tim Wescott wrote:
> On Wed, 29 Jun 2016 01:00:02 +0300, Dimiter_Popoff wrote:
>
>> On 28.6.2016 г. 17:40, Tim Wescott wrote:
>>> On Tue, 28 Jun 2016 06:20:43 +0300, Dimiter_Popoff wrote:
>>>
>>>> On 28.6.2016 г. 05:54, Simon Clubley wrote:
>>>>>
>>>> .....
>>>>> I still think that's a polling loop because you are reaching out to
>>>>> the sensor and asking it for its current value.
>>>>
>>>> "Polling" in programming means "as opposed to interrupt handling".
>>>> How is this opposed to "interrupt handling"?
>>>>
>>>> What you suggest to be polling is simply a loop of calls to
>>>> subroutines.
>>>> A "call loop" is what describes it - although it does not matter a lot
>>>> what word is used as long as it is not an already widely accepted one
>>>> like "polling" which you want to redefine.  Nothing wrong with that of
>>>> course - as long as you don't have to communicate with other people
>>>> using your redefined term.
>>>>
>>>> So much talk about so little :-).  Although Tim's topic idea worked,
>>>> produced quite a discussion.
>>>
>>> Well, I was hoping for a name of a design pattern, and I'm still not
>>> happy about my choices.  So from that perspective it's a wash.
>>>
>>> "Super loop" seems closest, but it seems to more capture the notion of
>>> _always_ executing A, B, C, rather than executing only those bits that
>>> are ready.
>>>
>>
>> I encounter the "super loop" term for the first time here, but why not.
>> Although I see nothing "super" about it, to me this is still an endless
>> loop of calls to subroutines either of which may opt to do something or
>> to just return if it has nothing to do.
>> But like I said earlier I have almost never used this approach, it
>> smells of oversimplifying - thus costing more than a decent scheduler
>> does.  Implement loops, state machines etc. as you like within tasks but
>> having a true - preemptive allowing cooperative operation- scheduler
>> costs very little in both machine resources and effort to put together
>> so I see no point in trying to avoid it.
>
> The "scheduler" overhead is about 10x lower with this style of
> switching.  If you've got a bunch of similar-run-length tasks this method
> works dandy.
Well, it is perhaps 10x in terms of processing time, but that makes it
within 1% of the time instead of within 0.1%.  I mentioned earlier that I
once (some 10 years ago) did a thing using the "loop" approach and in
hindsight I can say it cost me more time (I had plenty of CPU resources
to spare so other considerations just do not apply).
> Of course, if you've got a bunch of fast tasks and even one slow one
> (reciting the Gettysburg address to a human, for instance) then a
> preemptive scheduler gets very attractive.
I see the preemptive feature mostly as a backup plan; basically, if the
reciter is reciting through a UART at 9600 bps, the output driver would
exit the task cooperatively long before being preempted (say after
filling an output FIFO of some 256 bytes or so).  But if the system is
dynamic - e.g. a full-blown OS with a user in front of it doing this and
that - then preemption may well have to kick in.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
Am 29.06.2016 um 00:30 schrieb Rob Gaddi:
> Don Y wrote:
> Although on a small enough processor that I wouldn't consider even a
> cooperative OS, you generally bit-bang SPI by flipping the port pins as
> fast as you can and finding that it's just not all that fast.  No reason
> to multitask in the middle of it.
Easy for you to say ;-)

The interesting part begins when bit-banging an entire SPI telegram takes
as long (or longer...) than the minimum response time you have to
guarantee on some other front.  Likewise for busy-waiting for an ADC's
conversion to finish, or bit-banging other serial protocols.  E.g.
reasonably short I2C telegrams can often still be bit-banged
uninterrupted, with few or no NOPs to get the speed down.  But try that
with Dallas' 1-wire bus and other parts of the system will quite probably
be very unhappy.

It's all nice and easy as long as the system's time slot lengths allow
you to finish an entire operation in one go.  But if they don't, and you
still want to stay with the "no preemption" scheme of doing things,
complications tend to start piling up like Jenga sticks --- and the
resulting construct is often about as stable as a Jenga tower in a nearly
finished game.  Sooner or later there will be so many "have to check for
event X again!" or "time for yet another Z to go out" kinds of calls
creeping up all over the source code supposedly handling job Y, and vice
versa, that it all becomes a total mess.

Hardware units for SPI, I2C and such usually exist, and they offer
event-driven notification, sometimes even FIFOs or DMA, for a reason. :-)
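One way to keep a long transfer from wrecking the response-time guarantee, without preemption, is to make the position within the telegram explicit state and emit only a bounded piece per loop pass. A sketch in C; `spi_byte_out`, the recording array, and the telegram handling are all hypothetical stand-ins, not any real driver.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* One byte per big-loop pass: the position in the telegram is explicit
   state.  spi_byte_out stands in for the actual pin-wiggling (assumed
   fast: eight clocks, done inline); here it just records the byte.   */
static uint8_t sent[64];
static size_t  sent_n;
static void spi_byte_out(uint8_t b) { sent[sent_n++] = b; }

static const uint8_t *msg;             /* telegram in progress, if any */
static size_t msg_len, msg_pos;

bool spi_send_start(const uint8_t *buf, size_t len)
{
    if (msg)
        return false;                  /* previous telegram still going */
    msg = buf;
    msg_len = len;
    msg_pos = 0;
    return true;
}

/* Called once per big-loop pass; sends at most one byte, then yields
   by returning so other tasks keep their response-time guarantees.
   Returns true when the interface is idle.                           */
bool spi_send_task(void)
{
    if (!msg)
        return true;
    spi_byte_out(msg[msg_pos++]);
    if (msg_pos == msg_len)
        msg = NULL;                    /* telegram complete            */
    return msg == NULL;
}
```

The cost is exactly the complaint above: every interrupted operation grows its own little "where was I?" bookkeeping, and the more of them there are, the more Jenga-like it gets.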
On 29.6.2016 г. 01:43, Don Y wrote:
> hi Dimiter,
>
> On 6/28/2016 3:00 PM, Dimiter_Popoff wrote:
>> I encounter the "super loop" term for the first time here, but why not.
>> Although I see nothing "super" about it, to me this is still an endless
>> loop of calls to subroutines either of which may opt to do something
>> or to just return if it has nothing to do.
>> But like I said earlier I have almost never used this approach, it
>> smells of oversimplifying - thus costing more than a decent scheduler
>> does.  Implement loops, state machines etc. as you like within tasks
>> but having a true - preemptive allowing cooperative operation- scheduler
>> costs very little in both machine resources and effort to put together
>> so I see no point in trying to avoid it.
>
> I don't think you're understanding the value of "lean" approaches
> when you are operating in resource-starved implementations.
>
> Imagine having a few HUNDRED bytes of RAM, *total* in your system.
> How much of this do you "divert" to implementing a formal scheduler?
> How much do you devote to preserving the state(s) of independent tasks?
Hi Don, I think I addressed this in my reply to Hans a short time ago. My estimate was that for 5-6 tasks at 512 bytes of RAM you'd spend about 20% of it on multitasking. If you opt for the superloop approach you will still need some stack etc., let us say 5%. Which means you will have spent about 80 bytes on having a good scheduler - which would likely be a good deal in most cases (and of course I can see there can be extreme cases where one just does not have the 80 bytes).
> One of my earliest products was a LORAN-C position plotter.  It
> received LORAN coordinates (time-difference pairs) from an external
> LORAN (radio) receiver.  These are ~6 (decimal) digit values that
> represent the differences, in time, between radio waves being
> received from three geographically-fixed transmitters (a master
> and a pair of slaves; the slaves synchronized to the master).
>
> <https://en.wikipedia.org/wiki/LORAN#Operation> not a very
> good explanation -- but close enough.
> <https://en.wikipedia.org/wiki/Loran-C#Principle>
>
> As they are *differences*, they present a hyperbolic coordinate
> system (back to Conic Sections 101 :> ).  Based on knowledge
> of where these transmitters are located on the globe (latitude
> and longitude!), you project the hyperbolic coordinate system
> onto the spherical coordinate system of lat-lon (as used in
> navigation).  Then, correct for the fact that the Earth is
> an OBLATE sphere (not really *round*).  Finally, map these
> onto a mercator map projection (the precisely scaled sheet of
> paper -- aka MAP -- in your plotter on which your pen will be
> drawing!) and drive the pen to the "current position" from its
> previous position.
>
> While the displays, keypad, stepping motor interfaces, etc.
> can all use nice little integers, all of this navigational
> math has to be done using floats.  Each "coordinate"
> (whether in the hyperbolic, spherical/oblate, or mercator
> coordinate system) is thus a PAIR of floats.
>
> With 256 bytes of RAM to play with (remember the pushdown stack
> and EVERY RAM consumer), you really don't have spare *bits*,
> let alone *bytes*, to devote to any sort of formal scheduling
> framework.
Well this is a good example of an extreme case where it is worth going to extra length to save each byte of RAM, of course.
> ....
>
> I think your current product offerings are REALLY "flush" with
> resources, by comparison.  E.g., you (and I) wouldn't think twice
> about using an "int" (32b!) as a "bool".  By contrast, the plotter
> mentioned above packed *8* bools into a byte -- and, at some
> times, might use that very same byte as a *counter*, etc. depending
> on what was happening in the product at that time!
We all have plenty of resources nowadays indeed, which is why I think a
scheduler is the normal way to go.  But I do not use a longword as a
"bool", almost never.  Typically I call it "flag" and define bits into
it... then I often (have to) use atomic accesses to it etc. :-).

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
On 6/28/2016 4:40 PM, Dimiter_Popoff wrote:
> We all have plenty of resources nowadays indeed, which is why
> I think a scheduler is the normal way to go.  But I do not use a longword
I don't think that is universally the case.  I think there still exist
"penny pinching" applications.  OTOH, I think many of those could, for
the same effective cost, move up to more featureful implementations if
not for "design inertia".

I've played at the shallow end of the pool for a long time.  Now I'm
enjoying "the deep end" -- and the VERY deep end!  There are entirely
different issues present that you'd never get exposed to if you kept
playing in the "wading pool" :-/
> as "bool", almost never.  Typically I call it "flag" and define
> bits into it... then often I (have to) use atomic accesses to it
> etc. :-).
On Tue, 28 Jun 2016 15:43:03 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:


> Imagine having a few HUNDRED bytes of RAM, *total* in your system.
> How much of this do you "divert" to implementing a formal scheduler?
> How much do you devote to preserving the state(s) of independent tasks?
With a stack-oriented HLL like Pascal or C, the variables are already in
the (private) stacks; only the CPU registers need to be saved, thus only
a few extra bytes need to be allocated in each local stack in addition to
the application variables.
On 6/28/2016 10:04 PM, upsidedown@downunder.com wrote:
> On Tue, 28 Jun 2016 15:43:03 -0700, Don Y
> <blockedofcourse@foo.invalid> wrote:
>
>> Imagine having a few HUNDRED bytes of RAM, *total* in your system.
>> How much of this do you "divert" to implementing a formal scheduler?
>> How much do you devote to preserving the state(s) of independent tasks?
>
> With a stack-oriented HLL like Pascal or C, the variables are already
> in the (private) stacks; only the CPU registers need to be saved,
> thus only a few extra bytes need to be allocated in each local stack
> in addition to the application variables.
Ah, but that's the problem!  You now need stacks for each task (each
large enough to satisfy the task's worst-case stack penetration *plus*
any stack required by ISRs -- as they can occur during any task's "time
slot").  You can save JUST the SP for each task and use that to "recover"
the stack and CPU registers (saved on the stack during the context
switch).

But, you will always be tempted to yield() (or equivalent) at some
arbitrary depth on the stack.  I.e., you can have a dozen stack frames
nested FOR THIS TASK and decide to yield() -- now, all of those stack
frames must be preserved as part of the task's state (because you picked
a "less than ideal place" to yield the processor to the "next" task).

If, instead, you impose a discipline on when you can yield, then you can
eliminate the need to preserve any of this "stacked state".  E.g., only
"yield" when the SP is at the level it is at between calls out from the
big loop.  Then, all you need to do is save the processor state
(registers plus PC; the SP will be *known* to be "fixed" in all such
invocations with NOTHING on the stack!).

If you take a step further and allow yield() to be invoked ANY TIME YOU
CARE NOTHING ABOUT THE REGISTER CONTENTS, then you don't even have to
save those!  That's the premise of the MTX I described in my other posts
(and the reason it can function with just 2 bytes per task -- the PC).

If you're working in tiny environments, you probably are willing -- even
EAGER -- to shed the sorts of overhead that you typically draw into the
mix with HLLs; yet you sacrifice relatively little in terms of capability
(cuz you're not building giant applications with negligible resources!)
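A C analogue of that "the PC is the only saved state" idea can be sketched with one resume-point function pointer per task: YIELD becomes "store the next entry point and return", and the loop itself is the scheduler. The names and the two-phase example task are invented for illustration; this is a protothread-style sketch, not the MTX itself.

```c
#include <stddef.h>

typedef void (*taskfn)(void);      /* a "task" is just a resume point  */

static void phase_a(void);
static void phase_b(void);
static void idle_task(void);

#define NTASKS 2
/* The entire per-task state: one resume point each (the C stand-in
   for the 2-byte saved PC).  No stacks, no register save areas.      */
static taskfn task[NTASKS] = { phase_a, idle_task };
static int cur;                    /* which task is running this pass  */

static void yield_to(taskfn resume)
{
    task[cur] = resume;            /* redefine this task's entry point */
}

/* Example task: counts in two phases, yielding between loop passes.  */
static int counter;
static void phase_a(void) { counter += 1;  yield_to(phase_b); }
static void phase_b(void) { counter += 10; yield_to(phase_a); }
static void idle_task(void) { /* nothing to do this pass */ }

void big_loop_pass(void)           /* one trip around the big loop     */
{
    for (cur = 0; cur < NTASKS; cur++)
        task[cur]();
}
```

The discipline described above is enforced by construction: a task can only "yield" by returning, so the shared stack is always empty at every switch point and there is nothing to preserve beyond the resume pointer.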