> Some things "fast", some things "slow". No real clear requirements. So the
> result is general answers.
>
> I'll contribute mine. I tend to have used systems that are not quite as
> resource starved as Don. So I do prefer to have some Operating System, even
> if it is only a micro-kernel, to handle tasking. And preemptive over
> cooperative scheduling.
Sorry, I didn't mean to imply that (all) my designs are resource starved.
(Current project has more resources than most *companies* :< )
Rather, I was trying to pick up on the OP's comment: "I'm more of a
hardware guy..." and assume he's running as close to metal as he
can (perhaps because he doesn't have anything more sophisticated
in place to build upon?).
Hardware folks tend to have no problems thinking about concurrency.
In a hardware design, you have lots of "mechanisms" running at
the same time; arranging for them *not* to do so would be
counterintuitive!
So, the idea that <something> could be pulling characters out of a UART
while <something else> is stuffing them *in*...
And, yet another thing is acting on received characters to determine what
actions they should initiate -- and those actions happening *while* all
these other things continue in their operation...
The obvious conclusion is an environment that *supports* concurrency
(e.g., multitasking).
But, such an environment (if you don't already have one) is an effort
for most folks unfamiliar with the techniques and issues that arise.
(e.g., priorities, deadlock, preemption, atomic regions, etc.)
By contrast, it is relatively easy to put in place an ad hoc cooperative
environment -- despite the crudeness with which it might be implemented!
Without the concern of possible preemption at any moment, you can build
simple/trivial services -- albeit not elegant or highly performant!
E.g., (apologies for typos; I've got biscotti in the oven so I'm only
partially paying attention...)
#include <system_config.h>

#define EXPIRED (0)

int timers[NUMBER_TIMERS];    // shared with client tasks (e.g., blinker)

Timing_Service()
{
    int timer;

init:
    for (timer = 0; timer < NUMBER_TIMERS; timer++) {
        timers[timer] = EXPIRED;
    }

update:
    do {
        reschedule();
        while (FOREVER) {
            // wait for a timer IRQ to have been detected (no event support!)
            if (timer_IRQ_detected) {
                timer_IRQ_detected = 0;    // consume the event!
                break;
            }
            reschedule();
        }

        // timer IRQ detected; update all (running) timers accordingly
        for (timer = 0; timer < NUMBER_TIMERS; timer++) {
            if (timers[timer] != EXPIRED)
                timers[timer]--;
        }
    } while (HELL_UNFROZEN);

    // Not reached
}
[Yeah, it's a kludge, but can be very lightweight without having to
know any of the "tricks" for implementing timing services!]
Now, a task can *use* a timer:
#define BLINKY (3)
#define CONVERT_TO_MILLISECONDS (....)

blinker()
{
    int state = OFF;
    ...
    // start BLINKY for the first half-period
    timers[BLINKY] = CONVERT_TO_MILLISECONDS(1000);

    do {
        ...
        if (timers[BLINKY] != EXPIRED) {
            // still have time remaining!
            reschedule();
        } else {
            state = (state == OFF ? ON : OFF);
            // (re)start BLINKY -- but only once it has expired!
            timers[BLINKY] = CONVERT_TO_MILLISECONDS(1000);
        }
    } while (WAITING_FOR_GODOT);

    // not reached
}
Again, crude but an effective way for someone to think about
these mechanisms happening in parallel -- like reading ladder
logic!
The only "magic" in all this is reschedule()!
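And that magic can be surprisingly small. Here's one way you *might* sketch reschedule() on a hosted (POSIX) system, using ucontext to save the current task's context and restore the next one's, dumb round-robin, no priorities. The task names, stack sizes, and the logging are all invented for illustration; on a bare MCU this amounts to a few lines of assembly swapping stack pointers.

```c
#define _XOPEN_SOURCE 700   /* ucontext.h is legacy POSIX; ask for it */
#include <assert.h>
#include <string.h>
#include <ucontext.h>

#define NTASKS  2
#define STACKSZ 16384

static ucontext_t main_ctx, task_ctx[NTASKS];
static unsigned char stacks[NTASKS][STACKSZ];
static int current;

static char log_buf[16];    /* records the interleaving, for inspection */
static int log_len;

/* The "magic": save where we are, resume the next guy */
static void reschedule(void)
{
    int prev = current;
    current = (current + 1) % NTASKS;    /* dumb round-robin, no priorities */
    swapcontext(&task_ctx[prev], &task_ctx[current]);
}

/* Two toy tasks, each written as if it owned the machine */
static void task_a(void)
{
    for (int i = 0; i < 3; i++) { log_buf[log_len++] = 'A'; reschedule(); }
}

static void task_b(void)
{
    for (int i = 0; i < 3; i++) { log_buf[log_len++] = 'B'; reschedule(); }
}

static void run_demo(void)
{
    for (int t = 0; t < NTASKS; t++) {
        getcontext(&task_ctx[t]);
        task_ctx[t].uc_stack.ss_sp   = stacks[t];
        task_ctx[t].uc_stack.ss_size = STACKSZ;
        task_ctx[t].uc_link          = &main_ctx;  /* fall back here on exit */
        makecontext(&task_ctx[t], t == 0 ? task_a : task_b, 0);
    }
    current = 0;
    swapcontext(&main_ctx, &task_ctx[0]);    /* run until a task returns */
}
```

After run_demo(), log_buf holds "ABABAB" -- each task yields the CPU at a point of *its* choosing, which is the whole cooperative bargain.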
> A simple main loop can mimic a kernel but requires careful programming of
> the "application" layer. I've never been a fan of that model.
Agreed. But, with the above approach, you can get a structure *like*
a "main loop" yet keep the contents of each "rung" (borrowing from
ladder logic terminology) independent and unconcerned with what's
happening *in* the "other" rungs:
main()
{
    ...
    initialize();
    ...
    do {
        Timing_Service();
        blinker();
        Tx_Handler();
        Rx_Handler();
        Command_Parser();
        Nose_Picker();
        Butt_Scratcher();
    } while (UNEATEN_RHUBARB);

    // not reached
}
Keeping in mind that each of these "tasks" can be written without
awareness of the other tasks' existence/needs -- except for the
explicit dependencies that are embodied in their coding! E.g., blinker()
contains no code that assists or ensures Rx_Handler gets *its* job
done (which a crude "main loop" would have had to do!)
[Of course, ALL tasks have to remember to be cooperative and
not gluttonous with the CPU as a resource. OTOH, if a task
*needs* to be a glutton, it *can* -- without having to coerce
an OS into *letting* it do so!]
[[There are hacks that you can employ "for free" to reappropriate
resources after-the-fact if you decide some particular tasks deserve
"more" CPU than they are otherwise getting. But, that doesn't
require going through tasks and tweaking arbitrary priorities, etc.]]
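One concrete, zero-cost reading of that hack (names invented, tasks are stand-ins that just count their invocations): give the hungry task two "rungs" in the loop instead of touching any priority machinery.

```c
/* Re-weighting CPU share "for free": list the hungry task on more than
   one rung of the loop. These tasks are invented stand-ins that merely
   count how often they run. */

static int heavy_runs, light_runs;

static void heavy_task(void) { heavy_runs++; }
static void light_task(void) { light_runs++; }

static void run_loop(int passes)
{
    for (int i = 0; i < passes; i++) {
        heavy_task();
        light_task();
        heavy_task();    /* listed twice: ~2x the CPU share, no OS coercion */
    }
}
```

No scheduler, no priorities -- just edit the loop and the share changes.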
> You have something that works, so stay with that model, UNLESS the
> requirements are changing.
>
> The key thing you should take from Don's comments is design in LAYERS. It is
> an excellent approach.
Reply by Ed Prochak ● August 11, 2015
Some things "fast", some things "slow". No real clear requirements.
So the result is general answers.
I'll contribute mine. I tend to have used systems that are not quite as
resource starved as Don. So I do prefer to have some Operating System,
even if it is only a micro-kernel, to handle tasking. And preemptive
over cooperative scheduling.
A simple main loop can mimic a kernel but requires careful programming of the "application" layer. I've never been a fan of that model.
You have something that works, so stay with that model, UNLESS the requirements are changing.
The key thing you should take from Don's comments is design in LAYERS.
It is an excellent approach.
Ed
Reply by Don Y ● August 9, 2015
On 8/9/2015 3:31 PM, jmariano wrote:
> Hi,
>
> I'm more of a hardware guy so I'm not sure if I'm using the correct
> terms.....
>
> I have a hardware system built around a microcontroller. The mC is
> programmed in C, no OS. The system accepts commands from an interrupt-driven
> serial port and does "things"... Some of the things it does are fast, relative
> to the rate at which the commands are coming (the task can be executed between
> 2 commands); other things are very slow.
>
> I have the system working, with a parser decoding the commands, etc., but it
> was implemented brute-force. What I would appreciate are some suggestions of
> books or other references on more elegant ways of modelling this type of
> application. I would expect something like "this system can be modelled as
> two FSMs communicating through circular lists etc."
*If* that is what it is doing, then that would be a good model! :>
For "resource starved" environments (or, environments where you don't
want to develop any "system services" -- e.g., OS -- to make your life
a bit more structured), I usually implement a very primitive cooperative
NONPREEMPTIVE multitasking executive; barely more than a framework to
save the current context and restore the "next" context (i.e., not
even providing "priorities" on those competing/cooperating tasks).
Let ISR's do *just* the low latency requirements and pass data to/from
"handlers" (tasks) that embellish the data as appropriate. The application
"services" sit atop these.
So, in your case, an ISR that pulls characters from a UART; another
that pushes characters into a UART (iff XMIT data rate is important!).
Rx and Tx *handlers* that pull/push characters from the *small*
FIFO's maintained by the ISR's (sized to be just large enough to
not overflow between respective handler invocations) as well as
implementing any line disciplines. E.g., the Tx handler is
responsible for (re)STARTing the Tx ISR if the Tx ISR has turned itself
off (because it ran out of characters to transmit!).
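To make the Tx side concrete, here's a hedged sketch of that handler/ISR split. Everything is invented for illustration: the FIFO size, the names, and the "wire" array standing in for the UART's transmit register. In real code tx_isr() would run off the THR-empty interrupt, and tx_handler() would guard the shared indices against it (atomic region / brief IRQ mask).

```c
#include <assert.h>
#include <string.h>

#define ISR_FIFO_SIZE 8

static char isr_fifo[ISR_FIFO_SIZE];
static volatile int isr_head, isr_tail;   /* handler produces, ISR consumes */
static volatile int tx_running;           /* 0 => the ISR has shut itself off */

static char wire[64];                     /* stand-in for the serial line */
static int wire_len;

/* What the Tx ISR does on each "transmit register empty" interrupt */
static void tx_isr(void)
{
    if (isr_tail == isr_head) {           /* FIFO empty: turn ourselves off */
        tx_running = 0;
        return;
    }
    wire[wire_len++] = isr_fifo[isr_tail];
    isr_tail = (isr_tail + 1) % ISR_FIFO_SIZE;
}

/* Cooperative Tx handler: top off the small ISR FIFO from the application's
   buffer, then kick-start the ISR if it had gone idle. */
static void tx_handler(const char *msg, int *pos, int len)
{
    while (*pos < len && (isr_head + 1) % ISR_FIFO_SIZE != isr_tail) {
        isr_fifo[isr_head] = msg[(*pos)++];
        isr_head = (isr_head + 1) % ISR_FIFO_SIZE;
    }
    if (!tx_running && isr_head != isr_tail) {
        tx_running = 1;
        tx_isr();   /* prime the pump; real code writes the first byte to THR */
    }
}
```

Note the handler never *waits* for the ISR -- it tops off the FIFO, restarts the transmitter if needed, and gets out of the way.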
An "application layer" task that sits atop the Rx handler and
extracts received characters from *its* FIFO (different from
the ISR's FIFO). These characters are then parsed to recognize
"commands".
Each recognized command signals an appropriate "job" to implement
the requested action. That job can then go about its business
without having to "remember" to check any of the myriad "other
things" that are happening concurrently -- they are not its
responsibility!
[I tend to like to implement finite automata to process "events"
and associated "reactions"; but, that's just because I write
software much like I design hardware -- and vice versa]
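For flavor, here's a hedged sketch of such an automaton eating one received character per call and "signaling" a job when a full command line has been seen. The states, buffer size, and the overlong-line resync policy are all mine, purely illustrative; the dispatch is faked by copying the command into last_cmd.

```c
#include <assert.h>
#include <string.h>

/* Tiny per-character FSM: IDLE between lines, IN_CMD while accumulating,
   DISCARD after an overlong line until we resync on end-of-line. */
enum state { IDLE, IN_CMD, DISCARD };

static enum state st = IDLE;
static char cmd[16];
static int cmd_len;
static char last_cmd[16];    /* what the "job" layer would be handed */

static void parse_char(char c)
{
    switch (st) {
    case IDLE:
        if (c == '\n' || c == '\r') return;    /* ignore blank lines */
        st = IN_CMD;
        /* FALLTHROUGH: first character of a new command */
    case IN_CMD:
        if (c == '\n' || c == '\r') {          /* end of line: dispatch! */
            cmd[cmd_len] = '\0';
            strcpy(last_cmd, cmd);             /* signal the job */
            cmd_len = 0;
            st = IDLE;
        } else if (cmd_len < (int)sizeof cmd - 1) {
            cmd[cmd_len++] = c;
        } else {                               /* overlong: drop the line */
            cmd_len = 0;
            st = DISCARD;
        }
        return;
    case DISCARD:
        if (c == '\n' || c == '\r') st = IDLE; /* resync at end of line */
        return;
    }
}
```

The point isn't this particular automaton -- it's that the parser consumes characters whenever they happen to arrive and carries all of its "memory" in the state variable, so it coexists peacefully with everything else in the loop.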
Meanwhile, other tasks may be independently running to implement
a timing service, refresh displays, scan keyboards, etc.
But, in this way, you gain the advantage of data hiding, task
independence, etc. without having to invest in a full-fledged
OS. (Of course, this happens because YOU bear a lot more of the
responsibility for making sure it "works" in all cases!! You can't
rely on the OS to do that for you!)
Reply by Tim Wescott ● August 9, 2015
On Sun, 09 Aug 2015 15:31:01 -0700, jmariano wrote:
> Hi,
>
> I'm more of a hardware guy so I'm not sure if I'm using the correct
> terms....
>
> I have a hardware system built around a microcontroller. The mC is
> programmed in C, no OS. The system accepts commands from an interrupt-driven
> serial port and does "things"... Some of the things it does are fast,
> relative to the rate at which the commands are coming (the task can be
> executed between 2 commands); other things are very slow.
>
> I have the system working, with a parser decoding the commands, etc., but
> it was implemented brute-force. What I would appreciate are some
> suggestions of books or other references on more elegant ways of
> modelling this type of application. I would expect something like "this
> system can be modelled as two FSMs communicating through circular lists
> etc."
What is the goal of your modeling? If what you need is an absolutely
complete and accurate model in all respects, you have one -- the system
itself.
You model something by throwing away all the information about it that
confuses the issue, while keeping the information you need to solve the
problem at hand. Without knowing what that problem is, we can't help.
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Reply by jmariano ● August 9, 2015
Hi,
I'm more of a hardware guy so I'm not sure if I'm using the correct terms....
I have a hardware system built around a microcontroller. The mC is programmed in C, no OS. The system accepts commands from an interrupt-driven serial port and does "things"... Some of the things it does are fast, relative to the rate at which the commands are coming (the task can be executed between 2 commands); other things are very slow.
I have the system working, with a parser decoding the commands, etc., but it was implemented brute-force. What I would appreciate are some suggestions of books or other references on more elegant ways of modelling this type of application. I would expect something like "this system can be modelled as two FSMs communicating through circular lists etc."
Regards
mariano