Hi Niklas. Are you doing a lot of multicore stuff? I haven't
had the pleasure yet, and that might be why we're missing each other.
Multicore is certainly different. The systems I've worked on
also used a minimal number of threads - usually having an additional
thread meant we had a different interrupting device to manage.
I certainly appreciate your very well presented thoughts. "Highly
granular" processing has been something of a deep assumption for
a long time, and those are always good to challenge.
And it could be that I was simply corrupted by FPGA designers :)
Niklas Holsti wrote:
> On 12-02-28 15:06 , Les Cargill wrote:
>> Niklas Holsti wrote:
>> <snip>
>>>
>>> Les:
>>>
>>>> Ah! I see our disconnect.
>>>>
>>>> I am referring to preemptive multitasking vs. "cooperative"
>>>> multitasking.
>>>>
>>>> Preemptive simply reruns the ready queue when the system clock
>>>> timer ticks. Whoever is running when the system clock ticks gets put
>>>> back on the ready queue and waits.
>>>
>>> That describes preemptive time-sliced round-robin scheduling without
>>> priorities. I believe that tends to be used in soft real-time systems,
>>> not so much in hard real-time systems.
>>>
>>
>> No; the queue can be a priority queue.
>
> You said that "whoever is running ... gets put back in the ready queue
> and waits", which is not priority scheduling.
>
I should have left off "and waits".
> In a priority-driven system, there is no need to mess with the ready
> queue at every clock tick, only when an event makes some waiting task ready
> to execute. (The better systems don't even waste time on handling
> periodic clock ticks, but program a HW timer to interrupt when the next
> timed event comes up, whenever that is.)
>
Sure! I'm mainly describing things the way most academic
literature does, in order to be as general as possible.
Lots of ways to skin that cat.
>> Soft vs. hard realtime is a rhetorical swamp :)
>
> The distinction is fuzzy, but real.
>
Somewhat.
>>>> Cooperative does not. It is the responsibility of each thread of
>>>> execution to be circumspect in its use of the CPU.
>>>
>>> Which adds to the design constraints and makes the design of each thread
>>> more complex. Why should the code of thread A have to change, just
>>> because the period or deadline of thread B has changed?
>>>
>>
>> Erm.... it doesn't. That's rather the point....
>
> We do not understand each other.
>
I think that is true. I'm not sure what to do about that, either :)
> You say that each thread has to be "circumspect" in CPU usage. That is
> rather vague.
It has to get in, do a small task, then get out at each state
transition. "Circumspect" means "parsimonious" or "cheap" in
this case - the thread must use the least CPU necessary to execute
that state transition, and get back to a blocking call as quickly
as it can.
> If the system has real-time deadlines, but is not
> preemptive, it can work only if "circumspect" means that the thread
> execution times (between reschedulings) are smaller than the smallest
> required response time. Do you agree with this?
Not universally; no. One realtime deadline may span many time
quanta - a given thread may execute many times within a single
deadline period.
> (In reality the times
> must often be a lot smaller, if incoming events are sporadic without a
> fixed phasing.)
>
> This means that the smallest required response time constrains the
> design of all threads, and therefore a reduction in the smallest
> required response time can force changes in the code of all threads, in
> a non-preemptive system. Do you agree?
>
I think you are assuming a one-to-one map between *all* responses and
that time quantum. So no. A single response may require multiple
time quanta, still.
>>>>> If you are lucky enough to have a system in which the events, periods,
>>>>> deadlines, and processing algorithms are such that you can process any
>>>>> event to completion, before reacting to other events, and still meet
>>>>> all
>>>>> deadlines, you don't need preemption. In any other case, avoiding
>>>>> preemption is asking for trouble, IMO still.
>>>>>
>>>>
>>>>
>>>> yes, I very much prefer run-to-completion for any kind of processing,
>>>> but especially for realtime.
>>>>
>>>> In thirty years, I've never seen a case where run to completion was
>>>> more difficult than other paradigms. That does not mean
>>>> other events were locked out; it simply means that the data for them
>>>> was
>>>> queued.
>>>
>>> Ok, you have been lucky. In more heavily stressed real-time systems,
>>
>> What is even odder is: the heavily stressed systems I've seen were
>> mainly the ones that *used* run to completion.
>
> Makes me suspect that they were not well designed, or were soft
> real-time systems.
>
They varied. I don't know of a good working distinction between soft
and hard realtime, so I can't speak to the last thing.
>> You'd allow some events, less
>> important ones, to be dropped. Or go to a task loop architecture. In
>> either case, having good instrumentation to count dropped events is
>> important.
>
> Sounds more and more like soft real-time. If a hard-real-time system
> drops events, it is entering its abnormal, fault-tolerance mode. But
> dropping events can be normal for a soft-real-time system.
>
It's possible that what I mean lines up with that. Only a few had
enforced time budgets. The reason I brought that up was that
the failure modes were gentler - you got slow degradation of response
rather than falling off the cliff.
That, of course, depends on what's desired of the system to start with...
>>> run-to-completion is a strait-jacket that forces the designer to chop
>>> large jobs into artificial, small ones, until the small ones can be said
>>> to "run to completion", although they really are just small steps in a
>>> larger job.
>>>
>>
>> Possibly. Although the approach makes it possible to control how the
>> overall system fails. That's mainly what's good about it.
>
> Here I can agree: when you have split the large jobs into several small
> pieces, and use some kind of scheduler to dispatch the pieces, it is
> easy to add some code that gets executed between pieces and can
> reorganize the sequence of pieces, for example aborting some long job
> after its current piece.
>
That's the general idea. The overall idea is really to make a "loop"
into a series (or cycle) of state transitions rather than use control
constructs.
> If you need to abort long jobs that have not been split into small
> pieces (because the system is preemptive), you either have to poll an
> "abort" flag frequently within the long job, or use kernel primitives to
> abort the whole thread, which can be messy.
Ick. What I mean really doesn't hurt that bad :) These systems
didn't use a large number of threads.
> (I can't resist noting here
> that Ada has a nice mechanism for aborting computations, called
> "asynchronous transfer of control".)
>
>> If you'll look at Bruce Powell Douglass' book, I believe it stresses
>> that run to completion is a virtue in high-reliability systems. Not
>> pushing that; it's simply one book I know about.
>
> The object-oriented gurus love run-to-completion because it makes it
> look as if the object-method-statechart structure is natural for
> real-time systems and lets one avoid the "difficulties" of preemption
> and critical sections. But in practice, in such designs it is often
> necessary to run different objects/statecharts in different threads, at
> different priorities, to get preemption and responsiveness.
>
My use of the paradigm precedes any of the object gurus, IMO. OO
hadn't quite propagated to realtime in the '80s in a
serious way. I have since used things like ObjecTime, Rose and
Rhapsody, but we'd done things like this with nothing
but a 'C' compiler on bare metal before.
Some of those things had hundreds of states ( which may be what
you are saying is the horror of it ) but I did not see that as a
curse. We were able to log events and state for testing and
never had a defect that wasn't 100% reproducible because of it...
> Preemption brings some risks, since the programmers can mess up the
> inter-thread data sharing and synchronization. If your system can be
> designed in a natural way without preemption, do so.
I, unfortunately, don't really know what that means.
> But if you can
> avoid preemption only by artificially slicing the longer jobs into small
> pieces, you introduce similar risks (the order of execution of the
> pieces, and their interactions, may be hard to foresee) and much
> unnecessary complexity of code.
>
If for any case, any of that is true, then yes :) There's no
crime in using whatever works.
In summary, though, my statement stands:
I do not see how having the system timer tick swap out a running
thread improves the reliability or determinacy of a system, nor
how it makes the design of a system easier.
--
Les Cargill