
RTOS popularity

Started by Philipp Klaus Krause December 26, 2015
On Thu, 07 Jan 2016 08:10:28 +0200, upsidedown@downunder.com wrote:

>On Wed, 6 Jan 2016 12:09:24 -0600, Les Cargill
><lcargill99@comcast.com> wrote:
>
>>With "run to completion" you can get a lot closer to proving the
>>system correct to within some epsilon (sometimes a rather large
>>epsilon).
>
>Like Windows 3.x environment :-) :-)
You laugh, but Windows 3 was perfectly capable of HRT in the 10s of milliseconds range. On a fast 80486, interrupt response to userspace tasks and [cooperative] task scheduling both were < 5us.

Between 1993 and 1996 I was implementing HRT machine vision QC/QA systems on Windows 3.1, running first on the 80486 DX4 and then on the Pentium. These systems had multiple tasks to perform with hard cycle times ranging from 150ms to 900ms. There was additional hardware to do the image processing heavy lifting [the CPUs were not capable], but that hardware required constant attention: it performed a single vector or kernel operation and then stopped, waiting for instruction. I used multiple processes running under Windows to control multiple hardware sets for different purposes.

The largest system performed inspections on 180 parts (30 each for 6 cameras) every 900ms. It had 6 sets of imaging hardware and 3 I/O boards for interfacing with the part conveyor. Each hardware set consisted of 5 boards: 2 slotted with 3 daughters attached, linked by a private bus. Needed a 20-slot chassis to fit them all. 6 Windows processes performed control and processing, and a 7th provided the GUI for operator control, logging and graphic display of results.

Win3 processes gave way to multi-threading in Win95 and then NT4, and finally the CPUs became capable of handling the image processing and the separate imaging hardware was eliminated. These systems were sold commercially in the USA and Europe until mid 2004. AIUI, some still are available today from distributors and on secondary markets - the manufacturing equipment they were designed to work with still is in use in parts of the world.

George
On Thu, 07 Jan 2016 05:18:46 -0500, George Neuner
<gneuner2@comcast.net> wrote:

>On Thu, 07 Jan 2016 08:10:28 +0200, upsidedown@downunder.com wrote:
>
>>On Wed, 6 Jan 2016 12:09:24 -0600, Les Cargill
>><lcargill99@comcast.com> wrote:
>>
>>>With "run to completion" you can get a lot closer to proving the
>>>system correct to within some epsilon (sometimes a rather large
>>>epsilon).
>>
>>Like Windows 3.x environment :-) :-)
>
>You laugh, but Windows 3 was perfectly capable of HRT in the 10s of
>milliseconds range. On a fast 80486, interrupt response to userspace
>tasks and [cooperative] task scheduling both were < 5us.
>
>Between 1993 and 1996 I was implementing HRT machine vision QC/QA
>systems on Windows 3.1, running first on the 80486 DX4 and then on
>the Pentium. These systems had multiple tasks to perform with hard
>cycle times ranging from 150ms to 900ms.
Assuming that you used the message queue mechanism, in which all messages are sent to a single message queue and then dispatched to a specific "task" for processing: in that case, each "task" must have a maximum allowed processing time, and if an operation could not be completed in that time, the "task" was required to submit a new message into the main message queue for the next step of the complex operation. A badly behaving program might not do that, but with all programs under your own control, this should not be a big issue.

This was very similar to the early IBM mainframe CICS data entry system, and similar features can be seen in web servers (context saving, cookies) today.
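[For illustration - not part of the original thread - here is a minimal C sketch of the run-to-completion pattern described above: a single dispatch loop, and a long operation split into steps that each re-post a continuation message. The queue layout and all names are hypothetical.]

    /* Single message queue with run-to-completion dispatch.  A long
     * operation is split into steps; each handler finishes quickly
     * and posts a continuation message for the next step.          */
    #include <stdio.h>

    #define QUEUE_SIZE 16

    typedef struct {
        int task_id;   /* which "task" handles this message         */
        int step;      /* which step of a multi-step operation      */
    } msg_t;

    static msg_t queue[QUEUE_SIZE];
    static int q_head, q_tail, q_count;

    static void post(int task_id, int step)
    {
        if (q_count < QUEUE_SIZE) {
            queue[q_tail] = (msg_t){ task_id, step };
            q_tail = (q_tail + 1) % QUEUE_SIZE;
            q_count++;
        }
    }

    /* A task needing three steps of work: each step runs to
     * completion quickly, then re-posts itself as a continuation.  */
    static void long_task(int step)
    {
        printf("long_task: step %d\n", step);
        if (step < 2)
            post(0, step + 1);          /* next step handled later  */
    }

    int main(void)
    {
        post(0, 0);                     /* kick off the operation   */
        while (q_count > 0) {           /* the single dispatch loop */
            msg_t m = queue[q_head];
            q_head = (q_head + 1) % QUEUE_SIZE;
            q_count--;
            if (m.task_id == 0)
                long_task(m.step);      /* must return promptly     */
        }
        return 0;
    }

A misbehaving handler here would simply never return to the dispatch loop - which is exactly the failure mode the post describes for cooperative systems.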
On 07.1.2016 г. 09:00, upsidedown@downunder.com wrote:
> On Wed, 6 Jan 2016 20:43:43 +0200, Dimiter_Popoff <dp@tgi-sci.com>
> wrote:
>
>> On 06.1.2016 г. 19:35, upsidedown@downunder.com wrote:
>>> ... I have used small
>>> pre-emptive kernels at least for 8080/8085/6502/6809.
>>
>> And no 6800 or HC11? Come on :-).
>>
>> The first MT kernel I wrote was for the 6809... maintained a bitmap
>> of memory clusters to allocate/deallocate on dynamic requests by tasks
>> (there was no need for that but I was doing what I thought was
>> interesting to do...). My second one was 68020, a one-off thingie
>> which I did for my then employer in Cologne (late 80s).
>>
>> Oops, I never wrote a 6800 kernel really :D. Just a HC11 - no dynamic
>> RAM allocation of its 512 bytes though...
>>
>> I never touched a 6502/80xx part though.
>
> A preemptive system requires private stacks for each task, thus some
> stack-pointer-relative addressing modes would be nice, which
> unfortunately are lacking in many older 8-bitters.
Indeed. Come to think of it, I remember doing something on the 6800 with a lot of tsx/txs (for those unfamiliar with the 6800: tsx sort of "transfers" SP to X), but I am not sure what it was... Maybe it was a waterflow/level meter, I can't remember.

I did this on the HC11, but there one at least had that Y register and the D accumulator (IIRC - or was D on the 09 only? No, I think it was there on the 11), which made things much, much easier than on the 6800, where retrieving an address from the stack and modifying it was outright impractical without doing it via some swi call.
>
> Even with such old processors, one could allocate the stack spaces at
> compile/link time, thus each task could use precompiled absolute
> addressing to access your data.
Oh yes, that's how I do it even today on smaller MCUs (e.g. the MCF52211, with its 16k of RAM - which is a lot compared to the 512 bytes I had on the HC11, where of course I did it the same way). I am trying to look at that 80 MHz 68k part with 16k of RAM - which is close to an oldie nowadays - from what must have been my 80s point of view, and I am not sure I manage...

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
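[A sketch, for illustration, of the compile/link-time stack allocation discussed above, in C rather than 8-bit assembly. The TCB layout and init function are hypothetical; the point is only that the linker assigns every stack an absolute address, so the kernel never allocates memory at runtime.]

    /* Per-task stacks fixed at compile/link time.                  */
    #include <stdint.h>

    #define NUM_TASKS   3
    #define STACK_WORDS 64              /* tiny, 8-bit-era stacks   */

    /* One statically allocated stack per task; addresses are fixed
     * by the linker, so absolute addressing works for task data.   */
    static uint16_t stacks[NUM_TASKS][STACK_WORDS];

    typedef struct {
        uint16_t *sp;                   /* saved stack pointer      */
    } tcb_t;

    static tcb_t tcbs[NUM_TASKS];

    /* Point each saved SP at the top of its private stack; a context
     * switch would save/restore the hardware SP through tcbs[i].sp. */
    void init_stacks(void)
    {
        for (int i = 0; i < NUM_TASKS; i++)
            tcbs[i].sp = &stacks[i][STACK_WORDS];  /* grows downward */
    }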
On Thu, 07 Jan 2016 13:46:12 +0200, upsidedown@downunder.com wrote:

>On Thu, 07 Jan 2016 05:18:46 -0500, George Neuner
><gneuner2@comcast.net> wrote:
>
>>You laugh, but Windows 3 was perfectly capable of HRT in the 10s of
>>milliseconds range. On a fast 80486, interrupt response to userspace
>>tasks and [cooperative] task scheduling both were < 5us.
>>
>>Between 1993 and 1996 I was implementing HRT machine vision QC/QA
>>systems on Windows 3.1, running first on the 80486 DX4 and then on
>>the Pentium. These systems had multiple tasks to perform with hard
>>cycle times ranging from 150ms to 900ms.
>
>Assuming that you used the message queue mechanism, in which all
>messages are sent to a single message queue and then dispatched to a
>specific "task" for processing: in that case, each "task" must have a
>maximum allowed processing time, and if an operation could not be
>completed in that time, the "task" was required to submit a new
>message into the main message queue for the next step of the complex
>operation. A badly behaving program might not do that, but with all
>programs under your own control, this should not be a big issue.
More involved than you imagine.

The image processing and I/O boards were interrupt sources. The image hardware was single-step: it performed one operation, generated a completion interrupt, and then stopped, waiting for a new instruction. The I/O boards could interrupt on any signal - depending on the environment they would be set to interrupt on part-in-place enables and/or strobe or frame-grab triggers.

I turned those interrupts into Windows messages to effect task scheduling: a task would initiate its next image operation, set a timer and yield, waiting for the completion message. Time per operation was calculable from the number of pixels and was guaranteed, so if the timer expired the task aborted, because that usually meant the (very static sensitive) hardware was confused and would have to be re-initialized. I reprogrammed the 8253 so that GDI software timers could be set down to 10ms with reasonable accuracy. I didn't care about the clock running fast, although a custom driver could have fixed that.

At the top level, messages were exchanged for configuration, changing modes, etc. - but the tasks performed their cyclic inspection work autonomously.

In Windows 3, tasks had no memory protection - they all were in the same space and could RPC each other's functions directly with little overhead (essentially just a far call segment change). My design exploited that mercilessly. The GUI task actually held all the code to drive the hardware, using it for off-line setup and diagnostic single-camera inspections. The on-line display/inspection tasks effectively were userspace "threads" that RPC'd functions of the GUI task to do their work.

These designs ported quite nicely to Win95 with its real threads. Win95 needed a kernel-mode reflection driver for interrupts, but nothing special was needed for timers: Win95 itself set the 8253 to 1ms resolution, and its GDI software timers could reliably be set down to ~16ms, which was quite sufficient for the "watchdog" purpose. I had to implement a timer driver for NT4, but that was the last OS that hosted the image hardware. Around 2000 the CPUs finally were fast enough to (re)implement using SSE [though some configurations required a dual processor to achieve their timing].

Ah, the good ole days 8-)

George
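[A rough illustrative sketch of the "initiate, arm a watchdog, yield" cycle described above, reduced to portable C. Every function below is a hypothetical stand-in: the real system used board interrupts reflected into Windows messages, with GDI timers as the watchdog.]

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { EV_OP_DONE, EV_TIMEOUT } event_t;

    static void start_image_op(int op)  { printf("op %d started\n", op); }
    static void arm_watchdog(int ms)    { (void)ms; }
    static void reinit_hardware(void)   { puts("hardware re-initialized"); }

    /* Stand-in for the message loop: here the operation always
     * "completes"; a real loop would deliver whichever event fired. */
    static event_t wait_event(void)     { return EV_OP_DONE; }

    /* One step of the inspection cycle.  Time per operation was
     * known from the pixel count, so a timeout means the hardware
     * is confused and must be re-initialized.                       */
    static bool run_step(int op, int budget_ms)
    {
        start_image_op(op);             /* kick the pipeline         */
        arm_watchdog(budget_ms);        /* guaranteed upper bound    */
        if (wait_event() == EV_OP_DONE)
            return true;                /* completed within budget   */
        reinit_hardware();              /* watchdog fired: recover   */
        return false;
    }

    int main(void)
    {
        for (int op = 0; op < 3; op++)  /* a short inspection cycle  */
            if (!run_step(op, 10))
                break;
        return 0;
    }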
upsidedown@downunder.com wrote:
> On Wed, 6 Jan 2016 12:09:24 -0600, Les Cargill
> <lcargill99@comcast.com> wrote:
>
>> upsidedown@downunder.com wrote:
>>> On Mon, 4 Jan 2016 10:32:38 +0100, pozz <pozzugno@gmail.com> wrote:
>>>
>>>> On 26/12/2015 19:54, Philipp Klaus Krause wrote:
>>>>> There are a lot of RTOSes around. I wonder which are the most used ones.
>>>>> In particular, I'm interested in the free ones supporting the STM8,
>>>>> which currently are:
>>>>>
>>>>> * OSA
>>>>> * atomthreads
>>>>> * ChibiOS
>>>>> * ScmRTOS
>>>>>
>>>>> But I'd also like to know about the general 8/16-bit situation.
>>>>
>>>> Maybe you have already asked yourself this question. Anyway, I'll ask
>>>> it: why do you need an RTOS on that kind of platform?
>>>
>>> Why not?
>>>
>>> If you just have less than 10 KiB of code and a single programmer,
>>> some simple state machines would be enough.
>>>
>>> With tens of KiB of code or multiple programmers, the project
>>> management with a pre-emptive RTOS simplifies a lot. I have used small
>>> pre-emptive kernels at least for 8080/8085/6502/6809.
>>
>> Preemptive opens up a lot of disparate and ugly cans of worms.
>
> I have successfully used preemptive operating systems for 40 years.
>
>> With "run to completion" you can get a lot closer to proving the system
>> correct to within some epsilon (sometimes a rather large epsilon).
>
> Like Windows 3.x environment :-) :-)
Yes, exactly like that.
>> The only thing preemptive gets you is if somebody's thread is taking too
>> long, you jerk the CPU away from them. Well, maybe you really want that
>> to be an exception rather than a context switch.
>
> The first thing in RTOS system design is to divide the application into
> manageable tasks, estimate how long each needs to run and assign
> priorities to the tasks.
>
> If there are performance problems with some tasks, never try to
> increase the priority of the starving task; instead look for other
> tasks whose priority can be _lowered_ without suffering too much if
> interrupted by short-run-time high priority tasks.
>
> Use a server task for each resource, such as a serial port or shared
> data structure, so there is not much need for object locking (with
> priority inversion etc. issues).
I have really limited patience with designs where task priorities affect
system behavior.

--
Les Cargill
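[For readers unfamiliar with the "server task per resource" pattern in the quoted advice, here is a minimal sketch, written against the FreeRTOS queue API for concreteness - the advice itself is OS-agnostic, and these names (uart_req_t, uart_server, etc.) are invented for the example. Only the server task ever touches the port, so clients never take a lock and priority inversion on the resource cannot arise.]

    #include "FreeRTOS.h"
    #include "queue.h"
    #include "task.h"
    #include <string.h>

    typedef struct {
        char text[32];              /* payload to write to the port */
    } uart_req_t;

    static QueueHandle_t uart_q;

    /* Hypothetical low-level write; stands in for the real driver. */
    static void uart_write(const char *s) { (void)s; }

    /* The server task: sole owner of the serial port.              */
    static void uart_server(void *arg)
    {
        uart_req_t req;
        (void)arg;
        for (;;) {
            if (xQueueReceive(uart_q, &req, portMAX_DELAY) == pdTRUE)
                uart_write(req.text);   /* serialized by the queue  */
        }
    }

    /* Client side: any task "uses" the port by queueing a request. */
    void uart_print(const char *s)
    {
        uart_req_t req;
        strncpy(req.text, s, sizeof req.text - 1);
        req.text[sizeof req.text - 1] = '\0';
        xQueueSend(uart_q, &req, portMAX_DELAY);
    }

    void uart_server_init(void)
    {
        uart_q = xQueueCreate(8, sizeof(uart_req_t));
        xTaskCreate(uart_server, "uart", 256, NULL, 3, NULL);
    }

Because requests are copied into the queue, clients never contend on the port itself; the queue depth bounds how far producers can run ahead of the server.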
On 08.1.2016 г. 02:08, Les Cargill wrote:
> upsidedown@downunder.com wrote:
> ......
>> The first thing in RTOS system design is to divide the application into
>> manageable tasks, estimate how long each needs to run and assign
>> priorities to the tasks.
>>
>> If there are performance problems with some tasks, never try to
>> increase the priority of the starving task; instead look for other
>> tasks whose priority can be _lowered_ without suffering too much if
>> interrupted by short-run-time high priority tasks.
>>
>> Use a server task for each resource, such as a serial port or shared
>> data structure, so there is not much need for object locking (with
>> priority inversion etc. issues).
>
> I have really limited patience with designs where task priorities affect
> system behavior.
I think you'd want to rephrase that. If system behaviour is unaltered by task priorities, what are they for? I believe I can guess what you mean, but I am not sure it is obvious enough to everyone.

Say you lower the priority of the task tracking the mouse pointer; below some point this will inevitably become noticeable. OTOH, as upsidedown suggests, the right thing is to set its priority very high, knowing it takes a minute amount of system time, so it always gets the CPU when it needs it, at an acceptable latency.

Dimiter
Dimiter_Popoff wrote:
> On 08.1.2016 г. 02:08, Les Cargill wrote:
>> upsidedown@downunder.com wrote:
>> ......
>>> The first thing in RTOS system design is to divide the application into
>>> manageable tasks, estimate how long each needs to run and assign
>>> priorities to the tasks.
>>>
>>> If there are performance problems with some tasks, never try to
>>> increase the priority of the starving task; instead look for other
>>> tasks whose priority can be _lowered_ without suffering too much if
>>> interrupted by short-run-time high priority tasks.
>>>
>>> Use a server task for each resource, such as a serial port or shared
>>> data structure, so there is not much need for object locking (with
>>> priority inversion etc. issues).
>>
>> I have really limited patience with designs where task priorities affect
>> system behavior.
>
> I think you'd want to rephrase that. If system behaviour is unaltered by
> task priorities, what are they for? I believe I can guess what you mean,
> but I am not sure it is obvious enough to everyone.
>
> Say you lower the priority of the task tracking the mouse pointer; below
> some point this will inevitably become noticeable. OTOH, as
> upsidedown suggests, the right thing is to set its priority very high,
> knowing it takes a minute amount of system time, so it always gets the
> CPU when it needs it, at an acceptable latency.
>
> Dimiter
I suspect if you think about it a wee bit, you'll see that priority juggling is a sign of bad system design. As you note, you may never be able to eliminate it altogether.

The canonical example (the Apollo lunar lander) had pretty much two priorities. That's not too bad; the system worked beautifully and it was a major innovation.

I've seen designs that depended on ... a lot more than two levels of task priority. I didn't care for that :)

--
Les Cargill
On 08.1.2016 г. 04:44, Les Cargill wrote:
> Dimiter_Popoff wrote:
>> On 08.1.2016 г. 02:08, Les Cargill wrote:
>>> upsidedown@downunder.com wrote:
>>> ......
>>>> The first thing in RTOS system design is to divide the application
>>>> into manageable tasks, estimate how long each needs to run and
>>>> assign priorities to the tasks.
>>>>
>>>> If there are performance problems with some tasks, never try to
>>>> increase the priority of the starving task; instead look for other
>>>> tasks whose priority can be _lowered_ without suffering too much if
>>>> interrupted by short-run-time high priority tasks.
>>>>
>>>> Use a server task for each resource, such as a serial port or
>>>> shared data structure, so there is not much need for object
>>>> locking (with priority inversion etc. issues).
>>>
>>> I have really limited patience with designs where task priorities
>>> affect system behavior.
>>
>> I think you'd want to rephrase that. If system behaviour is
>> unaltered by task priorities, what are they for? I believe I can
>> guess what you mean, but I am not sure it is obvious enough to
>> everyone.
>>
>> Say you lower the priority of the task tracking the mouse pointer;
>> below some point this will inevitably become noticeable. OTOH, as
>> upsidedown suggests, the right thing is to set its priority very
>> high, knowing it takes a minute amount of system time, so it always
>> gets the CPU when it needs it, at an acceptable latency.
>>
>> Dimiter
>
> I suspect if you think about it a wee bit, you'll see that priority
> juggling is a sign of bad system design.
It certainly can be that - and it probably is that in the vast majority of cases. But the generalization is just wrong.
> As you note, you may never be able to eliminate it altogether.
But we can never eliminate priority from our lives no matter what we do, and the same goes for our systems. They may be designed with fixed priorities, but the priorities are still there.

I personally do have use for varying priorities under DPS. For example, in the netMCA there are various tasks, some of which use a lot of the CPU resource - e.g. the filtering task can take up well above 50% of it, depending on the incoming signal. Then there are the VNC server tasks, which check the entire "display" framebuffer memory for changes, then compress and send when a change is detected; then there are the ethernet/IP inbound tasks, which have to cope with all the traffic... And then there is the user I/O, which must remain quick enough all the time, regardless of what the incoming signal to process is, what the network activity is and what the disk activity is.

You simply have to set the priorities right for everything to work smoothly, and they are not equal for all tasks. Of course the DPS scheduler is a completely different animal from what you may know from Windows.
>
> The canonical example (the Apollo lunar lander) had pretty much two
> priorities. That's not too bad; the system worked beautifully and
> it was a major innovation.
>
> I've seen designs that depended on ... a lot more than two levels of
> task priority. I didn't care for that :)
If you mean that changing priorities beyond reason could kill the system, I suppose I would agree with you. Few things, if any, are better than simplification, but it is bound by limits like anything else. Try to simplify the netMCA I talked about above down to equal priorities and it will become a lot more complex overall.

Two priorities - maybe, but more likely not; and then, how much simplification really comes out of processing one bit vs. 16 bits of priority? And once you have a good tool, you just use it.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
On 08/01/16 00:08, Les Cargill wrote:
> I have really limited patience with designs where task priorities affect
> system behavior.
True :)

One of my rules of thumb is to ban fiddling with priorities in hard realtime systems - because it usually is just fiddling. When pushed I'll accept "high priority" and "normal priority", and I can eventually be forced to concede a little more flexibility.
On 08/01/16 00:17, Dimiter_Popoff wrote:
> I think you'd want to rephrase that. If system behaviour is unaltered by
> task priorities, what are they for? I believe I can guess what you mean,
> but I am not sure it is obvious enough to everyone.
I've seen people notice a rare timing failure, decide that the task priorities are to blame, and conclude that fiddling with the priorities will cure the problem. Occasionally such fiddling does work, but more often it either makes the problem more difficult to detect and cure, or creates a different problem. But I'm sure you know that.