
Linux and priorities within kernel space

Started by Martin July 21, 2004
I have maybe a stupid question. From QNX I know that every driver acts
as a process having threads. Each thread has a scheduling scheme and its
own priority. How is it in Linux? I mean not in user space, but in
kernel space. To me it looks like an unpredictable mess. Can I somehow
manage priorities even for the drivers? I'm using Denx's distribution.
Or do I have to use some extensions?
On Wed, 21 Jul 2004 07:54:57 -0700, Martin wrote:
> I have maybe a stupid question. From QNX I know that every driver acts
> as a process having threads. Each thread has a scheduling scheme and its
> own priority. How is it in Linux? I mean not in user space, but in
> kernel space. To me it looks like an unpredictable mess. Can I somehow
> manage priorities even for the drivers? I'm using Denx's distribution.
> Or do I have to use some extensions?
It's been many years since I did anything with QNX, but I seem to recall that QNX was designed and built using a micro-kernel approach: the kernel is small and provides basic O/S functions and message passing. After that, everything (including drivers) is basically an application, and they all communicate by message passing.

Linux, on the other hand, uses the older monolithic (big) kernel approach. I haven't done any real Linux kernel hacking, so I can't say for sure, but I would think it might be harder to (arbitrarily) control the priorities of drivers, etc. The monolithic kernel is pretty big and therefore complex.

When working on real-time embedded controllers on small CPUs like Z80s (and PDP-11s), one would prioritize drivers through IRQ hierarchies. Some of those approaches did "significant work" in some lower-priority ISRs. I remember that in RT-11 you would kick off some I/O, then sustain it through a chain of interrupt services (programmed I/O, usually doing blocks of DMA) until some terminating condition. I'm not sure whether that kind of approach is practical in Linux.

A concern with a large monolithic kernel would be avoiding deadlock conditions, so you don't want anything pending too long, or anything that ties together actions involving several IRQs. I believe most Linux I/O is a quick in/out, using standardized libraries for handling chains of buffers.

Because of these complexities, I would think you write drivers for Linux assuming they are the only thing going, and make them "correct". If stuff doesn't work, the usual recourse seems to be "get a faster CPU". I'm actually appalled by some of the mailing list and newsgroup posts complaining about not being able to do audio/sound, when the user has a P4 running at n GHz with hundreds of MB of memory, etc. People used to do real industrial control and psychophysical experiments on Z80s and PDP-11s with less than 64KB of memory and maybe a few MB of disk (if any). Somewhere things seem to have gone off the rails.
I'm interested in this topic as well. Related topic: why hasn't the Mach kernel caught on?

-- Juhan Leemet
Logicognosis, Inc.
