Reply by Jean-Francois Michaud June 8, 2011
On May 22, 3:08 pm, John Passaniti <john.passan...@gmail.com> wrote:
> On May 22, 7:25 am, Albert van der Horst <alb...@spenarnc.xs4all.nl>
> wrote:
>
> > Isn't this overly pessimistic?
>
> The thing about this discussion that bothers me is talking about
> interrupts in the abstract.
We're in trouble then, because thoughts are abstractions, so talking in the abstract cannot be overcome ;-).

[SNIP]

Jean-Francois Michaud
Reply by Brad May 23, 2011
On May 22, 3:25 pm, wzab <wza...@gmail.com> wrote:
> Therefore approach with blocking of interrupts in ISR and passing
> control to the high level Forth word (which may be defined by user in
> interactive session) seemed very good.
I don't see any problem with having the inner interpreter poll for interrupts, as long as you carefully review words that alter the stack pointers, such as THROW and PAUSE. You may need to disable interrupts in parts of such words. I remember a story of someone who modified the inner interpreter on an early single-interrupt computer to emulate a multi-interrupt machine, and it was a great success.

-Brad
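Concretely, the polled inner interpreter Brad describes might look like this minimal C sketch (illustrative only, not code from any particular Forth; irq_pending and irq_handler_xt are invented names). The hardware-level ISR does nothing but set the flag; the handler word then runs on whatever task's stacks the interpreter happens to be using.

#include <stdint.h>

typedef void (*code_t)(void);           /* one word's code field, as a C function */

volatile uint32_t irq_pending;          /* set by the assembly-level ISR          */
code_t irq_handler_xt;                  /* high-level handler chosen by the user  */

code_t *ip;                             /* interpreter pointer                    */

static void next(void)                  /* NEXT, with the poll folded in          */
{
    if (irq_pending && irq_handler_xt) {
        irq_pending = 0;                /* acknowledge at the Forth level         */
        irq_handler_xt();               /* run the handler on the current stacks  */
    }
    (*ip++)();                          /* then fall into the next word           */
}

The cost of the poll is one load and a branch per word executed, and the latency is bounded by the longest word the interpreter can be in the middle of, which is the trade-off discussed elsewhere in the thread.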
Reply by Arlet Ottens May 23, 2011
On 05/23/2011 12:20 AM, Elizabeth D Rather wrote:
> On 5/21/11 11:46 PM, Arlet Ottens wrote:
>> On 05/22/2011 10:15 AM, Elizabeth D Rather wrote:
>>
>>>>> To a limited extent you can get away with pushing something temporarily
>>>>> on whatever is the current data stack, provided you get it off again
>>>>> before exiting the ISR. But that's a far cry from executing high level
>>>>> Forth.
>>>>
>>>> Why for a limited extent ? And of course you have to pop everything from
>>>> the stack that you put on.
>>>
>>> My point is that all of what you describe adds unnecessary overhead. If
>>> you follow the model I described above, you only need a few instructions
>>> in your ISR (typically 3-5 for a simple "read a value and put it
>>> somewhere"), and it would cost more than that just to get into a
>>> high-level Forth routine. Much easier to just do the bare minimum at
>>> interrupt time and let the task responsible for the interrupting device
>>> do the high level processing.
>>
>> If you defer the work to a task, you have the extra overhead of context
>> switches, which are generally more expensive than saving state in the
>> ISR. If you can do all the work in the ISR, or at least do all the work
>> in the majority of cases, this actually costs less overall.
>
> In my experience, there's usually some component of the response that's
> time-critical and some that isn't. In the case of a clock tick, that
> event has to be registered immediately, but there isn't any further
> processing to occur. In the case of receipt of an interrupt that signals
> the completion of a process that some task has been waiting for (e.g. a
> disk read) notifying the task is all that has to be done, and *all* the
> subsequent activity is at the task level. My objective is to handle the
> time-critical steps, whatever they are, *without delay*, which means
> without scheduling a task or instantiating a context for high-level
> Forth (and I agree with Stephen that trying to do that for an ITC Forth
> makes no sense at all). There is no overhead involved in entering an ISR
> in our systems.
Sure, but there are also different problems. For example, consider a stepper motor hooked up to a bit-banged SPI bus that needs to be advanced 2000 times per second. You don't want too much jitter, so doing it in the timer ISR itself becomes a viable option.

It would be nice to be able to choose to run the ISR in assembly or in a higher-level language, depending on the particular circumstances. In my experience, I've written 90% of my ISR code in a higher level language.
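For a concrete picture of that kind of ISR, here is a rough C sketch of a 2 kHz timer handler that advances a stepper by clocking one byte out to a driver over bit-banged pins. GPIO_SET/GPIO_CLR, the pin numbers, and the half-step table are placeholders for whatever the real board provides, not an actual API.

#include <stdint.h>

#define PIN_SCK   1
#define PIN_MOSI  2
#define PIN_LATCH 3

extern void GPIO_SET(int pin);          /* hypothetical board-support calls       */
extern void GPIO_CLR(int pin);

static const uint8_t half_step[8] = {   /* half-step coil pattern, one cycle      */
    0x1, 0x3, 0x2, 0x6, 0x4, 0xC, 0x8, 0x9
};
static uint8_t phase;

void timer_isr(void)                    /* fires 2000 times per second            */
{
    uint8_t bits = half_step[phase++ & 7];

    for (int i = 7; i >= 0; i--) {      /* bit-bang one byte, MSB first           */
        if (bits & (1u << i)) GPIO_SET(PIN_MOSI);
        else                  GPIO_CLR(PIN_MOSI);
        GPIO_SET(PIN_SCK);
        GPIO_CLR(PIN_SCK);
    }
    GPIO_SET(PIN_LATCH);                /* latch the new coil pattern             */
    GPIO_CLR(PIN_LATCH);
}

At 2000 interrupts per second, even a few hundred cycles per call is a small fraction of a processor in this class, so the deciding factor is mostly the jitter added by however you enter the handler.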
>
> In a multitasking Forth, context switches are occurring, but (at least
> in the implementations I'm familiar with) they are vastly less costly
> than in more conventional multi-tasked executives: typically on the
> order of half-dozen machine instructions to activate a task and fewer to
> de-activate one.
If it only takes a half-dozen instructions to do a context switch, I'm sure it doesn't have to take more to set up an environment to run Forth in an interrupt context.
Reply by rickman May 22, 2011
On May 22, 2:48 pm, Elizabeth D Rather <erat...@forth.com> wrote:
> On 5/22/11 1:25 AM, Albert van der Horst wrote:
>
> > In article <Hr-dneayMvMBl0XQnZ2dnUVZ_vmdn...@supernews.com>,
> > Elizabeth D Rather <erat...@forth.com> wrote:
> ...
> >> The problem with writing your interrupt handler in high-level Forth is
> >> that the interrupt must first instantiate a Forth environment: stacks,
> >> registers, whatever the implementation requires, and do so in such a way
> >> as to not affect whatever task was running when the interrupt occurred.
> >> This adds some inevitable overhead.
> >
> > Isn't this overly pessimistic? Compared to interrupting an
> > arbitrary program (and provided there is one program space)
> > you need to save two less registers, containing the return
> > stack pointer and the data stack pointer. Instead, use them,
> > which saves initialisation. They must balance out on return.
> > This assumes that the Forth uses its stacks properly, i.e. no data is
> > stored outside of the area "protected" by the data stack pointer. 1)
> > For the rest the usual caveat's for interrupts apply: save all registers
> > that you use.
>
> In the first place, the interrupt routine needs to save *only* the
> registers that it uses, which may be as few as none, depending on the
> processor and what you have to do. And I agree that you need to leave
> the processor exactly as you found it.
>
> But you're missing the point that to run high-level Forth you need more
> things than two stack pointers. There are other registers, user
> variables, and other resources. This makes instantiating a Forth
> environment costlier than simply branching to a few machine instructions.
Ah, now I get it. You are thinking in terms of the FULL Forth environment. That is not needed in an ISR. Typically an ISR for an embedded application only needs to do some basic memory access and I/O. If you are talking about a UNIX system, then maybe more is required. But for the OP, clearly only a subset is needed that does not require that level of overhead.

Rick
Reply by rickman May 22, 2011
On May 22, 2:41 pm, Elizabeth D Rather <erat...@forth.com> wrote:
> On 5/22/11 4:47 AM, wzab wrote:
>
> > Thanks for all replies.
> > I have rather small experience with writing Forth programs, but quite
> > good in writing Linux drivers in C.
> > Usually the interrupt is not handled in context of particular task. It
> > simply uses the protected mode stack of task which was runing when
> > interrupt occured.
> > Could it be done in similar way in Forth?
>
> Yes, that's essentially how we do it, except that some processors
> (particularly low-end embedded microcontrollers) don't have a "protected
> mode".
>
> > When interrupt occurs in the assembly language we simply switch
> > interrupts off (as the hardware source of interrupt may be still
> > active) and set a global flag informing that the interrupt is pending,
> > thets all.
> > Then the inner Forth interpreter after completion of the current
> > word(?) should execute the Forth word responsible for servicing the
> > interrupts. This word would use the stacks of the current, random
> > task. If interrupt is associated with data transfer, then tha data
> > should be read/written to/from a global variable (or could it be a
> > task specific variable, if particular task registers the interrupt
> > routine?).
> > The question is if I can assume, that the latency associated with
> > waiting to the end of execution of current word will be sufficiently
> > small?
>
> If it's more than a single machine instruction, the latency is longer
> than the approach I'm advocating. Obviously, it depends on the word
> being executed, which could be anywhere from a microsecond to much, much
> longer. In most real-time applications, you can't afford that level of
> uncertainty for the time-critical part of an interrupt handler.
>
> Our approach separates the time-critical response from the higher-level
> processing, which can certainly take place in high-level Forth, at the
> task level.
>
> Consider the data acquisition situation. An interrupt signals the
> presence of a value on the A/D. The interrupt code reads the value,
> stores it in a buffer the application has provided, and sets up the next
> conversion. It then sets the flag that enables the task responsible for
> the data to wake up and process the value. This needn't take more than
> a very few instructions, needs no stacks, and (depending on the
> processor) may not even need any registers.
>
> The actual logic of processing the data may be anywhere from simple to
> complex. If the data is coming in faster than the task can process it,
> it can go into a buffer. That adds management of buffer pointers to the
> ISR, but that isn't hard, either.
>
> Cheers,
> Elizabeth
I have to say I am lost. The description you give above only describes the division between the ISR and the non-ISR code for dealing with an ADC. I don't see anything that relates to the issue of implementing the ISR in Forth. Such a simple ISR can use the stack of whatever process it interrupted, as you say, as long as it doesn't need to store anything on the stack. I expect that is a given, and the ISR would use static variables for such items.

You keep referring to the "overhead" of setting up a Forth environment. I guess I don't have the insight into a complex Forth that would work that way. I am picturing a very simple Forth that has a single task of main-line code plus interrupts. I don't know what would be required for multitasking other than multiple stacks. I'm not sure of the capabilities of Riscy Pygness. Do you really think this is such an absolute that an ISR in Forth is universally a bad idea?

Rick
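For reference, the A/D split being discussed can be written with nothing but static data, so the ISR needs no Forth context at all. In this C sketch, adc_read() and adc_start() are hypothetical stand-ins for the real device access, and the buffer size is arbitrary.

#include <stdint.h>

extern uint16_t adc_read(void);          /* hypothetical device access             */
extern void     adc_start(void);

#define BUF_SIZE 64                      /* power of two, for cheap wraparound     */
static volatile uint16_t samples[BUF_SIZE];
static volatile uint8_t  head, tail;
volatile uint8_t         adc_data_ready; /* the "wake the task" flag               */

void adc_isr(void)                       /* a few instructions, no stacks needed   */
{
    samples[head++ & (BUF_SIZE - 1)] = adc_read();
    adc_start();                         /* kick off the next conversion           */
    adc_data_ready = 1;
}

int adc_next(uint16_t *out)              /* called from the task level             */
{
    if (head == tail)
        return 0;                        /* nothing buffered yet                   */
    *out = samples[tail++ & (BUF_SIZE - 1)];
    return 1;
}

The task-level code, which can be as high-level as you like, just waits on adc_data_ready and drains the buffer with adc_next().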
Reply by wzab May 22, 2011
On May 22, 8:41 pm, Elizabeth D Rather <erat...@forth.com> wrote:

> If it's more than a single machine instruction, the latency is longer
> than the approach I'm advocating. Obviously, it depends on the word
> being executed, which could be anywhere from a microsecond to much, much
> longer. In most real-time applications, you can't afford that level of
> uncertainty for the time-critical part of an interrupt handler.
>
> Our approach separates the time-critical response from the higher-level
> processing, which can certainly take place in high-level Forth, at the
> task level.
Well, so this is an approach which is used in interrupt handlers e.g. in Linux - servicing of an interrupt is split between the ISR, which runs as fast as possible and handles the most critical actions related to the interrupt (e.g. copying data from a hardware register, where it could be overwritten by the next received data, to a buffer), and the deferred interrupt routine ("bottom half", "tasklet", "workqueue", whatever implementation is used).

What I wanted to achieve was to give my students a way to experiment with an LPC1769-based system, including interrupts, without continuous reflashing of the CPU. The Riscy Pygness Forth seems to be an ideal solution, however I need a way to handle interrupts completely on the Forth level. Additionally I'd like to hide as few details of interrupt handling as possible. Therefore approach with blocking of interrupts in ISR and passing control to the high level Forth word (which may be defined by user in interactive session) seemed very good. As this will be used mainly for didactic tasks, a reasonable decrease of performance may be accepted.

--
Regards,
Wojtek
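One way to sketch that arrangement in C (an illustration of the pattern, not actual Riscy Pygness code): a single low-level ISR masks its own source and records it as pending, and the outer loop of the Forth system later dispatches to whichever handler word the student installed interactively. irq_mask/irq_unmask and the global disable/enable calls are placeholders; on a Cortex-M3 part like the LPC1769 they would map onto the NVIC enable/disable registers.

#include <stdint.h>

typedef void (*xt_t)(void);              /* stands in for a Forth execution token  */

extern void irq_mask(int irq);           /* hypothetical interrupt-controller calls */
extern void irq_unmask(int irq);
extern void irq_global_disable(void);
extern void irq_global_enable(void);

static volatile uint32_t pending;        /* one bit per interrupt source           */
static xt_t handlers[32];                /* words installed from the REPL          */

void generic_isr(int irq)                /* the only handler below the Forth level */
{
    irq_mask(irq);                       /* source may still be asserting          */
    pending |= 1u << irq;                /* leave the rest to the Forth level      */
}

void install_handler(int irq, xt_t word) /* called from the interactive session    */
{
    handlers[irq] = word;
    irq_unmask(irq);
}

void dispatch_pending(void)              /* called between words, or from PAUSE    */
{
    while (pending) {
        irq_global_disable();            /* brief critical section: the ISR also   */
        int irq = __builtin_ctz(pending);/* touches `pending` (GCC/Clang builtin)  */
        pending &= ~(1u << irq);
        irq_global_enable();

        if (handlers[irq])
            handlers[irq]();             /* run the high-level handler word        */
        irq_unmask(irq);                 /* re-arm the source afterwards           */
    }
}

The latency seen by the handler word is then bounded by the longest word the interpreter can be in the middle of, which for didactic use may well be acceptable, exactly as Wojtek says.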
Reply by Elizabeth D Rather May 22, 2011
On 5/21/11 11:46 PM, Arlet Ottens wrote:
> On 05/22/2011 10:15 AM, Elizabeth D Rather wrote:
>
>>>> To a limited extent you can get away with pushing something temporarily
>>>> on whatever is the current data stack, provided you get it off again
>>>> before exiting the ISR. But that's a far cry from executing high level
>>>> Forth.
>>>
>>> Why for a limited extent ? And of course you have to pop everything from
>>> the stack that you put on.
>>
>> My point is that all of what you describe adds unnecessary overhead. If
>> you follow the model I described above, you only need a few instructions
>> in your ISR (typically 3-5 for a simple "read a value and put it
>> somewhere"), and it would cost more than that just to get into a
>> high-level Forth routine. Much easier to just do the bare minimum at
>> interrupt time and let the task responsible for the interrupting device
>> do the high level processing.
>
> If you defer the work to a task, you have the extra overhead of context
> switches, which are generally more expensive than saving state in the
> ISR. If you can do all the work in the ISR, or at least do all the work
> in the majority of cases, this actually costs less overall.
In my experience, there's usually some component of the response that's time-critical and some that isn't. In the case of a clock tick, that event has to be registered immediately, but there isn't any further processing to occur. In the case of receipt of an interrupt that signals the completion of a process that some task has been waiting for (e.g. a disk read), notifying the task is all that has to be done, and *all* the subsequent activity is at the task level. My objective is to handle the time-critical steps, whatever they are, *without delay*, which means without scheduling a task or instantiating a context for high-level Forth (and I agree with Stephen that trying to do that for an ITC Forth makes no sense at all). There is no overhead involved in entering an ISR in our systems.

In a multitasking Forth, context switches are occurring, but (at least in the implementations I'm familiar with) they are vastly less costly than in more conventional multi-tasked executives: typically on the order of half-dozen machine instructions to activate a task and fewer to de-activate one.

Cheers,
Elizabeth

--
==================================================
Elizabeth D. Rather (US & Canada)   800-55-FORTH
FORTH Inc.                          +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA 90045
http://www.forth.com

"Forth-based products and Services for real-time applications since 1973."
==================================================
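As a rough model of why activation is that cheap, consider this C sketch (illustrative names only; a real Forth round-robin also swaps stack pointers, which this model omits): each task's status is a single cell, waking or sleeping a task is one store, and the scheduling loop merely skips tasks whose cell says they are asleep.

typedef struct task {
    struct task *next;                   /* circular round-robin list              */
    volatile int awake;                  /* the per-task "status" cell             */
    void (*slice)(struct task *);        /* one slice of the task's work           */
} task_t;

static void wake(task_t *t)   { t->awake = 1; }   /* activate: a single store      */
static void sleep_(task_t *t) { t->awake = 0; }   /* de-activate: likewise         */

static void round_robin(task_t *start)  /* the PAUSE loop, in miniature            */
{
    for (task_t *t = start; ; t = t->next)
        if (t->awake)
            t->slice(t);                 /* run one slice, then move on            */
}

An ISR that needs to hand work to a task then only has to do the equivalent of wake() - a store or two - which is why the figures quoted here are so small compared with a conventional preemptive context switch.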
Reply by John Passaniti May 22, 2011
On May 22, 7:25 am, Albert van der Horst <alb...@spenarnc.xs4all.nl>
wrote:
> Isn't this overly pessimistic?
The thing about this discussion that bothers me is talking about interrupts in the abstract. But in my experience how you handle interrupts depends entirely on what is causing the interrupt. There is a huge difference between an interrupt indicating the user pressed a button on a panel versus an interrupt coming in from an Ethernet controller receiving packets at full speed. And there is a huge difference between an RTC interrupt ticking away every second versus driving a matrix of LEDs whose brightness you're pulse-width modulating at 1 kHz.

Sometimes an interrupt comes in from human-speed events, and those are so slow you don't care if it takes a few milliseconds here or there. Other times not responding to an interrupt fast enough means you're not going to be transferring data fast enough on a channel, or that small perturbations in your timing will be easily detected by human senses.

So I think we need to factor at least three things into this discussion:

1. How quickly you need to react to an interrupt.
2. How many cycles your interrupt service routine will take.
3. The rate of those interrupts.

Yeah, the interrupt mechanism is exactly the same, but the question of overhead starts to matter more.

Personally, I do tend to code my ISRs in a high-level language. But I take a "trust but verify" approach and look at the quality of the code generated. And on critical interrupts, you know I'm going to be instrumenting my code and hooking up a 'scope to measure exactly how much time (and what percentage of time that is from the larger application) is taken. These things matter, and regardless of how one chooses to implement an ISR, you better have a very good understanding of the overheads, resources used, and other limitations.
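The 'scope measurement mentioned above typically comes down to something like the following C sketch: raise a spare pin on entry, drop it on exit, and read the pulse width (ISR duration) and duty cycle (share of the CPU) directly off the oscilloscope. DEBUG_PIN_HIGH/DEBUG_PIN_LOW and handle_uart_byte are placeholder names, not a real API.

extern void DEBUG_PIN_HIGH(void);        /* hypothetical single-instruction pin access */
extern void DEBUG_PIN_LOW(void);
extern void handle_uart_byte(void);      /* whatever the ISR actually does             */

void uart_isr(void)
{
    DEBUG_PIN_HIGH();                    /* scope: pulse width = time spent in the ISR */
    handle_uart_byte();
    DEBUG_PIN_LOW();                     /* scope: duty cycle = share of the CPU       */
}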
Reply by Roberto Waltman May 22, 2011
Arlet Ottens wrote:
> I presume that there is some sort of 'current task' that points to the
> task currently executing, and that you can find the current stack from
> there.
Not necessarily. In the system I am currently working on, breaking at random many times puts me in the kernel code, in the middle of switching stacks between tasks.
> Alternatively, use a dedicated interrupt stack.
That's always a cleaner solution.

--
Roberto Waltman

[ Please reply to the group, return address is invalid ]
Reply by Elizabeth D Rather May 22, 2011
On 5/22/11 1:25 AM, Albert van der Horst wrote:
> In article <Hr-dneayMvMBl0XQnZ2dnUVZ_vmdnZ2d@supernews.com>,
> Elizabeth D Rather <erather@forth.com> wrote:
...
>> The problem with writing your interrupt handler in high-level Forth is
>> that the interrupt must first instantiate a Forth environment: stacks,
>> registers, whatever the implementation requires, and do so in such a way
>> as to not affect whatever task was running when the interrupt occurred.
>> This adds some inevitable overhead.
>
> Isn't this overly pessimistic? Compared to interrupting an
> arbitrary program (and provided there is one program space)
> you need to save two less registers, containing the return
> stack pointer and the data stack pointer. Instead, use them,
> which saves initialisation. They must balance out on return.
> This assumes that the Forth uses its stacks properly, i.e. no data is
> stored outside of the area "protected" by the data stack pointer. 1)
> For the rest the usual caveat's for interrupts apply: save all registers
> that you use.
In the first place, the interrupt routine needs to save *only* the registers that it uses, which may be as few as none, depending on the processor and what you have to do. And I agree that you need to leave the processor exactly as you found it.

But you're missing the point that to run high-level Forth you need more things than two stack pointers. There are other registers, user variables, and other resources. This makes instantiating a Forth environment costlier than simply branching to a few machine instructions.

Cheers,
Elizabeth

--
==================================================
Elizabeth D. Rather (US & Canada)   800-55-FORTH
FORTH Inc.                          +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA 90045
http://www.forth.com

"Forth-based products and Services for real-time applications since 1973."
==================================================