EmbeddedRelated.com
Forums

Has someone tried to add interrupt support to Riscy Pygness?

Started by wzab May 21, 2011
Thanks for all replies.
I have rather little experience writing Forth programs, but I am quite
good at writing Linux drivers in C.
Usually the interrupt is not handled in the context of a particular
task. It simply uses the protected-mode stack of whichever task was
running when the interrupt occurred.
Could it be done in a similar way in Forth?
When an interrupt occurs, in assembly language we simply switch
interrupts off (as the hardware source of the interrupt may still be
active) and set a global flag indicating that an interrupt is pending;
that's all.
Then the inner Forth interpreter, after completing the current
word(?), should execute the Forth word responsible for servicing the
interrupt. This word would use the stacks of the current, arbitrary
task. If the interrupt is associated with a data transfer, then the
data should be read/written from/to a global variable (or could it be
a task-specific variable, if a particular task registers the interrupt
routine?).
The question is whether I can assume that the latency added by waiting
for the current word to finish executing will be sufficiently small.

If the above approach is correct, then interrupt handling in Riscy
Pygness should involve:
1. Adding a simple "universal" interrupt handler and an "inInterrupt"
flag.
2. A fairly small modification of the nxTab routine: if the inInterrupt
flag is set, then before loading the new word with
      ldrh W, [IPTR], #2      ; read unsigned half-word then bump IPTR by 2
we should store the old IPTR value (on the return stack?), load IPTR
with the address of our high-level interrupt routine, and clear the
inInterrupt flag.
   The high-level interrupt routine should deactivate the hardware
interrupt source (e.g. read the data from the serial buffer) and
enable interrupts (if we allow nested interrupts; if not, exiting the
high-level interrupt routine should re-enable interrupts).
   At the end of the high-level interrupt routine, the previous value of
IPTR should be restored from the return stack (so the interrupt-servicing
word should have bit 0, the "jump flag", cleared).

   Is the implementation described above reasonable, or have I missed
something?
--
TIA & Regards,
Wojtek
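[Editor's note: the pending-flag scheme Wojtek describes can be sketched in C for concreteness. The real hook would be a few ARM instructions inside Riscy Pygness's nxTab routine, not C; all names below are illustrative, not actual Riscy Pygness identifiers.]

```c
#include <stdbool.h>

/* Set by the low-level ISR stub after it masks the interrupt source;
 * polled by the inner interpreter between words. */
static volatile bool in_interrupt = false;

static int serviced = 0;   /* stands in for real handler work */

/* High-level handler: would acknowledge the device, move the data,
 * and re-enable interrupts; it runs on the current task's stacks. */
static void high_level_handler(void) {
    serviced++;
    in_interrupt = false;          /* clear the pending flag */
}

typedef void (*forth_word)(void);

/* One step of the inner interpreter: before executing the next word,
 * divert to the high-level handler if an interrupt is pending. */
static void next_word(forth_word w) {
    if (in_interrupt)
        high_level_handler();
    w();                           /* then run the scheduled word */
}

static void some_word(void) { /* an ordinary word */ }
```

The worst-case latency of this scheme is the execution time of the longest word, which is exactly the uncertainty discussed in the replies below.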
On May 22, 1:08 pm, stephen...@mpeforth.com (Stephen Pelc) wrote:
> level Forth. The example I always use was a four channel 9600 baud
> bit-banged serial driver fitted to a vending machine. It ran from a
> 38400 Hz timer interrupt and had a worst case interrupt duration
> (measured) of 1.5us on a 60MHz LPC2148. On a Cortex LPC17xx, it
> would be faster still.
To digress slightly: you would get better distortion tolerance (a
symmetrical 33%) using a 28800 Hz interrupt. The 4x sampling gives
asymmetrical tolerance (25-50% and 50-75%) on the delay after the start
pulse. 28800 vs. 38400 interrupts per second also reduces overhead.
On 5/22/11 4:47 AM, wzab wrote:
> Thanks for all replies.
> I have rather little experience writing Forth programs, but I am
> quite good at writing Linux drivers in C.
> Usually the interrupt is not handled in the context of a particular
> task. It simply uses the protected-mode stack of whichever task was
> running when the interrupt occurred.
> Could it be done in a similar way in Forth?
Yes, that's essentially how we do it, except that some processors (particularly low-end embedded microcontrollers) don't have a "protected mode".
> When an interrupt occurs, in assembly language we simply switch
> interrupts off (as the hardware source of the interrupt may still be
> active) and set a global flag indicating that an interrupt is pending;
> that's all.
> Then the inner Forth interpreter, after completing the current
> word(?), should execute the Forth word responsible for servicing the
> interrupt. This word would use the stacks of the current, arbitrary
> task. If the interrupt is associated with a data transfer, then the
> data should be read/written from/to a global variable (or could it be
> a task-specific variable, if a particular task registers the interrupt
> routine?).
> The question is whether I can assume that the latency added by waiting
> for the current word to finish executing will be sufficiently small.
If it's more than a single machine instruction, the latency is longer
than the approach I'm advocating. Obviously, it depends on the word
being executed, which could be anywhere from a microsecond to much, much
longer. In most real-time applications, you can't afford that level of
uncertainty for the time-critical part of an interrupt handler.

Our approach separates the time-critical response from the higher-level
processing, which can certainly take place in high-level Forth, at the
task level.

Consider the data acquisition situation. An interrupt signals the
presence of a value on the A/D. The interrupt code reads the value,
stores it in a buffer the application has provided, and sets up the next
conversion. It then sets the flag that enables the task responsible for
the data to wake up and process the value. This needn't take more than a
very few instructions, needs no stacks, and (depending on the processor)
may not even need any registers.

The actual logic of processing the data may be anywhere from simple to
complex. If the data is coming in faster than the task can process it,
it can go into a buffer. That adds management of buffer pointers to the
ISR, but that isn't hard, either.

Cheers,
Elizabeth

-- 
==================================================
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc.                         +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA  90045
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==================================================
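[Editor's note: the A/D split Elizabeth describes, time-critical ISR versus task-level processing, can be sketched as follows. C is used for concreteness; the buffer layout and names are illustrative, not any specific MCU's registers or FORTH Inc. code.]

```c
#include <stdbool.h>
#include <stdint.h>

#define BUF_SIZE 64

/* Application-provided sample buffer plus a wake flag for the
 * consumer task. */
static volatile uint16_t adc_buf[BUF_SIZE];
static volatile unsigned head = 0, tail = 0;
static volatile bool wake_task = false;

/* Time-critical part: a handful of instructions, no stacks required. */
static void adc_isr(uint16_t sample) {
    adc_buf[head % BUF_SIZE] = sample;   /* store the converted value */
    head++;
    /* real code would start the next conversion here */
    wake_task = true;                    /* let the task process it */
}

/* Task-level code: all the non-time-critical processing happens here,
 * and could just as well be high-level Forth. */
static long drain_samples(void) {
    long sum = 0;
    while (tail != head) {
        sum += adc_buf[tail % BUF_SIZE];
        tail++;
    }
    wake_task = false;
    return sum;
}
```

The ISR touches only the buffer, an index, and a flag, which is why it needs no Forth environment at all.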
On 5/22/11 1:25 AM, Albert van der Horst wrote:
> In article <Hr-dneayMvMBl0XQnZ2dnUVZ_vmdnZ2d@supernews.com>,
> Elizabeth D Rather <erather@forth.com> wrote:
...
>> The problem with writing your interrupt handler in high-level Forth is
>> that the interrupt must first instantiate a Forth environment: stacks,
>> registers, whatever the implementation requires, and do so in such a way
>> as to not affect whatever task was running when the interrupt occurred.
>> This adds some inevitable overhead.
>
> Isn't this overly pessimistic? Compared to interrupting an
> arbitrary program (and provided there is one program space)
> you need to save two fewer registers, containing the return
> stack pointer and the data stack pointer. Instead, use them,
> which saves initialisation. They must balance out on return.
> This assumes that the Forth uses its stacks properly, i.e. no data is
> stored outside of the area "protected" by the data stack pointer. 1)
> For the rest the usual caveats for interrupts apply: save all registers
> that you use.
In the first place, the interrupt routine needs to save *only* the
registers that it uses, which may be as few as none, depending on the
processor and what you have to do. And I agree that you need to leave
the processor exactly as you found it.

But you're missing the point that to run high-level Forth you need more
things than two stack pointers. There are other registers, user
variables, and other resources. This makes instantiating a Forth
environment costlier than simply branching to a few machine
instructions.

Cheers,
Elizabeth
Arlet Ottens  wrote:
>I presume that there is some sort of 'current task' that points to the >task currently executing, and that you can find the current stack from >there.
Not necessarily. In the system I am currently working on, breaking at
random many times puts me in the kernel code, in the middle of switching
stacks between tasks.
> Alternatively, use a dedicated interrupt stack.
That's always a cleaner solution.

-- 
Roberto Waltman
[ Please reply to the group, return address is invalid ]
On May 22, 7:25 am, Albert van der Horst <alb...@spenarnc.xs4all.nl>
wrote:
> Isn't this overly pessimistic?
The thing about this discussion that bothers me is talking about
interrupts in the abstract. But in my experience, how you handle
interrupts depends entirely on what is causing the interrupt. There is a
huge difference between an interrupt indicating the user pressed a
button on a panel versus an interrupt coming in from an Ethernet
controller receiving packets at full speed. And there is a huge
difference between an RTC interrupt ticking away every second versus
driving a matrix of LEDs whose brightness you're pulse-width modulating
at 1 kHz.

Sometimes an interrupt comes in from human-speed events, and those are
so slow you don't care if it takes a few milliseconds here or there.
Other times, not responding to an interrupt fast enough means you're not
going to be transferring data fast enough on a channel, or that small
perturbations in your timing will be easily detected by human senses.

So I think we need to factor at least three things into this discussion:
1. How quickly you need to react to an interrupt.
2. How many cycles your interrupt service routine will take.
3. The rate of those interrupts.

Yeah, the interrupt mechanism is exactly the same, but the question of
overhead starts to matter more.

Personally, I do tend to code my ISRs in a high-level language. But I
take a "trust but verify" approach and look at the quality of the
generated code. And on critical interrupts, you know I'm going to be
instrumenting my code and hooking up a 'scope to measure exactly how
much time (and what percentage of the larger application's time) is
taken. These things matter, and regardless of how one chooses to
implement an ISR, you had better have a very good understanding of the
overheads, resources used, and other limitations.
On 5/21/11 11:46 PM, Arlet Ottens wrote:
> On 05/22/2011 10:15 AM, Elizabeth D Rather wrote:
>
>>>> To a limited extent you can get away with pushing something temporarily
>>>> on whatever is the current data stack, provided you get it off again
>>>> before exiting the ISR. But that's a far cry from executing high level
>>>> Forth.
>>>
>>> Why for a limited extent ? And of course you have to pop everything from
>>> the stack that you put on.
>>
>> My point is that all of what you describe adds unnecessary overhead. If
>> you follow the model I described above, you only need a few instructions
>> in your ISR (typically 3-5 for a simple "read a value and put it
>> somewhere"), and it would cost more than that just to get into a
>> high-level Forth routine. Much easier to just do the bare minimum at
>> interrupt time and let the task responsible for the interrupting device
>> do the high level processing.
>
> If you defer the work to a task, you have the extra overhead of context
> switches, which are generally more expensive than saving state in the
> ISR. If you can do all the work in the ISR, or at least do all the work
> in the majority of cases, this actually costs less overall.
In my experience, there's usually some component of the response that's
time-critical and some that isn't. In the case of a clock tick, that
event has to be registered immediately, but there isn't any further
processing to occur. In the case of an interrupt that signals the
completion of a process some task has been waiting for (e.g. a disk
read), notifying the task is all that has to be done, and *all* the
subsequent activity is at the task level.

My objective is to handle the time-critical steps, whatever they are,
*without delay*, which means without scheduling a task or instantiating
a context for high-level Forth (and I agree with Stephen that trying to
do that for an ITC Forth makes no sense at all). There is no overhead
involved in entering an ISR in our systems.

In a multitasking Forth, context switches are occurring, but (at least
in the implementations I'm familiar with) they are vastly less costly
than in more conventional multi-tasked executives: typically on the
order of a half-dozen machine instructions to activate a task, and fewer
to deactivate one.

Cheers,
Elizabeth
On May 22, 8:41 pm, Elizabeth D Rather <erat...@forth.com> wrote:

> If it's more than a single machine instruction, the latency is longer
> than the approach I'm advocating. Obviously, it depends on the word
> being executed, which could be anywhere from a microsecond to much, much
> longer. In most real-time applications, you can't afford that level of
> uncertainty for the time-critical part of an interrupt handler.
>
> Our approach separates the time-critical response from the higher-level
> processing, which can certainly take place in high-level Forth, at the
> task level.
Well, so this is the approach used in interrupt handlers e.g. in Linux:
servicing of an interrupt is split between the ISR, which runs as fast
as possible and handles the most critical actions related to the
interrupt (e.g. copying data from a hardware register, where it could be
overwritten by the next received data, into a buffer), and the deferred
interrupt routine ("bottom half", "tasklet", "workqueue", whatever
implementation is used).

What I wanted to achieve was to give my students a way to experiment
with an LPC1769-based system, including interrupts, without continuously
reflashing the CPU. The Riscy Pygness Forth seems to be an ideal
solution; however, I need a way to handle interrupts completely at the
Forth level. Additionally, I'd like to hide as few details of interrupt
handling as possible. Therefore the approach of blocking interrupts in
the ISR and passing control to a high-level Forth word (which may be
defined by the user in an interactive session) seemed very good. As this
will be used mainly for didactic purposes, a reasonable decrease in
performance is acceptable.

-- 
Regards,
Wojtek
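[Editor's note: the top-half/bottom-half split with user-installable handlers, as Wojtek intends for his students, can be sketched like this. C stands in for the mix of ARM assembly and interactively defined Forth words; the table, names, and NVIC comments are hypothetical, not Riscy Pygness code.]

```c
#include <stdbool.h>

#define NUM_IRQS 8

/* A table of user-installable high-level handlers, one slot per IRQ.
 * This mirrors letting a student install an interactively defined
 * Forth word as the handler for a given interrupt. */
typedef void (*handler_t)(void);
static handler_t handlers[NUM_IRQS];
static volatile bool pending[NUM_IRQS];

static void install_handler(unsigned irq, handler_t h) {
    if (irq < NUM_IRQS)
        handlers[irq] = h;
}

/* "Top half": runs at interrupt time; mask the source, mark it
 * pending, return at once. */
static void top_half(unsigned irq) {
    /* real code would also mask the source in the NVIC here */
    pending[irq] = true;
}

/* "Bottom half": run by the inner interpreter between words, with
 * interrupts enabled again. */
static void run_pending(void) {
    for (unsigned i = 0; i < NUM_IRQS; i++) {
        if (pending[i] && handlers[i]) {
            pending[i] = false;
            handlers[i]();     /* the user's high-level word */
            /* real code would unmask the source here */
        }
    }
}

static int uart_events = 0;
static void uart_handler(void) { uart_events++; }
```

A student could then redefine `uart_handler` (in Forth, the handler word) from the interactive session without reflashing, at the cost of the word-boundary latency discussed above.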
On May 22, 2:41 pm, Elizabeth D Rather <erat...@forth.com> wrote:
> On 5/22/11 4:47 AM, wzab wrote:
>
>> [...]
>> The question is if I can assume, that the latency associated with
>> waiting to the end of execution of current word will be sufficiently
>> small?
>
> If it's more than a single machine instruction, the latency is longer
> than the approach I'm advocating. Obviously, it depends on the word
> being executed, which could be anywhere from a microsecond to much, much
> longer. In most real-time applications, you can't afford that level of
> uncertainty for the time-critical part of an interrupt handler.
>
> Our approach separates the time-critical response from the higher-level
> processing, which can certainly take place in high-level Forth, at the
> task level.
>
> Consider the data acquisition situation. An interrupt signals the
> presence of a value on the A/D. The interrupt code reads the value,
> stores it in a buffer the application has provided, and sets up the next
> conversion. It then sets the flag that enables the task responsible for
> the data to wake up and process the value. This needn't take more than
> a very few instructions, needs no stacks, and (depending on the
> processor) may not even need any registers.
>
> The actual logic of processing the data may be anywhere from simple to
> complex. If the data is coming in faster than the task can process it,
> it can go into a buffer. That adds management of buffer pointers to the
> ISR, but that isn't hard, either.
>
> Cheers,
> Elizabeth
I have to say I am lost. The description you give above only describes
the division between the ISR and the non-ISR code for dealing with an
ADC. I don't see anything that relates to the issue of implementing the
ISR in Forth. Such a simple ISR can use the stack of whatever process it
interrupted, as you say, as long as it doesn't need to store anything on
the stack. I expect that is a given, and the ISR would use static
variables for such items.

You keep referring to the "overhead" of setting up a Forth environment.
I guess I don't have the insight into a complex Forth that would work
that way. I am picturing a very simple Forth that has a single task of
main-line code plus interrupts. I don't know what would be required for
multitasking other than multiple stacks. I'm not sure of the
capabilities of Riscy Pygness.

Do you really think this is such an absolute, that an ISR in Forth is
universally a bad idea?

Rick
On May 22, 2:48 pm, Elizabeth D Rather <erat...@forth.com> wrote:
> On 5/22/11 1:25 AM, Albert van der Horst wrote:
>
>> In article <Hr-dneayMvMBl0XQnZ2dnUVZ_vmdn...@supernews.com>,
>> Elizabeth D Rather <erat...@forth.com> wrote:
>> ...
>>> The problem with writing your interrupt handler in high-level Forth is
>>> that the interrupt must first instantiate a Forth environment: stacks,
>>> registers, whatever the implementation requires, and do so in such a way
>>> as to not affect whatever task was running when the interrupt occurred.
>>> This adds some inevitable overhead.
>>
>> Isn't this overly pessimistic? Compared to interrupting an
>> arbitrary program (and provided there is one program space)
>> you need to save two fewer registers, containing the return
>> stack pointer and the data stack pointer. Instead, use them,
>> which saves initialisation. They must balance out on return.
>> This assumes that the Forth uses its stacks properly, i.e. no data is
>> stored outside of the area "protected" by the data stack pointer. 1)
>> For the rest the usual caveats for interrupts apply: save all registers
>> that you use.
>
> In the first place, the interrupt routine needs to save *only* the
> registers that it uses, which may be as few as none, depending on the
> processor and what you have to do. And I agree that you need to leave
> the processor exactly as you found it.
>
> But you're missing the point that to run high-level Forth you need more
> things than two stack pointers. There are other registers, user
> variables, and other resources. This makes instantiating a Forth
> environment costlier than simply branching to a few machine instructions.

Ah, now I get it. You are thinking in terms of the FULL Forth
environment. That is not needed in an ISR. Typically, an ISR for an
embedded application only needs to do some basic memory access and I/O.
If you are talking about a UNIX system, then maybe more is required. But
for the OP, clearly only a subset is needed, one that does not require
that level of overhead.

Rick
Ah, now I get it. You are thinking in terms of the FULL Forth environment. That is not needed in an ISR. Typically and ISR for an embedded application only needs to do some basic memory access and IO. If you are talking about a UNIX system, then maybe more is required. But for the OP, clearly only a subset is needed that does not require that level of overhead. Rick