Forums

Interrupts

Started by nathan_b_a July 7, 2005


David Kelly wrote:
> On Jul 8, 2005, at 6:36 PM, Gordon Couger wrote:
>> If he wants to go into it seriously I don't think the 68HC11 is
>> the place to start. It is used a lot in teaching because they
>> have them, it is a nice clean CPU with a rich set of features
>> and it serves the use of teaching very well.
>
> It's so refreshing to see others share the same opinion as myself,
> "... it serves the use of teaching..." The purpose of education
> is not to drill repetitive skills but to teach one how to solve
> problems and learn to self-teach. Too many schools "teach"
> Microsoft Word version 3.14159, and graduates think they have
> "learned computers" when they haven't learned anything but rote
> drill using a specific application.
>
> About 8 years ago I got placed on the interview list to grill
> potential new hires. Was shocked to find so-called CS graduates had
> not operated a raw C compiler or written a Makefile. Graduated with a
> B.C.S. and never ventured outside of Microsoft Visual Suite. All got
> solid F's on my report. The opening was for a Unix System
> Administrator. I had to try to explain some facts of life to
> Personnel that those candidates should not have received invitations.
>
> Had any candidate indicated they understood the whole world was not
> Microsoft Visual Suite, then they could have earned a "C" on their
> interview from me.
>
> Agree the HC11 is a good platform to introduce students to
> microcontrollers. It's not so good for ease of using the hardware.
> The famous Pink Book is an exceptionally good reference.
>
> Have been using the Atmel ATmega64 lately. Its features make the
> hardware much easier to construct, and it's cheaper than the HC11.
> Its documentation lacks the clarity and organization found in the
> Pink Book.
>
> --
> David Kelly N4HHE, dkelly@dkel...

I always cringe when I see an ad for a C++ programmer for the
68HC11 or 8051. Someone doesn't understand what they're doing.

I worked for the university Ag Engineering department as a
programmer in a machine shop environment. It is one of the few
shops in the country that can take an idea and actually put it
into production. A prof would hire a CS grad now and then, and
they were always hard to train. I would much rather teach an
engineer to program than teach a computer science student to
solve problems and to write fast code.

I would still use a 68HC11 for a one-off if I had code that was
close to what I needed. But a fast Apple II is not what we need
to be using in the embedded world now.

I think I would choose a 68HC11 to teach a class with, unless I
was going to teach advanced networking; then it would need CAN,
and a 68HC12 would do that. But that would be a course of its own.

Gordon



On Fri, 2005-07-08 at 20:05 -0500, Mike McCarty wrote:

> I wonder where the original poster went in all the dust?

It appears (he's told me,) he's been intensively researching the
PICAXE.

Over here, it's far easier to pick up than anything HC11, HC12 or such
- it's the current darling of the local electronics magazine (I would
say "magazines", but they all basically coalesced into one), and kits
are available from the local "hobby" retailers.

> I agree New Micros is great.

Mind you, I too still have an NMI system of great antiquity, unused
after 10 years or so. I'm about to locate it soon - major house
upheaval. Its main lack for my purpose (house controller) is of all
things - a RTC.

The PICAXE is within its limits, a dev system on a chip - like a
Rabbit but less sophisticated (and much, much cheaper). It has an
interpreter (token code), doesn't run particularly fast, but contains
all the primitives for debounce, PWM, analog estimation, timing, LCD,
I²C & DOW (I think), so for simple logic replacement and rapid one-off
development, it is very effective. The version likely useful in this
case is an 8-pin chip - that is, 5 or 6 I/O, retailing about A$4. Since
the concept is to use virtually *no* "glue" logic, it works quite well
on Vero or a "breadboard" PCB which simply expands DIL to pads.

Low-end chips can self-clock with modest stability, larger ones can or
must use one or two crystals. They are in fact simply F-series PICs
programmed with the interpreter, interpreting token code from EEPROM
which can alternatively store data. One is warned not to put them in a
"real" PIC programmer and erase them because you lose the interpreter
(irretrievable because it is proprietary)!

Not quite the same as running FORTH, to which you can talk with a
terminal emulator (or straight terminal).

Speaking of interrupts, and echoing all previously written, it might
seem that instructors fail to make it clear that interrupts are actually
for interfacing *not* to the "real world" at all, but to logic chips
(generally: peripherals, and *only* through these to the "real world").
Obviously, this needs to be stressed. Very few "real world" devices
indeed have the (time) resolution to deserve service at interrupt level.

Students, having been taught that interrupts provide "fast" service,
are obviously not in a mindset to conceptualise that "fast" to the
microprocessor, is nearly a million times what it means in human terms -
because that is how fast the MCU cycles.

--
Cheers,
Paul B.


Hi Paul,

In the prototype of www.greenseeker.com, interrupts detected
ground speed by using the pulse accumulator on a 68HC11 to count
the pulses off a shaft encoder tied to a wheel with a timing
belt. It can also get its heartbeat timing interrupt from the
CPU or GPS.

I built a 10 x 32 foot X, Y, Z scanner that used a line of laser
light at 45 degrees to the camera axis to scan a soil table with
an artificial rain maker, in New Micros FORTH. It used 3 input
capture interrupts to change between one of 3 different stacks
to keep track of the position in space. When an interrupt
occurred, the interrupt routine blocked interrupts, changed the
stack pointer, saved the registers, started adding or
subtracting counts, and restored things on the way out. I don't
remember how many machine cycles it saved, but it was enough to
make it worthwhile, so I didn't get two interrupts from the
other shaft encoders and lose one. All that was left on the
stack was the absolute count from the shaft encoder from 0,0,0.
The camera arm had a stop on the lower end so it could be raised
at any time and the position recovered, because it fell to the
ground if the power failed, and that was 2 or 3 times a week in
the spring.
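For what it's worth, the heart of that scheme - one absolute count per
axis, bumped by an input-capture interrupt - can be sketched in C. The
names here (encoder_isr and so on) are my own invention for
illustration; the real thing was HC11 assembly juggling stack pointers
to shave cycles.

```c
#include <stdint.h>

/* One absolute count per axis, maintained from input-capture ISRs.
   Illustrative sketch only; the original HC11 version switched stacks
   in assembly to keep the service time short enough not to lose a
   pulse from another encoder. */

enum { AXIS_X, AXIS_Y, AXIS_Z, NUM_AXES };

static volatile int32_t position[NUM_AXES];  /* absolute count from 0,0,0 */

/* Called from each axis's input-capture ISR: one encoder edge,
   dir = +1 or -1 depending on direction of travel. */
void encoder_isr(int axis, int dir)
{
    position[axis] += dir;   /* keep the ISR to a single add */
}

int32_t position_get(int axis)
{
    return position[axis];
}
```

Since all that survives is the running count, recovering the position
after a power failure is just a matter of re-zeroing at the mechanical
stop, as described above.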

The fellow before me wrongly assumed that he had big enough
stepper motors to move the carriage reliably to wherever he told
it to go. They were almost big enough when it was new, if he had
not installed a stepper driver card in a PC connected to a power
supply with 40 feet of 14-gauge wire to the stepper motors.

I moved the power supply to the carriage, within 7 feet of the
stepper motors, and used 10-gauge Litz wire for added
flexibility. If the wire had been any thicker it would not have
moved smoothly. I put an opto-isolated 68HC11 board on the power
supply, not quite as close to the stepper motors as it could be,
and communicated with it via RS-232. Instead of telling it how
many steps to take, I told it the position to go to. The board
set a goal for the new position from its present position and
ramped up, ramped down, and stopped. If it was off, it jogged
into position.
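A rough C sketch of that "send the position, not the steps" idea: the
board plans the move itself and ramps the step rate at each end.
The linear ramp, the constants, and the function names are my own
illustration, not the original board's code.

```c
#include <stdlib.h>

/* Illustrative sketch of a ramped position move: the host sends a
   goal position, and the board derives direction, step count, and a
   per-step delay that ramps at both ends of the move. */

#define RAMP_STEPS 16      /* steps spent accelerating / decelerating */
#define DELAY_SLOW 2000    /* microseconds between steps at start/stop */
#define DELAY_FAST 500     /* microseconds between steps at full speed */

/* Delay before step i of an n-step move: ramp down from DELAY_SLOW to
   DELAY_FAST over the first RAMP_STEPS steps, and back up over the
   last RAMP_STEPS steps. */
unsigned step_delay_us(long i, long n)
{
    long ramp = RAMP_STEPS < n / 2 ? RAMP_STEPS : n / 2;
    long k = i < n - 1 - i ? i : n - 1 - i;   /* distance from nearer end */
    if (ramp == 0 || k >= ramp)
        return DELAY_FAST;
    return DELAY_FAST +
           (unsigned)((DELAY_SLOW - DELAY_FAST) * (ramp - k) / ramp);
}

/* Plan a move: direction and step count from current to goal. */
long plan_move(long current, long goal, int *dir)
{
    *dir = goal >= current ? +1 : -1;
    return labs(goal - current);
}
```

The "jog" correction would then just be a second, short plan_move once
the board compares where it stopped against the goal.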

It worked well when we used the standard scan of 1 cm across the
plot and 20 cm down slope that had been carried over from the
days it was done by hand, though the across-slope resolution was
a great deal higher than when done by hand. When the project got
a new PI who didn't understand that the laws of physics applied
to him, he decided he wanted it on a 1 cm x 1 cm basis. A device
that was made for attended operation suddenly needed to run 20
times longer. It took 7 seconds for the camera to stop shaking,
no matter what we did.

It went a lot slower than planned because it required a method
of using a computer to find the path water flowed over land, and
no one had done that before. I finally got a hint in
Mathematical Recreations in Scientific American that mazes could
be solved by starting at every opening on the edge and working
the way to each and every blind end of the maze, or out of it.
So I turned the data into a maze by sorting it for elevation,
starting with the highest point, and when I hit a dead end,
marking the path back until it came to another path or ran off
the plot. Then I started at each and every point around the edge
and recursively climbed to each and every dead end. Once the
algorithm stopped failing, I started collecting and writing data.
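The downhill part of that can be sketched as a steepest-descent walk
on an elevation grid. This little C version is entirely my
reconstruction of the idea, not the original code: from a starting
cell it keeps stepping to the lowest neighbor until no neighbor is
lower - a pit, the maze's "dead end".

```c
/* Trace where water flows on a small row-major elevation grid:
   repeatedly move to the lowest 4-connected neighbor until the
   current cell is lower than all its neighbors (a pit or the edge
   of a depression). Sketch only; grid size is fixed for clarity. */

#define W 4
#define H 4

/* Trace downhill from (r,c); returns the number of cells visited
   and leaves the final (pit) cell in *fr,*fc. */
int trace_downhill(const int *elev, int r, int c, int *fr, int *fc)
{
    int steps = 1;
    for (;;) {
        int best_r = r, best_c = c;
        int best = elev[r * W + c];
        const int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1};
        for (int k = 0; k < 4; k++) {          /* 4-connected neighbors */
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= H || nc < 0 || nc >= W) continue;
            if (elev[nr * W + nc] < best) {    /* strictly lower only */
                best = elev[nr * W + nc];
                best_r = nr;
                best_c = nc;
            }
        }
        if (best_r == r && best_c == c) break; /* dead end: a pit */
        r = best_r;
        c = best_c;
        steps++;
    }
    *fr = r;
    *fc = c;
    return steps;
}
```

Running this from every edge cell, highest first, gives the
maze-solving sweep described above.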

The project had sat idle for over a year, and it was made of
wood and not concrete, so it moved as time went on. After a
rainfall event it took two months to dry out, and all the water
had to evaporate out the top of the soil. That concentrates the
salts in the surface, and after 3 runs the soil resembles
concrete more than soil. When we rebuilt it for the new PI we
put in a Roots blower, or supercharger, and could dry it out in
a week.

But by then it was failing faster than we could fix it, and the
new PI had spent all the money on computers for his lab and
office.

The one thing we did find out for sure is that if the water
pools for any reason, the stream will be drawn through the
depression. There were 5 drips that never stopped, and the
streams developed within 20 cm of the same place on 12 runs with
3 soil changes on the lower half of the table.

The moral of this tale is to get a firm commitment for the
resources needed to complete the project when a new guy comes
in; otherwise you don't have one. I still like the guy real
well, but I should have quit the project as soon as I was sure
it would fail, and not have tried to make it work once I was
sure I couldn't do it, in spite of the fellow that was running
the project, my boss, and the department head. At the time I
could have gotten a job several places on campus, or gone to a
much better paying job, and I am very sure now, and was pretty
sure then, that everyone but the PI on the project would have
respected my choice. Instead I wasted a year. Don't work on a
project that won't work.

You're right about interfacing with the outside world. Almost
everything needs something between it and the CPU, if for
nothing else but to act as a fuse when transients get on the
line. Long runs of wire are also known as antennas, and they
pick up a lot of energy if lightning strikes nearby or someone
keys up a 100-watt radio nearby. They have that pretty well
under control now, but there was a Porsche whose engine you
could kill with a 100-watt ham radio sitting at the stop light
beside it.

If you have problems with electrical noise on long runs of wire
for data, consider using voltage-to-frequency chips and opto
isolators. If it's really bad, fiber optic cable always works.
Then run the computer on batteries, and it's really hard for any
outside source to damage it. But you can easily double the cost
of the system and greatly increase the maintenance cost.

Gordon
Gordon Couger
Stillwater, OK
www.couger.com/gcouger

Paul B. Webster VK2BZC wrote:
> On Fri, 2005-07-08 at 20:05 -0500, Mike McCarty wrote:
>> I wonder where the original poster went in all the dust?
>
> It appears (he's told me,) he's been intensively researching the
> PICAXE.
>
> Over here, it's far easier to pick up than anything HC11, HC12 or such
> - it's the current darling of the local electronics magazine (I would
> say "magazines", but they all basically coalesced into one), and kits
> are available from the local "hobby" retailers.
>
>> I agree New Micros is great.
>
> Mind you, I too still have an NMI system of great antiquity, unused
> after 10 years or so. I'm about to locate it soon - major house
> upheaval. Its main lack for my purpose (house controller) is of all
> things - a RTC.
>
> The PICAXE is within its limits, a dev system on a chip - like a
> Rabbit but less sophisticated (and much, much cheaper). It has an
> interpreter (token code), doesn't run particularly fast, but contains
> all the primitives for debounce, PWM, analog estimation, timing, LCD,
> I²C & DOW (I think), so for simple logic replacement and rapid one-off
> development, it is very effective. The version likely useful in this
> case is an 8-pin chip - that is, 5 or 6 I/O, retailing about A$4. Since
> the concept is to use virtually *no* "glue" logic, it works quite well
> on Vero or a "breadboard" PCB which simply expands DIL to pads.
>
> Low-end chips can self-clock with modest stability, larger ones can or
> must use one or two crystals. They are in fact simply F-series PICs
> programmed with the interpreter, interpreting token code from EEPROM
> which can alternatively store data. One is warned not to put them in a
> "real" PIC programmer and erase them because you lose the interpreter
> (irretrievable because it is proprietary)!
>
> Not quite the same as running FORTH, to which you can talk with a
> terminal emulator (or straight terminal).
>
> Speaking of interrupts, and echoing all previously written, it might
> seem that instructors fail to make it clear that interrupts are actually
> for interfacing *not* to the "real world" at all, but to logic chips
> (generally: peripherals, and *only* through these to the "real world").
> Obviously, this needs to be stressed. Very few "real world" devices
> indeed have the (time) resolution to deserve service at interrupt level.
>
> Students, having been taught that interrupts provide "fast" service,
> are obviously not in a mindset to conceptualise that "fast" to the
> microprocessor, is nearly a million times what it means in human terms -
> because that is how fast the MCU cycles.
>





Paul B. Webster VK2BZC wrote:

>On Fri, 2005-07-08 at 20:05 -0500, Mike McCarty wrote: [snip]

>>I agree New Micros is great.
>>
>>
>
> Mind you, I too still have an NMI system of great antiquity, unused
>after 10 years or so. I'm about to locate it soon - major house
>upheaval. Its main lack for my purpose (house controller) is of all
>things - a RTC.
They make several with an RTC. My NMIY-020 has one on it. In any
case, single-chip '11 designs can be *really* easy. I did an
expanded-mode design in an afternoon with 32K RAM, 32K EEPROM (with
8K blocks disableable) and an RS-232 I/F on board. So the '11 isn't
particularly difficult to design and build with. I just used
point-to-point wiring on a vectorboard.

> The PICAXE is within its limits, a dev system on a chip - like a
>Rabbit but less sophisticated (and much, much cheaper). It has an
>interpreter (token code), doesn't run particularly fast, but contains
>all the primitives for debounce, PWM, analog estimation, timing, LCD,
>I²C & DOW (I think), so for simple logic replacement and rapid one-off
>development, it is very effective. The version likely useful in this
>case is an 8-pin chip - that is, 5 or 6 I/O, retailing about A$4. Since
>the concept is to use virtually *no* "glue" logic, it works quite well
>on Vero or a "breadboard" PCB which simply expands DIL to pads.
Sounds like a good deal. Something like the Stamp. It's hard to beat the A$4
(that would be about $2 USD, IIRC) price. Especially for newbies, having
debounce already done lets one get on with designing the "guts" of the
project.

I like the '11 partly because I have about 100 of them lying around, many
with BUFFALO in them.

> Low-end chips can self-clock with modest stability, larger ones can or
>must use one or two crystals. They are in fact simply F-series PICs
>programmed with the interpreter, interpreting token code from EEPROM
>which can alternatively store data. One is warned not to put them in a
>"real" PIC programmer and erase them because you lose the interpreter
>(irretrievable because it is proprietary)!
Oh, so it *is* a Stamp, more or less.

> Not quite the same as running FORTH, to which you can talk with a
>terminal emulator (or straight terminal).
<shudder> Forth, yes, Forth, I recall that language....

> Speaking of interrupts, and echoing all previously written, it might
>seem that instructors fail to make it clear that interrupts are actually
>for interfacing *not* to the "real world" at all, but to logic chips
>(generally: peripherals, and *only* through these to the "real world").
>Obviously, this needs to be stressed. Very few "real world" devices
>indeed have the (time) resolution to deserve service at interrupt level.
Most never discuss real time at all. I'm pretty disgusted by the
recent crop coming from, yes, even grad school. Can you believe it?
A job applicant with a Master of Science degree who doesn't know
what "debounce" means.

I've had discussions on a couple of newsfeeds where people don't
even understand why an interrupt is undesirable on real time
systems. Polling is always the way to go if you can do it. An
interrupt is sometimes just the right thing. But it should do as
little as possible, and then schedule a task to handle the rest.
Otherwise, the scheduler is not in control of the CPU enough of the
time to make sure deadlines get met.
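A minimal C sketch of that split - the ISR only latches the data and
sets a flag, and a scheduler-run task does the real work later. All
the names here are illustrative, not from any particular RTOS or from
Mike's systems.

```c
#include <stdint.h>

/* Interrupt does the minimum; the deferred task does the rest.
   Generic sketch of the "ISR sets a flag, task handles it" pattern. */

static volatile uint8_t rx_ready;   /* "task pending" flag */
static volatile uint8_t rx_byte;    /* data latched by the ISR */
static unsigned bytes_handled;

/* What a UART receive ISR might do: grab the byte, set the flag,
   and return immediately. */
void uart_rx_isr(uint8_t byte_from_hw)
{
    rx_byte = byte_from_hw;
    rx_ready = 1;                   /* schedule; do nothing else here */
}

/* Run from the main loop / scheduler, where deadlines are managed. */
void rx_task(void)
{
    if (!rx_ready)
        return;
    rx_ready = 0;
    bytes_handled++;                /* the slow processing lives here */
}

unsigned rx_count(void)
{
    return bytes_handled;
}
```

Because the heavy work runs under the scheduler rather than at
interrupt level, the scheduler keeps control of the CPU and can still
meet its deadlines.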

> Students, having been taught that interrupts provide "fast" service,
>are obviously not in a mindset to conceptualise that "fast" to the
>microprocessor, is nearly a million times what it means in human terms -
>because that is how fast the MCU cycles.
Yes, even a very slow processor is very fast by human standards. Like
my '11 projects. I have a 2MHz processor (8MHz clock). Most instructions
take 2 - 5 cycles. S-L-O-W by today's standards. But think in human terms,
most instructions take 1 - 2.5 MICROseconds.

Mike

--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
This message made from 100% recycled bits.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!



On Jul 9, 2005, at 9:39 AM, Mike McCarty wrote:

> <shudder> Forth, yes, Forth, I recall that language....

Forth is absolutely brilliant. With very little assembly one can
create a Forth system on-chip to develop code self-hosted on the target.

I wouldn't recommend building new using Forth, but it is one of those
fields of study any well-rounded engineer working in the embedded
market should spend 3 months to understand inside and out.

One of the biggest problems with Forth is that it's a "write only"
language. It's very hard to write Forth that one can read and
understand a month or year later.

About 10 years ago I had several NMI HC11 Forth boards to use for
prototyping. Was planning on using a Benchmarq RTC module but the
data sheet left some timing questions unanswered and the phone tech
support repeatedly insisted his answer was correct, but it was stated
in such a way it was clear he didn't understand the question. So in
less time than I spent on the phone I wired one up several different
ways on the proto space on an NMI board. Then I exercised the module
in Forth and got my answers.

Sure enough, the order of certain signals didn't matter so long as
enough time lapsed from the last before expecting data valid. Product
has been in production for over 10 years now without an issue.

--
David Kelly N4HHE, dkelly@dkel...
========================================================================
Whom computers would destroy, they must first drive mad.


On Sat, 2005-07-09 at 11:38 -0500, David Kelly wrote:
> Forth is absolutely brilliant. With very little assembly one can
> create a Forth system on-chip to develop code self-hosted on the target.

It is as well to remember for what FORTH was "invented". It was for
real-time control of (electro-)mechanical systems. For that function,
it remains superb, though on a fully-fledged PC-based system, you would
no doubt use Python.

FORTH - on a microcontroller - requires certain primitives to run
efficiently (yes, I know people have fudged it on a PIC, and it worked
OK on a 6502!). It happens that those primitives are present on the
6809/ 68HC11, so those processors are perfect for implementing it (as is
the 80x86). Ergo, New Micros.

> I wouldn't recommend building new using Forth, but it is one of those
> fields of study any well-rounded engineer working in the embedded
> market should spend 3 months to understand inside and out.

OK, so you have all your hardware correctly(?) interfaced to your PC
or microcontroller, you run the smoke test, now, what other language
facilitates incremental testing of one part after another, thence
setting up of control subsystems and construction of a complete
application? There is virtually no compile cycle involved - it compiles
incrementally as you write and test the code.

> One of the biggest problems with Forth is that its a "write only"
> language. Its very hard to write Forth that one can read and
> understand a month or year later.

Actually, it's more *self*-documenting than many languages, and has
perfectly adequate provisions for commenting.

> About 10 years ago I had several NMI HC11 Forth boards to use for
> prototyping. ... Then I exercised the module in Forth and got my
> answers.

Exactly - that's how it works.

I probably *could* use something else (with difficulty), but still use
F-PC on a machine with a parallel port card (replaceable) to rig
experiments/ testing for various devices or assemblies - such as
displays, encoders, interface cards, keyboards ...
--
Cheers,
Paul B.


On Sat, 2005-07-09 at 09:39 -0500, Mike McCarty wrote:
> They make several with a RTC.

I'm sure they do - now, but it means I have to go and buy another.
That's the trick!

> Sounds like a good deal. Something like the Stamp.

It is, and it isn't a Stamp. It utilises the fact that the later PICs
have all the resources on-board, and far more resources than the old
Stamp. And since the Stamp code was proprietary, it's a(nother
proprietary) re-write of both the interpreter firmware, and the
development system. It certainly *works* like a Stamp.

> It's hard to beat the A$4 (that would be about $2 USD, IIRC) price.

As it currently goes, A$4 is almost exactly US$3, except for the
exchange and shipping costs, of course.

> Especially for newbies, having debounce already done lets one get on
> with designing the "guts" of the project.

Perhaps it's even more important that it is actually *documented* as
an essential consideration.

> An interrupt is sometimes just the right thing. But it should do as
> little as possible, and then schedule a task to handle the rest.
> Otherwise, the scheduler is not in control of the CPU enough of the
> time to make sure deadlines get met.

Alternately expressed, there is almost always something *else* that
was happening, that really needs to be finished before dealing with
"user" input, or a (slow) mechanical device.

--
Cheers,
Paul B.


Paul B. Webster VK2BZC wrote:

>On Sat, 2005-07-09 at 09:39 -0500, Mike McCarty wrote:
>>They make several with a RTC.
>>
>>
>
> I'm sure they do - now, but it means I have to go and buy another.
>That's the trick!
I knew that, just letting the rest of the world know, as well. The NMIY-020
also has a fully debounced matrix keyboard interface for, IIRC, 16 buttons.

[snip]

[concerning PICAXE]

>> Especially for newbies, having debounce already done lets one get on
>>with designing the "guts" of the project.
>>
>>
>
> Perhaps it's even more important that it is actually *documented* as
>an essential consideration.
Yes. I find that it is invariably the case that newbies connect a
dry contact to an interrupt, press a button, and then wonder why
they got a stack overflow or something weird. I had a guy whose job
it was to insert faults in a memory board tell me my driver didn't
work for fault isolation/recovery when he inserted a fault into the
memory board. He put a dry contact across one of the driver chips
and pressed the button. My software declared that the board had a
completely unrecoverable error, and removed it from service,
switching to running simplex with its mate only, then began diags,
and then restored it to service after refilling content. His
complaint was that it should not have removed the board from
service when he inserted "only" one error. My driver counted more
than 48 consecutive errors on that board.
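For the record, a polled debounce is only a few lines of C. This is a
generic sketch (the constants and names are mine): sample the raw pin
once per timer tick, and accept a new state only after it has been
stable for a few consecutive samples - instead of wiring the dry
contact straight to an interrupt pin.

```c
#include <stdint.h>

/* Minimal polled debounce: a new pin state is accepted only after it
   has been stable for DEBOUNCE_TICKS consecutive samples. */

#define DEBOUNCE_TICKS 5

static uint8_t stable_state;   /* last accepted (debounced) state */
static uint8_t last_sample;    /* previous raw sample */
static uint8_t stable_count;   /* ticks the raw sample has held steady */

/* Call once per timer tick with the raw pin reading (0 or 1).
   Returns 1 exactly once per accepted press (0 -> 1 transition). */
int debounce_tick(uint8_t raw)
{
    if (raw != last_sample) {          /* still bouncing: restart */
        last_sample = raw;
        stable_count = 0;
        return 0;
    }
    if (stable_count < DEBOUNCE_TICKS) {
        if (++stable_count < DEBOUNCE_TICKS)
            return 0;
        if (raw != stable_state) {     /* survived the required ticks */
            stable_state = raw;
            return raw == 1;           /* report presses only */
        }
    }
    return 0;
}
```

Fed a bouncy press like 1,0,1,1,1,1,1,1 it reports exactly one press,
which is the behavior the interrupt-on-a-dry-contact approach can't
give you.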

>
>
>>An interrupt is sometimes just the right thing. But it should do as
>>little as possible, and then schedule a task to handle the rest.
>>Otherwise, the scheduler is not in control of the CPU enough of the
>>time to make sure deadlines get met.
>>
>>
>
> Alternately expressed, there is almost always something *else* that
>was happening, that really needs to be finished before dealing with
>"user" input, or a (slow) mechanical device.
Yes. Or even a fast device. My point is that having execution
contexts outside the control of the scheduler makes hitting
deadlines difficult. This can be due to any number of reasons. Your
statement is nicely put, though. I like it.

Mike

--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
This message made from 100% recycled bits.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!




Paul B. Webster VK2BZC wrote:
> On Sat, 2005-07-09 at 11:38 -0500, David Kelly wrote:
>
>>Forth is absolutely brilliant. With very little assembly one can
>>create a Forth system on-chip to develop code self-hosted on the target.
>
> It is as well to remember for what FORTH was "invented". It was for
> real-time control of (electro-)mechanical systems. For that function,
> it remains superb, though on a fully-fledged PC-based system, you would
> no doubt use Python.
>
> FORTH - on a microcontroller - requires certain primitives to run
> efficiently (yes, I know people have fudged it on a PIC, and it worked
> OK on a 6502!). It happens that those primitives are present on the
> 6809/ 68HC11, so those processors are perfect for implementing it (as is
> the 80x86). Ergo, New Micros.
>
>>I wouldn't recommend building new using Forth, but it is one of those
>>fields of study any well-rounded engineer working in the embedded
>>market should spend 3 months to understand inside and out.
>
> OK, so you have all your hardware correctly(?) interfaced to your PC
> or microcontroller, you run the smoke test, now, what other language
> facilitates incremental testing of one part after another, thence
> setting up of control subsystems and construction of a complete
> application? There is virtually no compile cycle involved - it compiles
> incrementally as you write and test the code.
>
>>One of the biggest problems with Forth is that it's a "write only"
>>language. It's very hard to write Forth that one can read and
>>understand a month or year later.
>
> Actually, it's more *self*-documenting than many languages, and has
> perfectly adequate provisions for commenting.
>
>>About 10 years ago I had several NMI HC11 Forth boards to use for
>>prototyping. ... Then I exercised the module in Forth and got my
>>answers.
>
> Exactly - that's how it works.
>
> I probably *could* use something else (with difficulty), but still use
> F-PC on a machine with a parallel port card (replaceable) to rig
> experiments/ testing for various devices or assemblies - such as
> displays, encoders, interface cards, keyboards ...

Having FORTH burned in ROM on a 68HC11 board, with the hardware
test programs in FORTH, lets you find out whether a problem is
hardware or software very quickly.

I have used FORTH to interface the first Intel CAN chip to a
Motorola bus, and to develop vehicle tracking software, from
Oklahoma, for a fellow in Vancouver BC who knew nothing about
computer programming; we could use FORTH to debug problems and
develop features over the phone. I did a similar project with a
Chinese programmer who barely spoke English, and I speak no
Chinese at all.

I don't think any of these projects would have been possible
with a compiled language with the tools I had at the time.

I plan on using it again to develop a spectrophotometer on a New
Micros Tiny ARM. I could do it in C, but I have used GCC a lot,
and once I have the interrupts worked out and the timing and
other problems solved I will convert it to C, rather than spend
all the time in the edit, save, compile, link, load, run, crash
cycle that C takes, when I can edit one value and copy and paste
a few lines to the terminal program.

I have never looked at FORTH as a final product, and I probably
should. But there is a lot of pressure from management against
it because there are so few of us that work with it. Since I
wear two hats I usually rewrite the program in C. As the baker
told me when I was trying to sell Radio Shack computers, nobody
got fired for buying IBM, and I don't have to defend my choice
of using C. But when you start from scratch with a new CPU and
new hardware, FORTH is a lot more productive than anything else.

Gordon