The 2024 Embedded Online Conference

Interrupts: can be lost?

Started by pozz July 7, 2020
On 10/07/2020 01:24, antispam@math.uni.wroc.pl wrote:
> [...]
> So system must be correct by design. It is known how to do this.
How to do this? :-)
On Friday, July 10, 2020 at 2:01:49 AM UTC-4, Clifford Heath wrote:
> On 10/7/20 11:41 am, Rick C wrote:
>> On Thursday, July 9, 2020 at 6:22:24 PM UTC-4, lasselangwad...@gmail.com wrote:
>>> On Thursday, July 9, 2020 at 8:47:25 PM UTC+2, Paul Rubin wrote:
>>>> Rick C <gnuarm.deletethisbit@gmail.com> writes:
>>>>> I have yet to find a single ARM processor that had a true, pin
>>>>> compatible second source.
>>>>
>>>> I thought there were some Chinese clones of the lower end STM32F series
>>>> but I haven't paid close attention.
>>>
>>> yep, I know there are several clones of STM32F103xxxx
>>
>> So "clone" equals "second source"???
>
> Not really. They haven't licensed ST's masks, they've just designed
> their own chips to match the documented behaviour.
>
> The chips I know of are by GigaDevice and MindMotion, part numbers
> GD32F3??? and MM32F??? to match the STM32F???.
>
> I see over 100 variants waiting on reels in JLCPCB's fab.
>
> Clifford Heath.
Ok, then they are pretty useless as second sources. I see someone mentioned bugs that are being worked out. Not encouraging.

Is GigaDevice a company that makes FPGAs? Or am I mixing them up with someone else? Ah, I see I have downloaded their data sheets, so I guess I was looking at a low-cost RISC-V board and also have info on their ARM ST clone chip. Wait, it's the same chip! No sign of FPGAs. That was one of the other companies like AGM, Anlogic or maybe Gowin. AGM has an interesting data sheet, but it's two years old and no sign of the device.

--
Rick C.

-+- Get 1,000 miles of free Supercharging
-+- Tesla referral code - https://ts.la/richard11209
On Friday, July 10, 2020 at 2:09:39 AM UTC-4, pozz wrote:
> On 10/07/2020 01:24, antispam@math.uni.wroc.pl wrote:
> > [...]
> > So system must be correct by design. It is known how to do this.
>
> How to do this? :-)
You are looking for an education on interrupt design in MCU systems? I can recommend a few good companies who teach this material. It is also covered in many textbooks.

Once again, I will say, this is no small reason why I like FPGA design in HDL. It just doesn't have this sort of complexity. The only complexity is from the design itself, nothing added gratuitously.

--
Rick C.

-++ Get 1,000 miles of free Supercharging
-++ Tesla referral code - https://ts.la/richard11209
On 10/07/2020 08:09, pozz wrote:
> On 10/07/2020 01:24, antispam@math.uni.wroc.pl wrote:
>> [...]
>> So system must be correct by design. It is known how to do this.
>
> How to do this? :-)
Roughly speaking, you have to know how often the different interrupts can occur, what priorities they have, how long you take to handle them - that kind of thing. And you have to calculate the worst-case scenarios so that you can guarantee that you don't lose (important) events if you are unlucky in the ordering and timing.

As Waldek says, you generally can't check this sort of thing by testing, as it is usually very difficult to make the timing in the test happen in exactly the right places. (It's like race conditions in multi-threading systems in that aspect.)

If you use timing-driven polling, it is often easier to be sure of things because you have more regularity in the system. And if you use a microcontroller with appropriate peripherals for the task, it is also much simpler. Peripherals with buffers, DMAs, etc., that can handle at least a little of the timing autonomously mean you have a lot more flexibility in how you need to react to events in software.
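[Editor's note: the worst-case calculation described above can be sketched with classic iterative response-time analysis. The numbers below are hypothetical, chosen only to illustrate the method; real systems also need to account for interrupt entry/exit overhead and any interrupt-disabled regions.]

```python
import math

def response_times(tasks):
    """Worst-case response time for each interrupt source, using classic
    iterative response-time analysis.  `tasks` is a list of
    (min_period, isr_time) pairs, sorted highest priority first.
    All values are in the same time unit (e.g. microseconds)."""
    results = []
    for i, (period_i, c_i) in enumerate(tasks):
        r = c_i
        while True:
            # Interference: each higher-priority ISR can preempt us
            # ceil(r / period) times while we wait and run.
            interference = sum(math.ceil(r / t_j) * c_j
                               for t_j, c_j in tasks[:i])
            r_new = c_i + interference
            if r_new == r or r_new > period_i:
                r = r_new  # converged, or already misses its deadline
                break
            r = r_new
        results.append(r)
    return results

# Hypothetical figures: (minimum period, ISR execution time) in microseconds.
tasks = [(100, 10), (250, 40), (1000, 100)]
for (period, _), r in zip(tasks, response_times(tasks)):
    status = "OK" if r <= period else "may lose events"
    print(f"min period {period} us -> worst-case response {r} us ({status})")
```

If a source's worst-case response time exceeds its minimum period, a second event from that source can arrive while the first is still pending - exactly the loss scenario discussed in this thread.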
On 10/07/2020 03:40, Rick C wrote:
> On Thursday, July 9, 2020 at 2:47:25 PM UTC-4, Paul Rubin wrote:
>> Rick C <gnuarm.deletethisbit@gmail.com> writes:
>>> I have yet to find a single ARM processor that had a true, pin
>>> compatible second source.
>>
>> I thought there were some Chinese clones of the lower end STM32F series
>> but I haven't paid close attention.
>
> Yes, that is exactly what a company will be counting on for 20 years of support, Chinese clones. Are you actually reading the thread?
>
>>> Not sure what you are trying to say about the tools. Anyone who
>>> wishes to maintain a design for >10 years or even five needs to
>>> archive the tools and the machine they run on. I presently have
>>> that problem. The tools from >10 years ago still seem to run on my
>>> recent PC running Win10, but who knows what will happen next time I
>>> update? I may need to resurrect a 15 year old desktop running Win2k.
>>
>> I think current practice is to run your system inside a VM that you
>> snapshot so you can later reproduce it. In critical systems you do that
>> whenever you make a software release, so you snapshot not only the
>> current source code (revision control already does that), but also the
>> complete tool chain including compilers, libraries, all the build
>> artifacts, and the whole OS.
>
> You still need the OS and hardware interfaces. Can you still buy a copy of XP?
>
Separate your parts into the bits that generate code, and the rest. The toolchain is important - the compiler, libraries and build-critical parts. The IDE is not critical, nor are debuggers, programmers, etc. I don't (usually) archive VMs - I archive folders with toolchains. You only need a VM if the tools won't run on later OSs (or DOSBox or Wine).

It can be a lot more difficult if the toolchain has some kind of licensing and protection mechanism. You might have to archive physical computers, not just VMs, to get that all preserved.

I don't believe you can still buy a license for XP. But an installation CD from long ago works perfectly well as an installation CD now (especially if it is an iso file for a VM...). You have IIRC 30 days to use it before the "activation" mechanism locks you out. (And there are ways around that too, if you want to go down that path.)
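[Editor's note: one practical addition to the "archive a folder with the toolchain" approach is a checksum manifest, so the archived copy can be verified bit-for-bit years later. A minimal sketch; the directory layout and file names are purely illustrative.]

```python
import hashlib
import os

def make_manifest(toolchain_dir):
    """Walk an archived toolchain folder and record a SHA-256 digest per
    file (keyed by relative path), so a restored copy can be verified
    against the original before it is trusted for a rebuild."""
    manifest = {}
    for root, _, files in os.walk(toolchain_dir):
        for name in sorted(files):
            path = os.path.join(root, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Hash in chunks so large binaries don't need to fit in RAM.
                for chunk in iter(lambda: f.read(1 << 16), b""):
                    h.update(chunk)
            manifest[os.path.relpath(path, toolchain_dir)] = h.hexdigest()
    return manifest
```

Storing the manifest alongside the archive (and alongside the release in version control) makes "is this really the compiler we shipped with?" a mechanical check rather than a guess.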
On Friday, July 10, 2020 at 3:09:15 AM UTC-4, David Brown wrote:
> On 10/07/2020 03:40, Rick C wrote:
> > On Thursday, July 9, 2020 at 2:47:25 PM UTC-4, Paul Rubin wrote:
> >> Rick C <gnuarm.deletethisbit@gmail.com> writes:
> >>> I have yet to find a single ARM processor that had a true, pin
> >>> compatible second source.
> >>
> >> I thought there were some Chinese clones of the lower end STM32F series
> >> but I haven't paid close attention.
> >
> > Yes, that is exactly what a company will be counting on for 20 years of support, Chinese clones. Are you actually reading the thread?
> >
> >>> Not sure what you are trying to say about the tools. Anyone who
> >>> wishes to maintain a design for >10 years or even five needs to
> >>> archive the tools and the machine they run on. I presently have
> >>> that problem. The tools from >10 years ago still seem to run on my
> >>> recent PC running Win10, but who knows what will happen next time I
> >>> update? I may need to resurrect a 15 year old desktop running Win2k.
> >>
> >> I think current practice is to run your system inside a VM that you
> >> snapshot so you can later reproduce it. In critical systems you do that
> >> whenever you make a software release, so you snapshot not only the
> >> current source code (revision control already does that), but also the
> >> complete tool chain including compilers, libraries, all the build
> >> artifacts, and the whole OS.
> >
> > You still need the OS and hardware interfaces. Can you still buy a copy of XP?
> >
>
> Separate your parts into the bits that generate code, and the rest. The
> toolchain is important - the compiler, libraries and build-critical
> parts. The IDE is not critical, nor are debuggers, programmers, etc. I
> don't (usually) archive VM's - I archive folders with toolchains. You
> only need a VM if the tools won't run on later OS's (or DOSBox or Wine).
>
> It can be a lot more difficult if the toolchain has some kind of
> licensing and protection mechanism. You might have to archive physical
> computers, not just VM's, to get that all preserved.
>
> I don't believe you can still buy a license for XP. But an installation
> CD from long ago works perfectly well as an installation CD now
> (especially if it is an iso file for a VM...). You have IIRC 30 days to
> use it before the "activation" mechanism locks you out. (And there are
> ways around that too, if you want to go down that path.)
All vendor-supplied tool chains for FPGAs have licensing. The only way to continue to use an old license is to continue to reset the date on the machine. Even if you have a perpetual license you have to renew the license file.

I recall finding out about that on a CAE system from 40 years ago. I asked what happens if the company goes under and the guy just shrugged. I don't recall which company it was from, but it was on Apollo computers. Anyone remember them? I think they got bought by Sun, no? Looks like it was HP!

--
Rick C.

+-- Get 1,000 miles of free Supercharging
+-- Tesla referral code - https://ts.la/richard11209
On 10/07/2020 10:41, Rick C wrote:
> On Friday, July 10, 2020 at 3:09:15 AM UTC-4, David Brown wrote:
>> On 10/07/2020 03:40, Rick C wrote:
>>> On Thursday, July 9, 2020 at 2:47:25 PM UTC-4, Paul Rubin wrote:
>>>> Rick C <gnuarm.deletethisbit@gmail.com> writes:
>>>>> I have yet to find a single ARM processor that had a true,
>>>>> pin compatible second source.
>>>>
>>>> I thought there were some Chinese clones of the lower end
>>>> STM32F series but I haven't paid close attention.
>>>
>>> Yes, that is exactly what a company will be counting on for 20
>>> years of support, Chinese clones. Are you actually reading the
>>> thread?
>>>
>>>>> Not sure what you are trying to say about the tools. Anyone
>>>>> who wishes to maintain a design for >10 years or even five
>>>>> needs to archive the tools and the machine they run on.
>>>>> I presently have that problem. The tools from >10 years ago
>>>>> still seem to run on my recent PC running Win10, but who
>>>>> knows what will happen next time I update? I may need to
>>>>> resurrect a 15 year old desktop running Win2k.
>>>>
>>>> I think current practice is to run your system inside a VM that
>>>> you snapshot so you can later reproduce it. In critical
>>>> systems you do that whenever you make a software release, so
>>>> you snapshot not only the current source code (revision control
>>>> already does that), but also the complete tool chain including
>>>> compilers, libraries, all the build artifacts, and the whole
>>>> OS.
>>>
>>> You still need the OS and hardware interfaces. Can you still buy
>>> a copy of XP?
>>>
>>
>> Separate your parts into the bits that generate code, and the rest.
>> The toolchain is important - the compiler, libraries and
>> build-critical parts. The IDE is not critical, nor are debuggers,
>> programmers, etc. I don't (usually) archive VM's - I archive
>> folders with toolchains. You only need a VM if the tools won't run
>> on later OS's (or DOSBox or Wine).
>>
>> It can be a lot more difficult if the toolchain has some kind of
>> licensing and protection mechanism. You might have to archive
>> physical computers, not just VM's, to get that all preserved.
>>
>> I don't believe you can still buy a license for XP. But an
>> installation CD from long ago works perfectly well as an
>> installation CD now (especially if it is an iso file for a VM...).
>> You have IIRC 30 days to use it before the "activation" mechanism
>> locks you out. (And there are ways around that too, if you want to
>> go down that path.)
>
> All vendor supplied tool chains for FPGAs have licensing. The only
> way to continue to use an old license is to continue to reset the
> date on the machine. Even if you have a perpetual license you have
> to renew the license file. I recall finding out about that on a CAE
> system from 40 years ago. I asked what happens if the company goes
> under and the guy just shrugged. I don't recall which company it was
> from, but it was on Apollo computers. Anyone remember them? I think
> they got bought by Sun, no? Looks like it was HP!
>
In a perfect world (from my viewpoint!), development software doesn't need any kind of license renewal, internet connection, time-limitation, software lock, USB dongle, etc., to work. And it works on Linux with little or no requirements for particular distributions or non-default libraries.

I have nothing against paying for software or a license - that's okay. I also don't mind paying yearly fees for updates. But when I buy the software, I want the software I bought to be useable from any system, at any time in the future. There can be /legal/ restrictions (single user, single company, single project, whatever), but I don't want /technical/ restrictions that limit my ability to use the software.

I've dealt with enough crap from software locked to laptops that died, software that "phones home" to servers that no longer exist, dongles that don't work, etc. This kind of nonsense just makes honest users use workarounds. I have more than enough virtual machines with virtual network cards that all have the same MAC address, and have at least four ancient hard disks that all have the same serial number. And I have seen a computer with its time and date locked to an old date because the old license dongle worked but the new one did not.

I don't always get things the way I want, but these things are a significant influence on my choice of tools, and therefore my choice of hardware. I've rejected microcontroller families solely on the basis of a lack of Linux support in their development tools. And if I do any more FPGA work, the way the license and "software protection" is handled will definitely be a factor in choosing devices and tools. (I don't claim it will be the overriding factor - there are bigger differences and fewer vendor choices in the FPGA world than in the microcontroller world.)
One fine day pozz wrote:

> https://www.safetty.net/download/pont_pttes_2001.pdf
>
> Page 13
> Note carefully what this means! There is a common misconception among
> the developers of embedded applications that interrupt events will never
> be lost. This simply is not true. If you have multiple sources of
> interrupts that may appear at 'random' time intervals, interrupt
> responses can be missed: indeed, where there are several active
> interrupt sources, it is practically impossible to create code that will
> deal correctly with all possible combinations of interrupts.
>
> Those words are incredible for me. I suspect I didn't get the point of
> the author. I don't think that interrupt events on modern microcontrollers
> can be lost in some circumstances.
I don't know the 8051, but every MCU I've ever used had interrupt flag registers. When an interrupt comes, a flag is set, and until you reset this flag (generally within the ISR) subsequent interrupts from the same source are ignored. Therefore it is true that subsequent interrupts on the same source can be lost if they are too close together [1].

But if the interrupts are different, this is not true. Two different interrupt events can come even at the exact same time, and both will be flagged. Of course they won't be serviced at the same time (unless you have a multicore CPU, maybe) but they will be "queued" and processed one at a time.

[1] Actually I think that there are some MCUs that have a counter instead of a flag for each interrupt source, and therefore they can queue multiple interrupts of the same source before they are processed. Then each call of the ISR will decrement the counter.

--
I flex my muscles and I am in the void.
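[Editor's note: the flag-vs-counter distinction above can be demonstrated with a toy simulation. This is a deliberately simplified model of a single interrupt source - not the behaviour of any specific MCU - where a pending flag saturates at one, while a hardware counter queues every event.]

```python
def count_serviced(arrivals, isr_time, use_counter):
    """How many same-source interrupt events actually get serviced.
    `arrivals` are event times; each ISR run takes `isr_time`.
    A plain pending flag merges an event that arrives while one is
    already pending (so the extra event is lost); a per-source counter
    queues it for a later ISR run."""
    pending = 0        # pending flag (saturates at 1) or event counter
    serviced = 0
    cpu_free_at = 0.0  # time at which the CPU finishes its current ISR
    for t in sorted(arrivals):
        # Service anything already pending that the CPU got to before t.
        while pending and cpu_free_at <= t:
            cpu_free_at += isr_time
            pending -= 1
            serviced += 1
        if use_counter:
            pending += 1
        else:
            pending = min(pending + 1, 1)  # flag already set: event merged/lost
        cpu_free_at = max(cpu_free_at, t)
    return serviced + pending  # drain whatever is still pending at the end

bursty = [0, 1, 2]  # three events 1 us apart, each ISR takes 5 us
print("flag:   ", count_serviced(bursty, isr_time=5, use_counter=False))
print("counter:", count_serviced(bursty, isr_time=5, use_counter=True))
```

With the flag, the third event arrives while one is already pending and is merged away; with the counter, all three are eventually serviced. Spacing the same events out (e.g. 10 us apart) loses nothing in either mode.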
On 7/9/2020 21:04, Rick C wrote:
> On Thursday, July 9, 2020 at 8:16:18 AM UTC-4, David Brown wrote:
>> On 08/07/2020 16:35, Rick C wrote:
>>> On Wednesday, July 8, 2020 at 10:07:24 AM UTC-4, David Brown wrote:
>>>> On 08/07/2020 16:01, Rick C wrote:
>>>>> On Wednesday, July 8, 2020 at 7:57:43 AM UTC-4, Tauno Voipio
>>>>> wrote:
>>>>>> On 8.7.20 11.43, Niklas Holsti wrote:
>>>>>>> On 2020-07-08 10:02, David Brown wrote:
>>>>>>>> On 08/07/2020 00:38, pozz wrote:
>>>>>>>>> https://www.safetty.net/download/pont_pttes_2001.pdf
>>>>>>>>>
>>>>>>>>> Page 13 Note carefully what this means! There is a
>>>>>>>>> common misconception among the developers of embedded
>>>>>>>>> applications that interrupt events will never be lost.
>>>>>>>>> This simply is not true. If you have multiple sources of
>>>>>>>>> interrupts that may appear at 'random' time intervals,
>>>>>>>>> interrupt responses can be missed: indeed, where there
>>>>>>>>> are several active interrupt sources, it is practically
>>>>>>>>> impossible to create code that will deal correctly with
>>>>>>>>> all possible combinations of interrupts.
>>>>>>>>>
>>>>>>>>> Those words are incredible for me. I suspect I didn't get
>>>>>>>>> the point of the author. I don't think that interrupt
>>>>>>>>> events on modern microcontrollers can be lost in some
>>>>>>>>> circumstances.
>>>>>>>>
>>>>>>>> The book was written in 2001, using a microcontroller
>>>>>>>> that, while very popular (and still not dead), was 20 years
>>>>>>>> out of date at the time. I haven't looked at it yet, and
>>>>>>>> can't judge the quality of the book. But beware that some
>>>>>>>> things in it could be limited by that microcontroller
>>>>>>>> architecture.
>>>>>>>>
>>>>>>>> However, the principle here is correct. It is part of a
>>>>>>>> more general principle that is actually quite simple:
>>>>>>>>
>>>>>>>> If you have event counters or trackers that have limited
>>>>>>>> capacity, and you do not handle the events before that
>>>>>>>> capacity is exceeded, your system will lose events (by
>>>>>>>> either ignoring new ones, or ejecting old ones).
>>>>>>>
>>>>>>> Indeed.
>>>>>>>
>>>>>>> Note that the book's author clearly has an axe to grind
>>>>>>> here[1], because he advocates using "time triggering" instead
>>>>>>> of event- (interrupt-) triggering. So he's trying to make you
>>>>>>> afraid of interrupts.
>>>>>>>
>>>>>>> Of course a similar problem applies to time-triggered
>>>>>>> systems: if you trigger a task every millisecond, but the
>>>>>>> task sometimes needs more than a millisecond to complete...
>>>>>>> something bad can happen.
>>>>>>>
>>>>>>> [1] Or, as is said here in Finland, "he has his own cow in
>>>>>>> the ditch".
>>>>>>
>>>>>>
>>>>>> The writer is advocating the 8051 family, which is awfully
>>>>>> clumsy for running real multi-threading. Besides, it should be
>>>>>> forgotten in favor of e.g. the Cortex-M processors, which are
>>>>>> even at least as inexpensive as the 8051's with required
>>>>>> support.
>>>>>>
>>>>>> (Mooo! from southern Finland).
>>>>>
>>>>> That would be nice in an ideal world. I know of one designer
>>>>> that will still use the 8051 processor in designs that require a
>>>>> second source and longevity of supply. When qualification of a
>>>>> design is very expensive these properties can be very important.
>>>>> If the 8051 is the only processor that meets these requirements
>>>>> then that is the one you will use.
>>>>
>>>> Yes, but it is not the only option.
>>>>
>>>>> There are many, many ARM MCUs on the market today. Which ones
>>>>> will still be around in 20 years?
>>>>>
>>>>
>>>> The ones that manufacturers say they will continue to produce.
>>>> Any manufacturer with automotive customers will have long-term
>>>> guarantees. (I don't know how common 20 year lifetimes are - I
>>>> haven't had to look /that/ long.)
>>>>
>>>> Second sources are not easy if you need /exact/ matches, but your
>>>> friendly local distributors can help.
>>>
>>> You seem to miss the point. The 8051 IS the only processor that you
>>> can find with second sources and guarantees of extreme longevity.
>>> That's why he used it.
>>>
>>
>> I didn't miss the point. The 8051 is not a processor that you can buy,
>> it is a family of related variations of a core that is found in many
>> microcontrollers. Virtually all 8051-based microcontrollers are
>> single-source, and have the same types of longevity as other
>> microcontrollers (10 years+ is typical, with 15 to 20 years or more for
>> specialised parts). There is nothing special about the 8051 except that
>> it is already old, and you'd have a hard time finding 8051 parts now
>> that are guaranteed to be produced for /another/ 20 years - it doesn't
>> help if it has been produced for the /past/ 20 years.
>
> Everything you say is true except for the parts that aren't. That many people make non-pin compatible variations on the 8051 is irrelevant. The fact is that there are second sources of pin compatible parts and that there are makers who are promising to supply the devices for a period of time. If you want confirmation of this you can go over to sci.electronics.design and post to Joerg. He is the one using the 8051 for these exact reasons.
>
>> What you said was that the 8051 was the only "processor" that fitted the
>> requirements of longevity. Now, it might well be that in this
>> particular case, the chip you have is the only option that fitted. But
>> it does not mean that it is true in general. I am confident that if I
>> asked my distributors, they could find long-lived ARM microcontrollers
>> (as well as PowerPC, and various other cores). Second source for an
>> identical chip is hard to find, but they do exist (especially if you are
>> willing to pay for it - there are companies that store bare dies for
>> decades and will package them when a customer needs them, and there are
>> companies that do specialised versions of standard chips for military or
>> other special-needs customers). But you only talked about second source
>> for the core - presumably it is software and tools that are qualified
>> here, rather than the hardware.
>
> Not sure what you are talking about companies storing bare die. The context was standard devices that aren't going into space or otherwise have a hugely inflated price.
>
> I have yet to find a single ARM processor that had a true, pin compatible second source. There may be some in the automotive sector that are not available to those needing thousands a year rather than millions per year. But even then I've not found them. Each manufacturer produces their own line of products and competes based on the little differences that make their product "better" rather than being a second source to someone else.
>
> Not sure what you are trying to say about the tools. Anyone who wishes to maintain a design for >10 years or even five needs to archive the tools and the machine they run on. I presently have that problem. The tools from >10 years ago still seem to run on my recent PC running Win10, but who knows what will happen next time I update? I may need to resurrect a 15 year old desktop running Win2k.
>
Second source in the context of processors is overrated. Rewind about 40 years: the 6809 had a second source, the 6309 from Hitachi, to no effect on the processor's lifespan. With some luck and a lot of careful consideration one can pick a processor which will live for >10 years. I have managed that with the 68hc11, 68340, mpc5200b, mcf52211... not so sure about the mpc824x, but my design with it did not live long enough for this to matter (yet it did a good job aiding the migration 68k -> power).

I can't speak firsthand on the toolchains people use, as those I use are 100% mine. I can even run, on a VM emulating the 6809 in dps windows, what I ran before I had all toolchains mine - things I have written back in the 80s. I know some people archive entire machines so they can replicate the design process years later if needed; but with flash memories relying on tinier and tinier gate charges this becomes less and less viable. (I have a dps machine on my desk 20+ years old, and I have 10+ years old netMCA devices in the field which show no issues.) I think flash memories of a few megabytes which will hold charge for 10+ years are still available; time will tell.

Dimiter

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/
On 11/07/2020 12:12, dalai lamah wrote:
> One fine day pozz wrote:
>
>> https://www.safetty.net/download/pont_pttes_2001.pdf
>>
>> Page 13
>> Note carefully what this means! There is a common misconception among
>> the developers of embedded applications that interrupt events will never
>> be lost. This simply is not true. If you have multiple sources of
>> interrupts that may appear at 'random' time intervals, interrupt
>> responses can be missed: indeed, where there are several active
>> interrupt sources, it is practically impossible to create code that will
>> deal correctly with all possible combinations of interrupts.
>>
>> Those words are incredible for me. I suspect I didn't get the point of
>> the author. I don't think that interrupt events on modern microcontrollers
>> can be lost in some circumstances.
>
> I don't know the 8051, but every MCU I've ever used had interrupt flag
> registers. When an interrupt comes a flag is set, and then until you
> reset this flag (generally within the ISR) the subsequent interrupts on the
> same source are ignored. Therefore it is true that subsequent interrupts on
> the same source can be lost if they are too close [1].
>
> But if the interrupts are different, this is not true. Two different
> interrupt events can come even at the exact same time, and both will be
> flagged. Of course they won't be serviced at the same time (unless you have
> a multicore CPU, maybe) but they will be "queued" and processed one at a
> time.
>
> [1] Actually I think that there are some MCUs that have a counter instead
> of a flag for each interrupt source, and therefore they can queue multiple
> interrupts of the same source before they are processed. Then each call of
> the ISR will decrement the counter.
This is exactly what I thought when reading that sentence in the book.
