Custom CPU Designs
Started by ●April 16, 2020

On 21/04/2020 04:57, Rick C wrote:
> On Monday, April 20, 2020 at 11:44:53 AM UTC-4, David Brown wrote:
>> On 19/04/2020 23:47, Rick C wrote:
>> Do you know anything about Efinix?
>
> I've looked at them, I think I even exchanged some email with them.
> There are maybe four new FPGA companies suddenly (at least three in
> China: Anlogic, AGM and Gowin). I'm guessing some essential patents
> expired. Efinix is another one. I'm waiting for them to have product
> on the shelves someplace, preferably like Digikey.
>
> The only unique feature seems to be their "patented Quantum™
> architecture", which appears to be the sort of thing where logic and
> routing are interchangeable, which means you can't use as much logic
> as they claim because a lot of it has to be used for routing. Not sure.
>
> What interests you about them?

Nothing in particular - it's just that in the discussions about FPGA's
and costs, I looked at Digikey's FPGA offerings sorted by price, and one
of their devices was the cheapest of all. I'd never heard of them.

> I'm looking at Gowin pretty hard. They have some with a CPU, but
> their product line isn't filled out yet. Also some parts have pSRAM
> and/or DRAM die in the package. They are first introducing parts that
> customers are asking for, of course. They aren't very good at
> indicating which parts are currently available, but they do have
> salesmen who will communicate with you. Many startups only want to
> talk to the big fish.
Reply by ●April 21, 2020
Reply by ●April 21, 2020
On 21/04/2020 17:26, Rick C wrote:> On Tuesday, April 21, 2020 at 10:20:41 AM UTC-4, Theo wrote: >> David Brown <david.brown@hesbynett.no> wrote: >>> On 17/04/2020 17:34, Theo wrote: >>>> I think part of the problem is the ARM licensing cost - if the >>>> license cost is (random number) 5% of the silicon sticker price >>>> that's fine when it's a $1 MCU, but when it's a $10000 FPGA >>>> that hurts. >>> >>> I'm not sure that's valid. First, do you know that the ARM >>> licensing costs work that way? >> >> I have no insight into the licensing contracts (which are likely >> very confidential), but what I understand is that all Stratix 10 >> parts have an ARM but relatively few have it enabled. Additionally >> I understand the licence cost is only paid for parts where it is >> enabled. From that I surmise that the licence cost is significant; >> if the cost was minimal then why have a separate SKU without the >> ARM? > > They do the same thing with the FPGA itself. It is not inexpensive > to spin the masks for FPGAs at the bleeding edge of semiconductor > fabrication technology. So they sell parts with more or less of the > part enabled or even just tested (testing cost in an FPGA is not > inexpensive). So you buy an FPGA with 50,000 LUTs or you buy one > with 25,000 LUTs and it's the same part. The 50,000 part has the > entire chip tested, the 25,000 LUT part only tests the section with > 25,000 LUTs you will be using. They will get the price even lower if > you are buying a large quantity and you give them your design, so > they only test the parts of the chip your design uses! > > So don't test the CPU and don't pay the license fee. Save some on > the license and save more on not testing the CPU and various > supporting logic. > > >> One other possibility is that a separate SKU allows the ARM to be >> faulty and the part still saleable, but it seems that ballpark >> 80-90% of the eval boards I see are offering parts without ARMs. 
>> Which suggests there's a strong motivation not to use it.

> I'm told if a chip fails a test, it is tossed. The savings comes
> from not testing a section to begin with. Testing equipment is not
> cheap and FPGAs take a lot of time on the beast.

That would make it different from many other large, complex parts, where
disabling failed sections and even having redundant parts in the design
increases overall yields and lowers costs. But I guess it depends on a
balance between yields, types of failure, and testing costs.

>>> (And whatever the numbers, RISC-V changes things significantly.)
>>
>> I'm not sure RISC-V is to the level of maturity for baking a Cortex
>> A53 equivalent into a critical product.

> Not sure what you are trying to say, but Microsemi is coming out with
> a RISC-V FPGA device family this year.
Reply by ●April 21, 2020
On 21/04/2020 02:23, Clifford Heath wrote:
> On 21/4/20 10:01 am, Rick C wrote:
>> On Monday, April 20, 2020 at 8:59:36 AM UTC-4, Clifford Heath wrote:
>>> On 20/4/20 4:52 am, Przemek Klosowski wrote:
>>>> On Thu, 16 Apr 2020 17:13:41 -0700, Paul Rubin wrote:
>>>>
>>>>> Grant Edwards <invalid@invalid.invalid> writes:
>>>>>> Definitely. The M-class parts are so cheap, there's not much
>>>>>> point in thinking about doing it in an FPGA.
>>>>>
>>>>> Well I think the idea is already you have other stuff in the FPGA,
>>>>> so you save a package and some communications by dropping in a
>>>>> softcore rather than using an external MCU. I'm surprised that
>>>>> only high end FPGA's currently have hard MCU's already there.
>>>>> Just like they have DSP blocks, ram blocks, SERDES, etc., they
>>>>> might as well put in some CPU blocks.
>>>>
>>>> Maybe Risc-V will catch on. The design is FOSS, as is the toolchain
>>>> (GDB and LLVM have had Risc-V backends for a while), and the simple
>>>> versions take very few gates.
>>>> https://github.com/SpinalHDL/VexRiscv
>>>> https://hackaday.com/2019/11/19/emulating-risc-v-on-an-fpga/
>>>>
>>> There's a lot of push in the direction of the Power architecture.
>>> What does that look like in FPGA?
>>>
>>> CH
>>
>> Do you mean the Power PC? That was the hard IP used in the very old
>> and possibly obsolete Virtex II Pro devices.
>>
>> Why do they have to use such goofy names like "Pro" or "Polarfire"?
>> Do they really think that sells even one frigging chip? I would be
>> so much more inclined to dig through their information if they just
>> had decent names that give you some idea of the technical details,
>> including the heritage.
>
> Hah! I hear you. :)
>
> Yes, I mean PowerPC, specifically OpenPOWER:
> <https://en.wikipedia.org/wiki/OpenPOWER_Foundation>
>
> A friend is a fan. Me, I haven't read much about it.

I haven't seen much of PowerPC in recent times.
For a while they were popular for high-end microcontrollers, especially in the automotive industry, but that seems to be fading - since NXP took over Freescale, I think the PPC lines are dying out in favour of more ARM lines. Power, which is a different beast (with shared ancestry and a certain degree of compatibility), is alive and reasonably well - it's a popular choice for really big iron. I've seen benchmarks showing Power9 chips giving more bitcoins per MWt than dedicated miner ASICs, if you are into that sort of thing. I can't see anyone embedding a Power core in an FPGA...
Reply by ●April 21, 2020
On 2020-04-21 18:58, David Brown wrote:
> On 21/04/2020 03:27, Rick C wrote:
>> On Monday, April 20, 2020 at 10:53:49 AM UTC-4, David Brown wrote:
[snip]
>>> Task prioritising is an important issue. But it is not just for
>>> multitasking on a single cpu. If you have a high priority task A
>>> that sometimes has to wait for the results from a low priority task
>>> B, you have an issue to deal with. That applies whether they are on
>>> the same cpu or different ones. On a single cpu, you have the
>>> solution of bumping up the priority for task B for a bit (priority
>>> inheritance) - on different cpus, you just have to wait.

There are similar multi-core priority-bumping schemes, but they are more
complex and have more overhead, of course.

>> How is that an issue? Isn't task A stalled when it is waiting which
>> allows task B to run?
>
> Yes. But it means task A - the high priority task - can't be
> completed as fast as you had wanted.

If there are no other tasks on this core, there is no extra delay for A
-- it waits just long enough for B to complete the/a result for A.

The real problem occurs when there is some other task C, _not_ logically
connected to A or B, with a priority higher than B but lower than A.
Task C can then delay B, and therefore also A, for whatever duration C
runs.

As David said, this priority inversion can be solved by temporarily
increasing B's priority to A's priority until B has executed far enough
to let A continue.

In the Ada language, the "protected object" inter-task communication
mechanism implements this priority juggling automatically. The
programmer only has to define the basic task priorities and the "ceiling
priorities" for the protected objects.

-- 
Niklas Holsti
Tidorum Ltd

niklas holsti tidorum fi
      .      @       .
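[Editor's note: Niklas's three-task scenario can be sketched with a toy
fixed-priority scheduler. This is a hypothetical simulation - task
names, priorities and tick counts are invented for illustration, and it
is not Ada or any real RTOS; it only shows why boosting B past C lets A
finish sooner.]

```python
# Toy fixed-priority scheduler: A (high) is blocked until B (low)
# finishes; C (medium) is unrelated work. Without priority inheritance,
# C preempts B and so delays A (priority inversion). All numbers are
# invented for illustration.

def run(inherit):
    work = {"A": 2, "B": 3, "C": 4}    # remaining ticks of work per task
    prio = {"A": 10, "B": 1, "C": 5}   # higher number = higher priority
    done_at = {}
    t = 0
    while work:
        eff = dict(prio)
        # A is blocked until B delivers its result; with priority
        # inheritance, B temporarily runs at A's priority.
        blocked = {"A"} if "B" in work else set()
        if inherit and "B" in work:
            eff["B"] = max(eff["B"], prio["A"])
        ready = [n for n in work if n not in blocked]
        cur = max(ready, key=lambda n: eff[n])  # highest effective priority runs
        work[cur] -= 1
        t += 1
        if work[cur] == 0:
            del work[cur]
            done_at[cur] = t
    return done_at["A"]               # tick at which A completed

print(run(inherit=False))  # C runs before B, so A finishes late
print(run(inherit=True))   # B is boosted past C, so A finishes sooner
```

With these invented numbers, inversion costs A exactly the length of C's
run - the same arithmetic Niklas describes in words.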
Reply by ●April 21, 2020
On 21/04/2020 10:00, pault.eg@googlemail.com wrote:> On Monday, April 20, 2020 at 4:58:57 PM UTC+1, David Brown wrote: >> On 19/04/2020 20:52, Przemek Klosowski wrote: >>> On Thu, 16 Apr 2020 17:13:41 -0700, Paul Rubin wrote: >>> >>>> Grant Edwards <invalid@invalid.invalid> writes: >>>>> Definitely. The M-class parts are so cheap, there's not much point in >>>>> thinking about doing it in an FPGA. >>>> >>>> Well I think the idea is already you have other stuff in the FPGA, so >>>> you save a package and some communications by dropping in a softcore >>>> rather than using an external MCU. I'm surprised that only high end >>>> FPGA's currently have hard MCU's already there. Just like they have DSP >>>> blocks, ram blocks, SERDES, etc., they might as well put in some CPU >>>> blocks. >>> >>> Maybe Risc-V will catch on. The design is FOSS, as is the toolchain (GDB >>> and LLVM have Risc-V backends already for a while), and the simple >>> versions take very few gates. >>> https://github.com/SpinalHDL/VexRiscv >>> https://hackaday.com/2019/11/19/emulating-risc-v-on-an-fpga/ >>> >> >> Has anyone here tried SpinalHDL ? I had a look at it, and it seems very >> appealing. > > Haven't tried it, but did follow the link and looked at the code. > > I thought it was interesting in that registers are explicit rather than using a rising_edge(clk) if statement. > > I followed some of the links on https://github.com/riscv/riscv-cores-list and found Chisel to be similar to SpinalHDL. > > I did a quick compare of UART TX code for Spinal and Chisel from here > https://spinalhdl.github.io/SpinalDoc/spinal/examples/uart/ and here > https://github.com/nyuichi/chisel-uart/blob/master/src/main/scala/Uart.scala. 
> 
> Overall I didn't see anything compelling compared to VHDL, in that
> admittedly very limited quick look :)

If you want to read more:
<https://cdn.rawgit.com/SpinalHDL/SpinalDoc/5d3d56b5/presentation/en/workshop/taste.pdf>
<https://spinalhdl.github.io/SpinalDoc-RTD/SpinalHDL/miscelenea/regular_hdl.html#regular-hdl>

> The Spinal website says one of its advantages is "Reduce code size -
> By a high factor, especially for wiring. This enables you to have a
> better overview of your code base, increase your productivity and
> create fewer headaches.". That looks to be a false statement (I
> didn't look at the wiring). Maybe there is something in the other
> stated advantages but I only looked at the above.

I haven't used SpinalHDL at all - I haven't had the chance to do much
programmable logic for many years. But a /long/ time ago, I did some
using Confluence, which was a functional programming language for
hardware design.

For testing, I remember writing a simple PWM module with an Avalon bus
slave (this was on a NIOS soft processor system). It was designed to
match the same specs as an example from Altera, available in both
Verilog and VHDL. My Confluence design ended up roughly similar in
terms of logic area and speed, but it had about a tenth of the lines of
source code - most of which was a generic bus and register interface
that could be re-used unchanged for other bus slaves.
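[Editor's note: the shape of David's design - a reusable bus/register
interface with a small PWM core behind it - can be mimicked as a
behavioural model. This is a hypothetical sketch: the register
addresses and class names are invented, and it is not Confluence code
nor an accurate Avalon slave.]

```python
# Behavioural sketch of a bus-mapped PWM: a tiny register file
# (period, duty) written over a generic bus-write method, and a counter
# compared against the duty value on each clock. Register addresses
# are invented for illustration.

class PwmSlave:
    PERIOD, DUTY = 0x0, 0x4            # hypothetical register addresses

    def __init__(self):
        self.regs = {self.PERIOD: 0, self.DUTY: 0}
        self.count = 0

    def bus_write(self, addr, value):  # the reusable "bus slave" part
        self.regs[addr] = value

    def clock(self):                   # one clock tick of the PWM core
        self.count = (self.count + 1) % max(self.regs[self.PERIOD], 1)
        return self.count < self.regs[self.DUTY]   # PWM output level

pwm = PwmSlave()
pwm.bus_write(PwmSlave.PERIOD, 10)     # 10-tick period
pwm.bus_write(PwmSlave.DUTY, 3)        # high for 3 of those ticks
out = [pwm.clock() for _ in range(20)]
print(sum(out) / len(out))             # average level over two periods
```

The point of the split is the one David makes: `bus_write` is generic
and could front any peripheral, while only `clock` is PWM-specific.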
Reply by ●April 21, 2020
On Sat, 18 Apr 2020 15:06:53 +0200, David Brown
<david.brown@hesbynett.no> wrote:
>
>Implementing an Ethernet MAC on an XMOS is pointless. Implementing an
>EtherCAT slave is not going to be much harder for the XMOS than a normal
>Ethernet MAC, but is impossible on any microcontroller without
>specialised peripherals.

Traditional industrial protocols, like Profibus and Modbus (with all
their variants), have quite a high overhead. Thus, if a slave only wants
to communicate a few bits or a single byte over the network, it will
suffer a very low transfer efficiency using standard protocols.

With EtherCAT, it is possible to make very small nodes with only a few
bits added/removed from the frame circulating around the industrial
plant, with only a few bit times of additional propagation delay in each
node. So it looks good. However, EtherCAT nodes are still quite
expensive, and adding/dropping only a few bits in each node doesn't make
economic sense.

What is the point of using multicore processors, if a single core can
perform the basic EtherCAT node functionality? You can't cut the
multicore chip apart and distribute it to multiple physically separate
nodes :-).

In addition, if there are dozens of series-connected twisted-pair
connectors, what is the electromechanical reliability of each
connection? A single fault will prevent the Ethernet frame circulating
back to the master.

I much prefer a dual-layer approach, with CANbus (or CAN FD) over up to
a few meters transferring a few bits or a byte or two around the CAN
bus, and concentrator nodes which communicate with higher-level systems
using some traditional protocol, transferring perhaps 100 bytes in a
single transaction.
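[Editor's note: the "few bits added/removed on the fly" idea can be
modelled roughly - the master sends one frame, each slave reads its
input slice and overwrites its output slice in place as the frame
passes, and the frame returns to the master. This is only an
illustrative sketch of processing-on-the-fly with invented offsets and
node behaviour; a real EtherCAT frame carries datagram headers, FMMU
mappings, a working counter, etc.]

```python
# Rough model of an EtherCAT-style circulating frame: each node
# reads/writes its own byte of the process image in place and forwards
# the frame. Offsets and the "invert the byte" behaviour are invented.

class Node:
    def __init__(self, offset):
        self.offset = offset   # this node's byte in the process image
        self.inputs = 0        # last command received from the master

    def on_frame(self, frame):
        self.inputs = frame[self.offset]         # read master -> slave data
        frame[self.offset] = self.inputs ^ 0xFF  # write slave -> master data
        return frame                             # forward to the next node

# Master builds one frame covering all nodes and sends it around the ring.
nodes = [Node(offset=i) for i in range(4)]
frame = bytearray([0x01, 0x02, 0x03, 0x04])
for node in nodes:             # frame passes through each slave in turn
    frame = node.on_frame(frame)

print(list(frame))             # master sees every slave's response in one frame
```

One frame serves all four slaves, which is why per-node overhead stays
at a few bit times rather than a full packet exchange per node.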
Reply by ●April 21, 2020
On 2020-04-21, upsidedown@downunder.com <upsidedown@downunder.com> wrote:
> With EtherCAT, it is possible to make very small nodes with only a few
> bits added/removed from the frame circulating around the industrial
> plants with only a few bit time additional propagation delay in each
> node. So it looks good.

For something like simple digital I/O, you don't need a uController at
all; the Beckhoff ET1100 EtherCAT controller can act as a stand-alone
slave device.

> What is the point of using multicore processors, if a single core can
> perform the basic EtherCAT node functionality.

What if you also want to run a web server and some other heavy-duty,
encrypted protocols under Linux in your EtherCAT slave? The most
practical way to do that is with something like the Renesas RZ/N1D,
which has an EtherCAT controller, a Cortex M3 optimized for real-time
stuff, and a couple of Cortex A7 cores for running Linux. [There are
other vendors with similar multi-core uControllers.]

> In addition, if there are dozens of series connected twisted pair
> connectors, what is the electromechanical reliability of each
> connection ? A single fault will prevent the Ethernet frame
> circulating back to the master.

If a single point of failure is an issue, then you can connect the
EtherCAT devices in a loop to get some redundancy.

-- 
Grant
Reply by ●April 21, 2020
On Tuesday, April 21, 2020 at 11:25:37 AM UTC-4, David Brown wrote:
> On 21/04/2020 15:15, Rick C wrote:
> > On Tuesday, April 21, 2020 at 8:02:18 AM UTC-4, David Brown wrote:
> >> On 21/04/2020 02:36, Rick C wrote:
> >>> On Monday, April 20, 2020 at 9:58:09 AM UTC-4, David Brown wrote:
> >>>> On 18/04/2020 21:38, Rick C wrote:
> >>>>> On Saturday, April 18, 2020 at 9:06:57 AM UTC-4, David Brown
> >>>>> wrote:
> >>
> >>>> I need an MCU with 4 EtherCAT slave channels. There are exactly
> >>>> 0 on the market. There are only two or three in total - from all
> >>>> manufacturers together - with even /one/ EtherCAT slave.
> >>>
> >>> Yes, because EtherCAT is not widely used at the moment. I had
> >>> never heard of it. When I read about it I see some car makers are
> >>> looking at adopting it. Once that happens there will be MCUs
> >>> supporting the interface. Until then it is a niche market. Am I
> >>> wrong? I don't see any indication there is much out there either
> >>> on the supply or demand side.
> >>
> >> EtherCAT has been increasingly popular in industrial automation
> >> (the world of Programmable Logic Controllers, Profibus, Frequency
> >> Converters, etc.).
> >
> > You say "increasingly popular" but if it were being used in higher
> > volumes MCUs with EtherCAT interfaces would be available. MCU
> > makers aren't stupid and love to have any advantage over the
> > competition they can find.
> >
> > So "popular" has to be something other than unit volume.
>
> I wrote "increasingly popular", because it is becoming increasingly
> popular. That means both that more and more people are using EtherCAT
> devices, more and more EtherCAT devices are being installed, more and
> more EtherCAT devices are being developed, and more and more EtherCAT
> MCUs, stand-alone peripherals, and FPGA cores have become available in
> recent years.
>
> In the big picture of MCU sales, EtherCAT usage is tiny. /Really/
> tiny. Less tiny than five years ago, but still tiny. Making an
> EtherCAT peripheral in an MCU is not an insignificant investment for
> an MCU company - it would be a very big investment. They won't do
> that until they foresee a sizeable market - far greater than the
> automation market. Until then, it will be left to the few who are
> heavily involved in this sort of thing, such as Infineon (Siemens has
> always been a big player in the automation world).

So your use of "more and more" is not relevant to the MCU market, which
is what I've been talking about. I don't know anything about the
automation market, so I have to assume it is not so large if the MCU
makers are ignoring a peripheral that is used "more and more" in that
market, for some value of "more and more".

I know designing a CPU chip is costly, but the cost depends greatly on
the process used. The CPUs in a cell phone cost millions just for the
mask set. CPUs on the 150 nm node with 256 kB of flash, not so much.
What level of CPU is married to an EtherCAT interface in the designs you
see? I was thinking a CM4 would be appropriate.

> >> It's the stuff that runs factories, programmed and set up by
> >> automation engineers that are a kind of cross between electricians
> >> and software developers. Characteristics of electronics in this
> >> field are that they are often quite expensive, but designed to fit
> >> together and "just work" even when made by different companies.
> >> Most of the stuff is made by relatively few large companies, rather
> >> than small companies. Implementing many of the protocols involved
> >> is quite horrible - they are badly specified (with large fees to be
> >> paid before you can even see the documents), overly complex, and
> >> typically require complicated XML-based "descriptors" that make USB
> >> descriptors look simple. But while that stuff makes them unpleasant
> >> to implement, it makes them very easy to use for the people actually
> >> making the automation setups.
> >> > >> EtherCAT is also quite complicated, but a lot of it is handled by > >> dedicated slave controller chips and software stacks that are available. > > > > So what sort of price premium are these peripheral chips adding to the BoM? > > > > I don't deal with prices at that level, but Digikey puts them at about $10.That's pretty significant compared to a $5 XMOS or a $3 MCU chip.> >> My point was that XMOS could have made something /different/. Something > >> that is known well enough for people to see the point even if they > >> hadn't thought of using EtherCAT themselves (there is a Wikipedia entry, > >> and plenty of Youtube videos on EtherCAT), but not so standard that > >> everyone and their cat has support. (XMOS /have/ done that with the > >> audio stuff.) > > > > You are suggesting that EtherCAT could be the "killer app" for XMOS? I don't know. But as with virtually every other type of peripheral, once the demand increases it will be part of everything from 8051s to high end ARMs. > > > > No, I have /not/ been suggesting EtherCAT would be a killer app for > XMOS. You seem to have combined various posts, adding 2 plus 3 to get 17.Ok, whatever. I asked a question.> I said it would be a good example of the kind of thing you could do with > an XMOS that you could not easily do with a standard MCU. Of course > /some/ people might buy an XMOS because they want EtherCAT. But demo > applications and examples are primarily marketing - it's about showing > how great the chip and tools are at doing one thing, so that customers > will use the device for their own strange needs. Far more people would > use a standard Ethernet core than an EtherCAT core, but the EtherCAT > core would be a better advertisement for the chip and toolset.Not sure why you say demo apps are about marketing. Provide a reference design and people will use it. If the EtherCAT chip is $10 and the XMOS is $5 and you can put one on a single CPU in the chip, that's a slam dunk. 
The market may not be large for the MCU world, but it sounds large for
XMOS.

> >> It is a type of MAC, but not a normal Ethernet MAC. The physical
> >> side (the PHY) is the same. But where an Ethernet MAC reads
> >> incoming packets in and stores them in ram somewhere, and collects
> >> outgoing packets from ram and sends them out, an EtherCAT MAC reads
> >> and writes bits of the packet while passing it on towards the next
> >> node.
> >
> > Ok, sounds like there is a lot more to it. I guess I'll have to
> > read up about it at some point.
> >
> > I'm happy to work on an EtherCAT design for an FPGA. I'm not really
> > in the business of selling IP but it's a possibility. Maybe program
> > it into an Ice40 chip and sell it as a product. They have
> > non-volatile, one time programmable config memory. So they can work
> > as preprogrammed devices.
>
> I have no doubt that you have the ability to make an EtherCAT core in
> an FPGA. But I have absolutely no idea whether it would be a good
> economic investment for you to do so. And while you will probably
> find it interesting to read a little about it, I would not think
> making an EtherCAT core is the kind of project you might do for fun.

The problem I usually have with this sort of thing is finding enough
spec to be able to have confidence I can produce something that will
work well. If the EtherCAT interface is as large as an Ethernet MAC,
I'm not sure I could put more than one in a small, $3 FPGA.

It's been a long time since I worked with Ethernet at a low level and I
can't recall much of the detail. I think I was working more with ATM
over T/E networks. One of the other engineers on the team was doing an
Ethernet design and I was advising him. He was funny. He knew what he
was doing, but every once in a while he wanted to be assured he was on
the right track, I think. He'd come to my office, ask a few questions.
Then as I started to sink my teeth into the problem he'd pack up and
leave.
lol Other people's designs are always more fun.> >> I said I haven't needed them, except for one project a fair number of > >> years ago. It is even longer since I have needed an FPGA (though a > >> number of our customers have FPGA boards). > > > > I've realized for a number of years FPGAs are underutilized. It's largely a matter of unfamiliarity or misconceptions. I still run into people who think they have to be high power consumption devices. They can be small, power efficient and high performance all in a single chip. Development difficulty is overstated a lot too. I find them much easier to work with than MCUs. > > > > Habit and familiarity is the key. Very few people find them easier to > work with than MCUs, even though /you/ do.Yep. I don't restrict myself to any given techniques. I don't find the XMOS good for much other than a small set of apps that can utilize the independent cores and at the same time need the complexity of larger bodies of software. Anything that doesn't need the multiprocessing can be done on an MCU and anything that doesn't need a larger code base can be done in an FPGA, likely for less money in both cases. I like the idea of the GA144 and have a couple of design ideas where it would be a good fit, I just haven't bothered to explore them much more than proving they could work. I would like to get back to my soft processor designs. But I have some real work on my test fixture I've put off for some time now.> > That's why I feel the XMOS has a limited niche between MCUs and FPGAs. They really aren't general purpose because they are pricey. An FPGA can be cheaper in many situations. But FPGAs aren't great for a full TCP/IP stack, etc. So different horses for different courses. > > > > On that last sentiment, I hope we can all agree!I've never said anything different. I just don't find a wide range of apps for the XMOS and apparently the market doesn't either. I was really surprised they only had 8 million in revenue a year or two ago. 
That doesn't even show up on the semiconductor radar. -- Rick C. ----- Get 1,000 miles of free Supercharging ----- Tesla referral code - https://ts.la/richard11209
Reply by ●April 21, 2020
On Tuesday, April 21, 2020 at 11:58:43 AM UTC-4, David Brown wrote:> On 21/04/2020 03:27, Rick C wrote: > > On Monday, April 20, 2020 at 10:53:49 AM UTC-4, David Brown wrote: > >> > >> Beyond that, you have mostly the same issues. Deadlock, livelock, > >> synchronisation - they are all something you have to consider whether > >> you are making an FPGA design, multi-tasking on one cpu, or running > >> independent tasks on independent processors. > > > > I've never heard of livelock until this discussion. Reading about it I have no idea what the parallel would be in hardware design. > > Livelock is when bits of a system are running fine, but the overall > system is not making progress. Often it is temporary and resolves > itself (unlike deadlock). > > A hardware example might be if you have a crossbar for multiple bus > masters to access a single slave device. You need some way of deciding > which master gets access when both want it simultaneously. Perhaps you > decide that master A is more important, and always gets priority. Then > if master A hogs the bus, master B never gets a chance - livelock. > > Often these kinds of things are straightforward to avoid as long as you > think about what can happen. That applies to software and hardware.Ok, but that is very simple and doesn't sound like an issue that is very hard to deal with. In general I would not even think of it as a category of problems I need to think about. It's just an obvious issue in a given design. There are many of those.> > I don't even know of an example of deadlock in hardware design. > > The simplest way to get a deadlock is to have two shared resources, and > two processes (hardware modules, software tasks, whatever) that need > both the resources, but acquire them in different orders. But you don't > usually get such simple cases, as they are so obvious.That was my point, I've never designed hardware that had "resources" to allocate. Maybe my designs are just too simple. 
I do like to keep things simple when I design. I've never not been able to do that when designing hardware. I'm familiar with deadlock from software design, just don't see it in hardware so far.> > So clearly these are not such important issues in hardware design. > > If your designs don't involve much in the way of shared resources, > you're not going see them. (The same applies in software.) It is also > perfectly possible that the way you design your systems, they naturally > don't occur - or that you think of them as bugs, hangs, blocks or stops > rather than as "deadlocks". (The same applies in software.) It is also > possible, though of course /highly/ unlikely, that your systems /do/ > have the risk of deadlocks and they just haven't happened yet. (The > same applies in software.)I understand the issues and both my hardware and software don't have such problems. My software has never been multitask. My hardware has always had simple relationships. I guess I just don't design complicated things.> > I expect the difference is that while hardware design uses signals (in fact that is the term for a "variable" in VHDL, not to be confused with a variable in VHDL which has limited scope and other limitations) it does not have resources to be allocated or seized or whatever the correct term is. If it does for synthesis, I don't know what that would be. > > > > You can have shared resources in hardware too. > > It is perhaps fair to say that the way you design hardware makes shared > resources stand out a bit more - you have explicit sharing with > cross-switches, multiplexors, etc. And that might mean that > deadlock-free solutions are mostly so obvious that you don't see them as > a potential problem. I discussed previously about thinking the > "hardware way" for shared data in software development - that also makes > it very easy to avoid deadlock.Or just not using multiple tasks accessing the same data. 
I've never found the need.> >> Task prioritising is an important issue. But it is not just for > >> multitasking on a single cpu. If you have a high priority task A that > >> sometimes has to wait for the results from a low priority task B, you > >> have an issue to deal with. That applies whether they are on the same > >> cpu or different ones. On a single cpu, you have the solution of > >> bumping up the priority for task B for a bit (priority inheritance) - on > >> different cpus, you just have to wait. > > > > How is that an issue? Isn't task A stalled when it is waiting which allows task B to run? > > Yes. But it means task A - the high priority task - can't be completed > as fast as you had wanted.If task A is waiting for task B and you don't like the delay, that's bad design. If it has to wait on task B by definition of the problem, then that's a limitation you have to live with. This isn't deadlocking unless task B is also waiting on task A. If you have this problem then you have not decomposed the problem correctly.> > I recall reading about priority issues such as "priority inversion" some time back. But not having written any multitasking software in a long time it has all vanished. I'm pretty glad of it too. It was just a lot of ugly stuff I thought. My biggest problem writing VHDL is trying to manage the verbosity. But then there are issues in the language I have simply internalized and don't think about. > > > > I have thought many times about the contradiction of my liking VHDL, a rather restrictive (in theory) and verbose language as well as a much simpler, easy and concise language as Forth. Coding for Forth is like working in my basement with hand tools. Coding in VHDL is like working on a sculpture for a museum. Never the twain shall meet. > > > > I program mostly in C and Python. It's hard to pick two software > languages that are further apart - so I understand what you mean here.-- Rick C. 
----+ Get 1,000 miles of free Supercharging ----+ Tesla referral code - https://ts.la/richard11209
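[Editor's note: the textbook deadlock David describes above - two
processes that both need two shared resources but acquire them in
different orders - is easy to sketch in software, along with its
standard fix: always acquire the resources in one canonical order. The
worker loops and iteration counts below are invented for illustration.]

```python
import threading

# Two shared resources and two workers that each need both. Acquiring
# them in opposite orders can deadlock; acquiring them in one canonical
# order (here: sorted by id()) cannot, because no circular wait is
# possible.

lock_a, lock_b = threading.Lock(), threading.Lock()
counter = {"n": 0}

def locked_pair(first, second):
    # Canonical ordering: whatever order the caller names the locks,
    # always take the lower-id() lock first.
    return sorted((first, second), key=id)

def worker(first, second, times):
    for _ in range(times):
        lo, hi = locked_pair(first, second)
        with lo, hi:               # consistent order -> no deadlock
            counter["n"] += 1

# One worker "wants" a-then-b, the other b-then-a.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, 1000))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, 1000))
t1.start(); t2.start(); t1.join(); t2.join()
print(counter["n"])                # both workers completed all iterations
```

Without `locked_pair` (each worker taking its locks in the order named),
the two threads can each hold one lock while waiting for the other - the
circular wait that defines deadlock.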
Reply by ●April 21, 2020
On Tuesday, April 21, 2020 at 12:02:37 PM UTC-4, David Brown wrote:> On 21/04/2020 04:57, Rick C wrote: > > On Monday, April 20, 2020 at 11:44:53 AM UTC-4, David Brown wrote: > >> On 19/04/2020 23:47, Rick C wrote: > > >> Do you know anything about Efinix? > > > > I've looked at them, I think I even exchanged some email with them. There are maybe four new FPGA companies suddenly (at least three in China, Anlogic, AGM and Gowin). I'm guessing some essential patents expired. Efinix is another one. I'm waiting for them to have product on the shelves someplace, preferably like Digikey. > > > > The only unique feature seems to be their "patented Quantum™ architecture" which appears to be the sort of thing where logic and routing are interchangeable which means you can't use as much logic as they claim because a lot of it has to be used for routing. Not sure. > > > > What interests you about them? > > Nothing in particular - it's just in the discussions about FPGA's and > costs, I looked at Digikey's FPGA offerings sorted by price, and one of > their devices was the cheapest of all. I'd never heard of them.There's something odd about the pricing. The $1 part is a T4F49C2 at qty 5,000. The T4F81C2 is $4.50 at the same qty. The only difference is the package with 49 balls/33 I/Os vs. 81 balls/59 I/Os. Double the number of I/Os and get a 4x price increase with the same die??? My main concern with their parts is the talk of logic and routing being a tradeoff. So how many logic elements of the 3888 available are typically usable? Until I know that I can't even estimate design issues in my head. I guess the way to answer that question is to try a design in their tools. I have a design using over 80% of a 3,072 LUT4 device. I could try using that in the T4F part and see if it fits. It's a lot of files as I tried to make many smaller modules to get fine grain testing. Or maybe I'll construct a 1 file test case with some sort of generated logic in just a few lines. 
Trouble is the routing issue. The design needs to have a typical amount of routing since the purpose of this is to see how much logic is taken away to perform the routing. My 80%, 3k LUT design may end up needing the 8k Efinix or even the 13k part. Who knows? They also don't have packages I like. That's the FPGA story of my life. Gowin has some potential offering good packages, but I don't know when they will have the goldilocks combination available (package and features like a CPU). They are sold by Edge in the US who has a crappy web site. Digikey/Mouser are soooo much better. -- Rick C. ---+- Get 1,000 miles of free Supercharging ---+- Tesla referral code - https://ts.la/richard11209