
Custom CPU Designs

Started by Rick C April 16, 2020
On Friday, April 17, 2020 at 12:54:35 AM UTC-4, Clifford Heath wrote:
> On 17/4/20 1:01 pm, Rick C wrote:
>> On Thursday, April 16, 2020 at 10:35:07 PM UTC-4, Clifford Heath wrote:
>>>
>>> Some US language is ancient English (but modern English has moved on),
>>> and sometimes its the reverse. "Aluminium/Aluminum" is an example where
>>> English moved on (to improve standardisation).
>>
>> Sorry, can you explain the aluminium/aluminum thing? I know some people
>> pronounce it with an accent (not saying who) but I don't get the English
>> moved on thing.
>
> Aluminum is the original name, which Americans retained when the English
> decided to standardise on the -ium extension that was being used with
> most other metals already.
>
> That's my understanding anyhow.
>
> CH
OK, thanks.

-- Rick C.
--- Get 1,000 miles of free Supercharging
--- Tesla referral code - https://ts.la/richard11209
On 17/04/2020 03:37, Rick C wrote:
> On Thursday, April 16, 2020 at 8:13:45 PM UTC-4, Paul Rubin wrote:
>> Grant Edwards <invalid@invalid.invalid> writes:
>>> Definitely. The M-class parts are so cheap, there's not much
>>> point in thinking about doing it in an FPGA.
>>
>> Well I think the idea is already you have other stuff in the FPGA,
>> so you save a package and some communications by dropping in a
>> softcore rather than using an external MCU. I'm surprised that
>> only high end FPGA's currently have hard MCU's already there. Just
>> like they have DSP blocks, ram blocks, SERDES, etc., they might as
>> well put in some CPU blocks.
>
> There's a chip that goes the other direction. The GA144 has 144 very
> fast, very tiny CPUs in an FPGA fashion with no FPGA fabric.
>
> It's not a popular chip because most people are invested in the
> single CPU, lots of memory paradigm and focusing their efforts on
> making a single CPU work like it's many CPUs doing many different
> things. Using this chip requires a very different mindset because of
> the memory size limitations which are inherent in the CPU design.
Could it be that the chip is not popular because it is not a good fit for many applications? A single fast core is more flexible and useful than many small cores, while an FPGA can do far more than the tiny CPUs in the GA144. You need outstanding benefits from something like the GA144 before it makes sense to use it - and that's not something I have seen when I have looked through the website and read the information. It is not enough to simply be an interesting idea - it is not even enough to be useful or better than alternatives. In order to capture customers, it has to be so hugely better that it is worth the cost for people to learn a very different type of architecture, a very different programming language, and a very different way of looking at problems.

And frankly, the GA144 is not impressive or better at all. Look at the application notes and examples - you've got a 10 Mbit Ethernet controller that requires a third of the hardware of the GA144. An MD5 hash takes over a tenth of the hardware and has a speed similar to a cheaper microcontroller from the year the GA144 was made, 2012. And it all requires programming in a language where the /colour/ of the words is semantically and syntactically critical.

It is nice to see people coming up with new ideas and new kinds of architectures, but there have to be big benefits if it is going to be anything more than an academic curiosity. The GA144 is not it.

(The XMOS is far more realistic and has greater potential - that's a family of devices that people can, and do, use.)
> That said, FPGA makers are rather stuck in the "larger is better" rut
> mostly because their bread and butter customer base, the comms
> vendors, want large FPGAs with fast ARM processors if any.
It is those customers that pay for the development of the whole thing. Small customers then get the benefits of that.
> I recall a rather old Xilinx line, Virtex II Pro had Power PC CPUs I
> think. Either two or only one or none in the smallest device.
> Pre-ARM days I guess.
Yes, PPC cores used to be popular on FPGAs and ASICs. MIPS were also popular in ASICs, especially in the communications and network industry, but I don't remember ever hearing about them on an FPGA. I wonder why not - they would be at least as good a fit as PPC.
> Atmel had a line of not very well received FPGAs. They had a version
> which incorporated an 8 bitter, possibly a AVR, but I don't think it
> was that. Too long ago to remember. It had a 4-5 letter acronym for
> a name. SLIC or something like that.
I remember those, though I never used them. Yes, you had an AVR core with an FPGA fabric. I'm guessing there was a big customer that used them - a great many AVR devices, especially in their earlier days, came from Atmel's big ASIC customers. They basically made parts to order, with the combination of core, memories, peripherals and packaging that a customer wanted for mass production, and then they generalised some of the parts and made them available for public purchase.
> Actel (later Microsemi, now Microchip) has some FPGAs with an ARM CPU.
> A bit pricey for me and they only come in large packages or very fine
> pitch BGAs.
On 17/04/2020 06:54, Clifford Heath wrote:
> On 17/4/20 1:01 pm, Rick C wrote:
>> On Thursday, April 16, 2020 at 10:35:07 PM UTC-4, Clifford Heath wrote:
>>>
>>> Some US language is ancient English (but modern English has moved on),
>>> and sometimes its the reverse. "Aluminium/Aluminum" is an example where
>>> English moved on (to improve standardisation).
>>
>> Sorry, can you explain the aluminium/aluminum thing? I know some
>> people pronounce it with an accent (not saying who) but I don't get
>> the English moved on thing.
>
> Aluminum is the original name, which Americans retained when the English
> decided to standardise on the -ium extension that was being used with
> most other metals already.
>
> That's my understanding anyhow.
>
> CH
Yes, that is correct (AFAIK). This is one of the differences between spoken English and spoken American that always annoys me when I hear it - I don't really know why, and of course it is unfair and biased. The other one that gets me is when Americans pronounce "route" as "rout" instead of "root". A "rout" is when one army chases another army off the battlefield, or a groove cut into a piece of wood. It is not something you do with a network packet or pcb track! I'm sure Americans find it equally odd or grating when they hear British people "rooting" pcbs and network packets. :-)
On 17/04/20 09:02, David Brown wrote:
> On 17/04/2020 03:37, Rick C wrote:
>> [snip]
>>
>> It's not a popular chip because most people are invested in the
>> single CPU, lots of memory paradigm and focusing their efforts on
>> making a single CPU work like it's many CPUs doing many different
>> things. Using this chip requires a very different mindset because of
>> the memory size limitations which are inherent in the CPU design.
>
> [snip]
>
> It is nice to see people coming up with new ideas and new kinds of
> architectures, but there have to be big benefits if it is going to be
> anything more than an academic curiosity. The GA144 is not it.
>
> (The XMOS is far more realistic and has greater potential - that's a
> family of devices that people can, and do, use.)
Yes indeed.

I was beguiled by the concept of the GA144, but felt the per-core limitations were too significant when compared with an FPGA or more conventional processor.

I was neither put off nor attracted by programming it in Forth. OTOH those that think Forth is the bee's knees will gravitate to the GA144.

As you say, the XMOS /ecosystem/ is far more compelling, partly because it has excellent /integration/ between the hardware, the software and the toolchain. The latter two are usually missing.
On 17/04/2020 11:49, Tom Gardner wrote:
> On 17/04/20 09:02, David Brown wrote:
>> On 17/04/2020 03:37, Rick C wrote:
>>> [snip]
>>
>> [snip]
>>
>> (The XMOS is far more realistic and has greater potential - that's a
>> family of devices that people can, and do, use.)
>
> Yes indeed.
>
> I was beguiled by the concept of the GA144, but felt the
> per-core limitations were too significant when compared
> with an FPGA or more conventional processor.
>
> I was neither put off nor attracted by programming it in
> Forth. OTOH those that think Forth is the bee's knees will
> gravitate to the GA144.
Perhaps Rick knows of people who have actually used the devices?

(While I have only programmed a little in Forth, and that was decades ago, I appreciate it is a language with a number of nice features. But I expect Forth programmers to prefer modern Forth tools - writing the code in files using a good editor, laying it out well, picking good names, adding sensible comments, using static analysis, etc. I can't see the appeal in a system based on blocks designed to fit on floppy drives and screens that were outdated 30 years ago.)
> As you say, the XMOS /ecosystem/ is far more compelling,
> partly because it has excellent /integration/ between the
> hardware, the software and the toolchain. The latter two
> are usually missing.
Agreed. And the XMOS folk have learned and improved. With the first chips, they proudly showed off that you could make a 100 MBit Ethernet controller in software on an XMOS chip. Then it was pointed out to them that - impressive achievement though it was - it was basically useless because you didn't have the resources left to use it for much, and hardware Ethernet controllers were much cheaper. So they brought out new XMOS chips with hardware Ethernet controllers. The same thing happened with USB.

There is a lot to like about XMOS devices and tools, but they still strike me as a solution in search of a problem. An elegant solution, perhaps, but still missing a problem. We used them for a project many years ago for a USB Audio Class 2 device. There simply were no realistic alternatives at the time, but I can't say the XMOS solution was a good one. The device has far too little memory to make sensible buffers (this still applies to XMOS devices, last I looked), and the software at the time was painful (this I believe has improved significantly). If we were making a new version of the product, we'd drop the XMOS device in an instant and use an off-the-shelf chip instead.
On 17/04/20 14:44, David Brown wrote:
> On 17/04/2020 11:49, Tom Gardner wrote:
>> On 17/04/20 09:02, David Brown wrote:
>>> [snip]
>>
>> [snip]
>>
>> As you say, the XMOS /ecosystem/ is far more compelling,
>> partly because it has excellent /integration/ between the
>> hardware, the software and the toolchain. The latter two
>> are usually missing.
>
> Agreed. And the XMOS folk have learned and improved. With the first
> chips, they proudly showed off that you could make a 100 MBit Ethernet
> controller in software on an XMOS chip. Then it was pointed out to them
> that - impressive achievement though it was - it was basically useless
> because you didn't have the resources left to use it for much, and
> hardware Ethernet controllers were much cheaper. So they brought out
> new XMOS chips with hardware Ethernet controllers. The same thing
> happened with USB.
It looks like a USB controller needs ~8 cores, which isn't a problem on a 16 core device :)
> There is a lot to like about XMOS devices and tools, but they still
> strike me as a solution in search of a problem. An elegant solution,
> perhaps, but still missing a problem. We used them for a project many
> years ago for a USB Audio Class 2 device. There simply were no
> realistic alternatives at the time, but I can't say the XMOS solution
> was a good one. The device has far too little memory to make sensible
> buffers (this still applies to XMOS devices, last I looked), and the
> software at the time was painful (this I believe has improved
> significantly). If we were making a new version of the product, we'd
> drop the XMOS device in an instant and use an off-the-shelf chip
> instead.
I certainly wouldn't want to comment on your use case.

To me a large part of the attraction is that you can /predict/ the /worst/ case latency and jitter (and hence throughput), in a way that is difficult in a standard MCU and easy in an FPGA.

To that extent it allows FPGA-like performance with "traditional" software development tools and methodologies. Plus a little bit of "thinking parallel" that everybody will soon /have/ to be doing :)
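To put a rough number on that, here is a back-of-the-envelope sketch in plain C. The figures are assumptions chosen for illustration only - a 500 MHz core shared round-robin by 8 logical cores, and a 200-instruction handler - not the actual sharing rules or clock of any particular XMOS part:

/* Back-of-the-envelope worst-case response arithmetic.  All numbers
 * are illustrative assumptions, not figures for a real device. */
#include <stdio.h>

int main(void)
{
    const double f_core_hz       = 500e6;  /* assumed core clock (Hz)   */
    const double n_logical_cores = 8.0;    /* assumed hardware threads  */
    const double handler_insns   = 200.0;  /* assumed handler length    */

    /* If each logical core is guaranteed 1/N of the issue slots, its
     * minimum sustained instruction rate is fixed by construction.     */
    double min_mips = f_core_hz / n_logical_cores / 1e6;     /* 62.5    */

    /* Worst-case time from event to end of handler, in microseconds.   */
    double worst_case_us = handler_insns / min_mips;         /* 3.2 us  */

    printf("guaranteed rate : %.1f MIPS per logical core\n", min_mips);
    printf("worst-case time : %.2f us for a %.0f-instruction handler\n",
           worst_case_us, handler_insns);
    return 0;
}

The point is simply that the worst case falls out of fixed arithmetic on the design, rather than being something you measure and hope about.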
Paul Rubin <no.email@nospam.invalid> wrote:
> Grant Edwards <invalid@invalid.invalid> writes:
>> Definitely. The M-class parts are so cheap, there's not much point in
>> thinking about doing it in an FPGA.
>
> Well I think the idea is already you have other stuff in the FPGA, so
> you save a package and some communications by dropping in a softcore
> rather than using an external MCU. I'm surprised that only high end
> FPGA's currently have hard MCU's already there. Just like they have DSP
> blocks, ram blocks, SERDES, etc., they might as well put in some CPU
> blocks.
I think part of the problem is the ARM licensing cost - if the licence cost is (random number) 5% of the silicon sticker price, that's fine when it's a $1 MCU, but when it's a $10000 FPGA that hurts. I understand all the Stratix 10 parts have the Cortex-A53 on there, but the licence fee is only paid when the ARM core is enabled. This explains why the SX versions of the FPGA are (or were initially) much harder to get hold of, because nobody wants to pay extra.

RISC-V is likely to change this, because the licence costs can now be zero. However the additional question is what kind of core: there isn't a lot to be gained putting a 3-stage pipeline core in hard logic - it'll be small but the clock won't be very high, and the area is wasted for someone who doesn't need it. If you're interested in a high clock it gets more complex and starts dragging in caches etc - it starts looking more like an application core. At which point you might as well pull in an MMU and have it run Linux, but then you need external memory, I/O and storage too.

I think it would be hard for the synthesis toolchains to make good use of a CPU core, but there is value in having them there to be specially programmed, like SERDES or FPU blocks.

Theo
On Friday, April 17, 2020 at 4:03:06 AM UTC-4, David Brown wrote:
> On 17/04/2020 03:37, Rick C wrote:
>> [snip]
>>
>> It's not a popular chip because most people are invested in the
>> single CPU, lots of memory paradigm and focusing their efforts on
>> making a single CPU work like it's many CPUs doing many different
>> things. Using this chip requires a very different mindset because of
>> the memory size limitations which are inherent in the CPU design.
>
> Could it be that the chip is not popular because it is not a good fit
> for many applications? A single fast core is more flexible and useful
> than many small cores, while an FPGA can do far more than the tiny CPUs
> in the GA144.
You are the poster child for my statement about mindset. A single fast CPU is harder to program than many fast CPUs. Programmers have to learn a lot in order to perform multitasking on a single CPU. It's a difficult and error prone design exercise. Transition that to many CPUs and the vast majority of those problems go away. I recall one designer complaining there was not nearly enough I/O capability to keep all the CPUs supplied with data, completely missing the idea that the CPUs are no longer a precious resource that you must squeeze every last drop of performance from.
> You need outstanding benefits from something like the GA144
> before it makes sense to use it - and that's not something I have seen
> when I have looked through the website and read the information. It is
> not enough to simply be an interesting idea - it is not even enough to
> be useful or better than alternatives. In order to capture customers,
> it has to be so hugely better that it is worth the cost for people to
> learn a very different type of architecture, a very different
> programming language, and a very different way of looking at problems.
Yes, you are agreeing with me I think. Designers are comfortable with the present paradigm and have trouble even conceiving of how to use this device. That's not the same thing as the device not being suitable for applications... although it is definitely not a shoe that fits every foot. However, there are many, many uses for it. One area where it fits very well is a signal processing app where data flows through the chip. This would be a nearly perfect match to the architecture, allowing even a designer with little imagination to use the device.

I considered some apps for this device. One was an attached oscilloscope. I believe a design would suit this device pretty well. My preference would be to ship the data over USB to a PC for display, which would require a USB interface which no one has yet developed, but connecting to an attached display would be simple. An external memory would only be needed if advanced processing functions were required. Most could be done right on the chip.
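To make the "data flows through the chip" idea concrete, here is a minimal sketch in plain C (not GA144 Forth, and not based on any GreenArrays code) of a capture pipeline in which each stage owns only a few words of state and hands its result to the next stage - roughly the mindset such an architecture asks for. The stage names, sizes and numbers are invented for illustration:

/* Illustrative only: a three-stage sample pipeline where each stage
 * keeps just a few words of state, mimicking "one tiny CPU per job".
 * Plain single-threaded C; the stage functions stand in for nodes
 * that would each run on their own core. */
#include <stdint.h>
#include <stdio.h>

#define TAPS 4

/* Stage 1: pretend ADC capture - here just a wrapping ramp. */
static uint16_t capture(void)
{
    static uint16_t t = 0;
    return (uint16_t)(t++ * 7u);
}

/* Stage 2: boxcar filter with a tiny local buffer, like a node's RAM. */
static uint16_t filter(uint16_t sample)
{
    static uint16_t hist[TAPS];
    static unsigned idx;
    uint32_t sum = 0;

    hist[idx] = sample;
    idx = (idx + 1) % TAPS;
    for (unsigned i = 0; i < TAPS; i++)
        sum += hist[i];
    return (uint16_t)(sum / TAPS);
}

/* Stage 3: decimate - only pass every 8th filtered sample downstream. */
static int decimate(uint16_t sample, uint16_t *out)
{
    static unsigned count;
    if (++count % 8 == 0) { *out = sample; return 1; }
    return 0;
}

int main(void)
{
    for (int i = 0; i < 64; i++) {
        uint16_t y;
        if (decimate(filter(capture()), &y))
            printf("%u\n", (unsigned)y);  /* stand-in for "ship to USB/display" */
    }
    return 0;
}

On a many-core dataflow chip each stage would presumably sit on its own node, with the function calls replaced by inter-node communication.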
> And frankly, the GA144 is not impressive or better at all. Look at the
> application notes and examples - you've got a 10 Mbit Ethernet
> controller that requires a third of the hardware of the GA144. An MD5
> hash takes over a tenth of the hardware and has a speed similar to a
> cheaper microcontroller from the year the GA144 was made, 2012. And it
> all requires programming in a language where the /colour/ of the words
> is semantically and syntactically critical.
Yes, very clearly showing your bias and lack of imagination. I remember the first time I used an IDE for code development. The damn thing barely worked and was horribly slow. I was happy with manually tracing code on printouts. lol
> It is nice to see people coming up with new ideas and new kinds of
> architectures, but there have to be big benefits if it is going to be
> anything more than an academic curiosity. The GA144 is not it.
>
> (The XMOS is far more realistic and has greater potential - that's a
> family of devices that people can, and do, use.)
Ah, yes, the ever so niched XMOS. An expensive replacement for the many faster and simpler devices that everyone is comfortable with. These devices have the same issues of user comfort the GA144 has and on top of that are only suited to a small niche where they have any real advantage over CPUs or FPGAs. I believe we've had this discussion before.
>> That said, FPGA makers are rather stuck in the "larger is better" rut
>> mostly because their bread and butter customer base, the comms
>> vendors, want large FPGAs with fast ARM processors if any.
>
> It is those customers that pay for the development of the whole thing.
> Small customers then get the benefits of that.
Not if FPGAs that suit them are never developed. That's the problem. The two large FPGA makers aren't interested in the (I assume) smaller markets which require smaller devices in smaller, easy to use packages, potentially combined with CPUs such as ARM CM4 or RISC-V. I have seen devices from Microchip with a RISC-V but they are typically far too pricey to end up in any of my designs. This device will be out later this year.

-- Rick C.
--+ Get 1,000 miles of free Supercharging
--+ Tesla referral code - https://ts.la/richard11209
On Friday, April 17, 2020 at 4:38:04 AM UTC-4, David Brown wrote:
> On 17/04/2020 06:54, Clifford Heath wrote:
>> [snip]
>>
>> Aluminum is the original name, which Americans retained when the English
>> decided to standardise on the -ium extension that was being used with
>> most other metals already.
>
> Yes, that is correct (AFAIK). This is one of the differences between
> spoken English and spoken American that always annoys me when I hear it
> - I don't really know why, and of course it is unfair and biased. The
> other one that gets me is when Americans pronounce "route" as "rout"
> instead of "root". A "rout" is when one army chases another army off
> the battlefield, or a groove cut into a piece of wood. It is not
> something you do with a network packet or pcb track!
>
> I'm sure Americans find it equally odd or grating when they hear British
> people "rooting" pcbs and network packets.
>
> :-)
I've seen the word "rooted" used in a much more vulgar sense in too many British works to think you don't know why that just sounds wrong when applied to PWBs.

-- Rick C.
-+- Get 1,000 miles of free Supercharging
-+- Tesla referral code - https://ts.la/richard11209
On 17/04/2020 16:23, Tom Gardner wrote:
> On 17/04/20 14:44, David Brown wrote:
>> On 17/04/2020 11:49, Tom Gardner wrote:
>>> On 17/04/20 09:02, David Brown wrote:
>>> As you say, the XMOS /ecosystem/ is far more compelling,
>>> partly because it has excellent /integration/ between the
>>> hardware, the software and the toolchain. The latter two
>>> are usually missing.
>>
>> Agreed. And the XMOS folk have learned and improved. With the first
>> chips, they proudly showed off that you could make a 100 MBit Ethernet
>> controller in software on an XMOS chip. Then it was pointed out to
>> them that - impressive achievement though it was - it was basically
>> useless because you didn't have the resources left to use it for much,
>> and hardware Ethernet controllers were much cheaper. So they brought
>> out new XMOS chips with hardware Ethernet controllers. The same thing
>> happened with USB.
>
> It looks like a USB controller needs ~8 cores, which isn't
> a problem on a 16 core device :)
I've had another look, and I was mistaken - these devices only have the USB and Ethernet PHYs, not the MACs, and thus require a lot of processor power, pins, memory and other resources. It doesn't need 8 cores, but the whole thing just seems so inefficient. No one is going to spend the extra cost for an XMOS with a USB PHY, so why not put a hardware USB controller on the chip? The silicon cost would surely be minor, and it would save a lot of development effort and free up resources that are useful for other tasks. The same goes for Ethernet. Just because you /can/ make these things in software on the XMOS devices does not make it a good idea.

Overall, the thing that bugs me about XMOS is that you can write very simple, elegant code for the cores to handle various tasks. But when you do that, you run out of cores almost immediately. So you have to write your code in a way that implements your own scheduler, losing a major part of the point of the whole system. Or you use the XMOS FreeRTOS port on one of the virtual cores - in which case you could just switch to a Cortex-M microcontroller with hardware USB, Ethernet, PWM, UART, etc. at a fraction of the price.

If the XMOS devices and software had a way of neatly multi-tasking /within/ a single virtual core, while keeping the same kind of inter-task communication and other benefits, then they would have something I could see being very nice.
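For what it's worth, the hand-rolled scheduling described above usually ends up looking something like this minimal cooperative round-robin sketch in plain C - not XC and not any XMOS library; the task names and structure are invented purely for illustration:

/* Minimal cooperative "scheduler" of the kind you end up hand-rolling
 * when several jobs must share one core.  Each task is a run-to-
 * completion step function plus its own state.  Not XMOS API. */
#include <stdio.h>

struct task {
    const char *name;                /* for debugging/identification   */
    void      (*step)(void *state);  /* must return quickly            */
    void       *state;
};

static void blink_step(void *state)
{
    unsigned *phase = state;
    if ((*phase)++ % 1000 == 0)
        printf("[%u] toggle LED\n", *phase);
}

static void uart_step(void *state)
{
    unsigned *polls = state;
    /* pretend we polled a UART status flag and occasionally got a byte */
    if (*polls % 3000 == 0)
        printf("[%u] got UART byte\n", *polls);
    (*polls)++;
}

int main(void)
{
    unsigned blink_state = 0, uart_state = 1;
    struct task tasks[] = {
        { "blink", blink_step, &blink_state },
        { "uart",  uart_step,  &uart_state  },
    };

    /* The whole "scheduler": call each step in turn, forever (bounded
     * here so the example terminates). */
    for (int tick = 0; tick < 10000; tick++)
        for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
            tasks[i].step(tasks[i].state);

    return 0;
}

Once you have written that, you have re-implemented, by hand and with weaker guarantees, the very multi-tasking the hardware was supposed to give you for free.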
>> There is a lot to like about XMOS devices and tools, but they still
>> strike me as a solution in search of a problem. An elegant solution,
>> perhaps, but still missing a problem. We used them for a project many
>> years ago for a USB Audio Class 2 device. There simply were no
>> realistic alternatives at the time, but I can't say the XMOS solution
>> was a good one. The device has far too little memory to make sensible
>> buffers (this still applies to XMOS devices, last I looked), and the
>> software at the time was painful (this I believe has improved
>> significantly). If we were making a new version of the product, we'd
>> drop the XMOS device in an instant and use an off-the-shelf chip
>> instead.
>
> I certainly wouldn't want to comment on your use case.
As I said, it was a while ago, when XMOS were relatively new - I assume the software, libraries and examples are better now than at that time. But for applications like ours, you can just get a CMedia chip and wire it up - no matter how good XMOS tools have become, they don't beat that. (And then all the development budget can be spent on trying to get drivers to work on idiotic Windows systems...)
> To me a large part of the attraction is that you can
> /predict/ the /worst/ case latency and jitter (and hence
> throughput), in a way that is difficult in a standard MCU
> and easy in an FPGA.
For standard MCU's, you aim to do this by using hardware peripherals (timers, PWM blocks, communication controllers, etc.) for the most timing-critical stuff. Then you don't need it in the software.
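As a deliberately generic illustration of that division of labour, here is a sketch in C of handing a PWM waveform over to a hypothetical timer peripheral, so the timing-critical edges are generated in hardware and software only makes slow, non-critical updates. The register layout and names (hyp_tim_t, HYP_TIM0) are invented, not taken from any real MCU, and the "peripheral" is instantiated as an ordinary struct so the example runs on a PC:

/* Sketch of pushing the timing-critical part of a job into a timer/PWM
 * peripheral so software only does slow, non-critical updates.  The
 * register names and layout are invented; on a real MCU, HYP_TIM0
 * would be a fixed peripheral address from the vendor header. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    volatile uint32_t CTRL;      /* enable bit, clock select          */
    volatile uint32_t PERIOD;    /* counts per PWM period             */
    volatile uint32_t COMPARE;   /* counts the output stays high      */
} hyp_tim_t;

static hyp_tim_t hyp_tim0;                /* stands in for the device */
#define HYP_TIM0    (&hyp_tim0)
#define TIM_ENABLE  (1u << 0)
#define TIM_CLK_HZ  80000000u             /* assumed timer clock      */
#define PWM_FREQ_HZ 20000u                /* desired PWM frequency    */

static void pwm_init(void)
{
    uint32_t period = TIM_CLK_HZ / PWM_FREQ_HZ;   /* 4000 counts      */
    HYP_TIM0->PERIOD  = period - 1u;
    HYP_TIM0->COMPARE = period / 2u;              /* start at 50%     */
    HYP_TIM0->CTRL   |= TIM_ENABLE;
    /* From here on, the edges are generated by hardware; their jitter
     * no longer depends on anything the software does.               */
}

/* The only run-time software involvement: slow, non-critical updates. */
static void pwm_set_duty_percent(uint32_t pct)
{
    if (pct > 100u)
        pct = 100u;
    HYP_TIM0->COMPARE = ((TIM_CLK_HZ / PWM_FREQ_HZ) * pct) / 100u;
}

int main(void)
{
    pwm_init();
    pwm_set_duty_percent(25u);
    printf("PERIOD=%u COMPARE=%u CTRL=%u\n",
           (unsigned)HYP_TIM0->PERIOD,
           (unsigned)HYP_TIM0->COMPARE,
           (unsigned)HYP_TIM0->CTRL);
    return 0;
}

The detail differs per vendor, but the pattern is the same: configure once, then the worst-case timing is a property of the peripheral, not of the software's scheduling.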
> To that extent it allows FPGA-like performance with "traditional"
> software development tools and methodologies. Plus a little
> bit of "thinking parallel" that everybody will soon /have/ to
> be doing :)
It's a nice idea, and I'm sure XMOS has some good use-cases. But I can't help feeling they have something that is /almost/ a good system - with a bit more, they could be very much more useful.