
Re: cool article, interesting quote

Started by Spehro Pefhany April 21, 2006
On Thu, 20 Apr 2006 20:26:38 -0700, John  Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

>At something like 14 klines and lots going on, it got a bit hairy. We
>used an Xport module for the ethernet option. Once TCP/IP stacks get
>involved internally, I concede the language debate.
>
>John
You could still program the parts you want to in assembly and just link
in the TCP/IP, GUI or whatever HLL code. Particularly for the ISRs. It's
more like an analog or fuzzy variable rather than a binary choice.

Best regards,
Spehro Pefhany
--
"it's the network..."  "The Journey is the reward"
speff@interlog.com  Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers: http://www.speff.com
On Thu, 20 Apr 2006 21:32:50 -0700, "Joel Kolstad"
<JKolstad71HatesSpam@yahoo.com> wrote:

>"John Larkin" <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote in message
>news:57kg42dn7j164qkeeurs109sgt723psul8@4ax.com...
>> This was also programmed in assembly, and arguably shouldn't have
>> been:
>>
>> http://www.highlandtechnology.com/DSS/P400DS.html
>
>Would using a high-level language have allowed it to display lower case? :-)
The Help system is upper/lowercase, and has hyperjumps.

John
On Fri, 21 Apr 2006 13:22:09 -0700, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

><snip>
>Maybe we should change the way we look at processors. Why not have 256
>or 1024 CPUs, with a CPU per process/interrupt/thread?
I suppose there are times when that works well. A price to pay is the
added communication issues. However, I don't see the problem with
context switching, either. So if there is enough performance in a CPU,
you can achieve pretty close to the same thing without lots of CPUs and
their associated communications paths.
>No context switching, no real interrupts as such. Some of the CPUs
>could be lightweights, low-power blocks to run slow, dumb stuff, and
>some could be real number-crunchers.
It takes me only a few minutes to write a cooperative task switch in
assembly on most processors, and an hour to test it thoroughly. In less
than a day, I can (from scratch) set up and support process creation for
any valid C function (with parameter passing to start it), process
destruction, precise sleeping with delta queues, safe message passing,
and per-process exception handling (should that be desired at all) -- in
C or assembly. (If you need pre-emption, that takes more resource and
time and adds some risk regarding pre-existing libraries.) But context
switching isn't really much of a barrier to overcome.
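[Editor's note: the "precise sleeping with delta queues" mentioned above can be sketched in C. This is an illustrative reconstruction of the general technique, not Jon's code; the run-to-completion task model, names, and sizes are all invented for the example.]

```c
#include <assert.h>
#include <stddef.h>

#define MAX_TASKS 4

typedef void (*task_fn)(void);

struct task {
    task_fn fn;
    unsigned delay;            /* ticks relative to the task ahead of it */
    struct task *next;
};

static struct task pool[MAX_TASKS];
static struct task *sleeping;  /* delta queue: soonest wake-up first */
static size_t used;

/* Schedule fn to run after 'ticks' timer ticks.  Each queued delay is a
   delta from its predecessor, so the timer ISR only ever decrements the
   head element -- the classic delta-queue trick. */
static void task_sleep(task_fn fn, unsigned ticks)
{
    struct task *t = &pool[used++];
    struct task **pp = &sleeping;
    t->fn = fn;
    while (*pp && (*pp)->delay <= ticks) {
        ticks -= (*pp)->delay;
        pp = &(*pp)->next;
    }
    t->delay = ticks;
    t->next = *pp;
    if (t->next)
        t->next->delay -= ticks;
    *pp = t;
}

/* Called once per timer tick: age the head, run everything that expired. */
static void tick(void)
{
    if (!sleeping)
        return;
    if (sleeping->delay)
        sleeping->delay--;
    while (sleeping && sleeping->delay == 0) {
        struct task *t = sleeping;
        sleeping = t->next;
        t->fn();
    }
}

/* Two toy tasks that record their firing order. */
static char fired[8];
static int nfired;
static void task_a(void) { fired[nfired++] = 'A'; }
static void task_b(void) { fired[nfired++] = 'B'; }
```

Note the payoff: no matter how many tasks are asleep, the per-tick work is O(1) until something actually expires, which is why the technique suits small processors.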
>The OS resides on a "manager" CPU, and assigns tasks to resource CPUs.
>It never executes application code, has no drivers except maybe a
>console interface, never gets exposed to nasty virus-laden JPEG files,
>and, once booted, its memory space is not accessible by any other
>process, even DMA.
Transputer comes to mind, the concept of RPC (remote procedure calls),
etc.

Some folks were working on an operating system that would automatically
start out a full application running on a single CPU, but diffuse the
routines out dynamically over a network of CPUs in a kind of simulated
annealing process. It starts out CPU-bound on the 1st CPU, notices that
the communication links are empty, decides to use a link to "send" code,
then starts passing parameters to the function calls that way.
Eventually, the 1st processor decides that its CPU load and
communications exchanges are in equilibrium -- occasionally accepting
code from nearby processors, occasionally sending out code to nearby
processors, always passing parameters around, etc. Each nearby processor
does about the same thing. Eventually it all evens out -- never static,
sometimes sending out a routine on a random basis, sometimes receiving
one.

No idea what happened with the idea.
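[Editor's note: the per-node equilibrium decision described above might look something like this toy heuristic. The load model and thresholds are entirely invented for illustration; nothing here comes from the actual (unnamed) system.]

```c
#include <assert.h>

/* Toy sketch of the diffusion decision: a node pushes a routine to a
   neighbour only when it is compute-bound, its link has headroom, and
   the neighbour is meaningfully less busy.  All numbers are made up. */
struct node {
    double cpu_load;   /* fraction of time spent executing, 0..1 */
    double link_load;  /* fraction of link bandwidth in use, 0..1 */
};

/* Return 1 if this node should migrate a routine to the neighbour. */
int should_migrate(const struct node *self, const struct node *neighbour)
{
    return self->cpu_load > 0.8
        && self->link_load < 0.5
        && self->cpu_load - neighbour->cpu_load > 0.2;
}
```

Run on every node against every neighbour, a rule of this shape never reaches a static assignment -- which matches the "never static, always trading routines" behaviour described.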
>And isn't it time we got rid of virtual memory?
I like it when running NT-based Windows -- it helps isolate those darned
programs from crashing each other. It also helps on such workstations in
that each program can be compiled against a "standard" view of the
computer system, and in virtualizing resources so that each program can
take a simplified view of them. For instrumentation? Probably not so
good.

Jon
On Fri, 21 Apr 2006 13:22:09 -0700, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

>Maybe we should change the way we look at processors. Why not have 256
>or 1024 CPUs, with a CPU per process/interrupt/thread? No context
>switching, no real interrupts as such. Some of the CPUs could be
>lightweights, low-power blocks to run slow, dumb stuff, and some could
>be real number-crunchers.
Assigning a processor to each device ("interrupt") would really be a
nice thing, since you could then write simple busy-loop style programs
constantly polling the HW registers, instead of writing complex
interrupt handlers that must save and restore the whole program state at
every interrupt, making the code quite complex and thus error prone.

In fact this is not anything new; look at the IBM mainframes, which
contained a lot of I/O processors in the SNA architecture. For instance,
the remote terminal concentrators contained much of the intelligence of
the block mode terminals, which only sent the modified fields to the
mainframe when you hit the Send button -- a predecessor to HTML forms
:-). Anyway, the raw computational power of many IBM mainframes was
quite minimal for that era; however, trying to port these mainframe
applications to single-CPU platforms, such as VAXes, ran in many cases
into an I/O bottleneck, since the main CPU had to deal with file system
management, indexed file processing, and file (de)compression.

Since it is currently possible to integrate a huge number of transistors
into a chip and the main problem seems to be how to use them
effectively, I would welcome the idea of using dedicated trivial
processors for specialised tasks, such as I/O.

Paul
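[Editor's note: the busy-loop style Paul describes can be sketched in C. The register layout below is simulated so the idea compiles on a host; on real hardware `struct uart` would be a volatile memory-mapped peripheral, and the addresses and bit names are invented.]

```c
#include <assert.h>
#include <stdint.h>

/* Simulated UART registers.  On a dedicated I/O processor this would
   be a volatile pointer to the peripheral's real address. */
struct uart {
    uint8_t status;            /* bit 0: receive data ready */
    uint8_t data;
};
enum { RX_READY = 0x01 };

/* One iteration of the polling loop: consume a byte if one is ready.
   A processor dedicated to this device would simply wrap this body in
   for(;;) -- no interrupt entry/exit, no context to save or restore. */
int uart_poll_once(struct uart *u, uint8_t *out)
{
    if (!(u->status & RX_READY))
        return 0;
    *out = u->data;
    u->status &= (uint8_t)~RX_READY;  /* many UARTs clear READY on read */
    return 1;
}
```

The whole "driver" is straight-line code with no shared state to protect, which is exactly the simplification Paul is pointing at.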
On Fri, 21 Apr 2006 08:40:33 -0700, "Joel Kolstad"
<JKolstad71HatesSpam@yahoo.com> wrote:

>"John Larkin" <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote in message
>news:neuh425egj3ue3ffrf1b4dakq680fgl5c7@4ax.com...
>> Hey, all I do is design and build and sell very-high-margin aerospace
>> electronics. Lots of it. What the hell does inheriting classes of pure
>> virtual member functions have to do with that?
>
>It's what you're going to be hearing from all those new college graduates /
>would-be employees of yours, John. :-) To many people all those buzzwords
>sound more impressive than, "I can build you a 1GHz clock source with 25ps
>RMS jitter..."
I met a guy in this ng, a fairly recent ee grad, who complained that
most of his college courses were too abstract, cs and digital theory and
stuff, and that he took to hanging out with the tech who maintained the
labs, so that he could learn some real electronics. I hired him, of
course.

John
Paul Keinanen wrote:

> Since it is currently possible to integrate a huge number of
> transistors into a chip and the main problem seems to be how to use
> them effectively, I would welcome the idea of using dedicated trivial
> processors for specialised tasks, such as I/O.
This is done now to a small extent with the MPC555x processors, which
have multiple simple RISC processors (eTPUs) running concurrently with
the main PowerPC processor, all on one chip. There are only two or three
eTPUs, so for now they are replacing specialized I/O hardware, and they
work very well.

But what if I had 100 or 1000 eTPUs (or ARMs)? I'm not sure how I would
use them, because I don't know how to "think" that way, but the concept
certainly is interesting. I suppose one option is to use each processor
to run a software module; all the code is then effectively running at
the same time, and for all practical purposes I'm designing like a
hardware engineer now.
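[Editor's note: the "one software module per processor" style can be sketched as modules that touch nothing but their channels, like hardware blocks joined by wires. The sketch below is single-threaded for the host; on real silicon each `*_step()` would spin on its own core. Names, sizes, and the module behaviour are invented.]

```c
#include <assert.h>
#include <stdint.h>

#define QCAP 8u
struct chan { uint8_t buf[QCAP]; unsigned head, tail; };

/* Lossless single-producer/single-consumer channel ("wire"). */
static int chan_put(struct chan *c, uint8_t v)
{
    if (c->tail - c->head == QCAP) return 0;    /* full */
    c->buf[c->tail++ % QCAP] = v;
    return 1;
}
static int chan_get(struct chan *c, uint8_t *v)
{
    if (c->tail == c->head) return 0;           /* empty */
    *v = c->buf[c->head++ % QCAP];
    return 1;
}

/* "source module": emits an incrementing sample each step */
static void source_step(struct chan *out, uint8_t *state)
{
    chan_put(out, (*state)++);
}

/* "filter module": doubles whatever arrives */
static void filter_step(struct chan *in, struct chan *out)
{
    uint8_t v;
    if (chan_get(in, &v))
        chan_put(out, (uint8_t)(v * 2));
}
```

Each module owns its own state and sees only its channels, so moving a module onto its own processor changes the wiring, not the code -- which is the hardware-engineer mindset the post describes.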
On Sat, 22 Apr 2006 10:31:13 +0300, Paul Keinanen <keinanen@sci.fi>
wrote:

>Since it is currently possible to integrate a huge number of
>transistors into a chip and the main problem seems to be how to use
>them effectively, I would welcome the idea of using dedicated trivial
>processors for specialised tasks, such as I/O.
>
>Paul
The good news is that there's a dedicated processor for each pin. The
bad news is that all 354 of them are low end PICs.

Best regards,
Spehro Pefhany
--
"it's the network..."  "The Journey is the reward"
speff@interlog.com  Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers: http://www.speff.com
In article <6d3k42p4dro5fogr1omorkhu54u8812mp5@4ax.com>, 
speffSNIP@interlogDOTyou.knowwhat says...

> The good news is that there's a dedicated processor for each pin. The
> bad news is that all 354 of them are low end PICs.
>
> Best regards,
> Spehro Pefhany
I've had an urge for a long time to build a board with a 12x12 or 16x16 grid of 8 pin PIC or AVR controllers. :) I just don't know what I would do with it afterwards.
John Larkin wrote:
> I met a guy in this ng, a fairly recent ee grad, who complained that
> most of his college courses were too abstract, cs and digital theory
> and stuff, and that he took to hanging out with the tech who
> maintained the labs, so that he could learn some real electronics. I
> hired him, of course.
When I was in school, I used to fight one of the lab techs (who'd
retired from Hughes) for the cast-off T*M gear. He was fun to hang out
with; he had a full Rad Lab set.

--
newell N5TNL

Jonathan Kirwan wrote:

> Transputer comes to mind, the concept of RPC (remote procedure calls),
> etc. Some folks were working on an operating system that would
> automatically start out a full application running on a single CPU,
> but diffuse out the routines dynamically over a network of CPUs in a
> kind of simulated annealing process. Starts out CPU bound on the 1st
> CPU, notices that the communications bands are empty, decides to use
> the communication link to "send" code then starts passing parameters
> to the function calls that way. Eventually, the 1st processor decides
> that its CPU load and communications exchanges are in equilibrium --
> occasionally accepting code from nearby processors, occasionally
> sending out code to nearby processors, always passing parameters
> around, etc. Each nearby processor doing about the same thing.
> Eventually, it all evens out -- never static, sometimes sending out a
> routine on a random basis, sometimes receiving one.
>
> No idea what happened with the idea.
Have you got any references on this? I'd be very grateful for any pointer or direction.
