
Re: cool article, interesting quote

Started by John Larkin April 21, 2006
On 21 Apr 2006 08:38:45 -0700, "Didi" <dp@tgi-sci.com> wrote:

>> http://www.highlandtechnology.com/DSS/P400DS.html
>>
>> At something like 14 klines and lots going on, it got a bit hairy.
>
>14k lines in a single file is perhaps a bit too many, I tend to keep
>large source files between 1 and 3k lines. But then I do use a linker
>:-).
>
>> Once TCP/IP stacks get
>> involved internally, I concede the language debate.
>
>If language means "single source file" vs. the rest of the options,
>I guess you will have to. But it by no means has to be C.
>My tcp/ip subsystem (I have yet to understand why people refer
>to this as a "stack"...) is written in VPA; about 1.5 M of source text
>(in about 150 files), DNS, FTP, SMTP included. The PPC code
>size is somewhat below 200 kilobytes... that's if it connects
>via ppp; the ethernet takes another 30 kilobytes of code (and
>more - configurable to much more - buffer space, obviously).
>I wonder how these figures compare to other - C-written - similar
>things.
>
>I also wonder what CPU you used in your older CAMAC boards,
>just curious about your background. I have not been doing CAMAC
>so there is no direct competition here :-). Nor do I have TACs etc.,
>and you seem not to be making MCAs ... :-).
>
>Dimiter
Hi, Dimiter,

When I moved to California, and needed a job to support my intended lifestyle, I started working for Standard Engineering in Fremont, a CAMAC house. They later absorbed Transiac, changed their name to DSP Technology, went public, got into automotive testing, and moved to Detroit.

When I started my little company, I had some contacts at the national labs (Los Alamos, LLNL), so I did CAMAC for them until it sort of died. The last CAMAC module we did was an Ethernet crate controller, a bridge to allow people to keep using older CAMAC crates but dump the VAXes and serial crate controllers and stuff. We stuffed a PC/104 CPU and a standard Ethernet card inside the crate controller module, with a simple FPGA interface to the dataway. I had a friend (PhD in thermal hydraulics, now a full-time programmer) write the internals in ANSI C, using a public-domain C-source TCP/IP stack. It runs under ROM-DOS, and the executable is about 75K bytes. He's good.

Some of the really old CAMAC stuff used an MC6803 CPU. Lately we use 68332's, often with serious databashing help from Xilinx FPGAs. It's impressive what you can do with a hundred parallel 50-MHz multipliers and adders.

We do mostly VME and OEM boxes lately - NMR, lasers, ICCD cameras, occasional big-physics projects (Jlabs, SSC, NIF), stuff like that - and some fiberoptics, anything that's hard.

Want a refrigerator magnet?

John
On Fri, 21 Apr 2006 17:26:35 -0400, Phil Hobbs
<pcdh@SpamMeSenseless.pergamos.net> wrote:

>> Maybe we should change the way we look at processors. Why not have 256
>> or 1024 CPUs, with a CPU per process/interrupt/thread? No context
>> switching, no real interrupts as such. Some of the CPUs could be
>> lightweights, low-power blocks to run slow, dumb stuff, and some could
>> be real number-crunchers.
>
>The IBM Cell processor used in advanced game machines has one Power 6
>core that controls 8 Synergistic Processors, arranged on a (iirc)
>768-bit wide ring bus that clocks at 2.5 GHz (half the processor speed),
>with 1-cycle latency per step on the ring. It's about 1.5 Tb/s, not
>counting ECC bits. The SPs are single-instruction-multiple-data (SIMD),
>like an old Cray. Most machines have lots of userspace threads running
>at once, which is one reason that multicore designs improve throughput
>per transistor. It's the bigger jobs that really suffer, the ones you'd
>really like to run on some hellacious fast uniprocessor, but can't.
>
>You'll see more of that sort of thing--but SIMD machines aren't good for
>everything, the way SMPs are.
>
>> The OS resides on a "manager" CPU, and assigns tasks to resource CPUs.
>> It never executes application code, has no drivers except maybe a
>> console interface, never gets exposed to nasty virus-laden JPEG files,
>> and, once booted, its memory space is not accessible by any other
>> process, even DMA.
>>
>> And isn't it time we got rid of virtual memory?
>
>Paging to disc is helpful to prevent big important things from crashing
>as soon as you run out of physical memory. It slows the machine to a
>crawl, of course, but that gives me the chance to kill RealPlayer or
>whatever's hogging all the RAM. You can turn it off if you like, but
>it's a nice safety feature and it doesn't cost much if you're not using it.
Virtual memory has justified itself. It's the thing that allowed unlimited code bloat, that allowed a word processor or an image viewer and their DLLs to hit 100 megabytes or more. Word is slow, buggy, and infuriating. EDIT.COM is a faster and far superior text editor, at 62 kilobytes. (But it ought to be good... Microsoft contracted it out.)

When VM was announced for the S/360, IBM was publicly predicting paging ratios of 200:1. A few years later, a survey showed that the average site was running 1.2:1. Meanwhile, the price of core had dropped dramatically, to under $50,000 per megabyte.

John
On Sun, 23 Apr 2006 18:05:08 GMT, Scott Newell <newell@cei.net> wrote:

>John Larkin wrote:
>>
>> I met a guy in this ng, a fairly recent EE grad, who complained that
>> most of his college courses were too abstract, CS and digital theory
>> and stuff, and that he took to hanging out with the tech who
>> maintained the labs, so that he could learn some real electronics. I
>> hired him, of course.
>
>When I was in school, I used to fight one of the lab techs (who'd
>retired from Hughes) for the cast-off T*M gear. He was fun to
>hang out with; he had a full Rad Lab set.
I have the full RadLab set, but some are the ugly grey reprints. On the other hand, I have some dups of the maroon ones. If anybody else is in the same boat, maybe we could arrange some swaps. John
On Thu, 27 Apr 2006 17:56:52 +1200, Judges1318
<account@beaurat.at.hotpop.stop.com> wrote:

>Jonathan Kirwan wrote:
>
>> Transputer comes to mind, the concept of RPC (remote procedure calls),
>> etc. Some folks were working on an operating system that would
>> automatically start out a full application running on a single CPU,
>> but diffuse out the routines dynamically over a network of CPUs in a
>> kind of simulated annealing process. Starts out CPU-bound on the 1st
>> CPU, notices that the communications bands are empty, decides to use
>> the communication link to "send" code, then starts passing parameters
>> to the function calls that way. Eventually, the 1st processor decides
>> that its CPU load and communications exchanges are in equilibrium --
>> occasionally accepting code from nearby processors, occasionally
>> sending out code to nearby processors, always passing parameters
>> around, etc. Each nearby processor doing about the same thing.
>> Eventually, it all evens out -- never static, sometimes sending out a
>> routine on a random basis, sometimes receiving one.
>>
>> No idea what happened with the idea.
>
>Have you got any references on this?
>I'd be very grateful for any pointer or direction.
I can remember having some discussions with folks at Apollo, who were working on something somewhat similar but not as aggressive and not for as many CPUs. It was designed for local networking. They had a compiler that would granularize the code for that sort of use. The call interface between granules was handled by the network layer: if the granule was on the same processor it just turned into a regular call, but if the granule was on a different machine then it went out via the network. I'm not sure how automatic moving the granules around might have been.

However, discussing these details with the O/S designers got my mind rolling, and I suggested an idea I had for an approach that would be similar to simulated annealing -- in that a CPU in a matrix network of transputers would start out at a "high temperature," where processes would be sent out to adjacent processors or accepted from them even when the direction of doing that would be "uphill" (slows down the computation rate), because doing so might allow the overall system to find a new "pocket" of resolving the diffusion where processing happens even faster. But then the temperature would be gradually lowered, so that CPUs would be far less likely to ship out granules if they knew in advance that the communication burden would probably have a too-high cost. Eventually, the system winds down and is more conservative. But it starts out aggressive and allows itself to make choices that slow things down while it searches the topology space for a better "pocket" to later drop down into as it "cools." We discussed some of the details of how to actually make this work, at length.

Then they pointed out to me that although the annealing idea itself hadn't been considered, it was a nifty approach, and there was another team elsewhere working on something like what I had imagined, and that it was on transputers. The method of diffusing out the granules was statically done at the time, but I was told they were considering approaches to doing it at run-time. I never did find out who or how far they got -- but the fact of it did seem to confirm that my imagination was okay. And I really enjoyed hashing out some of the details of how to make such an idea work in practice.

But my interest was more a hobby than serious. I never took it further. Surprisingly, I later discovered that my cousin, David C DiNucci, had been working much more seriously on ideas in similar areas at NASA. If you look him up, you might see something of interest. Or write him and ask. (He and I had no prior discussions on this topic, and I had had no idea whatsoever that he was doing any work in this area until I read a paper of his, published in a book from a signal processing gathering that I was studying. I was shocked to see his name. Shocked further to see what he was writing about. Where I had a modest, natural inclination, he has taken the subject area for his Ph.D. He has even spent time working with transputers, as well.)

Jon
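[Editor's aside: the acceptance rule described above is essentially the Metropolis criterion from simulated annealing, applied to code migration. Below is a minimal sketch, in C, of the decision one node might make each scheduling epoch; the cost model, the names, and the cooling factor are hypothetical illustrations, not anything the Apollo or transputer teams are known to have used.

    #include <math.h>
    #include <stdlib.h>

    /* Hypothetical cost model for one code granule: how much CPU time it
       costs to keep locally, what the link traffic would be to ship its
       calls to a neighbour, and how loaded that neighbour already is. */
    typedef struct {
        double local_load;   /* CPU time the granule costs here, per epoch */
        double comm_cost;    /* link time to pass its parameters remotely  */
        double remote_load;  /* existing load on the candidate neighbour   */
    } granule;

    /* Positive delta means migrating is "uphill": it slows things down. */
    double migration_delta(const granule *g)
    {
        return (g->comm_cost + g->remote_load) - g->local_load;
    }

    /* Metropolis rule: always accept downhill moves; accept uphill moves
       with probability exp(-delta/T), so a hot system explores freely and
       a cold one only takes clear wins. */
    int should_migrate(const granule *g, double temperature)
    {
        double delta = migration_delta(g);
        if (delta <= 0.0)
            return 1;                       /* clearly helps: ship it out */
        if (temperature <= 0.0)
            return 0;                       /* frozen: no uphill moves    */
        return ((double)rand() / RAND_MAX) < exp(-delta / temperature);
    }

    /* Geometric cooling schedule, applied once per epoch across the mesh. */
    double cool(double temperature)
    {
        return temperature * 0.95;
    }

At a high starting temperature each node will happily ship granules out even when the immediate effect is negative, which is what lets the mesh wander out of a locally balanced but globally poor placement; as cool() brings the temperature down, only migrations that clearly pay for their communication cost survive.]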
Jonathan Kirwan <jkirwan@easystreet.com> writes:

> But my interest was more a hobby than serious. I never took it
> further. Surprisingly, I later discovered that my cousin, David C
> DiNucci, had been working much more seriously on ideas in similar
> areas at NASA. If you look him up, you might see something of
> interest. Or write him and ask.
He posts about it and related ideas on comp.arch. Interesting stuff.

cheers, Rich.

-- 
rich walker          | Shadow Robot Company | rw@shadow.org.uk
technical director     251 Liverpool Road   | need a Hand?
                       London N1 1LX        | +UK 20 7700 2487
www.shadow.org.uk/products/newhand.shtml
