
exposing resource usage

Started by Don Y April 13, 2017
On 14.4.2017 г. 15:33, upsidedown@downunder.com wrote:
> On Fri, 14 Apr 2017 12:53:54 +0300, upsidedown@downunder.com wrote:
>
>> These days the hardware is so cheap that for a RT / high reliability
>> system, I recommend 40-60 % usage of CPU channels and communications
>> links. Going much higher than that is going to cause problems sooner
>> or later.
>
> I just realized how old I am (still one of the youngest in CAE and
> especially SED newsgroups). During my career in various forms of
> computing, the price/performance has improved by a ratio of one to
> a million, depending on how you interpret Moore's law (is the
> price/performance ratio doubling every 18 or 24 months?). With such
> huge ratios, it is cost effective to do things one way and, 2-4 years
> later, in a completely different way.
>
> Things that required dedicated designs and optimization in the past
> do not make sense these days, unless you are making several million
> copies and want to save a single cent from the production cost.
>
> For low volume products, it doesn't make sense to use too much
> optimization these days. Thus a person with long experience really
> needs to think about how many "clever" features are used.
The good thing about aging is that we don't notice it a lot ourselves as
long as we are healthy. The outside world takes care of keeping us up to
date of course...

Hardware has always been ahead of software and as hardware becomes faster
for the same tasks done 30+ years ago the gap is allowed to widen - to
scary dimensions I would say. But this is how evolution works I guess,
eventually some balance will be reached. Not that we have that moment in
sight as far as I can see.

Dimiter
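As a quick sanity check on the million-fold figure quoted above, a small
sketch of the arithmetic (assuming only the 18- and 24-month doubling
periods mentioned in the post):

#include <math.h>
#include <stdio.h>

/* How many doublings does a million-fold improvement take, and how many
 * years is that at 18- or 24-month doubling intervals? */
int main(void)
{
    double doublings = log2(1e6);                       /* ~19.9 */

    printf("doublings needed:      %.1f\n", doublings);
    printf("at 18 months/doubling: %.0f years\n", doublings * 18.0 / 12.0);
    printf("at 24 months/doubling: %.0f years\n", doublings * 24.0 / 12.0);
    return 0;
}

Roughly 20 doublings, i.e. about 30-40 years depending on the
interpretation - consistent with a long career in computing.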
On 4/14/2017 10:54 AM, Tim Wescott wrote:
> On Thu, 13 Apr 2017 18:22:05 -0700, Don Y wrote:
>
>> Is there any potential downside to intentionally exposing (to a
>> task/process/job/etc.) its current resource commitments?
>> I.e., "You are currently holding X memory, Y CPU, Z ..."
>>
>> *If* the job is constrained to a particular set of quotas, then knowing
>> *what* it is using SHOULDN'T give it any "exploitable" information,
>> should it?
>>
>> [Open system; possibility for hostile actors]
>
> It's one less barrier to some outside actor getting that information, and
> therefore using it in an attack (if I know to do something that might make
> a task overflow its stack, for instance, I'll have something concrete to
> try to help me break in).
Of course. The challenge in the design of an OPEN system is striking a
balance between what you do *for* the developer (to allow him to more
efficiently design more robust applications) vs. the "levers" that you can
unintentionally expose to a developer.

In *closed* systems, the system design can assume that the developers are
not malicious and that every "lever" thus provided is exploited to improve
cost, performance, etc. Any flaws in the resulting system are consequences
of developer "shortcomings".

In an open system, you have all the same possibilities -- PLUS the
possibility of a malicious developer (or user!) exploiting one of those
levers in a counterproductive manner.
> How hard are you going to work to keep that information out of the hands
> of outside actors?
The only way to completely prevent exploits is to completely deny access. But, that's contrary to the goal of an open system.
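For a sense of what "exposing resource commitments" can look like in
practice, POSIX already gives a process a limited view of its own usage and
quotas via getrusage() and getrlimit(). A minimal sketch (Linux-flavoured;
the units of ru_maxrss and which limits are meaningful vary by platform):

#include <stdio.h>
#include <sys/resource.h>

/* Report what this process is currently using and one quota it is
 * constrained by: CPU time consumed, peak resident memory, and the
 * address-space limit. */
int main(void)
{
    struct rusage ru;
    struct rlimit rl;

    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        printf("user CPU:   %ld.%06ld s\n",
               (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
        printf("system CPU: %ld.%06ld s\n",
               (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
        printf("peak RSS:   %ld kB\n", ru.ru_maxrss);  /* kB on Linux */
    }

    if (getrlimit(RLIMIT_AS, &rl) == 0) {
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("address-space quota: unlimited\n");
        else
            printf("address-space quota: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);
    }
    return 0;
}

The open question in the thread is whether handing that same picture
(usage *and* quota) to a potentially hostile task in an open system tells
it anything it couldn't already infer by probing.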
Hi Dimiter,

On 4/14/2017 11:26 AM, Dimiter_Popoff wrote:
> On 14.4.2017 г. 15:33, upsidedown@downunder.com wrote:
>
>> I just realized how old I am (still one of the youngest in CAE and
>> especially SED newsgroups). During my career in various forms of
>> computing, the price/performance has improved by a ratio of one to
>> a million, depending on how you interpret Moore's law (is the
>> price/performance ratio doubling every 18 or 24 months?). With such
>> huge ratios, it is cost effective to do things one way and, 2-4 years
>> later, in a completely different way.
>>
>> Things that required dedicated designs and optimization in the past
>> do not make sense these days, unless you are making several million
>> copies and want to save a single cent from the production cost.
>>
>> For low volume products, it doesn't make sense to use too much
>> optimization these days. Thus a person with long experience really
>> needs to think about how many "clever" features are used.
>
> The good thing about aging is that we don't notice it a lot ourselves as
> long as we are healthy. The outside world takes care of keeping us up
> to date of course...
...*if* you let it!
> Hardware has always been ahead of software and as hardware becomes
> faster for the same tasks done 30+ years ago the gap is allowed to
> widen - to scary dimensions I would say. But this is how evolution works
> I guess, eventually some balance will be reached. Not that we have
> that moment in sight as far as I can see.
It's unfair to suggest that software hasn't ALSO evolved/improved (in terms
of "concepts per design" or some other bogo-metric). One can do things in
software, now, "in an afternoon" that would have taken weeks/months/years
decades ago. And, with a greater "first pass success rate"!

The thing that has been slowest to evolve is the meatware driving the
software design methodologies. It's (apparently?) too hard for folks to
evolve their mindsets as fast as the hardware/software technologies. Too
easy ("comforting/reassuring"?) to cling to "the old way" of doing things
-- even though that phraseology (the OLD way) implicitly acknowledges that
there *are* NEW way(s)!
On 14.4.2017 г. 22:45, Don Y wrote:
> Hi Dimiter,
>
Hi Don,
> On 4/14/2017 11:26 AM, Dimiter_Popoff wrote:
>> On 14.4.2017 г. 15:33, upsidedown@downunder.com wrote:
>>
>>> I just realized how old I am (still one of the youngest in CAE and
>>> especially SED newsgroups). During my career in various forms of
>>> computing, the price/performance has improved by a ratio of one to
>>> a million, depending on how you interpret Moore's law (is the
>>> price/performance ratio doubling every 18 or 24 months?). With such
>>> huge ratios, it is cost effective to do things one way and, 2-4 years
>>> later, in a completely different way.
>>>
>>> Things that required dedicated designs and optimization in the past
>>> do not make sense these days, unless you are making several million
>>> copies and want to save a single cent from the production cost.
>>>
>>> For low volume products, it doesn't make sense to use too much
>>> optimization these days. Thus a person with long experience really
>>> needs to think about how many "clever" features are used.
>>
>> The good thing about aging is that we don't notice it a lot ourselves as
>> long as we are healthy. The outside world takes care of keeping us up
>> to date of course...
>
> ...*if* you let it!
Well, we can choose to ignore the reminders of course, and we do - to the
extent possible :). Staying busy doing new things is the best recipe I
know of.
>> Hardware has always been ahead of software and as hardware becomes
>> faster for the same tasks done 30+ years ago the gap is allowed to
>> widen - to scary dimensions I would say. But this is how evolution works
>> I guess, eventually some balance will be reached. Not that we have
>> that moment in sight as far as I can see.
>
> It's unfair to suggest that software hasn't ALSO evolved/improved
> (in terms of "concepts per design" or some other bogo-metric).
Oh, I am not saying that; of course software has also evolved. Just not at
the same quality/pace ratio - the pace might have been even higher than
with hardware.
> One can do things in software, now, "in an afternoon" that would have
> taken weeks/months/years decades ago. And, with a greater "first pass
> success rate"!
Yes, of course. But this is mainly because we still have to do more or less
the same stuff we did during the '80s, having resources several orders of
magnitude faster. I am not saying this is a bad thing, we all do what is
practical; what I am saying is that there is a lot of room for software to
evolve in terms of efficiency. For example, today's systems use gigabytes
of RAM, most of which stays untouched for ages; this is a resource
evolution will eventually find useful things for.
> The thing that has been slowest to evolve is the meatware driving
> the software design methodologies. It's (apparently?) too hard for
> folks to evolve their mindsets as fast as the hardware/software
> technologies. Too easy ("comforting/reassuring"?) to cling to
> "the old way" of doing things -- even though that phraseology
> (the OLD way) implicitly acknowledges that there *are* NEW way(s)!
Well, this is part of life of course. But I think what holds things back
most is the sheer bulkiness of the task of programming complex things; we
are almost at a point where no single person can see the entire picture
(in fact in almost all cases there is no such person). As an obvious
consequence things get messy just because too many people are involved.

I suppose until we reach a level where software will evolve on its own,
things will stay messy and probably get messier than they are today. Not
sure how far we are from this point, maybe not too far. I don't have
anything like that working at my desk of course, but I can see how I could
pursue it based on what I already have - if I could afford the time to try.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
Hi Dimiter,

On 4/14/2017 2:27 PM, Dimiter_Popoff wrote:
>> The thing that has been slowest to evolve is the meatware driving
>> the software design methodologies. It's (apparently?) too hard for
>> folks to evolve their mindsets as fast as the hardware/software
>> technologies. Too easy ("comforting/reassuring"?) to cling to
>> "the old way" of doing things -- even though that phraseology
>> (the OLD way) implicitly acknowledges that there *are* NEW way(s)!
>
> Well, this is part of life of course. But I think what holds things
> back most is the sheer bulkiness of the task of programming complex
> things; we are almost at a point where no single person can see the
> entire picture (in fact in almost all cases there is no such person).
> As an obvious consequence things get messy just because too many people
> are involved.
>
> I suppose until we reach a level where software will evolve on its
> own, things will stay messy and probably get messier than they are
> today. Not sure how far we are from this point, maybe not too far.
> I don't have anything like that working at my desk of course, but I
> can see how I could pursue it based on what I already have - if I
> could afford the time to try.
I think that last statement goes to the heart of the matter; folks don't
have (make?) the time to try new things. They're rushed by development
schedules (safer to stick with an estimate for an "old" approach with which
you have experience than guesstimate on something completely novel),
support for legacy products (even if "legacy" is 6 months ago!), new
"business practices", etc.

Many developers are more comfortable sitting on their laurels (even if they
don't HAVE any! :> ) than reaching out to explore new application domains
and solution spaces. And, there are often external pressures
(boss/client/customer/peer) trying to coerce their efforts in a certain
direction (easier to just "go along" than to "make a stand"). Those folks
who are more "independent" often have to worry about mouths to feed, etc.
-- can't risk botching a project if it means delaying (or losing!) your
income stream!

Finally, I don't think many folks watch to see what's happening in the
universities and other research domains -- places where folks don't
typically have these same pressures (i.e., they don't have to produce a
'product' timed to a 'market' so have more leeway to experiment with new
ideas without penalty).

If a developer tries a new language or development strategy, he feels like
he's made a HUGE effort -- compared to his peers. The idea of moving into
an entirely new application domain and design approach is just too big of a
leap, for most. As with the advice of measuring before optimization,
they're guilty of coming to conclusions -- before even attempting the
experiment!

[Think about your own experience. *If* you could, would you approach your
current products differently? If starting FROM SCRATCH?? Different
hardware, software, feature sets, etc.? And, likely, the reason you don't
make those radical changes for your *next* product is simply because it
would be too big of an investment along with the psychological abandoning
of your *previous* investment. Hard to do when you're busy living life,
today!]
On Fri, 14 Apr 2017 10:22:28 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

>On 4/14/2017 5:33 AM, upsidedown@downunder.com wrote:
>> On Fri, 14 Apr 2017 12:53:54 +0300, upsidedown@downunder.com wrote:
>>
>>> These days the hardware is so cheap that for a RT / high reliability
>>> system, I recommend 40-60 % usage of CPU channels and communications
>>> links. Going much higher than that is going to cause problems sooner
>>> or later.
>>
>> I just realized how old I am (still one of the youngest in CAE and
>> especially SED newsgroups). During my career in various forms of
>> computing, the price/performance has improved by a ratio of one to
>> a million, depending on how you interpret Moore's law (is the
>> price/performance ratio doubling every 18 or 24 months?). With such
>> huge ratios, it is cost effective to do things one way and, 2-4 years
>> later, in a completely different way.
>
>I started work on "embedded" products with the i4004 -- with clock
>rates in the hundreds of kilohertz and instruction execution times
>measured in tens of microseconds -- for *4* bit quantities! *Simple*
>operations (e.g., ADD) on "long ints" were on the order of a MILLIsecond.
>Memory was measured in kiloBITS, etc.
You need to consider the input/output speeds. Essentially the 4004 was a
calculator chip on steroids. The input speed for a manual calculator is
about 100 ms/decimal digit and one expects that the result is displayed in
a second, so you could do quite complicated computations even with a 1 ms
(long) decimal add time.

I just calculated that the 4004 would have been sufficient to handle
summation of data from a slow card reader (300 CPM, cards per minute), so
with ten 8-digit decimal numbers on each card, you would have to handle 50
long decimal numbers each second. Using a medium speed (1000 CPS,
characters per second) paper tape, this would be 125 long decimal
integers/s, which would be quite hard for the 4004 to handle.

Simple decimal computers in the 1960's often used a 4 bit BCD ALU and
handled decimal digits serially. This still required a lot of DTL or TTL
chips and the CPU cost was still significant. With the introduction of LSI
chips, the cost dropped significantly in a few years.

Any programmable calculator today will outperform any 1960's decimal
computer by a great margin at a very small fraction of the cost. If things
were done in one way in the past with different constraints, implementing
it today the same way might not make sense.

The 4004 had a nice 4 KiB program space. Small applications even in the
1980's didn't need more and reprogramming a 4 KiB EPROM took just 5
minutes :-)
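The input-rate figures above are easy to re-derive; a small sketch of the
arithmetic (the ~1 ms long-decimal add time is the rough figure quoted
earlier in the thread, not a measured 4004 number):

#include <stdio.h>

/* Re-derive the card-reader and paper-tape number rates and compare the
 * per-number time budget with a ~1 ms multi-digit decimal add. */
int main(void)
{
    double cards_per_min    = 300.0;    /* slow card reader           */
    double numbers_per_card = 10.0;     /* ten 8-digit decimals/card  */
    double tape_chars_per_s = 1000.0;   /* medium-speed paper tape    */
    double digits_per_number = 8.0;
    double add_time_ms = 1.0;           /* assumed long decimal add   */

    double reader_rate = cards_per_min / 60.0 * numbers_per_card; /* 50/s  */
    double tape_rate   = tape_chars_per_s / digits_per_number;    /* 125/s */

    printf("card reader: %.0f numbers/s, budget %.1f ms each (add ~%.1f ms)\n",
           reader_rate, 1000.0 / reader_rate, add_time_ms);
    printf("paper tape:  %.0f numbers/s, budget %.1f ms each (add ~%.1f ms)\n",
           tape_rate, 1000.0 / tape_rate, add_time_ms);
    return 0;
}

The card reader leaves a 20 ms budget per number against a ~1 ms add; the
1000 CPS tape cuts that to 8 ms, which also has to cover I/O handling and
digit conversion - hence "quite hard" for the 4004.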
On 4/17/2017 10:24 AM, upsidedown@downunder.com wrote:
> On Fri, 14 Apr 2017 10:22:28 -0700, Don Y
> <blockedofcourse@foo.invalid> wrote:
>
>> On 4/14/2017 5:33 AM, upsidedown@downunder.com wrote:
>>> On Fri, 14 Apr 2017 12:53:54 +0300, upsidedown@downunder.com wrote:
>>>
>>>> These days the hardware is so cheap that for a RT / high reliability
>>>> system, I recommend 40-60 % usage of CPU channels and communications
>>>> links. Going much higher than that is going to cause problems sooner
>>>> or later.
>>>
>>> I just realized how old I am (still one of the youngest in CAE and
>>> especially SED newsgroups). During my career in various forms of
>>> computing, the price/performance has improved by a ratio of one to
>>> a million, depending on how you interpret Moore's law (is the
>>> price/performance ratio doubling every 18 or 24 months?). With such
>>> huge ratios, it is cost effective to do things one way and, 2-4 years
>>> later, in a completely different way.
>>
>> I started work on "embedded" products with the i4004 -- with clock
>> rates in the hundreds of kilohertz and instruction execution times
>> measured in tens of microseconds -- for *4* bit quantities! *Simple*
>> operations (e.g., ADD) on "long ints" were on the order of a MILLIsecond.
>> Memory was measured in kiloBITS, etc.
>
> You need to consider the input/output speeds. Essentially the 4004
> was a calculator chip on steroids. The input speed for a manual
> calculator is about 100 ms/decimal digit and one expects that the
> result is displayed in a second, so you could do quite complicated
> computations even with a 1 ms (long) decimal add time.
We used it to plot current position based on real-time receipt of LORAN-C
coordinates: <https://en.wikipedia.org/wiki/Loran-C>

Each "coordinate axis" (i.e., X & Y, latitude & longitude, etc.) in LORAN
consists of a family of hyperbolic "lines of constant time difference":
<https://en.wikipedia.org/wiki/File:Crude_LORAN_diagram.svg>
between a master transmitter and one of its slaves (A & B in the diagram).
With families from *two* such slaves INTERSECTING, you can "uniquely" [1]
determine your location on the globe (knowing the latitude and longitude
of the master and associated slaves, the shape of the earth, the
propagation time of radio waves and "conic sections").

[1] This is a lie, as a single hyperbolic curve from one family
(time-difference coordinate #1) can intersect another hyperbolic curve from
another family (TD coordinate #2) at *two* points, unlike a
(latitude,longitude) tuple, which is unique. To confirm this, print two
copies of the above sample and skew them so AB is not parallel to AC
(assume C is the renamed B on the second instance).

Coordinates are processed at a rate of 10GRI (10 sets of transmissions --
GRI is the time between transmissions from the master;
<https://en.wikipedia.org/wiki/Loran-C#LORAN_chains_.28GRIs.29>). Each GRI
is typically about 50-100ms, so 10GRI is 500-1000ms. It's a fair bit of
work to resolve two hyperbolae on an oblate sphere mapped to a scaled
Mercator projection and drive two stepper motors to the corresponding point
before the next "fix" arrives.

This is the second generation (8085-based) version (bottom, center):
<http://www.marineelectronicsjournal.com/Assets/lew%20and%20jim%20best%20copy.jpg>
By then, the code space had soared to a whopping 12KB (at one time, close
to $300 of EPROM!) -- with all of 512 bytes of RAM!!
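For a feel of the arithmetic buried in "resolve two hyperbolae", here is a
heavily simplified flat-earth sketch of a two-TD fix by Newton iteration.
The station coordinates, time differences and initial guess are made-up
illustration values, and the real unit worked on an oblate sphere with a
Mercator projection on top of it:

#include <math.h>
#include <stdio.h>

/* Flat-earth illustration only: solve for position P = (x, y) given two
 * LORAN-style time differences TD_i = (|P - S_i| - |P - M|) / c, where M
 * is the master and S1, S2 are slaves.  Newton iteration on the residuals
 * converges to the intersection nearest the initial guess. */

typedef struct { double x, y; } pt;

static double dist(pt a, pt b) { return hypot(a.x - b.x, a.y - b.y); }

int main(void)
{
    const double c = 0.299792458;      /* km per microsecond             */
    pt M  = {   0.0,   0.0 };          /* master (made-up coordinates)   */
    pt S1 = { 400.0,   0.0 };          /* slave A                        */
    pt S2 = {   0.0, 300.0 };          /* slave B                        */
    double td1 = 500.0, td2 = -200.0;  /* measured TDs in us (made up)   */
    pt P = { 100.0, 100.0 };           /* initial guess, near the chain  */

    for (int i = 0; i < 20; i++) {
        /* residuals: predicted range difference minus measured one */
        double dm = dist(P, M), d1 = dist(P, S1), d2 = dist(P, S2);
        double r1 = (d1 - dm) - c * td1;
        double r2 = (d2 - dm) - c * td2;

        /* Jacobian rows: unit(P - S_i) - unit(P - M) */
        double j11 = (P.x - S1.x) / d1 - (P.x - M.x) / dm;
        double j12 = (P.y - S1.y) / d1 - (P.y - M.y) / dm;
        double j21 = (P.x - S2.x) / d2 - (P.x - M.x) / dm;
        double j22 = (P.y - S2.y) / d2 - (P.y - M.y) / dm;

        /* solve the 2x2 system J * delta = -r (Cramer's rule) */
        double det = j11 * j22 - j12 * j21;
        if (fabs(det) < 1e-12) break;
        double dx = (-r1 * j22 + r2 * j12) / det;
        double dy = (-r2 * j11 + r1 * j21) / det;
        P.x += dx;  P.y += dy;
        if (fabs(dx) + fabs(dy) < 1e-9) break;
    }
    printf("fix: x = %.3f km, y = %.3f km\n", P.x, P.y);
    return 0;
}

Newton settles on whichever intersection is nearest the starting guess,
which is one practical reason the two-intersection ambiguity in footnote
[1] has to be handled explicitly.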
> I just calculated that the 4004 would have been sufficient to handle
> summation of data from a slow card reader (300 CPM, cards per minute),
> so with ten 8-digit decimal numbers on each card, you would have to
> handle 50 long decimal numbers each second. Using a medium speed (1000
> CPS, characters per second) paper tape, this would be 125 long decimal
> integers/s, which would be quite hard for the 4004 to handle.
>
> Simple decimal computers in the 1960's often used a 4 bit BCD ALU and
> handled decimal digits serially. This still required a lot of DTL or
> TTL chips and the CPU cost was still significant.
The Z80 still had a 4b ALU (taking multiple clocks to process 8b data).
> With the introduction of LSI chips, the cost dropped significantly in
> a few years.
>
> Any programmable calculator today will outperform any 1960's decimal
> computer by a great margin at a very small fraction of the cost.
>
> If things were done in one way in the past with different constraints,
> implementing it today the same way might not make sense.
Of course! I suspect I could reproduce the software for the plotters in a long weekend, now. No need to write a floating point library, multiplex PGD displays, scan keypads, drive motor coils, count *bits* of storage, etc. Just use <math.h> and a graphics library to plot line segments on a display "instantaneously". Load a set of maps from FLASH, etc.
> The 4004 had a nice 4 KiB program space. Small applications even in
> the 1980's didn't need more and reprogramming a 4 KiB EPROM took just
> 5 minutes :-)
You were using 1702's in the mid 70's -- 2Kb (not KB!) parts.