
Is there a process for secure firmware install/upgrade for device made offshore?

Started by Unknown June 24, 2017
On 6/25/2017 4:46 PM, jhnlmn@gmail.com wrote:
> Don wrote:
>> By contrast, I can't even tell you what the private keys for my devices
>> are. *They* generate them and tell me what their PUBLIC keys are.
>
> Who are *They*?
"They" are the devices themselves. Rather than have some entity OUTSIDE the device create the public and private keys and then FEED THEM into the devices -- and, then take pains to ensure that all traces of the private key is removed from that outside entity's memory, disk space, etc. AND that there have been no "unseen eyes" watching the process (i.e., that the key generation has been secure) -- the *devices* generate their own keys and inform the "outside world" of their *public* keys. There is never any need for the private key to be exposed outside the device. No "table of private keys" to maintain in the device that talks to them. And, if someone steals a device after it has created its key pair, there is no way for the thief to discern the correct PUBLIC key with which to talk to the device (unless he wants to generate a new key-pair -- which means the device is no longer "paired" to the original system... that system can KNOW that the device has been compromised/hacked.
> So, Don, your solution is to generate a unique private key
> for each device, find some trusted *They* to install them and then generate
> a unique firmware upgrade for each device using a unique public key. Did I
> understand you correctly?
I use the keys as certificates to ensure *all* communications with the devices are encrypted. So, I can feed slushware to the devices at any time without worry that any other device monitoring the communications (a promiscuous interface, wireless traffic, traffic inside the switch, etc.) can see ANYTHING. And, that an adversary can't *inject* traffic that the device could/would act on -- because the adversary won't have the proper certificates to make sense of the "conversation".

As you realized: "Companies that developed those insecure devices got very bad publicity. People are afraid to use devices. We need to find some best practices and follow them." I've long ago embraced that truth and taken every step that I could to close down every attack vector that I can imagine. E.g., you can take a tesla coil to one of my "exposed" network ports and fry the port. But, you won't corrupt the *switch* into which it feeds!

[Am I *sure* that I've addressed all of them? No. But, I haven't copped out and rationalized "What are the odds that someone would take a TESLA COIL to a network drop just to crash the network switch??"]

As I said: "Then my solution won't work for you." In my application, the devices are free to interact with the host that provides the updates. It's not a "file that you download and push to your device(s)" but, rather, something that the *device* fetches from its server (which, in turn, fetches updates from the "manufacturer's web site").

This allows me to avoid having "one key fits all locks" and the exposure that represents for the product *line*. If you steal my car keys, you have access to *my* car; not all of the other cars LIKE IT!
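If you want a feel for how the "nobody can read it, nobody can inject it" property falls out of the keys, here's a toy libsodium sketch. It is NOT my protocol -- just the standard authenticated public-key box -- and the key variables are placeholders for what the pairing step would have established:

/* Sketch: host encrypts an update chunk TO the device's public key and
 * FROM its own secret key, so a wire-sniffer sees only ciphertext and
 * an injector without the host's secret key can't forge anything the
 * device will accept.  Assumes libsodium; keys would come from the
 * pairing step described above (placeholders here). */
#include <sodium.h>
#include <stdio.h>

int main(void)
{
    if (sodium_init() < 0) return 1;

    /* pretend these came from the pairing step */
    unsigned char host_pk[crypto_box_PUBLICKEYBYTES], host_sk[crypto_box_SECRETKEYBYTES];
    unsigned char dev_pk[crypto_box_PUBLICKEYBYTES],  dev_sk[crypto_box_SECRETKEYBYTES];
    crypto_box_keypair(host_pk, host_sk);
    crypto_box_keypair(dev_pk, dev_sk);

    const unsigned char chunk[] = "update block 0001";
    unsigned char nonce[crypto_box_NONCEBYTES];
    unsigned char ct[sizeof chunk + crypto_box_MACBYTES];
    randombytes_buf(nonce, sizeof nonce);

    /* host side: encrypt + authenticate */
    crypto_box_easy(ct, chunk, sizeof chunk, nonce, dev_pk, host_sk);

    /* device side: decrypt only if the MAC checks out against host_pk */
    unsigned char out[sizeof chunk];
    if (crypto_box_open_easy(out, ct, sizeof ct, nonce, host_pk, dev_sk) != 0) {
        puts("forged or corrupted -- discard");
        return 1;
    }
    printf("device accepted: %s\n", out);
    return 0;
}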
> Dave wrote: "MCU manufacturers will typically install firmware for you. " > Well, this is obviously impractical in many cases. Big chip makers do not > talk to small startups like us. Often we just buy chips from a reseller. > What I wonder is whether MCU makers can install private keys during initial > manufacturing and then publish public keys. If a unique key per each device > is not practical, then, may be, one key per 1000 or something like that.
For devices with "factory assigned MACs", you can create an algorithm through which you derive a "secret" INSIDE the device to create a device-specific key. But, then you risk the algorithm becoming common knowledge. I.e., in that case, this gives you all the tediousness of having to keep an upgrade-per-serial-number with none of the security that it is intended to afford (because the attacker can synthesize the correct key given your S/N -- unless the S/N isn't electronically visible)
> Then they can implement a decryption module using this hard coded key. I see
> that 10 years ago there were patents for encrypted JTAG. Do you know whether
> anybody tried implementing it?
There have been innumerable attempts at providing "secure" computing environments over the decades. Unless you have really, really deep pockets (like stationing an armed guard over the I/O terminal for your system), there are almost always exploits.

None of my "work computers" can access (or be accessed by) anything outside these four walls. You won't start poking around the network hoping to find "source code for product ABC", etc. It is intentionally difficult for information to get in or out of my systems without me as an intermediary (because that's another potential attack vector). If you come for a visit and want to "use my internet connection", you won't even *see* any of my "work hosts". I may trust you, personally, but can I expect your laptop to be uninfected??

[Did I mention that I take security in my designs pretty seriously? :> ]

What you have to realize is that the folks who are out to copy/hack these devices are *motivated* to do so. Whether it's a hobbyist who sees it as a personal challenge (or just a curiosity) *or* the "professional" who does it to avoid the high product development costs that would otherwise attach to a virgin design effort, these folks are AS interested in cracking your design as you are in protecting it! And, often they have far more to gain than you have to lose! You may lose half your sales. But, they gain ALL of their sales!

If you've never been on a Red/Blue team or actively tried to hack a product, you probably haven't even considered how you'd go about it. People are quick to dismiss the effort/cost as "too high" or "too difficult" for anyone to realistically attempt such a feat. In reality, it's not that hard. And, if you're in the business of doing it (i.e., have DONE it already), you've probably streamlined your "hacking process" to the point where it's almost effortless!

Have a look at:
<http://www.break-ic.com/manufacturers_list_mikatech_reverse_engineer.htm>
just to get a quick idea of how pervasive this "industry" is. And, that's not even poking around in the "gray markets"! Depending on the product, you can get firms to "expose" the "locked contents" for as little as $1K. How much DESIGN time/cost does that *save* the thief??

Make sure you understand the capabilities of your adversary before you try to defend against him. Otherwise, you spend time and money (and possibly unnecessarily complicate the design/build) for dubious results.
> the *devices* generate their own keys and inform the "outside world" of their *public* keys.
This cannot work unless you have a trusted person installing this initial key generation code on the devices. I already explained why this is difficult. Maybe this can work if chip makers build this algorithm into silicon. Yes, we all know that silicon can be tampered with at the fab, but let's discuss that elsewhere.
> What you have to realize is that the folks who are out to copy/hack these devices are *motivated* to do so
Please, do not repeat the obvious again and again. I kind of do not like the implication of your speech that the problem is so complex and the adversary is so scary that I should give up and do nothing. I did not promise to do miracles, but I promised to follow the best practices of our industry. And in this post I am trying to find a consensus about these best practices.
On 6/25/2017 6:57 PM, jhnlmn@gmail.com wrote:
>> the *devices* generate their own keys and inform the "outside world" of
>> their *public* keys.
>
> This cannot work unless you have a trusted person installing this initial
> key generation code on the devices. I already explained why this is difficult.
> Maybe this can work if chip makers build this algorithm into silicon. Yes,
> we all know that silicon can be tampered with at the fab, but let's discuss
> that elsewhere.
My system pushes the key generation code into the devices. "From the factory", the devices only have diagnostic firmware installed. (I already explained this upthread) The (end) user connects the device to my system through a protected (physically secured) network connection so there are no "third parties" around to see the transaction. Code can only be installed when the magic button is pressed -- so a remote adversary needs a "local accomplice" to subvert the process/device (would you invite someone you don't trust to wander around a secured part of your facility?).
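The gate itself is about as dumb as it sounds. A sketch (read_install_button() standing in for whatever debounced GPIO read your platform gives you -- it's a placeholder, not an API from any particular HAL):

/* Sketch of the "magic button" gate: the update path is simply never
 * reachable unless someone physically present holds the button while
 * the session starts.  read_install_button() is a stand-in for a
 * platform GPIO read; the rest is deliberately dumb. */
#include <stdbool.h>
#include <stdio.h>

/* placeholder: on real hardware this would sample a debounced GPIO pin */
static bool read_install_button(void)
{
    return false;   /* stub: pretend nobody is pressing it */
}

static int accept_new_image(void)
{
    /* ... receive, verify and stage the image here ... */
    return 0;
}

int main(void)
{
    if (!read_install_button()) {
        puts("install button not pressed -- refusing update session");
        return 1;   /* a remote attacker, alone, never gets past this line */
    }
    return accept_new_image();
}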
>> What you have to realize is that the folks who are out to copy/hack these
>> devices are *motivated* to do so
>
> Please, do not repeat the obvious again and again. I kind of do not like the
> implication of your speech that the problem is so complex and the adversary is
> so scary that I should give up and do nothing.
I didn't say that. What I said was not to underestimate the adversary by thinking that you can BOLT ON security ("Hey, I'll use a secure boot loader and that will guarantee that my device can never be pwned!").

I was invited to bid on a project for a company that made "locks" (as in door locks, etc.). I was given a tour of their facility followed by a brief exposure to their prototype system (just entering initial production). They had their fancy "key making" workstation set up along with some toy doors outfitted with prototype lock mechanisms. My host demonstrated how a user (e.g., hotel manager) could make a key for:
- a newly arrived guest (access to a room and certain common facilities)
- a new housekeeper (access to a set of rooms and certain supply rooms)
- a new maintenance man (a larger set of rooms and different supply areas)
...
- "god" (grand master: all locks bow down before me)

I asked if I could play with the system -- after that skimpy 5 minute demo. I hadn't seen any of the source code for the workstation. Nor any of the code running in the actual door locks. Hadn't seen any schematics. And, proceeded to make several "grand master keys" without the system having any record that they were created, issued or *who* was involved.

[Recall, this system is designed to enforce PHYSICAL SECURITY on large commercial properties! It could be an office building, hotel, bank, etc.]
> I did not promise to do
> miracles, but I promised to follow the best practices of our industry. And
> in this post I am trying to find a consensus about these best practices.
The best practices are to see devices get pwned over and over again because folks treat security as an afterthought. The only way NOT to have a problem is to make something that no one wants!

But, hey, don't take my word for it. Just let us know when you win the Nobel -- or when your product gets pwned! (I know where *my* money is!)

Good luck. I'm out!
> My system pushes the key generation code into the devices.
> "From the factory", the devices only have diagnostic firmware installed.
> (I already explained this upthread)
> The (end) user connects the device to my system through a protected (physically secured)
> network connection so there are no "third parties" around to see the transaction.
Who installed this initial networking code on your device? The factory? The end user? Or some other trusted person? Do you just send a JTAG programmer to the end user and ask him to install the FW himself?
> The best practices are to see devices get pwned over and over again
> because folks treat security as an afterthought.
I am not treating it as an afterthought. But all I am hearing here is that it cannot be done.
On 2017-06-24 jhnlmn@gmail.com wrote in comp.arch.embedded:

> If a pure software solution is not possible, are there some hardware-assisted
> solutions? I guess if a chip would include a hardcoded inaccessible private
> key and an asymmetric decryption module, this would solve this problem, would
> it? Are there such chips?
There are chips designed for this purpose. I did some searching for a product a while ago and found some chips that claim secure key storage or other secure services. I did not use them (yet), but here are some examples of product families:

https://www.maximintegrated.com/en/products/digital/embedded-security/deepcover.html
http://www.atmel.com/products/security-ics/default.aspx
http://www.fujitsu.com/us/products/devices/semiconductor/memory/fram/lineup/authentication/

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail)

"Probably the best operating system in the world is the [operating system]
made for the PDP-11 by Bell Laboratories." - Ted Nelson, October 1977
On 26/06/17 01:46, jhnlmn@gmail.com wrote:
> Don wrote:
>> By contrast, I can't even tell you what the private keys for my
>> devices are. *They* generate them and tell me what their PUBLIC
>> keys are.
>
> Who are *They*? So, Don, your solution is to generate a unique
> private key for each device, find some trusted *They* to install them
> and then generate a unique firmware upgrade for each device using a
> unique public key. Did I understand you correctly?
>
> Dave wrote: "MCU manufacturers will typically install firmware for
> you." Well, this is obviously impractical in many cases. Big chip
> makers do not talk to small startups like us. Often we just buy chips
> from a reseller. What I wonder is whether MCU makers can install
> private keys during initial manufacturing and then publish public
> keys. If a unique key per each device is not practical, then, maybe,
> one key per 1000 or something like that. Then they can implement a
> decryption module using this hard coded key. I see that 10 years ago
> there were patents for encrypted JTAG. Do you know whether anybody
> tried implementing it?
I haven't been following this thread very closely (Don Y has lots of experience and smart ideas, but conciseness is not his forte). But there is a step between "MCU manufacturers installing firmware" and "board manufacturer installing firmware": you can pre-program the MCUs yourself before they are mounted.

If you don't have the right equipment (few people do, unless they are a big manufacturer), your distributor will do it for you for a small fee. Big distributors like Arrow can pre-program practically any microcontroller, including appropriate security bits, and re-package them in trays, tubes, reels, etc. for mounting.

Somewhere along the line you have to trust /somebody/ -- but this way you would be trusting your distributor rather than the end manufacturer.
> There are chips designed for this purpose.
Maxim seems to provide complete MCUs with a Secure Boot Loader with Public Key Authentication. But all the details are under NDA, so I cannot tell whether it makes sense. Has anybody studied those?

Atmel and Fujitsu appear to only provide peripheral I2C/SPI chips with Crypto-Authentication. But I cannot imagine how this can work if the FW on the main MPU is not trusted. Am I wrong?
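From the public material, my understanding of what "public key authentication" in such a boot loader amounts to is roughly the following -- my own sketch using libsodium's Ed25519, not anything from Maxim's NDA'd docs: the device stores only a public key, the vendor signs the image with the matching private key that never leaves their site, and the loader refuses any image whose signature does not verify.

/* Sketch: what "secure boot loader with public key authentication"
 * usually boils down to.  The vendor signs the image offline with the
 * private key; the device holds only the PUBLIC key and verifies a
 * detached signature over the image before flashing it.  Uses
 * libsodium Ed25519 for illustration; real parts have their own ROM
 * code and key formats. */
#include <sodium.h>
#include <stdio.h>

int main(void)
{
    if (sodium_init() < 0) return 1;

    /* --- vendor side (happens once, at the vendor's site) --- */
    unsigned char vendor_pk[crypto_sign_PUBLICKEYBYTES];
    unsigned char vendor_sk[crypto_sign_SECRETKEYBYTES];
    crypto_sign_keypair(vendor_pk, vendor_sk);     /* vendor_pk gets burned into the device */

    const unsigned char image[] = "pretend this is the firmware image";
    unsigned char sig[crypto_sign_BYTES];
    crypto_sign_detached(sig, NULL, image, sizeof image, vendor_sk);

    /* --- device side (boot loader) --- */
    if (crypto_sign_verify_detached(sig, image, sizeof image, vendor_pk) != 0) {
        puts("signature bad: image rejected, nothing flashed");
        return 1;
    }
    puts("signature OK: image accepted");
    /* note: this gives authenticity only; keeping the image unreadable
     * (confidentiality) needs encryption on top of the signature */
    return 0;
}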
> You can pre-program the MCU's yourself before they are mounted
I guess there is no real difference whether we (or some other trusted party) program the MCUs before or after assembly. Is there? Also, there was some disagreement about what kind of keys to pre-program: symmetric or private, a single key for all devices or a unique key per device. Any opinion on this?
On Sat, 24 Jun 2017 06:34:42 +0200, <jhnlmn@gmail.com> wrote:
> Hi
>
> Recently more and more companies want to add security (authentication
> and/or encryption) to their devices' firmware install/update process.
> Typically this is done by storing a secret encryption key in the bootloader
> or elsewhere in internal MCU flash. This should work if the bootloader is
> installed in a secure facility by trusted people. But then manufacturing
> is outsourced/offshored, and then what? I do not want to send my precious
> key to China. So, I wonder whether it is possible to design an algorithm
> or process for secure firmware installation and updates while the initial
> firmware is installed by a factory in China. Typically my devices have
> JTAG, some other port (UART, etc.) and often wireless (WiFi or
> Bluetooth). Note: moving all newly manufactured devices to a secure
> location and reflashing via JTAG would be too expensive. This problem
> seems to be very common now; there must be some common solutions, are
> there? If a pure software solution is not possible, are there some
> hardware-assisted solutions? I guess if a chip would include a hardcoded
> inaccessible private key and an asymmetric decryption module, this would
> solve this problem, would it? Are there such chips?
>
> Thank you
https://www.segger.com/products/production/flasher/models/flasher-secure/

--
(Remove the obvious prefix to reply privately.)
Made with Opera's e-mail program: http://www.opera.com/mail/
On Thursday, June 29, 2017 at 10:11:19 AM UTC-4, Boudewijn Dijkstra wrote:
> https://www.segger.com/products/production/flasher/models/flasher-secure/
Would it not be easy to build a hardware gadget that logs the programming bit-stream at the target side of this programmer?
On Thursday, June 29, 2017 at 10:44:48 AM UTC-7, Dave Nadler wrote:
> On Thursday, June 29, 2017 at 10:11:19 AM UTC-4, Boudewijn Dijkstra wrote:
>> https://www.segger.com/products/production/flasher/models/flasher-secure/
>
> Would it not be easy to build a hardware gadget that logs the programming
> bit-stream at the target side of this programmer?
The details about this "Flasher Secure" are very scarce; the Flasher manual does not mention it at all. This seems to be the most detailed explanation:

http://www.embedded.com/electronics-blogs/max-unleashed-and-unfettered/4458187/Secure-the-off-site-production-programming-of-your-embedded-products

It appears that they do nothing to prevent JTAG sniffing. And they cannot, theoretically, do anything without some support from the chip maker (decryption on the chip itself). What is interesting is this: "Flasher SECURE reads the UID from the device". How? Which chips are supported? Is it a standard feature?