EmbeddedRelated.com
Forums

Is there a process for secure firmware install/upgrade for devices made offshore?

Started by Unknown June 24, 2017
Hi
Recently, more and more companies want to add security (authentication and/or encryption) to their devices' firmware install/update process. Typically this is done by storing a secret encryption key in the bootloader or elsewhere in internal MCU flash. This works if the bootloader is installed in a secure facility by trusted people. But what happens when manufacturing is outsourced/offshored? I do not want to send my precious key to China.

So, I wonder whether it is possible to design an algorithm or process for secure firmware installation and updates while the initial firmware is installed by a factory in China. Typically my devices have JTAG, some other port (UART, etc.), and often wireless (WiFi or Bluetooth). Note: moving all newly manufactured devices to a secure location and reflashing via JTAG would be too expensive.

This problem seems to be very common now, so there must be some common solutions. Are there? If a pure software solution is not possible, are there hardware-assisted solutions? I guess if a chip included a hardcoded, inaccessible private key and an asymmetric decryption module, that would solve this problem, wouldn't it? Are there such chips?

Thank you
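The chip the original post wishes for (a hardcoded, inaccessible private key plus an asymmetric decryption module) actually inverts the usual arrangement: for *verifying* updates, the device only needs the public key, which is safe to hand to any factory, while the private key stays at the vendor. A toy sketch of that flow, using deliberately tiny textbook-RSA numbers (illustrative only, not real cryptography):

```python
# Toy illustration (NOT production crypto): the vendor keeps the private
# exponent d at home; only the public pair (e, n) is burned into the device.
import hashlib

# Public key -- safe to give to any factory; it cannot create signatures.
E, N = 17, 3233          # n = 61 * 53 (toy size; real keys are >= 2048 bits)
D = 2753                 # private exponent -- never leaves the vendor

def digest(fw: bytes) -> int:
    """Reduce a firmware hash into the toy modulus (illustrative only)."""
    return int.from_bytes(hashlib.sha256(fw).digest(), "big") % N

def vendor_sign(fw: bytes) -> int:
    """Done at the vendor's secure site, not at the factory."""
    return pow(digest(fw), D, N)

def device_verify(fw: bytes, sig: int) -> bool:
    """Runs in the bootloader; needs only the public key."""
    return pow(sig, E, N) == digest(fw)

fw = b"application image v1.0"
sig = vendor_sign(fw)
print(device_verify(fw, sig))                  # True
print(device_verify(b"tampered image", sig))   # False
```

With this split, the factory can flash the bootloader (public key included) in the clear; possessing it never lets anyone sign a rogue image.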
On 6/23/2017 9:34 PM, jhnlmn@gmail.com wrote:
> Recently more and more companies want to add security (authentication
> and/or encryption) to their devices firmware install/update process.
> [...]
> I guess if a chip would include a hardcoded inaccessible private key and
> assymetric decryption module, this would solve this problem, would it?
> Are there such chips?
What are you trying to guard/protect against? The "factory" being able to create unlicensed updates for your product?

Are you expecting to distribute unencrypted (though *signed*) updates to your users?

How complex is the device (i.e., you'll be giving your manufacturer the binary image of the code to install in the virgin devices)?

Reflashing via JTAG (at a "secure location") isn't the *only* option. (Hint: how will your end users install updates?) All you need (to allow you to NOT trust your manufacturing source) is a "secret" that isn't available to them, or to your users.
> What are you trying to guard/protect against?
1. Reverse engineering
2. Hacking

In recent years there has been too much scary news about baby monitors and teddy bears, USB dongles and hard drives, network routers and switches being hacked and used to spy on their users, steal their data, or infect computers. Many of these hacks became possible because the original firmware was reverse engineered and/or hacked firmware was installed. Companies that developed those insecure devices got very bad publicity. People are afraid to use devices. We need to find some best practices and follow them.
> Are you expecting to distribute unencrypted (though *signed*) updates to your users?
Requirements vary. Recently many of my clients began demanding encryption (at least those who understand the threat), and I advise them to use both authentication and encryption, as much as the microcontroller's capabilities allow. Luckily, more and more microcontrollers now include hardware encryption modules. The question is where to hide the secret key used for encryption/decryption.
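A minimal sketch of the authentication half described above, assuming a symmetric key has somehow been provisioned (which is exactly the open question in this thread). The key value and names here are illustrative, not from any real product:

```python
# Vendor appends an HMAC-SHA256 tag to the image; the bootloader recomputes
# and compares it in constant time before accepting the update.
import hmac, hashlib

DEVICE_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # hypothetical

def tag_image(image: bytes, key: bytes) -> bytes:
    """Vendor side: append an HMAC-SHA256 tag to the image."""
    return image + hmac.new(key, image, hashlib.sha256).digest()

def accept_image(blob: bytes, key: bytes) -> bool:
    """Bootloader side: constant-time check of the trailing 32-byte tag."""
    image, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

blob = tag_image(b"firmware v2", DEVICE_KEY)
print(accept_image(blob, DEVICE_KEY))               # True
print(accept_image(b"evil" + blob[4:], DEVICE_KEY)) # False -- tampered image
```

Note this only authenticates; confidentiality would additionally require encrypting the image, and a symmetric tag still leaves the "where does DEVICE_KEY live" problem open.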
> How complex is the device (i.e., you;ll be giving your manufacturer the binary image of the code to install in the virgin devices)?
Again, it varies, but recently I see more and more devices that include only a bootloader and some basic firmware, and then perform a firmware upgrade upon first connection. Of course, this firmware upgrade can be done by anybody: end user, manufacturer, or hacker.
> All you need (to allow you to NOT trust your manufacturing source) is a "secret" that isn't available to them, or your users.
Yes, this was my original question: how do we store a secret on a device that is manufactured in China?
On 6/24/2017 3:27 PM, jhnlmn@gmail.com wrote:
>> What are you trying to guard/protect against?
>
> 1. Reverse engineering
If your product is "trivial", there is no need to "decompile the code"; just reinvent the design from its apparent specifications/functionality. If your product is complex, you probably won't be able to do much more than *copy* the design, as it quickly becomes impractical to reverse engineer *big*/complex designs (even with knowledge of the toolchain used in their development).

There are "things" you can do to confound counterfeiters (folks that nominally copy a design with minimal changes -- like changing the copyright notice to reflect "their" ownership of it, etc.). But you always have to balance the effort (and inconvenience) of introducing these, as they don't *directly* contribute to the overall functionality/reliability of your product (indeed, they may *worsen* reliability!).

Moral of the story: make a great product that people WANT to copy! (Whether that is by cloning the implementation *or* by duplicating the functionality.)
> 2. Hacking In resent years there were too many scary news about baby
> monitors and teddy bears, usb dongles and hard drives, networks routers and
> switches being hacked and used to spy on their users, steal their data or
> infect computers. Many of these hacks became possible because original
> firmware was reverse engineered and/or hacked firmware was installed.
> Companies that developed those insecure devices got very bad publicity.
> People are afraid to use devices. We need to find some best practices and
> follow them.
The problem, there, is an attitude towards "security" as a "bolt-on" aspect of a product's design instead of an inherent, "first-class" design goal.

HP made a "secure web console" (I draw your attention to the word "secure" in THEIR choice of product name!). This was essentially a one-port terminal server that you'd connect to the serial (console!) port of the system to be administered. An outward-facing network interface on the device hosted a web interface that allowed "administrators" to "log in" to the device and have the sorts of capabilities afforded to "operators" sitting in the data center alongside the machine being controlled (i.e., the PHYSICAL access that is usually essential to secure administration: control who has access to the physical machine to limit the attack surface). So, from a browser, you could type on a virtual console AS IF you were seated at a TTY a few feet from the machine.

Unfortunately, the implementation relied on a simple substitution (Caesar) cipher to send the user's keystrokes from the browser to the remote SECURE web console device. Thus, when the administered machine sent its "login: " prompt out to the secure web console, that was forwarded to the remote user seated at the web browser interface. When he typed "root" followed by "mysecretpassword", that text was sent to the SECURE web console as "sppu" and "nztfdsfuqbttxpse" (for example). Anyone sniffing packets would see this and trivially decode the plaintext to be "root" and "mysecretpassword".

I.e., DESPITE the product being marketed with security in mind, the focus was more on getting the web server in the device to operate with a reasonably portable browser interface. Security ended up sacrificed to meet those other goals, instead.

Having default passwords has got to be the stupidest idea that ever came to this market. REQUIRE the user to set up an initial password BEFORE allowing the device to operate.
If the user forgets the password, RESET erases the password and restores the device to its "as shipped" configuration (deliberately destroying any "sensitive" information that the user may have forgotten was present on the device -- therefore, "Press RESET before disposing of this device to safeguard its contents!"), rendering it unusable until a NEW password is specified.

[I suspect default passwords and account names represent the biggest attack vector in most embedded devices. Even MS got smart and realized "Administrator" is a bad name for an ENABLED account!]

Beyond this, you have exploits that are consequential to sloppy or poorly tested code (buffer overruns, stack overflows, etc.). Adding a secure update mechanism won't protect against these sorts of exploits. And an exploit that can hide "in RAM" (or in some aspect of the machine's state) doesn't need to alter the "ROM image" -- it just needs to be able to persist for some amount of time (which is usually easily done with 24/7/365 devices).

[Once an exploitable device is discovered, it can be targeted for periodic "reinfection" even if it happens to cycle power (or otherwise purge the exploit) often.]
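The scheme in the HP story above was a shift-by-one Caesar cipher; a few lines of Python reproduce both the "encryption" and the eavesdropper's trivial reversal:

```python
# Shift-by-one substitution cipher, as used by the "secure" web console.
# An attacker sniffing packets just applies the opposite shift.
def caesar(text: str, shift: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

print(caesar("root", 1))                # sppu
print(caesar("mysecretpassword", 1))    # nztfdsfuqbttxpse
print(caesar("nztfdsfuqbttxpse", -1))   # mysecretpassword -- the "attack"
```

Even without knowing the shift, 26 guesses (or a letter-frequency count) recover the plaintext, which is why this is obfuscation rather than encryption.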
>> Are you expecting to distribute unencrypted (though *signed*) updates to
>> your users?
>
> Requirements vary. Recently many of my clients began demanding encryption
> (at least those who understand the threat). And I advise them to use both
> authentication and encryption as much as microcontroller capabilities
> allow. Luckily, recently more and more microcontrollers include hardware
> encryption modules. The question is where to hide the secret key to use for
> encryption/decryption.
Are you willing to impose a particular choice of devices on your customers (i.e., all of your designs)? Or, are you happier imposing a particular *process* on them (thereby giving you more leeway in how you approach the designs)?
>> How complex is the device (i.e., you'll be giving your manufacturer the
>> binary image of the code to install in the virgin devices)?
>
> Again it varies, but recently I see more and more devices that only include
> bootloader and some basic firmware and then perform firmware upgrade upon
> first connection. Of course, this firmware upgrade can be done by anybody -
> end user or manufacturer or hacker.
That's how I have approached this. I distribute a "ROM image" that allows the manufacturer (domestic or foreign) to ensure the device HARDWARE will perform as I intend. This exercises ALL of the hardware (in concert with a suitably designed test fixture) even if the "release software" doesn't (currently) use some portion thereof. I.e., if the devices pass the final inspection, I have some high degree of confidence that the manufacturer has built them properly. And the manufacturer can likewise be reassured (no fear of me rejecting a lot due to some criteria that I impose *after* delivery).

A side effect of this is the manufacturer never sees my production code. Instead, he has code (a binary image) that is designed to test a particular piece of hardware in a particular test fixture. It may have *NO* value when used with a modified test fixture or on different hardware.

[Note that there is nothing to prevent said manufacturer from counterfeiting these devices! He knows what parts they contain, how they are wired, what "software" resides in them, etc.]

I receive the devices and subject them to the same "acceptance test" that the manufacturer (theoretically!) applied as his "final test". If devices pass the final test (my "acceptance test") but fail to meet some other aspect that I require of the design, then (clearly) my test procedure needs to be revised/upgraded. A "passing" device is then reprogrammed, serialized and recorded as "ready (for sale)".

After the sale, the user "introduces" the device to his system, at which point it is tailored to his needs, keys/certificates installed, etc. All of the "post manufacture" processes use protocols that are typically NOT routed -- so, they require the device to be present on the same subnet as the companion/programming device. Additionally, the individual initiating the action must "press a button" to activate this process.
So, a hostile actor can't usually gain virtual access to the device (unless he suborns another host on that subnet and installs an appropriate set of services to mimic the legitimate "programming" ones). Also, he needs to have those in place to anticipate the "button press" AND has to hope the /bonafide/ services aren't running (as they would detect this conflict). This isn't foolproof. But, it greatly reduces the attack surface to one that SHOULD be more manageable: keep a secure "programming station" (even if that means booting off R/O media, etc.)
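The "same subnet" gate described above can be sketched with the stdlib `ipaddress` module; the interface address and mask here are hypothetical:

```python
# Refuse provisioning requests that do not originate on the directly
# attached subnet (non-routed traffic only). Addresses are made up.
import ipaddress

LOCAL_IF = ipaddress.ip_interface("192.168.10.5/24")   # device's own address

def on_local_subnet(peer: str) -> bool:
    """Accept provisioning traffic only from the local subnet."""
    return ipaddress.ip_address(peer) in LOCAL_IF.network

print(on_local_subnet("192.168.10.77"))   # True  -- same wire, allowed
print(on_local_subnet("203.0.113.9"))     # False -- routed, rejected
```

As the text notes, this is a surface-reduction measure, not proof against a suborned host already sitting on that subnet.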
>> All you need (to allow you to NOT trust your manufacturing source) is a
>> "secret" that isn't available to them, or your users.
>
> Yes, this was my original question, how to store a secret on a device that
> is manufactured in China?
Add it after the manufacturing process is complete. If it isn't a "shared" secret common to ALL of your devices, then having one instance of it exposed (accidentally, social engineering, etc.) doesn't compromise ALL of your products (of that particular model).
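One common way to get per-device (non-shared) secrets, consistent with the advice above, is key diversification: derive each unit's key from a vendor-held master key and the unit's public serial number. A sketch with HMAC-SHA256 (the master key value is, of course, made up):

```python
# Key diversification: leaking one unit's key exposes only that unit,
# never the whole product line, and the master key never leaves the vendor.
import hmac, hashlib

MASTER_KEY = b"vendor-master-key-kept-offsite"   # never sent to the factory

def per_device_key(serial: str) -> bytes:
    """Derive a unit-unique 32-byte key from the public serial number."""
    return hmac.new(MASTER_KEY, serial.encode(), hashlib.sha256).digest()

k1 = per_device_key("SN-000123")
k2 = per_device_key("SN-000124")
print(k1 != k2)   # True -- every serial gets a distinct key
print(len(k1))    # 32
```

The derivation is deterministic, so the vendor can recompute any unit's key on demand from its serial instead of storing a key database.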
> If your product is "trivial"
Personally, I have no chance of working on "trivial" devices, since they are developed entirely in China. Of course, it would be nice if the makers of those "trivial" devices (like those USB dongles that got hacked a few years ago) followed good security processes as well.
> make a great product that people WANT to copy!
I am not sure my clients want that. I have heard of many innovative companies that died as soon as Chinese outfits managed to copy their designs. Also, defense against reverse engineering is needed not only to prevent copying, but to protect other secrets: for example, an encryption or authentication key, or some secret method that gives them a business advantage (detection of counterfeit printer cartridges comes to mind).
> REQUIRE the user to set up an initial password BEFORE allowing the device to operate.
And how can we securely send this password? It is easy to send a password to a Web site (using HTTPS and a private key on the server), but a lot of devices allow local connections between devices, or between a device and a PC or smartphone. And here we come back to the same original question: how can we securely store a private key on a device?
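For the "how do we send the first password over a local link" problem, one standard building block is a key agreement such as Diffie-Hellman: the two ends derive a shared secret with no pre-installed key. Note that, unauthenticated, it is still vulnerable to an active man-in-the-middle, which is why commissioning schemes add a button press, displayed PIN, or other out-of-band check. A toy sketch with deliberately small parameters:

```python
# Toy finite-field Diffie-Hellman (parameters far too small for real use;
# real deployments use >= 2048-bit groups or elliptic curves).
import secrets

P = 0xFFFFFFFB   # small prime, illustration only
G = 5

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1      # private exponent
    return priv, pow(G, priv, P)             # (private, public)

a_priv, a_pub = dh_keypair()   # phone/PC side
b_priv, b_pub = dh_keypair()   # device side

# Each side combines its private value with the other's public value.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
print(shared_a == shared_b)    # True -- both ends derive the same secret
```

The shared secret can then protect the initial password in transit; the remaining (hard) part is authenticating *which* device you just agreed with.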
> you have exploits that are consequential to sloppy or poorly tested code
> (buffer overruns, stack overflows, etc.). Adding a secure update mechanism
> won't protect against these sorts of exploits.
Yes, I know that security through obscurity is bad security; still, it is much easier to find code susceptible to buffer overruns by reading a disassembly than by black-box testing alone.
> Are you willing to impose a particular choice of devices on your customers
> (i.e., all of your designs)? Or, are you happier imposing a particular
> *process* on them (thereby giving you more leeway in how you approach the
> designs)?

No, I cannot force them to use a different microcontroller only because of security (we are not making ATM machines). Usually I am writing the code and writing a doc on how to install it. If a device is low volume and per-unit cost is not so important, I can require that the bootloader (along with the secret key) be installed via a JTAG programmer attached to a more or less secure PC on company premises. But for higher-volume devices, they want to do the whole process in China.
> A "passing" device is then reprogrammed, serialized and recorded as "ready (for sale)".
This is what I called "moving all newly manufactured devices to a secure location and reflashing via JTAG" in my original question (replace JTAG with WiFi, BT, or serial; it does not matter). Alas, many startups have no resources to do that. They want to outsource everything.
On 6/24/2017 7:26 PM, jhnlmn@gmail.com wrote:
>> A "passing" device is then reprogrammed, serialized and recorded as "ready
>> (for sale)".
>
> This is what I called "moving all newly manufactured devices to a secure
> location and reflashing via JTAG" in my original question (replace JTAG by
> WiFi, BT, Serial, does not matter). Alas, many startups have no resources
> to do that. They want to outsource everything.
Then my solution won't work for you.

Having worked in industries where counterfeiting was BLATANT and very costly (~25% of your market share, thousands to tens of thousands of dollars per lost sale), I can tell you that you will find yourself spending a fair bit of time chasing dragons -- and, unless you can grow WINGS, they'll always be a step ahead of you! [Unless, of course, you're designing products that NO ONE WANTS!]

You'll have PROOF of a supplier having explicitly copied your product. By the time you get yourself in front of a judge to slap an injunction on them, or have their cargo seized at the local port (assuming you know where it is!), they'll have folded up shop and restarted under the name "Joe's Garage Shop #x" -- where <x> increases by one each iteration. *YOU* will be devoting your energies to trying to thwart and detect their infringements, while they'll be diverting their energies into learning how to copy your designs FASTER.

If you outsource "everything", then you have also implicitly outsourced your product's security/integrity/quality -- and with it, your reputation.

Good luck!
Don, I appreciate your pessimism and despair.
Still, imagine that more and more devices are made just as I said: on a small budget, with almost everything outsourced except (hopefully) engineering. So, we must find some ways to retain control; otherwise everything that is not hacked already will be hacked, and we will not be able to make or use any electronics. I really hope a solution will be found. Anybody?
Thank you
On 6/24/2017 9:59 PM, jhnlmn@gmail.com wrote:
> Don, I appreciate your pessimism and despair.
It's not pessimism; it's a realistic attitude given the capabilities available to even "small players", nowadays. "Kids" de-encapsulate chips in college labs and microprobe die, etc.
> Still, imagine that more and more devices are made just like I said: on a
> small budged, with almost everything outsourced except (hopefully)
> engineering.
Imagine you have a device that lets you "hide" a key in it. Who's going to push the buttons, copy the files, or <whatever> to actually *hide* the key? You're outsourcing EVERYTHING!

Do you trust the supplier to do the "secret key encoding" -- but NOT trust him to actually *see* the key he is encoding into the devices? Do you find a second firm to do the keying -- and hope these two firms never "compare notes"?

What if you add a component containing the key to the design AFTER it has been manufactured? Who "adds the component"? A third firm?

Do you buy custom silicon with the key already hidden inside? Then, hope that secret is never leaked/revealed/sold/etc. (i.e., have BETTER security over your IP than the likes of the NSA, Sony, many multinational banks, etc.) by a disgruntled employee? And, WHEN it is disclosed/discovered (by some technique that you've not yet heard about), do you get NEW silicon with a different key? And, at the same time, allow clients to PICK the microcontroller core that they want embedded in that silicon? Do you suddenly stop making products when someone "cracks" your system -- and not resume until you've come up with another "uncrackable" approach?

And, of course, you no doubt expect all of this to be inexpensive, as these "startups" don't have the re$ource$ for that, either?

You either have to trust *some* supplier OR be willing to take on some of the trusted activities yourself. How does your client know that *YOU* can be trusted? Maybe you hid a backdoor in THEIR product?!
> So, we must find some ways to retain control, otherwise everything will be
> hacked (which is not hacked already) and we will not be able to make or use
> any electronics. I really wish a solution will be found.
Invest your time in making better products -- so no one wants the last-generation product. Design your products with security in mind from the start, not bolted on as an afterthought. Be an Engineer, not a Rent-a-Cop. Find people that you trust -- if you need them to be trustworthy -- in much the same way that you hire people to be *competent* (at some skill) when THAT is your need!
> Anybody? Thank you
Add a Dallas chip with a unique id in every product?

On 6/25/2017 2:37 AM, Bill Davy wrote:
> Add a Dallas chip with a unique id in every product?
That just gives you a serial number. Do you plan on releasing software updates (images) *tied* to specific serial numbers? I.e., what is there that causes S/N 12345 to work but not 54321? [You could just as well use the device's MAC as a S/N.]

What's to stop someone from downloading images for two different serial numbers (perhaps because he legitimately OWNS two devices, each with a different S/N) and comparing the images to isolate the location of the serial number and any other diffs (e.g., CRC)? Armed with that, how hard would it be to create a NEW image for some other S/N? (Or elide the S/N check entirely?)
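For contrast with the diffing attack described above: if the serial binding is a keyed MAC over the image plus the serial (rather than a plain serial field and CRC), comparing two legitimate images reveals where the tag lives but not how to compute one for a third serial. A sketch (the key value is hypothetical; a real design would prefer the public-key variant so the device need not hold the signing key):

```python
# Bind each release image to one serial number with a keyed MAC. Diffing
# two blobs only locates the tag; forging a tag requires SIGNING_KEY.
import hmac, hashlib

SIGNING_KEY = b"vendor-release-key"   # held by the vendor only (hypothetical)

def bind(image: bytes, serial: str) -> bytes:
    """Vendor side: append a tag over image-plus-serial."""
    tag = hmac.new(SIGNING_KEY, image + serial.encode(), hashlib.sha256).digest()
    return image + tag

def device_check(blob: bytes, serial: str) -> bool:
    """Device side: accept only an image bound to *this* serial."""
    image, tag = blob[:-32], blob[-32:]
    good = hmac.new(SIGNING_KEY, image + serial.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(tag, good)

blob = bind(b"app v3", "12345")
print(device_check(blob, "12345"))   # True
print(device_check(blob, "54321"))   # False -- image bound to another unit
```

Of course, as the post notes, an attacker who can patch the bootloader can still elide the check entirely; the MAC only raises the bar from "edit two bytes" to "break into the boot chain".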