Hardware secure elements such as Microchip’s ATECC508A/ATECC608A, STMicroelectronics’ STSAFE-A100, Infineon’s OPTIGA or NXP’s A71CH are promoted as solutions for IoT security, but what do these devices do, how do they help and what are their limitations?
These devices are also known as embedded hardware security modules (HSMs), cryptographic authentication chips and cryptographic processors.
The need for hardware security
A secure IoT device needs at least the following security properties (there are others!):
- To know that the software you are running is not tampered with ("integrity");
- To be sure that you can trust who you are communicating with ("authenticity");
- To be able to communicate privately ("confidentiality").
There are well-known solutions to address each of these, for example:
- Integrity : secure boot, protected debug ports, Memory Protection Units, secure code enclaves;
- Authenticity: “public key” cryptography such as RSA or Elliptic Curve algorithms that are used in TLS handshakes;
- Confidentiality: “symmetric key” cryptography such as AES, which is commonly used to encrypt TLS messages.
For modern communications (and preferably for secure boot), cryptography is necessary. Unfortunately, cryptography and IoT are not a natural fit:
- Cryptography is computationally expensive (especially public key cryptography);
- IoT devices are often low power devices with slow processors, resulting in cryptographic operations taking a long time;
- Cryptography requires secrets to be stored in a safe and non-volatile manner (for RSA, at least 2,048 bits of storage is recommended).
To address these issues, it is desirable to move cryptographic operations from software running on insecure platforms into secure hardware, by adding cryptographic accelerators to IoT devices. Custom hardware can run the algorithms faster and at lower overall power than a software implementation, and it allows a higher level of hardware security to be applied to the most security-sensitive parts of the system.
The down-sides to adding these accelerators are:
- They are complex to design, implement and test;
- They are relatively big electronically and the increased silicon area required pushes up the cost of chips;
- They are fixed to a particular algorithm (e.g. AES) so each supported algorithm requires another chunk of silicon area, which can quickly become expensive and potentially obsolete.
Accelerators for symmetric key cryptography (such as AES) are now relatively common in microcontrollers with communications capabilities (e.g. BLE, WiFi), because these protocols mandate symmetric key cryptography to protect data sent over the link, and doing this in software would be impractical for performance and power reasons. However, public key hardware acceleration is still less common as it is more complex and requires more silicon area to implement (and therefore increases the cost of devices).
Be aware, though, that cryptographic accelerators are not all made equal - even if they accelerate the same cryptographic algorithm such as AES. Different implementations can have vastly different levels of security.
So why is this? We all assume hardware to be secure, don’t we? Well, unfortunately that’s not necessarily the case!
Cryptographic algorithms implemented in software or hardware can leak information. For example, suppose my hardware performs a multiply operation when a bit of my secret data (the “key”) is a 1, but not when the bit is a 0. If I can measure the instantaneous power that the hardware is drawing, I can work back from those power measurements to recover the secret key.
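To make the data-dependence concrete, here is a toy (and deliberately leaky) square-and-multiply modular exponentiation in Python. The multiply only happens for 1-bits of the secret exponent, so the operation count, which stands in here for power draw, reveals information about the key. This is a simplified sketch of the weakness, not a real attack.

```python
def leaky_modexp(base, exponent, modulus):
    """Left-to-right square-and-multiply with a data-dependent multiply."""
    result = 1
    multiplies = 0  # stand-in for "extra power drawn"
    for bit in bin(exponent)[2:]:            # MSB-first bits of the secret
        result = (result * result) % modulus # square: always happens
        if bit == "1":
            result = (result * base) % modulus  # multiply: only for 1-bits
            multiplies += 1
    return result, multiplies

# Two exponents of the same length but different bit patterns cause
# measurably different numbers of multiplies:
_, m1 = leaky_modexp(5, 0b10000001, 2_147_483_647)
_, m2 = leaky_modexp(5, 0b11111111, 2_147_483_647)
print(m1, m2)  # the counts differ with the number of set key bits
```

A constant-time implementation would perform the same operations regardless of the key bits, which is exactly what a well-designed hardware accelerator does.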
But is this possible?
Unfortunately YES! You can buy hardware at relatively low cost to do just that. For example, see the excellent ChipWhisperer tool which can, at a low cost, extract the secrets from many current “secure” IoT processors.
These types of attack are called “side channel attacks” and are very real. IoT devices are often physically accessible and the skills to perform such attacks are becoming more common.
There are also other more advanced, but less common, physical attacks. The hardware implementation has to be resistant to all of these in order to claim a high security level.
The need for secure key storage
Cryptographic processing in hardware only solves one part of the cryptographic problem. The other problems are due to the secret data that you need to secure:
- Secret key data (either symmetric keys or the private keys used in public key cryptography) needs to be safely stored;
- Secret data needs to be safely used (see the “side channel attack” explanation above);
- You need to add those keys to your IoT devices securely in the first place at manufacturing time.
The secret keys are among the most important things that need to be protected on an IoT device. If an attacker gains access to them they could impersonate a device and potentially get access to your network or that of the device owner; or they could use them to steal your intellectual property or the legally protected personal data of the device owner.
Unless a device has dedicated secure storage (and some more recent microcontrollers and SoCs do claim to have secure areas of flash or OTP for this), these secrets are at risk and so are you as the manufacturer or operator.
Getting keys onto devices is also a difficult problem as you need to protect their secrecy from the point at which you generate the data to the point where they are actually stored on the devices. This normally requires secure key generation facilities, secure transport of key material to manufacturing sites and a secure facility on the manufacturing line to allow the keys to be safely programmed. This all adds up to a lot of additional expense.
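One common way to reduce this provisioning cost is key diversification: a master key is kept inside a secure facility or HSM, and each device key is derived from it plus the device’s unique serial number, so individual device keys never need to be transported. The sketch below shows the principle; the label string and function name are illustrative, not any vendor’s actual scheme.

```python
import hashlib
import hmac

# Placeholder only: in practice the master key is generated and kept
# inside an HSM at the key generation facility.
MASTER_KEY = bytes(32)

def derive_device_key(master_key: bytes, serial: bytes) -> bytes:
    """Derive a unique per-device key from the master key and serial."""
    return hmac.new(master_key, b"device-key-v1" + serial, hashlib.sha256).digest()

key_a = derive_device_key(MASTER_KEY, b"SN-000001")
key_b = derive_device_key(MASTER_KEY, b"SN-000002")
assert key_a != key_b  # every device ends up with a unique key
```

Because derivation is deterministic, a backend that knows the master key can recompute any device’s key from its serial number without ever storing per-device secrets.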
How do secure elements help?
Secure elements for IoT security typically provide:
- Secure hardware for performing public key cryptography operations;
- Secure key storage;
- Manufacturer-provided secret data programming (“secure provisioning”) for large-volume customers.
The cryptographic and key storage hardware in these devices is high quality. You will often see statements such as “EAL5+ certified” (a Common Criteria assurance level), indicating that the hardware has been independently evaluated and tested to resist side channel attacks and other passive advanced attacks. They are also likely to have active hardware protection, such as “Active Shields”, against active physical attacks, and to have been tested against attacks such as fault injection and even physical probing of the silicon die.
So these devices provide confidence that the public key cryptography and key storage are secure, and reduce the risk of your secret keys being exposed. The manufacturer-provided provisioning can also save time and money when first making a product line secure.
You can think of a secure element as being like a secure room containing a computer and storage that has very good physical security. All the information that you store in the room can be considered to be safe from attackers, and any computational operations that you perform on the computer in that room with the stored information are also safe from snooping while you’re working. The secure provisioning is then like the manufacturer having already put some information safely into the room before you even get access to it.
What are their security limitations?
When you are actually using the secure room you still have to carry data into the room to perform the computations, and you have to carry the results out. Once you leave the room you’re not as well protected. An attacker could also use your secure room by pretending to be you, sending data into it and getting the results back out.
In real terms, it is the bus (normally I2C) that links your processor to your secure element that forms the weakest link. An attacker can probe this bus with very low cost tools and read any data sent over it. It is possible to improve the security of this link by encrypting or obfuscating the data sent over it, and by authenticating with the secure element (e.g. by using a software library on the processor) so it can be trusted. However, the security level of the link will always be that of the weakest security component, which will be the chip that the secure element is connected to (the “host processor”).
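As a sketch of what such link protection can look like, the snippet below authenticates each bus frame with an HMAC computed under a shared key provisioned into both the host and the secure element, so a probe on the I2C lines cannot silently tamper with commands or responses. Some real parts implement a similar scheme in hardware; the names here are illustrative, not a vendor API, and note that MAC-only protection still leaves the payload readable, so encryption is needed for confidentiality.

```python
import hashlib
import hmac

# Placeholder: a shared key provisioned into both host and element.
IO_PROTECTION_KEY = bytes(32)

def protect(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can detect tampering."""
    tag = hmac.new(IO_PROTECTION_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify(frame: bytes) -> bytes:
    """Check the tag; raise if the frame was modified on the bus."""
    payload, tag = frame[:-32], frame[-32:]
    expected = hmac.new(IO_PROTECTION_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bus frame tampered with")
    return payload

frame = protect(b"sign-digest-cmd")
assert verify(frame) == b"sign-digest-cmd"
```

The `compare_digest` call matters: a naive byte-by-byte comparison would itself be a timing side channel.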
That is not to say that the secure elements do not provide value:
- Your secret keys are protected: they never leave the element and an attacker won’t be able to get them;
- The results of operations using secret keys are typically used for transient operations. For example, they could be used to generate a “session key” that is used to encrypt the communications during a single TLS session. Even if an attacker extracts this they can only intercept or inject data for this session;
- If an attacker does manage to take over use of a single secure element, it should be easy to lock them out once detected, as each device should have unique keys and a unique digital identifier.
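The session-key point above can be sketched as follows: the long-term secret stays inside the secure element, and only a per-session key, derived with fresh randomness from both sides, is ever exposed outside it. This is a simplified illustration; TLS uses a fuller key schedule, and the function name here is made up for the example.

```python
import hashlib
import hmac
import secrets

def session_key(shared_secret: bytes, client_nonce: bytes, server_nonce: bytes) -> bytes:
    """Derive a per-session key from a long-term secret and fresh nonces."""
    return hmac.new(shared_secret, client_nonce + server_nonce, hashlib.sha256).digest()

# Stand-in for the key-exchange output held inside the secure element:
long_term = bytes(32)

k1 = session_key(long_term, secrets.token_bytes(16), secrets.token_bytes(16))
k2 = session_key(long_term, secrets.token_bytes(16), secrets.token_bytes(16))
assert k1 != k2  # stealing k1 gives no help with the next session
```

Even if an attacker recovers `k1` from the host, the fresh nonces mean the next session gets an unrelated key, which is why the damage is bounded to a single session.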
What are their practical limitations?
Using a secure element requires device driver software on the host processor, and integration of the secure element’s capabilities into existing software applications. For example, using a secure element to secure TLS connections requires changing the TLS software stack to call the secure element device driver at the appropriate points to perform operations such as Diffie-Hellman key exchange.
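The integration point can be sketched like this: the TLS stack never sees the private key, it only asks the secure element driver for a public value and a shared secret. Textbook Diffie-Hellman with toy parameters (far too small for real use) stands in for the ECDH a real element performs, and the class and method names are illustrative, not a real vendor driver API.

```python
import secrets

P = 2**32 - 5  # a prime, toy-sized; real ECDH uses standardised curves
G = 5

class SecureElementStub:
    """Software stand-in: the private scalar never leaves this object."""

    def __init__(self) -> None:
        self._priv = secrets.randbelow(P - 2) + 1  # stays "inside the chip"

    def get_public(self) -> int:
        return pow(G, self._priv, P)

    def compute_shared(self, peer_public: int) -> int:
        return pow(peer_public, self._priv, P)

# What a TLS stack hooked up to such a driver would do in the handshake:
device, server = SecureElementStub(), SecureElementStub()
shared_a = device.compute_shared(server.get_public())
shared_b = server.compute_shared(device.get_public())
assert shared_a == shared_b  # both sides agree without exposing _priv
```

The design point is the interface boundary: everything the stack needs crosses it as public values, so swapping the stub for a real driver changes the implementation but not the call sites.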
The driver software and application stack support from secure element manufacturers can be of varying quality and may not support the latest versions of common communications stacks.
What should they be used for?
We would recommend that secure elements are used for:
- protecting the private keys used in public key cryptography for communications (e.g. the TLS private key used to authenticate an IoT device on the network);
- generating communication protocol session keys (preferably with bus link protection).
If the secure element offers a method to authenticate it with the host processor, then this should be used. Likewise, any link protection measures should be used.
The data from secure elements should only be trusted to the extent that you trust the host processor and its security implementation of the link protection and authentication security.
Secret keys should always be unique to each secure element to allow devices to be blacklisted from your network.
We do not recommend that secure elements are used for:
- Secure boot integrity checking of the host processor software;
- Random number generation unless there is no other practical option.
The two-chip solution is not ideal and in the future we would expect secure element functionality to be integrated into the host processor.
How can we help?
If you would like help with device selection, evaluating the use of a secure element in your product or need help with implementing host processor security software, then please use the web contact form or email us for more information.