iPhone 5S: Inside the Secure Enclave
As you surely know by now, Apple has announced that its iPhone 5S includes a fingerprint sensor. In a first blog post, we discussed its use as a second form of authentication, and in another post, the fact that the sensor scans sub-epidermally.
Apple tells us "All fingerprint information is encrypted and stored securely in the Secure Enclave inside the A7 chip on the iPhone 5s; it's never stored on Apple servers or backed up to iCloud." So, how does the 'Secure Enclave' work, and are we sure our fingerprint will remain locked up in the A7 processor?
ARM's TrustZone technology
Actually, the Secure Enclave is no more, no less than ARM's TrustZone technology. TrustZone adds a new mode to ARM processors: the Secure Monitor mode. For readers familiar with x86 rings, the User mode of ARM processors is comparable to ring 3, the Supervisor mode (used by the OS kernel) to ring 0, and the Secure Monitor mode would be like a ring "-1". This mode is entered from privileged processor modes when the processor executes a Secure Monitor Call (SMC) instruction. The processor switches to Secure Monitor mode, and a specific bit, NS, is set to 0 (secure) or 1 (non-secure). This bit lives in a register of the CP15 coprocessor (the Secure Configuration Register), and it is used to gate access to hardware: access or not to secure peripherals, to non-volatile keys, to secure portions of the on-chip RAM, etc. For instance, data written to the RAM from the Secure World cannot be accessed from the Normal World. It is like having two virtual RAM chips.
TrustZone's hardware capability is driven at the software level by a two-world architecture. On one side, there is the "normal" world, running a rich OS - typically iOS 7 in our case. That world runs your favorite iOS applications (Angry Birds, Facebook, Twitter, etc.) and also system services, such as the one that locks the iPhone (see GSEventLockDevice() in GraphicsServices.framework). On the other side, there is the Secure World, which runs a different, secure microkernel and security services. Only a small amount of code runs in that area: trusted code, limited to a strict set of functionalities.
How it possibly works
Apple has not released its design specifications, so we'll try and guess how it works. Please look at the figure while reading the text below.
When the end user wants to lock or unlock his iPhone 5S, the locking service needs to access the fingerprint sensor. This sensor is only accessible from the Secure World, so at some point the locking service calls a function of the TZAPI (TrustZone API), which issues a Secure Monitor Call (SMC) instruction. The processor switches to Secure Monitor mode, and the Secure Monitor handles the switching of worlds. The call ends up in the Secure Fingerprinting service, which accesses the fingerprint sensor to capture your fingerprint. Access to the driver is possible because we are in the Secure World. Note that even a rooted iOS 7 cannot access the driver, as it does not operate in Secure Monitor mode. The bits that characterize your fingerprint travel from the sensor to the processor. The bus between the sensor and the processor is secured too: the data that flows on that bus cannot be eavesdropped on or modified by a malicious app in the Normal World, because Normal World apps run in User mode, whereas this processing happens in Secure Monitor mode. However, this is a protection against software attacks, not hardware attacks: if you decap the A7 chip and get a powerful microscope to inspect the die, you'll see everything, of course.
So, the digital fingerprint is collected. It is probably salted with device-specific information and hashed - using a hash function provided by the cryptographic library available in the Secure World - and the result is compared with the expected fingerprint. The expected fingerprint is likely stored in some on-chip RAM, or in encrypted form in the device's flash memory. It can probably be erased or overwritten (to initialize the device for a first use), but it cannot otherwise be modified, because either the data, or the key to decrypt it, is tagged as secure.
Note that TrustZone would be used in very much the same way with a password, a PIN or facial recognition.
Is this making authentication safe?
As always, there are no full guarantees with security. Much of it is about trusting Apple.
Design. The Secure Fingerprinting service might export a function named "getFingerprint()", meant to retrieve the fingerprint from the SoC (System on Chip) and communicate it to a requesting Normal World instance. Another attack: an attacker might trigger the initialization phase to reset the fingerprint. He/she won't retrieve the stored fingerprint, but it would defeat authentication nonetheless.
Implementation of TrustZone. Even good developers make mistakes, and it is unfortunately nearly certain that vulnerabilities will be found in the TZ software framework. A vulnerability in the TZ driver would be particularly critical, as attackers would gain access to the Secure World.
Implementation of cryptography. Vulnerabilities or bad usage of cryptography are another path for attacks. In particular, good random number generators are difficult to design, and their imperfections can cause the downfall of crypto. Or imagine a timing attack where the encryption or comparison of a fingerprint leaks data.
Biometric sensor. It's good news that the sensor is not optical but a capacitance scanner. However, it may still be exploited. Rahul Sasi explains that such sensors can usually be defeated by re-creating a 3D print of the finger (get the print on some cello tape and apply Fevicol) and gluing it onto your own. Also, Richard Henderson mentions there might be a slight difference between a sub-dermal sensor and a sub-epidermal one. Finally, are there cases where the iPhone falls back to non-biometric authentication (it's winter and you have your gloves on, you've got a band-aid on your finger, etc.)? If so, attackers might exploit that to downgrade authentication.
Hardware. We've already mentioned it: TrustZone is designed to be secure against software attacks, not hardware attacks. So, if an educated thief steals your phone, he/she can retrieve your fingerprint.
Nice gadget, but is it worth the risks?
People usually don't communicate their fingerprints to third parties. Our fingerprints are in biometric passports, so they are known to our own governments, but that's usually about it. With Apple's Touch ID, aren't we making it easier for cybercriminals to get our fingerprints (and re-sell them on the black market for whatever nefarious intent)? Additionally, our fingerprints are not replaceable: once they have been compromised, there is no way back. It's not like a key pair - we can't just generate a new one...
Thanks to Guillaume Lovet for his careful review and Richard Henderson, Michael Perna and ... my husband ;) for their insights on the topic.
-- the Crypto Girl