In an effort to combat the COVID-19 crisis, Apple and Google have partnered to leverage smartphones for determining whether individuals have come into contact with infected patients, in a way they claim preserves user privacy. The technique is called contact tracing, and the full list of specifications for the algorithms is in an Apple press release here. Here’s a high-level summary taken directly from the FAQ:
The solution harness[es] the power of Bluetooth technology to aid in exposure notification. Once enabled, users’ devices will regularly send out a beacon via Bluetooth that includes a privacy-preserving identifier—basically, a string of random numbers that aren’t tied to a user’s identity and change every 10-20 minutes for additional protection. Other phones will be listening for these beacons and broadcasting theirs as well. When each phone receives another beacon, it will record and securely store that beacon on the device.
At least once per day, the system will download a list of beacons that have been verified as belonging to people confirmed as positive for COVID-19 from the relevant public health authority. Each device will check the list of beacons it has recorded against the list downloaded from the server. If there is a match between the beacons stored on the device and the positive diagnosis list, the user may be notified and advised on steps to take next.
Why Should We Care?
Never waste a good crisis.
– Winston Churchill
When a head of state advises something like this—a mantra followed by governments, businesses, and yes, tech companies—it’s not unreasonable to view initiatives by big, data-hungry players like Google to provide “safety” to the general populace through a paranoid lens.
A person’s health is an incredibly personal, private matter. It’s as simple as that and I shouldn’t need to say more. Do you want your health insurance premiums going up in the future because of this data point? Do you really want an advertiser knowing whether or not you contracted the novel coronavirus? Even further still, do you want watchful eyes to have the power to track where you’ve been and who you’ve interacted with during a lockdown?
The crux of the issue is that even though tracking infected patients is a useful tool for both modeling the pandemic and keeping people safe, it needs to be done in a way that doesn’t enable data harvesting or power grabs by unrelated parties.
Protocol Analysis
This section will analyze the efficacy of the proposed Bluetooth and cryptographic protocols through an ultra-conservative lens to determine how “private” their proposed tracing method really is; feel free to follow along with the PDF! I’m going to try to avoid a discussion of the cryptographic merits of the protocol (smarter folk than I have looked at things from that angle already) and focus on privacy.
It’s worth noting that these are just protocols; implementations are delegated to the local public health authorities, and we all know writing secure code is much harder than just musing about it.
Cryptographic Primitives
Let’s start with the basics. The protocol needs only symmetric cryptography, provided by the following primitives:
- An HKDF (HMAC-based key derivation function) using SHA-256. ✅
- AES using 128-bit keys in CTR mode. ✅
- A cryptographically secure pseudorandom number generator (CSPRNG). ✅
The HKDF is well-known, standardized, and considered secure; both the key length and mode of operation for AES are considered secure against chosen-plaintext attacks; and the CSPRNG is secure by definition.
So far, so good.
Key Management
Let’s look at how these keys are used. The final data payload that will be transmitted to other users contains two key pieces of information:
- the rolling proximity identifier (RPI), which is a random secret value, and
- the associated encrypted metadata, which is unspecified, though it must be “non-user identifying.”
The key management schedule discretizes time into 10-minute intervals counted from the UNIX epoch; this count is called the current “interval number.” For example, the 5th interval is the 10-minute period starting 50 minutes into January 1st, 1970.
Then, the “rolling period” is 144 intervals, which comes out to 24 hours. The temporary exposure key is a 16-byte value pulled from the CSPRNG
every time the current interval number becomes a multiple of the rolling period (in other words, daily).
Up to 14 temporary exposure keys are stored on the device at once, meaning inter-user contact can be traced for two weeks.
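To make the schedule concrete, here’s a minimal Python sketch of the interval arithmetic described above (the constant and function names are my own, not from the spec):

```python
import os
import time

INTERVAL_SECONDS = 60 * 10   # each interval is 10 minutes long
ROLLING_PERIOD = 144         # 144 intervals = 24 hours

def interval_number(timestamp: float) -> int:
    """Number of 10-minute intervals elapsed since the UNIX epoch."""
    return int(timestamp) // INTERVAL_SECONDS

def new_temporary_exposure_key() -> bytes:
    """16 bytes from the OS CSPRNG, regenerated once per rolling period."""
    return os.urandom(16)

# The interval at which the current temporary exposure key became valid,
# i.e. the most recent multiple of the rolling period:
current_interval = interval_number(time.time())
key_valid_from = (current_interval // ROLLING_PERIOD) * ROLLING_PERIOD
```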
Rolling Proximity Identifier
First, the 16-byte RPI key is derived by applying the HKDF
to the daily temporary exposure key:
$$
k_{\text{RPI}} = \mathrm{HKDF}\left(
\substack{\text{temporary} \\ \text{exposure key}},
\text{“EN-RPIK”},
16
\right)
$$
The actual RPI is the ciphertext resulting from encrypting the current interval number along with data to pad the input to 16 bytes[^1]:

$$
\text{RPI} = \mathrm{AES}_{k_{\text{RPI}}}\left(
\text{“EN-RPI”} \;\big\Vert\; 00 \cdots 00 \;\big\Vert\;
\substack{\text{interval} \\ \text{number}}
\right)
$$
A fresh RPI is generated every time the Bluetooth broadcasting address is randomized (more on this latter part in the next section). This is definitely a good thing; we can essentially coalesce Bluetooth tracking with RPI tracking into the same attack vector.
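As a rough illustration, here’s how those two steps might look in Python with the cryptography package. Treat this as a sketch, not a reference implementation: the exact padding layout and byte order are my reading of the spec, and the single-block encryption is expressed as ECB over exactly one AES block.

```python
import struct
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_rpi_key(temporary_exposure_key: bytes) -> bytes:
    """k_RPI = HKDF(temporary exposure key, info="EN-RPIK", length=16)."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=16,
        salt=None,
        info=b"EN-RPIK",
    ).derive(temporary_exposure_key)

def rolling_proximity_identifier(rpi_key: bytes, interval: int) -> bytes:
    """Encrypt the padded interval number as a single AES-128 block."""
    # "EN-RPI" (6 bytes) || zero padding (6 bytes) || interval number (4 bytes);
    # the little-endian encoding is an assumption on my part.
    padded = b"EN-RPI" + b"\x00" * 6 + struct.pack("<I", interval)
    encryptor = Cipher(algorithms.AES(rpi_key), modes.ECB()).encryptor()
    return encryptor.update(padded) + encryptor.finalize()
```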
Metadata
The metadata key is derived much like the RPI key, via the HKDF:
$$
k_{\text{M}} = \mathrm{HKDF}\left(
\substack{\text{temporary} \\ \text{exposure key}},
\text{“CT-AEMK”},
16
\right)
$$
The Bluetooth document highlights the content of this metadata:[^2]
The privacy preserving encrypted metadata shall be used to carry protocol versioning and transmit (Tx) power for better distance approximation. The metadata changes about every 15 minutes at the same cadence as the Rolling Proximity Identifier to prevent wireless tracking of the device.
It makes sense given that Apple and Google want to allow implementors to “fill in the blanks” as they see fit for the rest of the metadata. However, I have a hard time imagining metadata that’s useful to a public health authority that is truly non-identifiable…
The encryption itself is pretty standard. Note that the RPI acts as the IV; this should be safe since it appears as a random 16-byte bitstring (even though it’s a meaningful ciphertext):

$$
\text{ciphertext} = \text{AES-CTR}_{k_{\text{M}}}\left(
\text{RPI},
\text{metadata}
\right)
$$
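A sketch of this step, again assuming the cryptography package; the 4-byte metadata layout shown here (version byte, Tx power, two reserved bytes) is only my guess at what an implementation would carry:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_metadata_key(temporary_exposure_key: bytes) -> bytes:
    """k_M = HKDF(temporary exposure key, info="CT-AEMK", length=16)."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=16,
        salt=None,
        info=b"CT-AEMK",
    ).derive(temporary_exposure_key)

def encrypt_metadata(metadata_key: bytes, rpi: bytes, metadata: bytes) -> bytes:
    """AES-CTR with the 16-byte RPI reused as the initial counter block."""
    encryptor = Cipher(algorithms.AES(metadata_key), modes.CTR(rpi)).encryptor()
    return encryptor.update(metadata) + encryptor.finalize()

# Hypothetical metadata payload: version, Tx power, two reserved bytes (placeholders).
example_metadata = bytes([0x40, 0x00, 0x00, 0x00])
```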
Revealing Information
To summarize, the full payload broadcast over Bluetooth to other devices is the following (encrypted) information:

$$
\boxed{
\substack{\text{time} \\ \text{interval}} + \text{metadata}
}
$$

The idea is that when a user is infected, they voluntarily reveal their recent temporary exposure keys, which would allow a centralized authority to update a global list that it periodically distributes to all users. Then, users can cross-reference their own history of observed payloads and see if any match.
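The matching step is simple to sketch. Reusing the helpers from the earlier snippets, each device re-derives every RPI a published diagnosis key could have produced and intersects that set with what it has observed (the function names and key-list format here are my own):

```python
def rpis_for_key(temporary_exposure_key: bytes, first_interval: int) -> set:
    """All 144 RPIs a temporary exposure key emits during its rolling period."""
    rpi_key = derive_rpi_key(temporary_exposure_key)
    return {rolling_proximity_identifier(rpi_key, first_interval + i)
            for i in range(144)}

def exposed(observed_rpis: set, diagnosis_keys: list) -> bool:
    """True if any locally observed RPI matches one derivable from a diagnosis key.

    `diagnosis_keys` is a list of (temporary exposure key, first interval) pairs
    downloaded from the Diagnosis Server.
    """
    return any(observed_rpis & rpis_for_key(tek, start)
               for tek, start in diagnosis_keys)
```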
Unfortunately, this is where things can easily break down. How does a user reveal their keys and stay anonymous? There’s no such thing as a truly “anonymous tip” on the Internet. By willingly uploading your keys to the “Diagnosis Server,” you are revealing (at least) a good location approximation from your IP address. The location result from this geolocation service is within 3km of my true location.
Fortunately, an approximate location like that is not nearly enough to personally identify an individual, and since these apps would (ideally) be operated by a local health authority, that authority already knows its users’ county anyway.
Bluetooth Broadcasting
The following two points from the Bluetooth specification are the key to keeping the broadcasts hardware-anonymous:
- The advertiser address type is a random, non-resolvable MAC address.
- The address rotation period is a random value between 10 and 20 minutes.
Crucially, both of these features require hardware support, so keep that in mind if you have an older device. If you broadcast your keys on an unchanging Bluetooth MAC address, you are trivial to identify, correlate, and track by anyone.
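For illustration, generating a fresh non-resolvable address and picking the next rotation time might look like the sketch below; the bit layout follows my reading of the Bluetooth Core specification (the two most significant bits of a non-resolvable private address are zero), so double-check it before relying on it.

```python
import os
import random

def random_nonresolvable_address() -> str:
    """A random, non-resolvable private BLE address (top two bits cleared)."""
    addr = bytearray(os.urandom(6))
    addr[0] &= 0b00111111  # mark the address as non-resolvable
    return ":".join(f"{b:02X}" for b in addr)

def next_rotation_delay_seconds() -> float:
    """A fresh rotation interval, uniform between 10 and 20 minutes."""
    return random.uniform(10 * 60, 20 * 60)
```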
Attack Vectors
The protocol seems pretty airtight as far as anonymizing each user’s hardware from other participants, aside from a single huge flaw.
Correlating Key Rotations
Suppose you, as a user of the app, see the following Bluetooth beacons in a grocery store:
| Tx (Power) | BT address | RPI | Metadata |
|---|---|---|---|
| 42 | AA:12:E4 | 0xABCD | 0x1234 |
| 78 | BB:65:51 | 0xEFAB | 0x1010 |
| 56 | CC:13:C3 | 0xA2D4 | 0x2020 |
| 89 | DD:86:0C | 0x04D3 | 0x3030 |
The data here is arbitrary and inaccurate, but that doesn’t matter. The key observation comes from the data you see immediately after one of the users decides to roll their keys:
| Tx (Power) | BT address | RPI | Metadata |
|---|---|---|---|
| 74 | BB:65:51 | 0xEFAB | 0x1010 |
| 60 | C2:42:12 | 0xDEAD | 0x5186 |
| 88 | DD:86:0C | 0x04D3 | 0x3030 |
| 38 | EE:51:32 | 0xC4F3 | 0x6194 |
(The adjusted power ratings here are supposed to simulate that people nearby may have moved, but not by much since their last broadcast; suppose the first user is now out of range and the last user is a new broadcaster.)
Take a guess at who the “new” user on the second row is… It’s obviously just the previous user from the third row! What are the chances that someone (Bluetooth user `CC:13:C3`) completely disappeared and was replaced by a new person with a similar transmission power (user `C2:42:12`)? Extremely unlikely.
This correlation creates a trivial vector for associating users and tracking them across rolls of their identifiers. Remember, the rolling of keys and addresses happens at a random interval between 10 and 20 minutes for each user, according to the specification. That means that rolls will not happen simultaneously. Even if they did, inaccuracies in the local time of each device still create a window in which to make this correlation if you act fast enough.
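To make the attack concrete, here’s a toy Python sketch of the correlation heuristic: pair each beacon that vanished with the newly appeared beacon of closest signal strength. Everything here (names, the pairing rule) is my own; a real tracker would also exploit timing and RSSI history.

```python
def correlate_rotations(before: dict, after: dict) -> dict:
    """Guess which newly appeared beacon is a rotation of which vanished one.

    `before` and `after` map Bluetooth addresses to observed signal strength.
    """
    vanished = {addr: tx for addr, tx in before.items() if addr not in after}
    appeared = {addr: tx for addr, tx in after.items() if addr not in before}
    guesses = {}
    for old_addr, old_tx in vanished.items():
        if not appeared:
            break
        # Pair the vanished beacon with the closest-strength newcomer.
        new_addr = min(appeared, key=lambda a: abs(appeared[a] - old_tx))
        guesses[old_addr] = new_addr
        del appeared[new_addr]
    return guesses

# The grocery-store example from the tables above:
before = {"AA:12:E4": 42, "BB:65:51": 78, "CC:13:C3": 56, "DD:86:0C": 89}
after  = {"BB:65:51": 74, "C2:42:12": 60, "DD:86:0C": 88, "EE:51:32": 38}
print(correlate_rotations(before, after))
# {'AA:12:E4': 'EE:51:32', 'CC:13:C3': 'C2:42:12'}
```

The second pairing is the real rotation; the first is a coincidental false positive (the first user genuinely left), which is exactly why a real attacker would combine signal strength with timing: a beacon that appears within seconds of another vanishing is almost certainly the same device.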
Even if this is somehow addressed, there are also a few minor pain points that come to mind.
The Human Element: Deduction
Consider Bob, an uninfected user who went out a single time today to pick up food from the In-N-Out drive-through. Later that night, he checks the contact tracing app and gets a notification that he’s been near an infected individual. I wonder where he encountered them?
This is even more problematic at scale: users are cross-referencing known infections against all of the Bluetooth broadcasts they’ve captured locally. Yes, they may be limited to 14 days of infected key data, but finding matches across more than one day dramatically narrows the pool of people it could have been. The chance of running into any random person more than once in two weeks is pretty low…
This obviously isn’t privacy-preserving, and unfortunately the mitigation—delaying the release of infected keys—can be detrimental to the public health response. It’s worth keeping in mind that this protocol does its job best when we do the exact thing we’re not supposed to be doing: spend time near many different people.
Maybe this isn’t really a problem, though. If someone you know catches the virus after you interacted with them many times, they’ll (hopefully) tell you directly.
The Human Element: Malice
Another problem worth considering is spam. Malicious users could simply spam their Bluetooth with keys and then self-report as infected, creating a slew of false positives. This would require a coordinated effort to be impactful at scale, though.
Deanonymizing BLE
Unfortunately, there’s a wealth of attacks out there such as this one that allow tracking of anonymous Bluetooth devices. Furthermore, the realities of the physical world are unavoidable: using things like signal strength to triangulate people in a sea of Bluetooth beacons is something retailers have been doing for a while (you know how you can unlock your Mac just by getting close with your Apple Watch on? yeah).
This problem is largely inherent to the hardware itself, though, and isn’t an explicit fault of the protocol.
Summary
Overall, I’m impressed with the approach taken by Apple and Google. Thankfully, Bluetooth hardware has progressed to a point that makes this possible. The idea of a random, non-resolvable Bluetooth address is at the core of all of this.
Unfortunately, though, the complexity of time synchronization in a distributed system creates an unavoidable attack vector in which attackers can correlate users across key updates, precisely because those updates don’t happen simultaneously across devices… This completely negates any semblance of privacy because now anyone observing the Bluetooth signals (at a sufficient scale, granted) can track users.
There are further issues with the human element as described above, and sophisticated attacks that apply to any Bluetooth Low Energy device, but they are minor compared to the gaping hole that time-based correlation creates.
This is a fantastic effort towards both keeping the public informed and enabling future epidemiological study, but not one I’ll be participating in.
[^1]: The syntax `a || b` is common in cryptography to represent bit-string concatenation.

[^2]: There’s a bit of a discrepancy here: the quote claims the metadata is updated every 15 minutes, yet the RPI claims to update every 10 minutes. This is just a documentation error, since the Bluetooth spec clearly states that “the advertiser address, RPI, and metadata shall be changed synchronously so that they cannot be linked.” The true rotation interval is randomly chosen to be between 10 and 20 minutes.