"cannot provide information we do not have"
Posted on May 14th, 2016
The Intercept recently covered how a judge punished a secure-communications application. The title of this post is a quote from representatives of the application in question.
I continue to press the idea that our system designs need to be architected to resist "rubber-hose cryptanalysis". In other words, they need to resist coercive pressures targeting users of our software, regardless of whether those pressures are criminal, political, or (frequently) both. When we design systems, we need to consider whether we can rightfully claim that we "cannot provide information we do not have", "cannot make that software change in secret, because the client software would reject it as an inauthentic change", and/or "cannot affect our users in that way, because we do not have that capability in the system".
Further, I believe our designs of important systems should consider whether a "duress mode" is possible. A duress mode allows someone in an adverse situation to appear to comply with a coercer, while letting the would-be victim retain some safety or control. Duress-mode credentials, for example, can do one or more of the following:
- Silently call for help
- Partially or wholly erase sensitive data
- Partially or wholly disable the system
- Provide a false view of the system or an account with limited permissions
- Start recording and sharing environmental video, environmental audio, or system usage
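To make the credential idea concrete, here is a minimal sketch of duress-aware authentication. It assumes passwords are verified against stored salted hashes; all names, parameters, and the placeholder duress actions are illustrative, not a prescription for any particular system:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256; the iteration count here is illustrative only.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class DuressAwareLogin:
    def __init__(self, real_password: str, duress_password: str):
        self.salt = os.urandom(16)
        self.real_hash = hash_password(real_password, self.salt)
        self.duress_hash = hash_password(duress_password, self.salt)

    def authenticate(self, attempt: str) -> str:
        attempt_hash = hash_password(attempt, self.salt)
        # Constant-time comparisons, so timing does not reveal which
        # credential (if either) was matched.
        is_real = hmac.compare_digest(attempt_hash, self.real_hash)
        is_duress = hmac.compare_digest(attempt_hash, self.duress_hash)
        if is_duress:
            # Appear to succeed while taking protective action behind
            # the scenes: silently call for help, erase or hide data,
            # or present a decoy account with limited permissions.
            self.trigger_duress_actions()
            return "ok"  # indistinguishable from a normal login
        if is_real:
            return "ok"
        return "denied"

    def trigger_duress_actions(self):
        pass  # placeholder for the protective actions listed above
```

The essential property is that the coercer observes the same outward behavior for both credentials, so entering the duress password still "complies" from their point of view.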
But, as the quoted post helps make evident, making these user-protecting decisions can be a brave act. Governments may fine us, jail us, or find other ways to punish us for refusing to cooperate in sabotaging our systems. Every time we act bravely to protect our users, fewer solitary heroes will have to take a stand.