Security Aspects of Distance-Bounding Protocols

Bibliographic Details
Main Author: Onete, Maria Cristina
Format: Others
Language: English
Published: 2012
Online Access: https://tuprints.ulb.tu-darmstadt.de/3040/7/ThesisMain-final.pdf
Onete, Maria Cristina <http://tuprints.ulb.tu-darmstadt.de/view/person/Onete=3AMaria_Cristina=3A=3A.html> (2012): Security Aspects of Distance-Bounding Protocols. Darmstadt, Technische Universität, [Ph.D. Thesis]
Description
Summary: Authentication protocols, run between a so-called prover and a so-called verifier, enable the verifier to decide whether a prover is legitimate or not. Such protocols enable access control, and are used in e.g.~logistics, public transport, or personal identification. An authentication protocol is considered secure if an adversary cannot impersonate a legitimate prover. Such an adversary may eavesdrop authentication attempts between a legitimate prover and a legitimate verifier, interact with either of the two honest parties, or perform a man-in-the-middle (MITM) attack, but without purely relaying messages between the honest parties (see Figure~\ref{fig:models} (a)).

Distance-bounding is a feature that enables authentication protocols to also withstand MITM \emph{relay attacks}, where an adversary forwards information between the prover and verifier such that neither honest party is aware of the attack. The goal of the adversary is to be authenticated by the verifier as a legitimate prover. In practice, the adversary consists of two parties: a \emph{leech}, which impersonates the verifier to the prover, and a \emph{ghost}, which impersonates the prover to the verifier. This is also depicted in Figure~\ref{fig:models}. Following the initial paper by Desmedt~\cite{Des88}, pure relay attacks are called \emph{mafia fraud}.

\begin{figure}[H]
\centering
\includegraphics[height=5.0cm]{Presentation2.pdf}
\caption{Adversary models in (a) authentication and (b) mafia fraud.}
\label{fig:models}
\end{figure}

In distance-bounding protocols, introduced by Brands and Chaum in 1993 \cite{BrandsChaum93}, a clock is mounted on the verifier, such that it can measure the time-of-flight between sending a challenge and receiving a response. Following the idea that pure relay introduces a processing delay at the MITM adversary, the verifier compares the measured time-of-flight with a pre-set value \tmax\ (in practice, an upper bound associated with the maximum trusted communication distance). If the communication speed is constant (and very fast), the verifier authenticates the prover if (i) the verifier is convinced that the prover is legitimate, and (ii) the prover is within the maximum distance associated with \tmax. Such protocols were recently implemented by Rasmussen and \Capkun~\cite{Rasmussen2010} and Ranganathan et al.~\cite{Ranganathan2012}. In most distance-bounding protocols in the literature, the prover-verifier communication is run in multiple rounds, or \emph{phases}, which are classified as either \emph{time-critical}, if the verifier's clock measures the roundtrip time, or \emph{lazy}, if the clock is not used. Note, however, that e.g.~the protocol due to Rasmussen and \Capkun~\cite{RC07} is not round-based.

There are three classical attacks that distance-bounding protocols should in general address:
\begin{enumerate}
\item \textbf{Mafia Fraud:} Here, an adversary attempts to authenticate to the verifier in the presence of an honest prover (however, the verifier's clock prevents pure relay). Both honest parties are unaware of the attack.
\item \textbf{Terrorist Fraud:} Here the dishonest prover provides some limited help to the adversary, such that the adversary is able to authenticate to the honest verifier. However, the prover should not forward any information that allows the adversary to authenticate without the prover's help. Intuitively, a terrorist fraud attack is successful if the adversary succeeds in authenticating with the prover's help, but not without it.
\item \textbf{Distance Fraud:} The adversary in this scenario is a dishonest prover placed far from the verifier (i.e.~outside the range associated with \tmax). The goal of the adversary is to authenticate, thus fooling the verifier's clock.
\end{enumerate}
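The defence against mafia and distance fraud rests on the round-trip time measurement described above. As a rough, hypothetical sketch (not a protocol from the thesis), the following shows a single time-critical round from the verifier's side, assuming one-bit challenges, a response table prepared in an earlier lazy phase, and placeholder channel functions \texttt{send\_bit} and \texttt{receive\_bit}; in practice the timing must be done in dedicated hardware, since light travels only about 30 cm per nanosecond.

\begin{verbatim}
import secrets
import time

# Hypothetical parameters: speed of light (m/s) and the pre-set bound t_max.
SPEED_OF_LIGHT = 299_792_458.0
T_MAX = 50e-9   # illustrative value only

def time_critical_round(expected, send_bit, receive_bit):
    """One time-critical round, seen from the verifier.

    `expected` maps each challenge bit to the response the legitimate
    prover would give (prepared in a lazy phase); `send_bit` and
    `receive_bit` stand in for the physical channel.
    """
    challenge = secrets.randbits(1)            # fresh one-bit challenge
    start = time.perf_counter()                # start the verifier's clock
    send_bit(challenge)
    response = receive_bit()
    elapsed = time.perf_counter() - start      # measured round-trip time

    round_ok = (response == expected[challenge]) and (elapsed <= T_MAX)
    distance_bound = SPEED_OF_LIGHT * elapsed / 2   # bound on prover distance
    return round_ok, distance_bound
\end{verbatim}

The verifier accepts only if every time-critical round is both correct and on time, which is exactly what rules out the processing delay introduced by a pure relay.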
A fourth security notion (suggested much later by Avoine and Tchamkerten \cite{AvTcham09}) is that of classical impersonation security, where the adversary interacts with either the prover or the verifier (but not in a MITM attack) and wins if it authenticates successfully. This attack is particularly dangerous for implementations on resource-constrained devices, e.g.~RFID tags, where the provers (tags) only support a small number of time-critical rounds. Thus, the mafia fraud resistance of the protocol (which hinges on the number of time-critical rounds) is very low.

The main contribution of this thesis is to introduce a unified, three-layered, formal framework for proving the security of distance-bounding protocols. The central layer is the main layer of this model and captures a static setting, where the prover and verifier share a secret key which is never updated. In this context, we define the fundamental threats to distance-bounding protocols, namely mafia, terrorist, and distance fraud, and impersonation security. Additionally, we take a closer look at mafia and terrorist fraud resistance, extending mafia fraud to capture prover aborts, and describing three flavours of terrorist fraud resistance, depending on how much information a dishonest prover is willing to pass to the adversary. The two remaining layers of our model address privacy aspects of distance-bounding protocols: the top layer covers location privacy, whereas the bottom layer concerns privacy in authentication. A second very important contribution is to describe two constructions: the first provably terrorist fraud resistant and the first provably location-private distance-bounding protocols in the literature (see more details below). Also very important are the three compilers we describe, which enable black-box transformations either from one tier of the main framework to another, or from one degree of mafia fraud resistance to another. More concretely, we proceed in the following directions.

\paragraph{Fundamental Model.} The central, fundamental layer of this framework includes the four standard attacks enumerated above, extending and formalizing previous work due to Avoine et al. \cite{AvoBinKarLauMar09}. The models account for exact bounds for the levels of mafia, terrorist, and distance fraud resistance, and for impersonation security of any round-based distance-bounding protocol; having compared the notions, we prove them to be independent, contrary to popular belief and to the results of \cite{AvoBinKarLauMar09}. As this is the fundamental layer of our model, we consider it paramount to assess the security properties of several distance-bounding protocols in the literature. We choose the more prominent schemes of Brands and Chaum~\cite{BrandsChaum93}, Hancke and Kuhn~\cite{HanKuhn05}, Avoine and Tchamkerten~\cite{AvTcham09}, Kim and Avoine~\cite{KimAvo09}, Reid et al.~\cite{ReidGonzTangSen07}, Bussard and Bagga~\cite{BusBagga05}, and the Swiss-Knife protocol due to Kim et al.~\cite{KimAvoKoeStaPer09}. In so doing we give a generic attack against the terrorist fraud resistance of the schemes in~\cite{BusBagga05,KimAvoKoeStaPer09,ReidGonzTangSen07}, showing that they attain a different, in many ways more relaxed, notion of terrorist fraud resistance.
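As an illustration of what such exact bounds look like, recall the well-known mafia fraud resistance of two of these schemes (these figures are standard and are only restated here, not results of the thesis): with $n$ time-critical rounds and one-bit challenges, Brands-Chaum limits the MITM adversary to guessing each response in advance, whereas against Hancke-Kuhn the adversary may pre-ask the prover and thus answers each round correctly with probability $3/4$:
\[
\Pr[\text{mafia fraud}] \le \left(\tfrac{1}{2}\right)^{n} \ \text{(Brands-Chaum)}, \qquad
\Pr[\text{mafia fraud}] \le \left(\tfrac{3}{4}\right)^{n} \ \text{(Hancke-Kuhn)}.
\]
For a resource-constrained tag restricted to, say, $n=8$ time-critical rounds this gives roughly $0.4\%$ versus $10\%$, which is why a small number of rounds translates into low mafia fraud resistance.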
The central layer of our model considers static keys only. However, we extend this scenario to also include privacy aspects, both in authentication (the bottom layer of our model) and with respect to the prover's location (the upper layer of the model). Here, we define privacy in authentication in the sense of Vaudenay~\cite{Vau07}, i.e.~distance-bounding sessions should be unlinkable. The adversary in this scenario is also allowed to corrupt provers and learn their internal states (i.e.~the secret key); depending on the adversary's strength, it may or may not then be able to interact with other provers. If the adversary is unable to learn the result of an authentication protocol between a prover and the verifier, we speak of \emph{narrow} privacy; else, we speak of \emph{wide} privacy. Though Vaudenay's model is well studied for RFID authentication, it has not been applied to distance-bounding.

In order to achieve privacy, we extend the fundamental, static model in our central layer to account for key updates, such that the prover and verifier use different keys to generate their responses during each protocol run. We also give compilers which preserve the static-key-model properties of a scheme (i.e.~its levels of mafia and distance fraud resistance, and impersonation security) in the key-update model, furthermore transforming narrow-weak private distance-bounding protocols (in the sense of \cite{Vau07}) into narrow-destructive private schemes. Recall that narrow privacy refers to the fact that the adversary does not learn the verifier's output for protocol runs. Note that in the symmetric-key scenario this is the most one can achieve, since \cite{Vau07} proves that narrow-strong privacy requires key agreement. In our compiler, the prover updates its state early, after an initial lazy authentication phase, whereas the verifier updates its state only after authenticating the prover. Furthermore, the verifier can ``catch up'' with the prover by means of an initial state-recognition phase. However, in order to ensure that the verifier does not enter an infinite iterative process, resulting in a denial-of-service (DoS) attack, we split the prover state into two parts: one which is updated at each authentication attempt, and another which is only updated upon verifier authentication. This latter state fragment then enables the verifier to bypass DoS attacks.
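One plausible way to picture this split state is the following sketch, assuming hash-chain key updates; this is an illustration, not the exact compiler from the thesis, all names are hypothetical, and the confirmation by which the prover learns that the verifier accepted is omitted.

\begin{verbatim}
import hashlib

MAX_CATCH_UP = 1000  # hard bound on the verifier's re-synchronisation search

def ratchet(key: bytes) -> bytes:
    """One-way key update (illustrative)."""
    return hashlib.sha256(b"update" + key).digest()

def recognition_tag(stable: bytes) -> bytes:
    """Tag derived from the stable state part; lets the verifier recognise the prover."""
    return hashlib.sha256(b"recognise" + stable).digest()

class Prover:
    def __init__(self, key: bytes):
        self.ephemeral = key  # updated at every authentication attempt
        self.stable = key     # updated only after a successful authentication

    def begin_attempt(self) -> bytes:
        tag = recognition_tag(self.stable)
        self.ephemeral = ratchet(self.ephemeral)  # early, unconditional update
        return tag

class Verifier:
    def __init__(self, key: bytes):
        self.ephemeral = key
        self.stable = key

    def authenticate(self, tag: bytes, verify_with) -> bool:
        """`verify_with(key)` stands in for the rest of the protocol run under `key`."""
        if tag != recognition_tag(self.stable):
            return False          # unknown prover: no unbounded search, hence no DoS
        candidate = self.ephemeral
        for _ in range(MAX_CATCH_UP):
            candidate = ratchet(candidate)
            if verify_with(candidate):
                self.ephemeral = candidate          # caught up with the prover
                self.stable = ratchet(self.stable)  # updated only on success
                return True
        return False
\end{verbatim}

Because the verifier only searches once the stable recognition tag has matched, and only up to a fixed bound, an adversary cannot drive it into an unbounded re-synchronisation loop.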
The upper layer of our model, location privacy, was already addressed in the literature by Rasmussen and \Capkun~\cite{RC07}. However, they did not formally define their security model and, in fact, we can prove that their proposed scheme is \emph{not} location private in our definition. Note that, as opposed to authentication, where location privacy is generally used to mean that no location data is leaked by provers in authentication sessions, in distance bounding provers necessarily leak one piece of information, namely whether they are in the verifier's proximity or not. For location privacy we require that no further information can be leaked from provers which are within proximity of the verifier. We show an impossibility result: namely, that location privacy cannot be achieved in an information-theoretical sense, and furthermore it cannot even be computationally attained for strong adversaries, which are in practice able to triangulate signals and gauge signal strength. We also show how to achieve location privacy in a weaker model, at the cost of higher communication complexity; in particular, our construction is a modification of the one in \cite{RC07}.

We stress that we view distance bounding as an extension of authentication. In particular, we consider mafia and distance fraud resistance (as well as impersonation security) to be basic requirements of distance-bounding protocols. This contradicts the approach taken by Rasmussen and \Capkun\ in their construction~\cite{RC07}: indeed, their protocol only attains distance fraud resistance. Our location-private protocol in fact modifies the original scheme in~\cite{RC07} so as to thwart a mafia fraud attack against this scheme; we also describe this attack.

\paragraph{Mafia and Terrorist Fraud.} Having completed the three-tiered description of our model, we also look more in depth at mafia and terrorist fraud resistance. We show that, unlike previous frameworks, our model also captures so-called key-learning attacks, where the adversary interferes in a few time-critical rounds of otherwise honest prover-verifier communication, with the goal of learning some bits of the secret key. Such attacks are known for authentication, but they have never been considered in the distance-bounding setting. We also model an extension of key-learning attacks, which could appear in practice in the case of prover aborts.

For the latter notion, terrorist fraud resistance, we elaborate on our previous definition. We explain that though most protocols in the literature do not attain this strong, simulation-based notion, our notion \emph{can} be achieved (and we describe a protocol that achieves it). We also define a game-based flavour of terrorist fraud resistance, which is the one most commonly achieved by protocols in the literature. The difference between the notions is roughly this. In our simulation-based model the prover may leak some information about his secret key to the adversary, as long as this information does not enable a simulator to authenticate \emph{with equal probability} in the future. In practice, the prover can then choose to leak only very little information about the secret key, thus ensuring that (i) the adversary does authenticate while the prover helps, and (ii) once the prover stops helping, the adversary's probability to succeed drops. By contrast, in the game-based model, the prover is unwilling to leak \emph{any} information to the adversary: thus, the adversary is far more limited.

In more detail, regarding mafia fraud resistance, our model extends previous mafia fraud models to account for key-learning attacks as outlined above. Such attacks are particularly useful against protocols aiming to achieve terrorist fraud resistance. We also introduce an extension of the mafia fraud definition (called strong mafia fraud \strMaf) to account for adversaries that can hijack honest authentication sessions and thus authenticate by taking over the session of an aborting prover. We also show that \strMaf\ security is strictly stronger than mafia fraud resistance.
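To see why key-learning is dangerous for designs aiming at terrorist fraud resistance in particular, consider the following toy illustration (a generic register layout used by several such designs, not any one of the analysed protocols; all names are hypothetical): when the two per-round response registers are related through the secret key, learning both possible responses for a round reveals the corresponding key bit.

\begin{verbatim}
import secrets

N_ROUNDS = 16

def toy_registers(key_bits):
    # Toy layout: the round-i responses satisfy r1[i] = r0[i] XOR key[i].
    r0 = [secrets.randbits(1) for _ in key_bits]   # fresh per-session register
    r1 = [a ^ k for a, k in zip(r0, key_bits)]     # second register ties in the key
    return r0, r1

key = [secrets.randbits(1) for _ in range(N_ROUNDS)]
r0, r1 = toy_registers(key)

# An adversary that extracts both responses for some round i (for instance by
# tampering with the challenge it relays in an otherwise honest session)
# recovers the key bit outright:
i = 3
assert r0[i] ^ r1[i] == key[i]
\end{verbatim}

An MITM adversary that interferes in just a handful of time-critical rounds can therefore harvest key bits across sessions, which is precisely the class of attacks our mafia fraud definitions take into account.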
We also note that the simulation-based model of terrorist fraud resistance \ourSim\ is, on the one hand, very strong (in fact we prove that some prominent schemes in the literature addressing this attack are insecure in our framework). On the other hand, however, this model does not cover all adversarial capabilities (since the prover only interacts with the adversary offline). Thus, we introduce both a stronger simulation-based notion \strSim\ and a more relaxed, game-based notion, \gameTF\ security. The former extension provides an even higher degree of security, whereas the latter allows for more efficient constructions (while at the same time eliminating obvious attacks). We prove the well-known construction of Bussard and Bagga \cite{BusBagga05}, which is \ourSim\ \emph{in}secure, to be \gameTF\ secure. In fact, most protocols in the literature claiming to achieve terrorist fraud resistance are actually \gameTF\ secure, and not \ourSim\ secure. We argue that whereas the \gameTF\ notion seems to achieve a minimal degree of terrorist fraud resistance, it is very restrictive and may enable attacks, particularly in scenarios where the dishonest prover is prepared to yield a few bits of the secret key if the result is that the adversary can then authenticate.

\paragraph{Constructions and Compilers.} As far as constructions are concerned, we (i) show the first provably \ourSim-secure protocol, and show that it is also \strSim-secure, and (ii) design the first provably secure location-private protocol in the literature. Furthermore, we show compilers to turn mafia fraud resistant schemes into \strMaf\ secure constructions. Moreover, our compiler also works if the underlying protocol is not fully mafia fraud resistant, but merely susceptible to key-learning attacks (a particular form of mafia fraud). This compiler uses a final lazy authentication step, which prevents adversaries from hijacking aborted prover-verifier sessions. Both compilers can be run, with several optimizations, on most distance-bounding constructions in the literature. We furthermore show several other handy tricks, both in the modelling department (where we show how to prove various notions in our model) and for construction purposes (we show how to easily achieve impersonation security, and how mutual authentication can be used to delegate some of the computational burden to the verifier).

Future topics include (i) modelling extended scenarios for distance-bounding protocols, with multiple provers and a single verifier, or multiple provers and multiple verifiers, formalizing previous work by \Capkun\ et al. \cite{CapDefTsu11}; (ii) investigating how far ou