The Identity Corner

A primer on user identification – Part 3 of 4

The second part of this primer on user identification examined the privacy and security implications of self-certified user identifiers for both users and relying parties. In short, self-generated identifiers provide privacy and security for users vis-à-vis relying parties, but offer no security for relying parties vis-à-vis users. Here we examine the other traditional approach to user identifiers: certified user identifiers.

Certified user identifiers are user identifiers that are “endowed” by (or on behalf of) relying parties with security features and optional attributes. (To certify means to give a formal attestation to the truth, accuracy, and genuineness of something.) Typical security features of a certified user identifier are:

  • The format of the identifier has been approved in some sense.
  • The user of the identifier has been approved in some sense.
  • The identifier is unique within the specified context.
  • The identifier cannot be copied (“cloned”).
  • No more than x certified identifiers have been issued to the user.
  • The user identifier cannot be transferred to other parties (non-transferability).
  • The “real identity” of the user has been associated with the identifier.

Typical identifier attributes are: expiry date; maximum number of authorized uses; designated use context; designated purposes; details about the strength of the certification process; and user-related information. To bind attributes to user identifiers, they can either be certified along with the user identifier itself or be stored in an online account that is indexed by an account pointer that is certified along with the identifier. The latter approach allows for dynamic attributes as well as for private attributes. The former approach is suitable only for static overt information, but enables relying parties to verify attribute information off-line. Static attributes typically fall into one of the following categories: additional information about security features; use limitations for the identifier (in effect these are additional security features that are enforced differently); and user-related information (e.g., community membership, entitlements, and negative statements).
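To make the two binding approaches concrete, here is a minimal sketch in Python; the class, field, and registry names are purely illustrative and not taken from any particular system. Statically certified attributes travel with the identifier itself, whereas a certified account pointer merely indexes an online record that can hold dynamic or private attributes:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class CertifiedIdentifier:
    identifier: str
    static_attributes: Dict[str, str]   # certified together with the identifier itself
    account_pointer: str = ""           # certified pointer into an online account record

# Former approach: static, overt attributes travel with the identifier and can be
# checked off-line once the certification itself has been verified.
employee_id = CertifiedIdentifier(
    identifier="user-4711",
    static_attributes={"expiry": "2006-12-31", "context": "building access"},
)

# Latter approach: the certification covers only an opaque account pointer; dynamic
# or private attributes live in an online record consulted at presentation time.
account_registry: Dict[str, Dict[str, str]] = {
    "acct-0042": {"entitlements": "parking,gym", "uses_remaining": "12"},
}
member_id = CertifiedIdentifier(
    identifier="user-4712",
    static_attributes={},
    account_pointer="acct-0042",
)

def dynamic_attributes(cert: CertifiedIdentifier) -> Dict[str, str]:
    """Resolve dynamic attributes via the certified account pointer (online lookup)."""
    return account_registry.get(cert.account_pointer, {})
```

The off-line verifiability of the first variant and the updatability of the second mirror the trade-off described above.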

Security of relying parties.

Traditionally, the certification of user identifiers is accomplished by specifying them on paper documents, plastic cards, or other identity tokens that carry authenticity marks believed hard to replicate. Users are required to present these identity tokens at presentation time, so that relying parties can verify the authenticity marks. Physical authenticity marks such as seals, handwritten signatures, intaglio printing, special paper, watermarks, security threads, color-shifting ink, and holograms are widely used to protect passports, health insurance cards, driver’s licenses, employee access cards, credit cards, debit cards, and other widely used identity tokens.

For non-transferability, a description of one or more biometric characteristics that uniquely identify the user must be certified along with the identifier, so that it can be compared at presentation time to a fresh biometric scan taken from the user. For example, photographs, fingerprints, and/or other biometric features that are securely imprinted on plastic identity tokens can be verified by relying parties at presentation time. To prevent a user from simply replaying a biometric scan of another user, users must be in physical proximity to relying parties. (Alternatively, in the special case of a voice scan, a remote user can be asked to verbally repeat a random “challenge” message; this assumes that relying parties can rule out copy-and-paste attacks.)

In the case of electronic user identifiers, paper and plastic identity tokens are replaced by chip-based devices. Authenticity marks attesting to security features and any attributes can be established as follows:

  • If relying parties trust the tamper-resistance of the user device, the user identifier and any attributes can be stored in the device together with a secret key known to the relying parties. The tamper-resistance of the device enforces the desired security features, and the device key is used to authenticate presented user identifiers by attaching a message authentication code.
  • If relying parties do not trust the tamper-resistance of the user’s device, the user identifier (and any additional attributes) should be certified cryptographically by attaching a message authentication code or digital signature to the user identifier and the associated attributes. Cryptographic certification cannot prevent transferability, however.

In practice, both methods may be combined to ensure a degree of fallback security should one of the two mechanisms be compromised.
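As a rough illustration of the first method, here is a minimal sketch assuming a device key that was provisioned into the tamper-resistant device and shared with relying parties at issuance time; all names below are hypothetical. The device attaches a message authentication code to the presented identifier and attributes, and the relying party recomputes it:

```python
import hashlib
import hmac
import secrets

# Stands in for a key provisioned into the tamper-resistant device and shared
# with relying parties at issuance time.
DEVICE_KEY = secrets.token_bytes(32)

def mac_over(identifier: str, attributes: str, key: bytes) -> bytes:
    """Message authentication code over the identifier and its attributes."""
    return hmac.new(key, f"{identifier}|{attributes}".encode(), hashlib.sha256).digest()

# Device side: attach a MAC when the identifier is presented.
identifier, attributes = "user-4711", "expiry=2006-12-31"
tag = mac_over(identifier, attributes, DEVICE_KEY)

# Relying-party side: recompute the MAC and compare in constant time.
assert hmac.compare_digest(tag, mac_over(identifier, attributes, DEVICE_KEY))
```

The signature-based variant of the second method is analogous, with the certifier’s private key replacing the shared device key and relying parties verifying against the certifier’s public key.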

Non-transferability of certified electronic identifiers can be ensured by applying the same techniques as for non-electronic identifiers. In the case of tamper-resistant user devices, the comparison at presentation time can be done by a built-in biometric scanning component, so that the user does not need to be in physical proximity to the relying party; this requires the secure integration into the user device of a sophisticated tamper-resistant scanning component that can detect replay attempts.

Relying parties can also get users to digitally sign transaction details, instructions, and any other actions they perform when presenting their identifiers; this requires user identifiers to be bound at certification time to user-generated asymmetric key pairs. At presentation time, users must present their identity token together with their public key, and use their private key to sign messages. X.509 certificates are a well-known implementation of this notion of digital identity certificates.
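A rough sketch of this signing step, using an ECDSA key pair via the third-party Python cryptography package; the transaction contents and names are illustrative, and in an actual deployment the public key would be carried inside an identity certificate (such as an X.509 certificate) bound to the user identifier:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Certification time: the user generates the key pair; the certifier binds the
# public key to the certified user identifier.
user_private_key = ec.generate_private_key(ec.SECP256R1())
certified_public_key = user_private_key.public_key()  # stands in for the certificate

# Presentation time: the user signs the transaction details with the private key.
transaction = b"transfer 100 EUR to account 12-345; nonce=8f3a"
signature = user_private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# The relying party verifies the signature against the certified public key.
try:
    certified_public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
    print("signature valid: transaction is bound to the certified identifier")
except InvalidSignature:
    print("signature invalid")
```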

Security and privacy of users.

While certified user identifiers offer protection to relying parties vis-à-vis users, they offer no security for users: relying parties can impersonate them (identity theft) by forging identity tokens, can cause their identifiers to be linked to incorrect account information, and can falsely blacklist them. As well, users have no privacy vis-à-vis relying parties, who can link and trace all user actions simply by comparing presented identity tokens against what was disclosed at certification time.

In small-scale single-domain contexts, users may not be concerned about these powers. Notably, if there is only one relying party, single-domain certified identifiers do not create any serious security concerns for users, and privacy needs boil down to anonymity and pseudonymity interests vis-à-vis that relying party. More generally, when different domains all rely on their own identity token mechanisms, any damage that a relying party can do to a user will be confined to its domain. When the same user identifiers are relied on by growing numbers of relying parties, however, the situation drastically changes.

The final installment of this primer will examine how technological advances of the past decade are causing more and more relying parties to rely on the same strongly certified single-domain user identifiers for more and more purposes. As traditional single-domain identifiers are gradually converted into de facto universal identity tokens, the historical segmentation of activity spheres and trust domains is rapidly vanishing. This evolution has extremely severe privacy and security consequences for both users and relying parties; as we shall see, the simplistic old-school two-party threat model of relying parties versus users (which is entirely valid in a world consisting of fully segmented trust domains) no longer holds in the new cross-domain world.

February 25, 2005