Distributed Matter - Blog


Tuesday, October 14 2008

Comparing Web of Trust and Hierarchical PKI for FOAF+SSL

I've been talking with Henry Story about SSL and security-related issues, following the publication of his blog entries: "RDFAuth: sketch of a buzzword compliant authentication protocol" and "FOAF & SSL: creating a global decentralised authentication protocol". Our discussion more or less started when I was making the first release of jSSLutils, and I'm also interested in authentication mechanisms in general. Henry got a few of us together to talk about this.

The idea behind Henry's proposal is to provide a secure authentication mechanism based on FOAF (Friend-of-a-friend) files and public key cryptography. It follows some of Dan Brickley's work; there's also a prototype server implementation by Pipian. One of the aims is to replace hierarchical Public Key Infrastructures (PKI), which are centralised, with something more flexible. Indeed, hierarchical PKIs rely on a Certificate Authority (CA) to assert the identity of a user or a server. This is typically how most people use HTTPS web-servers: you trust your bank's website because its certificate can be verified against a CA certificate which is trusted by your browser. What is perhaps less widely known is the use of client-side certificate authentication. In this case, not only does the server present a certificate to the user, but the user also presents a certificate (for which he/she has the private key) to the server. If the server trusts the CA certificate that issued the user's certificate, then this is a valid form of authentication. Again, this requires the user to have been issued a certificate signed by a CA that the server trusts.

The most difficult aspect of establishing a Certificate Authority isn't the technical one: the main difficulty is in the legal and administrative process whereby the authority operates and delivers certificates. There are a number of commercial CAs (Verisign, Thawte, ...) which most browsers trust by default: their certificates are already in the browser (or underlying software) when you obtain it. The price of obtaining a certificate varies depending on various attributes that can be in the certificate (for example code-signing), on the CA, and on how far they actually go to check that the users are who they say they are. Some institutions also provide this service for free. For example, the UK e-Science CA can provide certificates to more or less all UK academics; despite being free for the user, the procedures are relatively thorough, since you have to turn up in person to a local approved CA representative, who will check your passport or similar proof of ID. This is not something that all commercial CAs do. It would be tempting to assume that the more someone has paid a CA for a certificate, the more certain you can be of their identity; this is not necessarily the case (although some people have a vested interest in making you think so).

A FOAF+SSL authentication mechanism would make it possible to avoid depending on a small number of CAs, and instead rely on a FOAF network to assert identity. This works along the lines of a Web-of-Trust (WoT) model, like OpenPGP, which can provide certain advantages (and possibly its share of problems).

Making this work in practice

You can find more details on Henry's blog. We want to use this to provide access control on the (semantic) web, using SSL as an underlying authentication mechanism. (As a first step, it's easier to focus on client-authentication using this method; we'll still use hierarchical CAs to assert the identity of a server.)

The way SSL client-side authentication works in most servers is by configuring a set of trusted CA certificates (the trust store in Java); this follows the X.509 PKI model of trust. If the client presents a certificate issued by a CA the server trusts, then the server authenticates the client. In most systems, the validation of the client certificate is done by the SSL library, which will reject it and most likely close the connection if it cannot find a chain to a trusted CA certificate, before any application data is exchanged. Luckily, the way these underlying SSL libraries perform this verification can be customised.
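
In Java, for example, this customisation boils down to plugging in your own X509TrustManager. Here is a minimal sketch (not the jSSLutils implementation itself) of a trust manager that lets any client certificate through the handshake, so that trust can be evaluated later by the application:

import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

// Minimal sketch: accept any client certificate during the SSL handshake
// and leave the actual trust decision to the layer above.
public class AcceptAllClientsTrustManager implements X509TrustManager {
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        // No exception thrown: the client certificate is always accepted here;
        // the application verifies it later (e.g. against a FOAF file).
    }

    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        // Not used on the server side for client authentication.
        throw new CertificateException("Server certificates are not evaluated here");
    }

    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }
}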

What we've done (and there's some sample code in the Sommer project code repository) is "converting" an OpenPGP key-pair (which is in fact a form of certificate) into an X.509 certificate that can be used directly. It's just a self-signed certificate based on the OpenPGP key material: it contains the OpenPGP public key and is signed by the corresponding private key. If the client presents this certificate to a server during the SSL handshake, this proves to the server that the client has the corresponding private key. Trusting what that certificate represents may then be handled differently, by customising the way trust is managed within the server. Of course, the X.509 certificate generated this way can also contain other pieces of information, including Subject Distinguished Name (Subject DN) and time-validity. We're still experimenting with this, and none of this is standardised, but the main extension we've been using is the "subject alternative name", which we've set to be the URI identifying the user; this fits the FOAF model quite well. We then tell the underlying SSL library to trust any client certificate, letting the layer above perform the trust verification (this can for example be done in Java using the TrustAllClientsWrappingTrustManager of jSSLutils).
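
Once the handshake has succeeded, the server-side application can pull that URI out of the subject alternative name extension and dereference it (for example, to fetch the corresponding FOAF file). A rough sketch, using the standard Java certificate API:

import java.security.cert.CertificateParsingException;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;

// Sketch: read the URI from the subject alternative name extension
// of the client certificate presented during the SSL handshake.
public class SubjectAltNameUri {
    private static final int URI_TYPE = 6; // uniformResourceIdentifier

    public static String fromCertificate(X509Certificate cert)
            throws CertificateParsingException {
        Collection<List<?>> names = cert.getSubjectAlternativeNames();
        if (names == null) {
            return null; // no subject alternative name extension
        }
        for (List<?> name : names) {
            Integer type = (Integer) name.get(0);
            if (type.intValue() == URI_TYPE) {
                return (String) name.get(1); // the URI identifying the user
            }
        }
        return null;
    }
}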

Another approach, which has recently been published as experimental RFC 5081, consists of extending TLS to support not only X.509 certificates but also OpenPGP certificates. I think our approach is easier at the moment, since the approach in RFC 5081 would require changes in the existing SSL/TLS libraries (changes which I haven't seen, and which I doubt will happen in the short term for rather well-established projects such as OpenSSL or the JDK). The problem with our approach is that we move the evaluation of trust up to the layer that uses SSL, which means it has to be handled explicitly at the moment (of course, there's nothing preventing us from providing a library or similar module for this). I also reckon that the web-of-trust model can be a bit harder to evaluate in some cases, and this might require more interaction and configuration than the X.509 model does (which just needs CA certificates and perhaps CRLs). In any case, it's a more convenient way to experiment with this, rather than having to wait for RFC 5081-compliant implementations.

How to model trust

The hierarchical PKI model (X.509) is fairly simple to evaluate. The network of trust can be modelled as a tree, the root of which is the CA certificate; the chain is built from the leaf (the user certificate) to the root of the tree. This works because CAs come with policies that specify which certificates intermediate CAs are allowed to sign, so that the chain is valid. Although it has the inconvenience of being centralised (and thus concentrates power in the CA), it is relatively straightforward to understand, implement and evaluate.

If we want to use a Web-of-Trust model, we need to provide a new way to evaluate trust, and to model this in the FOAF extensions. (At this stage, I should point out that I know very little about FOAF/RDF/ontologies, but I'm planning to learn about all that very soon.)

There's a good description of how this can be done in Walking the Web of Trust, by Germano Caronni.

Another problem is that, in the CA model, a root CA or any intermediate in the chain is something for which:

  • you trust its identity, and
  • you trust its ability to perform the necessary steps to check and assert someone else's identity.

There are usually legal documents and policies in certificate authorities that define these agreements.

One must be quite careful in a Web-of-Trust model to make sure that this distinction is integrated in the function that evaluates trust. Trusting someone's identity and trusting someone's actions are rather distinct things. On the one hand, this can bring more complexity; on the other hand, this can bring more power to the model. For example, rather than merely asserting someone's friendship in a FOAF file, we could add domains of expertise, so that a server could choose to trust a user only if the trust model deems him/her sufficiently qualified to perform the task.

How to revoke trust

Problems with keys happen; we need to anticipate them. I think there are two main reasons for using Certificate Revocation Lists (CRLs) in the PKI world:

  1. the private key of a user has been lost or compromised,
  2. the user has done something bad and is no longer to be trusted.

The first problem is a matter of propagating the revocation to whoever has the user in their FOAF file. It's likely that he/she also lists those people in his/her own FOAF file, so this seems feasible. Similar things could happen if the keys of a CA were compromised or otherwise lost. Such a scenario would cause disruption, but it could be contained.

In a hierarchical PKI, the second problem is really an authorisation issue. The identity of the user could still be valid, so authentication would work well; you would just want to deny authorisation. In the case of a Web of Trust, this can be a bit more tricky, since you may have to re-evaluate the assertions you've made about his/her friends. Put it this way: is the friend of your former friend who betrayed you still your friend after that? I'm not sure there's a right answer to this, but I doubt this type of case can be handled automatically.

It's probably not a bad thing to re-assess the content of the FOAF file regularly, as Henry suggested, but this will demand actions from the user, and thus appropriate user education. In my experience, the problems with security are not so much technical as a matter of educating users and providing them with something easy enough to use. Otherwise, they might just not use your system securely, or won't use it at all.


I'm not sure there is a right answer between Web-of-Trust or hierarchical PKI. It really depends on your working environment and on how easy it is to set up a PKI there. The advantages of the Web-of-Trust combined with FOAF are appealing: more flexibility, less dependence on a central system. Perhaps the two can be used together for two different usages: after all, you don't necessarily introduce your friends by showing their passport. This is only the beginning. I think the challenge is now to provide a suitable formula to evaluate trust, and to explain it to the users accordingly.

Monday, June 9 2008

HTTP authentication mechanisms (and how they could work in Restlet)

Updated 08/08/08

I'm currently addressing how access control is done in the project I'm working on. We're using Restlet, so I've been looking into its authentication and authorisation mechanisms. There were a few issues with the current API for Guards, as I wanted to use mechanisms that are not currently supported. Thus, I started a discussion on the Restlet mailing list by making a few comments and suggestions about the Guard class. The Restlet community is really dynamic and changes tend to be integrated rapidly into the source code, which is really pleasant. However, we all seem to agree that this particular problem deserves more substantial consideration than just a few quick patches. Here are a few thoughts I've had on the subject...

Guarding access to a resource involves two steps:

  • authentication, which consists of ensuring the user is who he/she claims to be and that the credentials presented are authentic (for example, you can be authenticated because you can prove you know your password), and
  • authorisation, which consists of making a decision as to whether the authenticated user may or may not perform an action.

I'm focussing first on authentication, since this is really what influences the external interfaces of the system: what happens between the client and the server. In contrast, authorisation happens after the client has been authenticated and can be done by a third party. This being said, some authentication mechanisms blur the line with authorisation, especially when some authorisations are granted or delegated by a third party.

In the following, I'm trying to categorise the various pieces of information and mechanisms that can be used for authentication, in particular, authentication at transport level and authentication at HTTP-level. This is very HTTP-centric, even if Restlet can be used with other protocols. If you know of other mechanisms or spot something wrong, please let me know.


Authentication at transport-level

This category of authentication mechanism is not strictly related to HTTP, but HTTP sits on top of this layer and web applications can, in many cases, make use of information from the underlying transport mechanism.

SSL and TLS authentication

Secure Socket Layer (SSL) and Transport Layer Security (TLS) provide mechanisms for peer authentication at transport-level. In almost all cases, unless an anonymous cipher is used, the server authenticates itself to the client by presenting an X.509 certificate corresponding to the host name, which the client may choose to trust. (Anonymous cipher suites, which do not require the server to present a certificate, are prone to man-in-the-middle attacks.)

X.509 client-certificates

The client can present a certificate (or chain thereof) to the server. This procedure is initiated by the server, which may also be configured to accept the absence of a certificate even if it has been requested. In short, there are three ways to configure the server for client-side authentication (here, following the Tomcat configuration keywords):

  • none: no certificate request, so client-side authentication is disabled,
  • want: a certificate is requested, but the server will still establish the connection if the certificate response is empty, so this corresponds to setWantClientAuth in Java,
  • need: a certificate is requested and the server will require a valid, non-empty certificate response to establish the connection, so this corresponds to setNeedClientAuth in Java.

These options configure the SSL handshake, which takes place before any HTTP data is exchanged. In practice, if SSL client authentication is to be used as one of several possible authentication methods, or if only some resources on the server are protected by a mechanism that requires SSL client authentication, the preferred option should be to want a client-certificate. Indeed, if the server needs a certificate, failing to provide one implies that the connection will not even be established (which means that HTTP cannot be used to send an "access forbidden" error message of any sort): this is not very "polite".
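
In plain JSSE terms (outside any particular container), the "want" configuration boils down to something like the following sketch; setNeedClientAuth(true) would give the "need" behaviour instead:

import java.io.IOException;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;

// Sketch: create a server socket that requests, but does not require,
// a client certificate during the SSL handshake ("want" mode).
public class WantClientAuthExample {
    public static SSLServerSocket createServerSocket(SSLContext sslContext, int port)
            throws IOException {
        SSLServerSocketFactory factory = sslContext.getServerSocketFactory();
        SSLServerSocket serverSocket =
                (SSLServerSocket) factory.createServerSocket(port);
        // "want": the handshake still succeeds if the client sends no certificate.
        serverSocket.setWantClientAuth(true);
        // For "need", use serverSocket.setNeedClientAuth(true) instead.
        return serverSocket;
    }
}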

Kerberos

Kerberos is a secure network authentication protocol. It allows two parties to authenticate each other by using a third party they both trust: the KDC (key distribution centre). It's totally independent of SSL. However, there are some Kerberos-based cipher suites, which are for example available in Java 6, so they could also provide Kerberos authentication at transport-level.

I have done a few quick tests in Java: SSLSession.getPeerPrincipal() does return an instance of KerberosPrincipal, the name of which is the Kerberos principal of the client, as expected. The client I tried was written in Java and used more or less the same JAAS settings as the server. The only other clients that I've tried were Firefox, Opera and Safari; unfortunately, they do not support these cipher suites, which makes it unrealistic to deploy a system that would be used from a browser. (This type of Kerberos-based authentication is not related to SPNEGO, which is better supported by most browsers and also works from Java clients.)
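
For reference, this is roughly what that test amounts to on the server side (a sketch, assuming a Kerberos cipher suite has been negotiated):

import java.security.Principal;
import javax.net.ssl.SSLPeerUnverifiedException;
import javax.net.ssl.SSLSession;
import javax.security.auth.kerberos.KerberosPrincipal;

// Sketch: read the client's Kerberos principal from an established SSL session.
public class KerberosPeer {
    public static String principalName(SSLSession session)
            throws SSLPeerUnverifiedException {
        Principal peer = session.getPeerPrincipal();
        if (peer instanceof KerberosPrincipal) {
            return peer.getName(); // e.g. "alice@EXAMPLE.REALM"
        }
        return null; // not a Kerberos-based cipher suite
    }
}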

FOAF & SSL

A few weeks ago, Henry Story talked about his idea of secured FOAF on the Restlet mailing list. In short, Henry was interested in using a web-of-trust model using SSL, and I've made a few suggestions so as to use OpenPGP keys in X.509 certificates. The implementation of this is available in the Sommer project and this particular feature uses jSSLutils. This is a very interesting project and I think FOAF&SSL-based authentication could provide new authentication models, perhaps with various degrees of trust depending on the way peers are related in the web of trust.

Client IP address and domain name

Although this does not, strictly speaking, authenticate the client, but rather the machine from which the requests come, IP addresses and their reverse DNS names could be considered as authentication information which may then be used to make an authorisation decision. This is frequently used to authenticate someone as an intranet user.


Authentication at HTTP-level

The following authentication mechanisms make use of the HTTP protocol.

WWW-Authenticate and Authorization headers

HTTP provides a WWW-Authenticate response header which may be used to challenge the client to provide suitable authentication information, via the Authorization request header.

When access is not authorised, a 401 Unauthorized response status code is returned. There may be multiple WWW-Authenticate challenges in a single response; it is then up to the client to use one of them and provide its response in the Authorization header. This may be confusing, but most of the time, the Authorization header contains authentication (not authorisation) information.

Basic

HTTP Basic authentication is probably the simplest username/password authentication mechanism in HTTP. It relies on a realm, which is a name usually displayed by the browser when the user is prompted for his/her password. It is probably the simplest to implement, since the client then only has to provide the credentials as a string like username:password in clear, encoded in base 64. Since the server obtains the password in clear, this password can be relayed to a third-party authentication provider to be verified, such as an LDAP server. Other authentication providers include encrypted passwords in files (htpasswd in Apache) or in databases. Of course, this would also work with authentication providers that disclose the password. HTTP Basic authentication can also be used pre-emptively, that is, when making the first request to the server, without being challenged with a "401" response.
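
To make the encoding concrete, here is a small sketch of how a client could build the Authorization header value (java.util.Base64 comes from more recent JDKs; any Base64 encoder works the same way):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch: building the Authorization header value for HTTP Basic authentication.
public class BasicAuthHeader {
    public static String forCredentials(String username, String password) {
        String credentials = username + ":" + password;
        String encoded = Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        // Sent as: Authorization: Basic <encoded>
        return "Basic " + encoded;
    }
}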

Digest

HTTP Digest is an improvement over HTTP Basic because the password is not transmitted in clear between the client and the server. The downside of this mechanism is that it requires the server to be able to know the secret password. This is often impractical in environments where the password is really only known by the user (for example, when only an obfuscated or hashed form of the password is stored) or where the authentication provider cannot reveal the password even if it has access to it. (It could be risky to let an LDAP server give out passwords.)

I personally tend to prefer Basic over SSL rather than Digest for two reasons:

  1. I use authentication providers that do not give access to the password in clear (LDAP or non-plain-text htpasswd files).
  2. I find it hard as a user to feel safe entering my password in a Digest dialog box... which looks just the same as the box you get with Basic authentication; at least, with HTTPS, I'm warned when I switch to an un-enciphered page.

Negotiate (SPNEGO)

SPNEGO is an authentication mechanism that negotiates the use of an underlying GSS-API mechanism. When used in conjunction with HTTP, it uses the Negotiate scheme. It is mostly used for NTLM or Kerberos (via GSS), but I think it could be used with other GSS mechanisms if required.
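
On the server side, accepting the token that arrives in the Negotiate header essentially means driving the JGSS API; a rough sketch (assuming the server's Kerberos credentials are already available, typically via JAAS):

import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.Oid;

// Sketch: accept a SPNEGO token received (base64-decoded) from an
// "Authorization: Negotiate ..." request header.
public class SpnegoAcceptor {
    private static final String SPNEGO_OID = "1.3.6.1.5.5.2";

    public static byte[] acceptToken(byte[] clientToken) throws GSSException {
        GSSManager manager = GSSManager.getInstance();
        GSSCredential serverCredentials = manager.createCredential(null,
                GSSCredential.INDEFINITE_LIFETIME, new Oid(SPNEGO_OID),
                GSSCredential.ACCEPT_ONLY);
        GSSContext context = manager.createContext(serverCredentials);
        byte[] responseToken =
                context.acceptSecContext(clientToken, 0, clientToken.length);
        if (context.isEstablished()) {
            String clientPrincipal = context.getSrcName().toString();
            System.out.println("Authenticated: " + clientPrincipal);
        }
        // If not null, this token goes back (base64-encoded) in a
        // "WWW-Authenticate: Negotiate ..." response header to continue the exchange.
        return responseToken;
    }
}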

I've experimented with this for Restlet (on the server-side) and managed to get it to work from Firefox, Safari, Internet Explorer, and Konqueror against a Linux-hosted MIT Kerberos KDC. Active Directory can also use Kerberos.

Despite the security of Kerberos itself, it may be wiser to secure the exchange of GSS tokens in the HTTP headers, in particular using a GSS mechanism that supports integrity protection. (Admittedly, this is something I need to check.)

Kerberos

Kerberos can be used directly as an authentication scheme via the Apache Kerberos module (which may also support SPNEGO/Negotiate), although I'm not sure it's still actively maintained. The corresponding Mozilla plugin seems to be no longer maintained either. Since Kerberos works fine via SPNEGO, I'm not sure why one would use it nowadays.

NTLM

There's an NTLM scheme too. Again, I suppose it should be deprecated nowadays, in favour of Negotiate.

OAuth

OAuth is a rather new HTTP authentication/authorisation scheme. Its purpose is to delegate authorisation. There already is an implementation of an OAuth Guard in Restlet.

Cookies and authentication sessions

Forms

Form-based authentication is probably the most used method of authentication on the web. It tends to make use of a cookie to maintain a session (or some session identifier in the URL). Its main advantage is that it looks prettier, especially when it matters to have a corporate visual identity. It has similar security problems to HTTP Basic, and it's thus better when used over SSL.

In terms of HTTP, form-based authentication usually challenges the client with two response codes:

  • 200 OK, you get the page, but there is an explicit message on the page which says you must log on and gives you a link. It's a bit of a shame, since in principle, it could also return a 401 Unauthorized status code in the header and still produce the same explicit page which invites the user to log on.
  • 302 Moved, you get redirected to the login page. Again, a 401 Unauthorized status code could be better, especially for non-browser clients.

This being said, the HTTP specification clearly states that a WWW-Authenticate header must be present when returning a 401 response code. Perhaps there should just be a WWW-Authenticate: Form http://uri/of/the/logon/form scheme.

Shibboleth, Google Single-Sign On and OpenID

To some extent, mechanisms that delegate authentication to a third party such as Shibboleth, Google Single Sign-On Service and OpenID (Kerberos is not in this category because it's supported by SPNEGO headers directly) can be considered as variants of form-based authentication, where a cookie is established to maintain the authenticated state across subsequent requests. Maintaining a session via a cookie isn't strictly necessary (especially for OpenID), but it seems that it's the way it's usually done, probably for lack of a corresponding WWW-Authenticate scheme.

These authentication mechanisms are mainly designed for browser-based access. In order to get the Shibboleth session cookie, the first HTTP request needs to be a GET; this wouldn't work with other actions. (I believe Shibboleth 2.0 tries to address this, but I haven't looked into it.)

Shibboleth is a mechanism that involves more interactions than a plain form and automatic redirections. There are also connections behind the scenes between servers, but here is what it looks like from the browser point of view:

  1. C->S - GET. The browser attempts to get http://protected.site/page.
  2. S->C - The server returns a 302 redirection to a "Where are you from?" (WAYF) page; this URI includes the URI of the protected page in a parameter.
  3. C->S - GET http://where.are.you.from/ (automatic after redirection).
  4. S->C - The WAYF page offers a list of identity providers (IdP). The user clicks on an IdP link (usually, that of his/her institution) and is directed to that URI. Again, that URI includes the URI of the protected page.
  5. C->S - GET http://identity.provider/. The IdP authenticates the user via another means (HTTP Basic, X.509 certificates, ...).
  6. S->C - Upon successful authentication, the IdP gives the browser a web page with a form that contains the SAMLResponse. This form is submitted automatically via POST as soon as it is loaded (because there's a <body onload="document.forms[0].submit()"> on that page).
  7. C->S - POST the content of SAMLResponse to http://protected.site/Shibboleth.processor.
  8. S->C - The server sets a Shibboleth session cookie and sends a 302 redirection to the original protected page.
  9. C->S - GET http://protected.site/page. The Shibboleth authentication has taken place. The Shibboleth-protecting module also automatically inserts a header for processing by the web application it hosts. That header is a set of SAML assertions for authentication and authorisation.
  10. S->C - You're in (depending on the SAML assertions).

This would really look better without the automatic POST (which relies on JavaScript). I hope that, when browser designers realise that these tricks are what makes CSRF attacks work, the ability for JavaScript to post a form to a distinct website upon loading a page will be more restricted. Unfortunately, this will certainly be a problem for Shibboleth. (Shibboleth could be deployed with a GET instead; the problem is that the SAMLResponse is about 8KB, which is a bit long as a URI parameter.)

This is in principle very similar to the Google Single Sign-On Service. The POST issue could in principle exist with Google SSO.

OpenID also falls in this category, except that the authentication assertion is only about the URI identifier, there's no extra SAML assertions (yet).


What this means in the context of Restlet

The Guard class in Restlet is currently built using a ChallengeScheme. This is well suited for the use of WWW-Authenticate/Authorization headers. However, not all authentication mechanisms rely on these headers (in addition, not all challenge schemes use a realm, which is also currently mandatory to build a Guard, although it can be null). In fact, not all authentication information can be challenged.

There are 3 layers of information that may be used for authentication:

  1. Transport-level (IP address, SSL client-certificates...),
  2. Authentication-headers (although there can be multiple challenge schemes, only one is chosen by the client),
  3. Session-cookies.

These 3 layers can be used independently. Some may be used in conjunction with one another.

Although establishing a session via cookies for authentication may seem like it violates the stateless interactions principle of REST, I don't think it does. My interpretation is that the stateless interactions principle should be applied to the application layer, not necessarily the authentication layer. Of course, the use of forms blurs the line between authentication and application data, in particular because they can't really be used with a suitable 401 response. Since Restlet can be embedded in a Servlet, it can then use form/session based authentication.

In addition to this, the Restlet Guard could also make more use of Principal classes. This can be useful for simple cases, where there's a principal that models a user authenticated via a user-name and password, but it could also help model more complex cases, for example by mapping the SAML assertions of Shibboleth to principals, thereby providing a basis for role-based authorisation. There is a clear need for this with the JAX-RS extension, and principals would also help for making better use of the Java security framework. In terms of implementation design, these instances of Principal should probably be obtained from the authentication providers (that is, the components that actually verify the authentication information).
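
As an illustration only (this is not an existing Restlet API), such a principal could look like the following, with the roles derived from whatever the authentication provider asserts (SAML attributes, LDAP groups, ...):

import java.security.Principal;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: a principal carrying roles derived from the
// authentication provider (e.g. mapped from SAML assertions).
public class AttributePrincipal implements Principal {
    private final String name;
    private final Set<String> roles;

    public AttributePrincipal(String name, Set<String> roles) {
        this.name = name;
        this.roles = Collections.unmodifiableSet(new HashSet<String>(roles));
    }

    public String getName() {
        return name;
    }

    public boolean hasRole(String role) {
        return roles.contains(role);
    }
}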

I'm sure we'll talk more about this on the Restlet mailing lists, but this should provide a sufficient basis for further work on the Guard model. Suggestions and comments welcome!

Update (08/08/08): If you were thinking that this entry wasn't long enough, here are a few additional references. I had forgotten about WSSE authentication, which falls in the "WWW-Authenticate and Authorization headers" category. In addition, here are a few links that may be of interest:

Wednesday, July 25 2007

HTTP-based Notification system

Event notification is something quite often required or at least useful in distributed applications.

After my experiences with WSRF, I tried to design a more "RESTful" approach to notification in our application. I've written a draft of an HTTP-based Notification system and, although I admit this draft is incomplete and could be substantially improved, the main principles I've described there work in our application.

In short, it mimics the SMTP protocol at the application level and models mailboxes and messages as resources. Sending something like this via an HTTP POST to a destination URI works fine:

<Message>
    <From href="uri-of-the-sender" />
    <To href="uri-of-the-recipient" />
    <Content>
        <ns1:test>any XML you want</ns1:test>
    </Content>
</Message>

The idea with this is to be able to support both the pull and the push approach. The pull approach could be implemented using an Atom feed.
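
For the push approach, the sender is little more than an HTTP client POSTing that document; here is a minimal sketch (the destination URI and media type are illustrative):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: pushing a notification message to a destination URI via HTTP POST.
public class NotificationSender {
    public static int send(String destinationUri, String messageXml) throws Exception {
        URL url = new URL(destinationUri);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        connection.setRequestProperty("Content-Type", "application/xml");
        OutputStream out = connection.getOutputStream();
        out.write(messageXml.getBytes("UTF-8"));
        out.close();
        // A 2xx status is expected if the destination mailbox accepted the message.
        return connection.getResponseCode();
    }
}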

It's all well and good in an application that I can control entirely (clients and servers), but there's obviously a problem when trying to make this inter-operate with other systems. Two specifications thus come to mind:

I've started to look at them, and I'll write more about this in further entries.