Distributed Matter - Blog

Saturday, March 7 2009

Signing FOAF files: FOAF files as certificates

(Updated 13/03/09: links at the end)

After the discussions we've been having recently on the foaf-protocols list about the various elements of trust that could be expressed in FOAF+SSL, I've decided to investigate the signing of FOAF files a bit further. There is a summary of FOAF+SSL on the W3C Workshop on the Future of Social Networking website. Henry Story's blog also has a number of entries about FOAF+SSL.

In short, FOAF+SSL is an authentication mechanism that relies on a customised X.509 client certificate which contains the user's public key and the user's identity URI (WebID). Because such a certificate may either be self-signed or be signed by a CA you might not know, authenticating this certificate as representing the WebID may be done in two different ways:

  1. getting the FOAF file from this WebID, and verifying that the public key of the certificate matches the public key associated with this WebID in that FOAF file;
  2. searching through the FOAF files of your peers to check if the public key of the certificate matches the public key they know of (for this particular WebID).

Both mechanisms may be used, for different purposes. The second relies on a Web-of-Trust (WoT) built from your FOAF network.
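
As an illustration of the first mechanism, here is a rough sketch in Java, assuming the Jena API and the cert/rsa vocabularies used later in this post (the class and method names are mine, not part of any FOAF+SSL library):

import java.math.BigInteger;
import java.security.cert.X509Certificate;
import java.security.interfaces.RSAPublicKey;

import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QuerySolution;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class WebIdVerifier {
	// The cert/rsa vocabularies, as used in the FOAF snippets later in this post.
	private static final String QUERY =
		"PREFIX cert: <http://www.w3.org/ns/auth/cert#> " +
		"PREFIX rsa: <http://www.w3.org/ns/auth/rsa#> " +
		"SELECT ?mod ?exp WHERE { " +
		"  ?key cert:identity <%s> ; " +
		"       rsa:modulus [ cert:hex ?mod ] ; " +
		"       rsa:public_exponent [ cert:decimal ?exp ] . }";

	/** Returns true if the FOAF file at the WebID lists the certificate's RSA key. */
	public static boolean verify(X509Certificate clientCert, String webId) {
		RSAPublicKey pubKey = (RSAPublicKey) clientCert.getPublicKey();
		Model model = ModelFactory.createDefaultModel();
		model.read(webId); // dereference the WebID URI (RDF/XML expected)
		QueryExecution qe = QueryExecutionFactory.create(
				String.format(QUERY, webId), model);
		ResultSet results = qe.execSelect();
		while (results.hasNext()) {
			QuerySolution s = results.nextSolution();
			BigInteger mod = new BigInteger(
					s.getLiteral("mod").getLexicalForm().replaceAll("\\s", ""), 16);
			BigInteger exp = new BigInteger(
					s.getLiteral("exp").getLexicalForm().trim());
			if (mod.equals(pubKey.getModulus())
					&& exp.equals(pubKey.getPublicExponent())) {
				return true;
			}
		}
		return false;
	}
}

A real implementation would of course also need content negotiation and proper error handling.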

I would like to see if it's possible to provide a mechanism to build such a FOAF-based web-of-trust securely, in a similar way to what is done in PGP. In the PGP model, you can sign someone else's public key (coupled with an identifier), and ask someone else to sign your own. Other people's signatures of your key can be added to your PGP certificate, and they can add your signatures of their keys to their respective certificates. Thus, PGP users build a collection of certificates from trusted peers, which they'll be able to verify in the future. In addition, you may choose some of these peers to be trusted introducers, in which case you will also accept the certificates that these introducers have signed.

Increasing the number of signatures in a certificate increases the likelihood that someone will recognise it. This is how trust is built, from the web made of certificates. FOAF, although not inherently secure, can also be used to build a model of trust in a social network. The problem here is that, to be able to build a web-of-trust from FOAF files, the authenticity of the FOAF files themselves has to be established first. The web is there, but not the signatures, thus not the trust (in the certification sense).

XML Signature

Since RDF may be represented in XML, XML Signature looks like a potential solution, as Dan Brickley mentioned in a comment on this blog.

XML Signature offers three ways of associating a signature with the signed content: enveloped signatures (the signature is embedded within the signed document), enveloping signatures (the signed content is embedded within the signature) and detached signatures (the signature is separate from the content it signs).

There are two interesting scenarios in this case: signing the root element of the FOAF document or signing some elements within it.

Enveloped or enveloping signatures

It's possible to use XML Signature to put the signature within the signed document (enveloped or enveloping signatures). The problem with this approach is that I could no longer get my FOAF file to validate: the <sig:Signature xmlns:sig="http://www.w3.org/2000/09/xmldsig#" /> element and its content don't mix well with RDF. A workaround is to validate and remove the signature before passing the document to the RDF tool.
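
A rough sketch of that workaround, using the JSR 105 API shipped with Java 6 (assuming the verification key has already been obtained somehow; the class name is mine):

import java.security.PublicKey;

import javax.xml.crypto.dsig.XMLSignature;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.dom.DOMValidateContext;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class EnvelopedSignatureStripper {
	/** Validates the enveloped signature, then removes it so RDF tools can run. */
	public static Document validateAndStrip(String file, PublicKey key)
			throws Exception {
		DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
		dbf.setNamespaceAware(true); // required for XML Signature processing
		Document doc = dbf.newDocumentBuilder().parse(file);
		NodeList nl = doc.getElementsByTagNameNS(XMLSignature.XMLNS, "Signature");
		if (nl.getLength() == 0) {
			throw new Exception("no signature found");
		}
		Node sigNode = nl.item(0);
		DOMValidateContext ctx = new DOMValidateContext(key, sigNode);
		XMLSignature signature = XMLSignatureFactory.getInstance("DOM")
				.unmarshalXMLSignature(ctx);
		if (!signature.validate(ctx)) {
			throw new Exception("invalid signature");
		}
		// Strip the sig:Signature element: what remains is plain RDF/XML again.
		sigNode.getParentNode().removeChild(sigNode);
		return doc;
	}
}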

Detached signatures

The advantage of having the signature detached from the FOAF document is that this document can still be valid RDF. In this case, the XML signature document would have another URI, which could be referred to by the signed document itself, using RDF. For signing the entire document, XML Signature isn't really required, in fact. The same objective could be achieved by signing the entire FOAF file using something based on PGP or S/MIME. The main advantage of XML Signature over PGP and S/MIME in this case is that it provides a link to the document that has been signed (the URI in the <sig:Reference /> element); an application using this could thus be hypermedia-driven.

I'm not sure if it's possible to specify a given content type in the <sig:Reference /> element (there is a Type attribute, but it doesn't seem to have anything to do with the content type). This can be a big problem, since the signature applies to a specific representation. For example, dereferencing Henry's FOAF card can serve a text/rdf+n3 representation (by default) or an application/rdf+xml one, depending on the user agent's Accept header. It would make sense for an RDF-based application that relies on these signatures to try to get the XML-based representations, but XML Signature doesn't seem to be limited to signing XML representations when the signature is detached. There could even be multiple XML subtypes for other kinds of documents (say, application/atom+xml and application/xhtml+xml). This is briefly mentioned in the URI section of the XML Signature specification, but I can't see any mechanism to help. A Content-Type attribute would have been nice.

I think I would prefer to be able to keep the signature within the RDF document. This could avoid potential concurrency issues if the FOAF document isn't updated at the same time as its signature. It would also make it more convenient to retrieve the FOAF file and its certifications in one operation.

Signing the entire FOAF file

Self-signing

The owner of the FOAF file can sign it: this is equivalent to producing a self-signed certificate. At least, if that person's public key is already known by the reader, it's possible to verify the integrity of the file. It's not bad, but it doesn't make the web-of-trust grow.

Signing by another party

Having your certificate signed by people is the mechanism by which the web-of-trust expands. If they sign the entire file, they will have to do it again every time you make any modification (white-space formatting included). That sounds highly impractical. I'm happy to sign Henry's FOAF file to assert the links between his URI, his name and his public key. However, I don't really want to have to produce another signature every time he adds a new person to his network.

Signing a subgraph and incorporating others' signatures

I think what would be much better would be to be able to sign sub-graphs. For example, in my FOAF file, I should be able to sign this sub-graph, so as to assert the links between Henry's name, URI and public key:

<rdf:Description rdf:ID="henrycert"
		xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
		xmlns:cert="http://www.w3.org/ns/auth/cert#"
		xmlns:rsa="http://www.w3.org/ns/auth/rsa#"
		xmlns:foaf="http://xmlns.com/foaf/0.1/">
	<rdf:type rdf:resource="http://www.w3.org/ns/auth/rsa#RSAPublicKey" />
	<cert:identity>
		<rdf:Description rdf:about="http://bblfish.net/people/henry/card#me">
			<rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Person" />
			<foaf:name>Henry Story</foaf:name>
		</rdf:Description>
	</cert:identity>
	</cert:identity>
	<rsa:modulus rdf:parseType="Resource">
		<cert:hex>
			862d6e0b8c3252a79d6eb82966f14e495c839ec2d57983ec39bfac79f8a99f887a3ca559cfee438e90f73da143cefc0a849509d8d91e7093a94c1a39863a5bed78a0f0234a372f12dce0a9535b14d92d56827b3791352b5817681ad7949aa7831911d51827a57e46bad9190d73a69ce56ada74a59ddc0df2a7a31247bbd67445
		</cert:hex>
	</rsa:modulus>
	<rsa:public_exponent rdf:parseType="Resource">
		<cert:decimal>65537</cert:decimal>
	</rsa:public_exponent>
</rdf:Description>

I think signing this would be a useful FOAF certificate, similar to the PGP certification, where only the association between public key and ID is signed.

Henry could also incorporate this sub-graph that I've signed into his own FOAF file, as a way to attach certifications directly to his own public key (again, this is similar to the way PGP works). He could incorporate several copies of this sub-graph, signed by different people.

Specifying what to sign

For my first attempt to sign this element, I used <rdf:Description id="henrycert">. <sig:Reference URI="#henrycert" /> dereferences the fragment using the id attribute (via DOM's getElementById). Unfortunately, RDF validators and tools don't like the presence of this id attribute.

The trick I used relies on an XPath Transform:

<Transform Algorithm="http://www.w3.org/TR/1999/REC-xpath-19991116">
	<XPath xmlns:r="http://www.w3.org/1999/02/22-rdf-syntax-ns#">ancestor-or-self::*[@r:ID='henrycert']</XPath>
</Transform>

There is another problem here. If everyone who signs this element for Henry uses rdf:ID="henrycert", there are going to be conflicts when he pastes these certifications into his own collection. It seems better to use a UUID instead of henrycert, to avoid clashes.
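
For example, the signer could generate the identifier along these lines (a trivial sketch; certId is my own name):

	// rdf:ID values must be XML NCNames, which cannot start with a digit,
	// so prefix the UUID with a letter
	String certId = "cert-" + java.util.UUID.randomUUID().toString();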

Identifying the key used for the certification

In addition, because my public key is in the signature, in <sig:KeyInfo />, it's useful for whoever uses it to know that it's mine. Instead of using the FOAF+SSL client certificate and a <sig:X509Data /> element, I've simply put my URI in the <sig:KeyName /> element. That's at least sufficient for this experiment.

Using these certifications

Once such a certification has been verified, in the XML Signature sense, it would be good to retain this fact in the RDF graph. I'm not sure how to express this. In addition, we'd also need to verify that the public key really belongs to its identifier, in the same way as it's done in FOAF+SSL.

I'd still have preferred to be able to keep the signatures within the same RDF document, but this doesn't seem to be possible (if we want that document to be processable by usual tools). Perhaps the easiest solution would be to create separate FOAF files containing just what needs to be signed. It might also be possible to have an RDF Signature specification similar to the XML Signature specification, including some canonicalisation which would be required, but this seems like a big job, and an unfortunate duplication of effort (seeing what has been done for XML Signature).

Update - 13/03/2009

Toby Inkster pointed out the work done by Jeremy Carroll on this topic. If you've found this blog entry interesting, you should read this work too.

In addition, if you're interested in XML Signature in Java, here are a couple of links you'll certainly find interesting:

Tuesday, October 14 2008

Comparing Web of Trust and Hierarchical PKI for FOAF+SSL

I've been talking with Henry Story about SSL and security-related issues, following the publication of his blog entries: "RDFAuth: sketch of a buzzword compliant authentication protocol" and "FOAF & SSL: creating a global decentralised authentication protocol". Our discussion more or less started when I was making the first release of jSSLutils, and I'm also interested in authentication mechanisms in general. Henry got a few of us together to talk about this.

The idea behind Henry's proposal is to provide a secure authentication mechanism based on FOAF (Friend-of-a-Friend) files and public key cryptography. It follows some of Dan Brickley's work; there's also a prototype server implementation by Pipian. One of the aims is to replace hierarchical Public Key Infrastructures (PKI), which are centralised, with something more flexible. Indeed, hierarchical PKIs rely on a Certificate Authority (CA) to assert the identity of a user or a server. This is typically how most people use HTTPS web servers: you trust your bank's website because its certificate can be verified against a CA certificate which is trusted by your browser. What is perhaps less widely known is the use of client-side certificate authentication. In this case, not only does the server present a certificate to the user, but the user also presents a certificate (for which he/she has a private key) to the server. If the server trusts the CA certificate that issued the certificate of the user, then it's a valid form of authentication. Again, this requires the user to have been issued a certificate signed by a CA that the server trusts.

The most difficult aspect of establishing a Certificate Authority isn't the technical one: the main difficulty is in the legal and administrative process whereby the authority operates and delivers certificates. There are a number of commercial CAs (Verisign, Thawte, ...) which most browsers trust by default: their certificates are already in the browser (or underlying software) when you obtain it. The price of a certificate varies depending on the attributes it may contain (for example code-signing), on the CA, and on how far they actually go to check that the users are who they say they are. Some institutions also provide this service for free. For example, the UK e-Science CA can provide certificates to more or less all UK academics; despite being free for the user, its procedures are relatively thorough, since you have to turn up in person to a local approved CA representative, who will check your passport or similar proof of ID. This is not something that all commercial CAs do. It would be tempting to assume that the more someone has paid a CA for a certificate, the more certain you can be of their identity; this is not necessarily the case (although some people have a vested interest in making you think so).

A FOAF+SSL authentication mechanism would make it possible to avoid depending on a small number of CAs, and instead to rely on a FOAF network to assert identity. This works along the lines of a Web-of-Trust (WoT) model, like OpenPGP, which can provide certain advantages (and possibly comes with its share of problems).

Making this work in practice

You can find more details on Henry's blog. We want to use this to provide access control on the (semantic) web, using SSL as an underlying authentication mechanism. (As a first step, it's easier to focus on client-authentication using this method; we'll still use hierarchical CAs to assert the identity of a server.)

The way SSL client-side authentication works in most servers is by configuring a set of trusted CA certificates (the trust store in Java); this follows the X.509 PKI model of trust. If the client presents a certificate issued by a CA the server trusts, then the server authenticates the client. In most systems, the validation of the client certificate is done by the SSL library, which will reject it, and most likely close the connection, if it cannot find a chain to a trusted CA certificate, before any application data is exchanged. Luckily, the way these underlying SSL libraries perform this verification can be customised.
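
In Java, that customisation point is the X509TrustManager. Here is a simplified, standalone sketch of a trust manager that lets every client certificate through the handshake (jSSLutils's TrustAllClientsWrappingTrustManager, mentioned below, is the real thing and wraps an existing trust manager):

import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;

import javax.net.ssl.X509TrustManager;

// Accepts any client certificate during the handshake; the application layer
// is then responsible for the real trust decision (e.g. the FOAF+SSL check).
public class TrustAllClientsTrustManager implements X509TrustManager {
	public void checkClientTrusted(X509Certificate[] chain, String authType) {
		// no exception thrown: every client certificate passes the handshake
	}
	public void checkServerTrusted(X509Certificate[] chain, String authType)
			throws CertificateException {
		throw new CertificateException("this trust manager is for server sockets only");
	}
	public X509Certificate[] getAcceptedIssuers() {
		return new X509Certificate[0];
	}
}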

What we've done (and there's some sample code in the Sommer project code repository) is to "convert" an OpenPGP key-pair (which is in fact a form of certificate) into an X.509 certificate that can be used directly. It's just a self-signed certificate based on the OpenPGP key material: it contains the OpenPGP public key and is signed by the corresponding private key. If the client presents this certificate to a server during the SSL handshake, this proves to the server that the client has the corresponding private key. Trusting what that certificate represents may then be handled differently, by customising the way trust is managed within the server. Of course, the X.509 certificate generated this way can also contain other pieces of information, including a Subject Distinguished Name (Subject DN) and time-validity. We're still experimenting with this, and none of this is standardised, but the main extension we've been using is the "subject alternative name", which we've set to be the URI identifying the user; this fits the FOAF model quite well. We then tell the underlying SSL library to trust any client certificate, letting the layer above perform the trust verification (this can for example be done in Java using the TrustAllClientsWrappingTrustManager of jSSLutils).
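
For illustration, here is roughly how such a self-signed certificate carrying a URI in the subject alternative name can be produced. This is a sketch using Bouncy Castle's certificate builder API (more recent than this post); the actual Sommer code differs, and starts from OpenPGP key material rather than a fresh KeyPair:

import java.math.BigInteger;
import java.security.KeyPair;
import java.security.cert.X509Certificate;
import java.util.Date;

import org.bouncycastle.asn1.x500.X500Name;
import org.bouncycastle.asn1.x509.Extension;
import org.bouncycastle.asn1.x509.GeneralName;
import org.bouncycastle.asn1.x509.GeneralNames;
import org.bouncycastle.cert.jcajce.JcaX509CertificateConverter;
import org.bouncycastle.cert.jcajce.JcaX509v3CertificateBuilder;
import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;

public class WebIdCertificateFactory {
	/** A self-signed certificate carrying the user's URI in the subject alternative name. */
	public static X509Certificate create(KeyPair keyPair, String webId)
			throws Exception {
		X500Name subject = new X500Name("CN=FOAF+SSL client");
		Date notBefore = new Date();
		Date notAfter = new Date(notBefore.getTime() + 365L * 24 * 60 * 60 * 1000);
		JcaX509v3CertificateBuilder builder = new JcaX509v3CertificateBuilder(
				subject, // self-signed: issuer == subject
				BigInteger.valueOf(System.currentTimeMillis()),
				notBefore, notAfter, subject, keyPair.getPublic());
		// The URI identifying the user goes into the subject alternative name.
		builder.addExtension(Extension.subjectAlternativeName, false,
				new GeneralNames(new GeneralName(
						GeneralName.uniformResourceIdentifier, webId)));
		// Sign with the corresponding private key, i.e. self-sign.
		return new JcaX509CertificateConverter().getCertificate(
				builder.build(new JcaContentSignerBuilder("SHA256withRSA")
						.build(keyPair.getPrivate())));
	}
}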

Another approach, which has recently been published as the experimental RFC 5081, consists of extending TLS to support not only X.509 certificates but also PGP certificates. I think our approach is easier at the moment, since the approach in RFC 5081 would require changes in the existing SSL/TLS libraries (changes which I haven't seen, and which I doubt will happen in the short term for rather well-established projects such as OpenSSL or the JDK). The problem with our approach is that we move the evaluation of trust up to the layer that uses SSL, which means it has to be handled explicitly at the moment (of course, there's nothing preventing us from providing a library or similar module for this). I also reckon that the web-of-trust model can be a bit harder to evaluate in some cases, and this might require more interaction and configuration than the X.509 model does (which just requires providing CA certificates and perhaps CRLs). It's anyway a more convenient way to experiment with this than having to wait for RFC 5081-compliant implementations.

How to model trust

The hierarchical PKI model (X.509) is fairly simple to evaluate. The network of trust can be modelled as a tree, the root of which is the CA certificate; a chain is built from the leaf (the user certificate) to the root of the tree. This works partly because CAs come with policies that specify which certificates intermediate CAs are allowed to sign for the chain to be valid. Although it has the inconvenience of being centralised (and thus concentrates power in the CA), it is relatively straightforward to understand, implement and evaluate.

If we want to use a Web-of-Trust model, we need to provide a new way to evaluate trust, and to model this in the FOAF extensions. (At this stage, I should point out that I know very little about FOAF/RDF/ontologies, but I'm planning to learn about all that very soon.)

There's a good description of how this can be done in Walking the Web of Trust, by Germano Caronni.

Another problem is that, in the CA model, a root CA or any intermediate in the chain is something for which:

  • you trust its identity, and
  • you trust its ability to perform the necessary steps to check and assert someone else's identity.

There are usually legal documents and policies in certificate authorities that define these agreements.

One must be quite careful in a Web-of-Trust model to make sure that this distinction is integrated in the function that evaluates trust. Trusting someone's identity and trusting someone's actions are rather distinct things. On the one hand, this can bring more complexity; on the other hand, this can bring more power to the model. For example, rather than just asserting someone's friendship in a FOAF file, we could add domains of expertise, so that a server could choose to trust a user for a task only if the model deems them qualified to perform it.

How to revoke trust

Problems with keys happen; we need to anticipate them. I think there are two main reasons for using Certificate Revocation Lists (CRLs) in the PKI world:

  1. the private key of a user has been lost or compromised,
  2. the user has done something bad and is no longer to be trusted.

The first problem is a matter of propagating the revocation to whoever has the user in their FOAF file. It's likely that he/she has them in his/her own FOAF file too, so it seems feasible. Similar things could happen if the keys of a CA were compromised or otherwise lost. Such a scenario would cause disruption, but it could be contained.

The second problem is really an authorisation issue, in fact, in a hierarchical PKI. The identity of the user could still be valid, so authentication would work well; you would just want to deny authorisation. In the case of a Web-of-Trust, this can be a bit more tricky, since you may have to re-evaluate the assertions you've made about that person's friends. Put it this way: is the friend of your former friend who betrayed you still your friend after that? I'm not sure there's a right answer to this, but I doubt this type of case can be handled automatically.

It's probably not a bad thing to re-assess the content of the FOAF file regularly, as Henry suggested, but this will demand actions from the user, and thus appropriate user education. In my experience, the problems with security are not so much technical as a matter of educating users and providing them with something easy enough to use. Otherwise, they might just not use your system securely, or won't use it at all.


I'm not sure there is a right answer in the choice between Web-of-Trust and hierarchical PKI. It really depends on your working environment and on how easy it is to set up a PKI there. The advantages of the Web-of-Trust combined with FOAF are appealing: more flexibility, less dependence on a central system. Perhaps the two can be used together for different usages: after all, you don't necessarily introduce your friends by showing their passport. This is only the beginning. I think the challenge is now to provide a suitable formula to evaluate trust, and to explain it to the users accordingly.

Monday, June 9 2008

HTTP authentication mechanisms (and how they could work in Restlet)

Updated 08/08/08

I'm currently addressing how access control is done in the project I'm working on. We're using Restlet, so I've been looking into its authentication and authorisation mechanisms. There were a few issues with the current API for Guards, as I wanted to use mechanisms that are not currently supported. Thus, I started a discussion on the Restlet mailing list by making a few comments and suggestions about the Guard class. The Restlet community is really dynamic and changes tend to be integrated rapidly into the source code, which is really pleasant. However, we all seem to agree that this particular problem deserves more substantial consideration than just a few quick patches. Here are a few thoughts I've had on the subject...

Guarding access to a resource involves two steps:

  • authentication, which consists of ensuring the user is who he/she claims to be and that the credentials presented are authentic (for example, you can be authenticated because you can prove you know your password), and
  • authorisation, which consists of making a decision as to whether the authenticated user may or may not perform an action.

I'm focussing first on authentication, since this is really what influences the external interfaces of the system: what happens between the client and the server. In contrast, authorisation happens after the client has been authenticated and can be done by a third party. This being said, some authentication mechanisms blur the line with authorisation, especially when some authorisations are granted or delegated by a third party.

In the following, I'm trying to categorise the various pieces of information and mechanisms that can be used for authentication, in particular, authentication at transport level and authentication at HTTP-level. This is very HTTP-centric, even if Restlet can be used with other protocols. If you know of other mechanisms or spot something wrong, please let me know.


Authentication at transport-level

This category of authentication mechanism is not strictly related to HTTP, but HTTP sits on top of this layer and web applications can, in many cases, make use of information from the underlying transport mechanism.

SSL and TLS authentication

Secure Socket Layer (SSL) and Transport Layer Security (TLS) provide mechanisms for peer authentication at transport-level. In almost all cases, unless an anonymous cipher is used, the server authenticates itself to the client by presenting an X.509 certificate corresponding to the host name, which the client may choose to trust. (Anonymous cipher suites, which do not require the server to present a certificate, are prone to man-in-the-middle attacks.)

X.509 client-certificates

The client can present a certificate (or chain thereof) to the server. This procedure is initiated by the server, which may also be configured to accept the absence of a certificate even if it has been requested. In short, there are three ways to configure the server for client-side authentication (here, following the Tomcat configuration keywords):

  • none: no certificate request, so client-side authentication is disabled,
  • want: a certificate is requested, but the server will still establish the connection if the certificate response is empty, so this corresponds to setWantClientAuth in Java,
  • need: a certificate is requested and the server will require a valid, non-empty certificate response to establish the connection, so this corresponds to setNeedClientAuth in Java.

These options configure the SSL handshake, which takes place before any HTTP data is exchanged. In practice, if SSL client authentication is to be used as one of several possible authentication methods, or if only some resources on the server are protected by a mechanism that requires SSL client authentication, the preferred option should be to want a client-certificate. Indeed, if the server needs a certificate, failing to provide one implies that the connection will not even be established (which means that HTTP cannot be used to send an "access forbidden" error message of any sort): this is not very "polite".
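
In Java, the difference between want and need boils down to one method call on the server socket (a minimal sketch; the port and the default SSLContext are arbitrary choices):

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLServerSocket;

public class WantVsNeed {
	public static void main(String[] args) throws Exception {
		SSLContext sslContext = SSLContext.getDefault();
		SSLServerSocket serverSocket = (SSLServerSocket) sslContext
				.getServerSocketFactory().createServerSocket(8443);
		// "want": request a certificate but accept an empty response, so the
		// application can still reply with a proper HTTP 401/403 later.
		serverSocket.setWantClientAuth(true);
		// "need" would abort the handshake when no certificate is presented:
		// serverSocket.setNeedClientAuth(true);
	}
}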

Kerberos

Kerberos is a secure network authentication protocol. It allows two parties to authenticate each other by using a third party they both trust: the KDC (Key Distribution Centre). It's totally independent of SSL. However, there are some Kerberos-based cipher suites, available for example in Java 6, which could also provide Kerberos authentication at transport-level.

I have done a few quick tests in Java: SSLSession.getPeerPrincipal() does return an instance of KerberosPrincipal, the name of which is the Kerberos principal of the client, as expected. The client that I tried was written in Java and used more or less the same JAAS settings as the server. The only other clients that I tried were Firefox, Opera and Safari; unfortunately, they do not support these cipher suites, which makes it unrealistic to deploy a system that would be used from a browser. (This type of Kerberos-based authentication is not related to SPNEGO, which is better supported by most browsers and also works from Java clients.)

FOAF & SSL

A few weeks ago, Henry Story talked about his idea of secured FOAF on the Restlet mailing list. In short, Henry was interested in using a web-of-trust model with SSL, and I made a few suggestions for using OpenPGP keys in X.509 certificates. The implementation of this is available in the Sommer project, and this particular feature uses jSSLutils. This is a very interesting project and I think FOAF&SSL-based authentication could provide new authentication models, perhaps with various degrees of trust depending on the way peers are related in the web of trust.

Client IP address and domain name

Although this does not, strictly speaking, authenticate the client, but rather the machine from which the requests come, IP addresses and their reverse DNS names can be considered as authentication information which may then be used to make an authorisation decision. This is frequently used to authenticate someone as an intranet user.


Authentication at HTTP-level

The following authentication mechanisms make use of the HTTP protocol.

WWW-Authenticate and Authorization headers

HTTP provides a WWW-Authenticate response header which may be used to challenge the client to provide suitable authentication information, via the Authorization request header.

When access is not authorised, a 401 Unauthorized response status code is returned. There may be multiple WWW-Authenticate challenges in a single response; it is then up to the client to use one of them and provide its response in the Authorization header. This may be confusing, but most of the time, the Authorization header contains authentication (not authorisation) information.

Basic

HTTP Basic authentication is probably the simplest username/password authentication mechanism in HTTP. It relies on a realm, which is a name usually displayed by the browser when the user is prompted for his/her password. It is probably the simplest to implement, since the client then only has to provide the credentials as a string like username:password in clear, encoded in Base64. Since the server obtains the password in clear, this password can be relayed to a third-party authentication provider to be verified, such as an LDAP server. Other authentication providers include encrypted passwords in files (htpasswd in Apache) or in databases. Of course, this would also work with authentication providers that disclose the password. HTTP Basic authentication can also be used pre-emptively, that is, when making the first request to the server, without being challenged with a "401" response.
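
A minimal sketch of what a server does with the Authorization header (using java.util.Base64, which is more recent than this post; the class and method names are mine):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicCredentials {
	/** Decodes "Basic dXNlcjpwYXNz" into { "user", "pass" }. */
	public static String[] decode(String authorizationHeader) {
		String encoded = authorizationHeader.substring("Basic ".length());
		String decoded = new String(Base64.getDecoder().decode(encoded),
				StandardCharsets.ISO_8859_1);
		// Split on the first colon only: passwords may contain colons.
		return decoded.split(":", 2);
	}
}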

Digest

HTTP Digest is an improvement over HTTP Basic because the password is not transmitted in clear between the client and the server. The downside of this mechanism is that it requires the server to be able to know the secret password. This is often impractical in environments where the password is really only known by the user (any mechanism that stores the password obfuscated) or where the authentication provider cannot reveal the password even if it has access to it. (It could be risky to let an LDAP server give out passwords.)

I personally tend to prefer Basic over SSL rather than Digest for two reasons:

  1. I use authentication providers that do not give access to the password in clear (LDAP or non-plain-text htpasswd files).
  2. I find it hard as a user to feel safe entering my password in a Digest dialog box... which looks just the same as the box you get with Basic authentication; at least, with HTTPS, I'm warned when I switch to an un-enciphered page.

Negotiate (SPNEGO)

SPNEGO is an authentication mechanism that negotiates the use of an underlying GSS-API mechanism; when used in conjunction with HTTP, it uses the Negotiate scheme. It is mostly used for NTLM or Kerberos (via GSS), but I think it could be used with other GSS mechanisms if required.

I've experimented with this for Restlet (on the server-side) and managed to get it to work from Firefox, Safari, Internet Explorer, and Konqueror against a Linux-hosted MIT Kerberos KDC. Active Directory can also use Kerberos.

Despite the security of Kerberos itself, it may be wiser to secure the exchange of GSS tokens in the HTTP headers, in particular using a GSS mechanism that supports integrity protection. (Admittedly, this is something I need to check.)

Kerberos

Kerberos can be used directly as an authentication scheme via the Apache Kerberos module (which may also support SPNEGO/Negotiate). I'm not sure if it's really maintained. The corresponding Mozilla plugin seems to be no longer maintained. Since Kerberos works fine via SPNEGO, I'm not sure why one would use this scheme nowadays.

NTLM

There's an NTLM scheme too. Again, I suppose it should be deprecated nowadays, in favour of Negotiate.

OAuth

OAuth is a rather new HTTP authentication/authorisation scheme. Its purpose is to delegate authorisation. There already is an implementation of an OAuth Guard in Restlet.

Cookies and authentication sessions

Forms

Form-based authentication is probably the most used method of authentication on the web. It tends to make use of a cookie to maintain a session (or some session identifier in the URL). Its main advantage is that it looks prettier, especially when it matters to have a corporate visual identity. It has security problems similar to those of HTTP Basic, and it's thus better used over SSL.

In terms of HTTP, form-based authentication usually challenges the client with two response codes:

  • 200 OK: you get the page, but there is an explicit message on the page which says you must log on, and gives you a link. It's a bit of a shame, since in principle, it could also return a 401 Unauthorized status code in the header and still produce the same explicit page which invites the user to log on.
  • 302 Found: you get redirected to the login page. Again, a 401 Unauthorized status code could be better, especially for non-browser clients.

This being said, the HTTP specification clearly states that a WWW-Authenticate header must be present when returning a 401 response code. Perhaps there should just be a WWW-Authenticate: Form http://uri/of/the/logon/form scheme.

Shibboleth, Google Single-Sign On and OpenID

To some extent, mechanisms that delegate authentication to a third party, such as Shibboleth, the Google Single Sign-On Service and OpenID (Kerberos is not in this category because it's supported by SPNEGO headers directly), can be considered as variants of form-based authentication, where a cookie is established to maintain the authenticated state across subsequent requests. Maintaining a session via a cookie isn't strictly necessary (especially for OpenID), but it seems that it's the way it's usually done, probably for lack of a corresponding WWW-Authenticate scheme.

These authentication mechanisms are mainly designed for browser-based access. In order to get the Shibboleth session cookie, the first HTTP request needs to be a GET; this wouldn't work with other methods. (I believe Shibboleth 2.0 tries to address this, but I haven't looked into it.)

Shibboleth is a mechanism that involves more interactions than a plain form and automatic redirections. There are also connections behind the scenes between servers, but here is what it looks like from the browser point of view:

  1. C->S - GET. The browser attempts to get http://protected.site/page.
  2. S->C - The server returns a 302 redirection to a "Where are you from?" (WAYF) page; this URI includes the URI of the protected page in a parameter.
  3. C->S - GET http://where.are.you.from/ (automatic after redirection).
  4. S->C - The WAYF page offers a list of identity providers (IdP). The user clicks on an IdP link (usually, that of his/her institution) and is directed to that URI. Again, that URI comprises the URI of the protected page.
  5. C->S - GET http://identity.provider/. The IdP authenticates the user via another means (HTTP Basic, X.509 certificates, ...): this is where the actual authentication happens.
  6. S->C - Upon successful authentication, the IdP gives the browser a web page with a form that contains the SAMLResponse. This form is instantly submitted automatically via POST (because there's a <body onload="document.forms[0].submit()"> on that page).
  7. C->S - POST to http://protected.site/Shibboleth.processor the content of SAMLResponse.
  8. S->C - The server sets a Shibboleth session cookie and sends a 302 redirection to the original protected page.
  9. C->S - GET http://protected.site/page. The Shibboleth authentication has taken place. The Shibboleth-protecting module also automatically inserts a header for processing by the web application it hosts. That header is a set of SAML assertions for authentication and authorisation.
  10. S->C - You're in (depending on the SAML assertions).

This would really look better without the automatic POST (which relies on JavaScript). I hope that, when browser designers realise that these tricks are what makes CSRF attacks work, having JavaScript post a form to a distinct website upon loading a page will be more restricted. Unfortunately, this will certainly be a problem for Shibboleth. (Shibboleth could be deployed with a GET instead; the problem is that the SAMLResponse is about 8KB, which is a bit long as a URI parameter.)

This is in principle very similar to the Google Single Sign-On Service. The POST issue could in principle exist with Google SSO.

OpenID also falls in this category, except that the authentication assertion is only about the URI identifier; there are no extra SAML assertions (yet).


What this means in the context of Restlet

The Guard class in Restlet is currently built using a ChallengeScheme. This is well suited to the use of WWW-Authenticate/Authorization headers. However, not all authentication mechanisms rely on these headers (in addition, not all challenge schemes use a realm, which is also currently mandatory to build a Guard, although it can be null). In fact, not all authentication information can be challenged.

There are 3 layers of information that may be used for authentication:

  1. Transport-level (IP address, SSL client-certificates...),
  2. Authentication-headers (although there can be multiple challenge schemes, only one has to be chosen by the client),
  3. Session-cookies.

These 3 layers can be used independently. Some may be used in conjunction with one another.

Although establishing a session via cookies for authentication may seem like it violates the stateless interactions principle of REST, I don't think it does. My interpretation is that the stateless interactions principle should be applied to the application layer, not necessarily the authentication layer. Of course, the use of forms blurs the line between authentication and application data, in particular because they can't really be used with a suitable 401 response. Since Restlet can be embedded in a Servlet, it can then use form/session based authentication.

In addition to this, the Restlet Guard could also make more use of Principal classes. This can be useful for simple cases, where there's a principal that models a user authenticated via a user name and password, but it could also support more complex models, for example by mapping the SAML assertions of Shibboleth to principals, thereby providing a basis for role-based authorisation. There is a clear need for this with the JAX-RS extension, and principals would also help make better use of the Java security framework. In terms of implementation design, these instances of Principal should probably be obtained from the authentication providers (that is, the components that actually verify the authentication information).
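
For instance, a principal produced by an authentication provider for a FOAF+SSL-style WebID might look like this (a hypothetical sketch; not part of the Restlet API):

import java.security.Principal;

// A minimal principal that an authentication provider could return once the
// client has been authenticated.
public final class WebIdPrincipal implements Principal {
	private final String webId;

	public WebIdPrincipal(String webId) {
		this.webId = webId;
	}

	public String getName() {
		return webId;
	}
}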

I'm sure we'll talk more about this on the Restlet mailing lists, but this should provide a sufficient basis for further work on the Guard model. Suggestions and comments welcome!

Update (08/08/08): If you were thinking that this entry wasn't long enough, here are a few additional references. I had forgotten about WSSE authentication, which falls in the "WWW-Authenticate and Authorization headers" category. In addition, here are a few links that may be of interest:

Thursday, May 22 2008

jSSLutils: Customisable configuration of SSL in Java

I recently released jSSLutils: a library to help configure SSL in Java.

Background

In my current project, I have been designing and implementing a system for managing data and allowing these data to be shared between the relevant people. Securing access to these data is a strong requirement of this project. This system is implemented using Restlet (only the server side at the moment), in Java.

In our domain, we tend to use SSL client-side authentication, whereby not only the server presents a certificate to the client (like most secure consumer-oriented websites do), but the client also sends its certificate to the server to be authenticated. Some of these certificates may also be proxy-certificates, often used as a single sign-on (SSO) mechanism in grids based on GSI. We also have a public-key infrastructure (PKI) set up with a certificate authority (CA) for our community.

I've experimented with more advanced configurations of SSL, mainly dealing with proxy-certificates and Certificate Revocation Lists (CRLs), which are not always supported by Java applications out of the box. I had managed to get Jetty to accept proxy-certificates; this is possible by modifying the SSL connectors. Later on, I tried to do the same thing with Restlet. Restlet can use several SSL connectors (based on Jetty, Grizzly, or others). What is common to all these SSL configurations is that they require the TrustManagers to be adapted when building the SSLContext used to create the SSLSockets. This was a good opportunity to package my code and make it usable from several projects.
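
The underlying pattern, in plain JSSE terms, is always the same (a sketch; the keystore file name and password are placeholders, and the custom trust manager is whatever your scenario requires, e.g. proxy-certificate- or CRL-aware):

import java.io.FileInputStream;
import java.security.KeyStore;

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;

public class CustomSslContext {
	/** Builds an SSLContext whose trust decisions go through a custom TrustManager. */
	public static SSLContext build(TrustManager customTrustManager) throws Exception {
		KeyStore keyStore = KeyStore.getInstance("PKCS12");
		keyStore.load(new FileInputStream("server.p12"), "password".toCharArray());
		KeyManagerFactory kmf = KeyManagerFactory
				.getInstance(KeyManagerFactory.getDefaultAlgorithm());
		kmf.init(keyStore, "password".toCharArray());

		SSLContext sslContext = SSLContext.getInstance("TLS");
		// The second argument is where the customisation happens: the connectors
		// then create their SSLSockets from this SSLContext.
		sslContext.init(kmf.getKeyManagers(),
				new TrustManager[] { customTrustManager }, null);
		return sslContext;
	}
}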

jSSLutils

I have thus created this relatively small library: jSSLutils. The initial aims were:

  • to provide a consistent way to configure SSL-related parameters in Java applications,
  • to be able to customise the way keys and trust were managed (in particular grid proxy-certificates),
  • to be able to configure CRLs.

There is some initial documentation on the jSSLutils website. There are also examples in the form of jUnit tests in the source code. They even come with test keys, certificates and CRLs so that you can try them out.

As of this week, version 0.3 is in the central Maven repository, which should make it easier to use.

FOAF & SSL

Coincidentally, Henry Story was also looking for customisation features in SSL at the same time, on the Restlet mailing list. I was able to help him a bit with his work on FOAF & SSL: a global decentralised authentication protocol. You can find his work in the Som(m)er Address Book project. This looks like a very interesting approach to authentication, but I haven't found the time to get more involved in this project.

Extras

I've also looked at using PKCS#11 tokens and the Apple Keychain (on Mac OS X) from Java, to store and to use the keys and certificates. While investigating this, I found out that most of the applications I've looked into could not support these features, at least not easily. Fortunately, the projects I've looked into are open source: I was therefore able to add the features I wanted and contribute them back to their communities. For example, I submitted a patch for Tomcat and a patch for Jetty, to make it possible to use the Apple Keychain or a different Java security provider (useful for static configuration of PKCS#11).

The code for loading a KeyStore in many Java applications seems to assume that it's going to be loaded from a file, which isn't the case for PKCS#11 tokens or the KeychainStore. jSSLutils also provides a jsslutils.keystores.KeyStoreLoader, which should help handle these cases like normal ones.
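
The difference is small but breaks file-centric code paths (a minimal sketch):

import java.security.KeyStore;

public class TokenKeyStore {
	public static KeyStore loadPkcs11(char[] pin) throws Exception {
		// No backing file: PKCS#11 tokens (and Apple's KeychainStore) are
		// loaded with a null InputStream, unlike file-based keystores.
		KeyStore keyStore = KeyStore.getInstance("PKCS11");
		keyStore.load(null, pin);
		return keyStore;
	}
}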

Feedback

Comments and suggestions are welcome, please get in touch!

Friday, July 27 2007

A look at WS-Notification

I'm having a look at WS-Notification (WS-BaseNotification 1.3, OASIS Standard). I usually find reading some of these specs laborious (and not very rewarding), but I'm trying to show some good will towards Web Services (in the WS-* specifications sense). However, shortly after beginning, I found a few caveats that would probably make it difficult for all compliant implementations to interoperate.

Here is an excerpt of the introduction to Section 3 (the NotificationConsumer interface):

WS-BaseNotification allows a NotificationConsumer to receive a Notification in one of two forms:
1. The NotificationConsumer MAY simply receive the “raw” Notification (i.e. the application-specific content).
2. The NotificationConsumer MAY receive the Notification data as a Notify message as described below.
[...]
When a Subscriber sends a Subscribe request message, it indicates which form of Notification is required (the raw Notification, or the Notify Message). The NotificationProducer MUST observe this component of the Subscription and use the form that has been requested, if it is able. If it does not support the form requested, it MUST fault.

At first glance, a NotificationConsumer (i.e. the recipient of the notification) can be compliant with the WS-Notification standard so long as it can receive the message, whether or not it complies with the format described in the following pages. There are subsequent mentions of the raw format, but its use seems to imply the use of SOAP (in a context that uses MAYs and SHOULDs).

Later on, in Section 4.2 (NotificationProducer/Subscribe):

The NotificationProducer should specify via WSDL, policy assertions, meta-data or by some other means, the information it expects to be present in a ConsumerReference. If a ConsumerReference does not contain sufficient information, the NotificationProducer MAY choose to fault or it MAY choose to use out of band mechanisms to obtain the required information.

In addition, WS-Notification relies on WSRF (which more or less re-invents HTTP-based resources, but that's another story). The WSRF specification defines a set of accessors to get and set resource properties, but leaves the door open as to how these should behave, especially when setting multiple properties in one request. Fair enough, HTTP leaves this responsibility to the applications that use it too. Interestingly, though, WS-Notification doesn't say much either about what should happen when using SetResourceProperties.

To sum this up, I think the core concepts of NotificationConsumer, NotificationProducer, etc. are sound, but the two excerpts quoted above make me doubt it can actually achieve some sort of interoperability. It almost sounds like "do what you want so long as you use SOAP and WS-Addressing". I've yet to be convinced that these two add any value towards achieving the goals of Web Services (SOAP for security, maybe?).
