Discussion on my RDF Hypermedia Proposal

I have had some offline discussions about my Read Write RDF Hypermedia proposal (please let me know if you’d like to be named), and there are some things that I’d like to elaborate on.

There is clearly a need to contrast my proposal with other technologies out there, and to make it clearer where my approach makes sense.

Initially, I developed this with the ideal of making the RDF control statements look somewhat like natural language sentences; that is, if you as a client are authorized to delete a resource description, the RDF should tell you so in terms that are likely to be understood by someone who has never seen RDF before. Developers might just look at what they get from a server and start to work with it. After discussing it, I think it might have another advantage: it seems to me that we will have a lot of different protocols at different levels in the stack, and that this protocol heterogeneity could be addressed in part by moving controls into the body of the message. Let’s discuss that first.

Protocol Heterogeneity

HTTP has served us well, and will continue to do so. Nevertheless, it seems to me that with the “Internet of Things” (IoT), we will develop a “last mile” problem, in which we cannot or do not want to control what application layer protocol to use. I haven’t done a lot of work in this space, but it seems very diverse. Obviously, if you can use a Wifi protocol, HTTP will be available to you. I did some work with an Arduino (a microcontroller with very little RAM), where I found that the best choice for my application was to add an Ethernet shield and use HTTP. In other cases, it probably wouldn’t be. Some protocols maintain a layered model, and some, like Zigbee, have gained an IP layer. Others, like DASH7, touch every layer in the OSI stack. In the latter case, it is mostly easy to map HTTP methods onto operations on DASH7 devices, but the point here is that there is a whole zoo of protocols, some of which would need a mapping, and some of which have an IP layer but where you do not want to be tied to an application protocol for some reason. That is not to say that HTTP will not be the application protocol used on the open Web, but that there are endpoints that should be free to choose their protocol for interacting with their internals.

If the semantics of control operations sits in HTTP, the translation may become unnecessarily complex, and thus feel restrictive to implementors. That’s one of the reasons I would prefer to have controls in the message body.

But it also implies that the controls should be as precise as possible. For example, I proposed

</test/08/data> hm:canBe hm:mergedInto .

and not that we encode POST as an operation. POST is not precise enough; instead, we should define in the vocabulary that, for the HTTP protocol, hm:mergedInto would be implemented with a POST.
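To illustrate, a client could keep a per-protocol table that maps these precise controls onto concrete operations. Here is a minimal sketch in Python; the bindings themselves are an assumption for illustration, since the vocabulary has not defined them yet:

```python
# Hypothetical per-protocol bindings from precise hypermedia controls to
# concrete operations; the vocabulary would be the authoritative source.
CONTROL_BINDINGS = {
    "http": {
        "hm:replaced": "PUT",
        "hm:deleted": "DELETE",
        "hm:mergedInto": "POST",
    },
    # Other protocols (DASH7 and friends) would get their own tables,
    # mapping the same controls onto whatever operations they offer.
}

def resolve_operation(control, protocol="http"):
    """Translate an abstract control into a protocol-specific operation."""
    try:
        return CONTROL_BINDINGS[protocol][control]
    except KeyError:
        raise ValueError(f"No {protocol} binding for {control}")

print(resolve_operation("hm:mergedInto"))  # POST
```

The point of the indirection is that the message only ever says hm:mergedInto; which wire-level operation that becomes is decided by the table for the protocol actually in use.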

Finally, it should be noted that such controls should be defined for protocols that use very different operations. I have for example proposed similar controls for video:

  a dcmit:MovingImage ;
  hm:can hm:play, hm:pause, hm:stop .

I also proposed more detailed, application-specific controls, like

  a dcmit:MovingImage ;
  hm:canBe hma:votedFor ;
  rev:hasReview </user/foo/vote/1>, </user/bar/vote/56> .

where the definition of hma:votedFor might be fairly complex, possibly defined in terms of other controls; I haven’t thought that far yet.

Contrasting With Other Efforts

There are several other vocabularies and protocols in the same space. None, as far as I know, takes the same starting point: that controls should look and feel like natural language sentences. I think that is the most important point, as it is the key enabler of making “View Source” development feasible for newcomers.

The Hydra specification does, however, define in-band hypermedia controls for allowed operations in RDF. But Hydra doesn’t read like natural sentences; its appeal is more towards those who already know RDF. Moreover, it isn’t inclined to define precise controls like hm:mergedInto; it allows you to formulate which operations to do, and hydra:method is explicitly an HTTP method. As argued above, this may create a gap on the last mile, as the semantics becomes unclear further down the stack, and it would require more work to map between various protocols.

Then, there’s the Web Access Control (WAC) spec. On the surface, my proposal seems like a replacement for that, but that’s not intended; on the contrary, I see them as complementary. In fact, when I first started my implementation, the first thing I did was to use the predecessor of this spec, as well as WebID+TLS, to create the authentication and authorization framework. I later removed that code to focus on the hypermedia aspects, but I still think WAC is the right way to specify the authorization rules. The WAC rules should be exposed for clients that prefer to gain an instant overview of the application they are interacting with, but I think that many users will find it much easier to understand a nearly-natural language sentence that tells them exactly what they can do to the resource they are presently interacting with.

I guess I should have taken the complementary perspective with the Linked Data Platform (LDP) as well. So far, I’ve been of the opinion that LDP should be superseded by hypermedia. My main point of criticism has been that LDP doesn’t have the right primitives, i.e. LDP’s primitives need to be understood in terms of HTTP, which means a newcomer has to have incentives to read up. I think they need to have the “hey, I can do this” experience first; the message and tools have to provide that. Then comes the argument from above, that the emergent IoT-induced protocol heterogeneity problem could require us to formulate controls in message bodies.

That said, it is also pretty clear that my hypermedia proposal could be formulated to only add some RDF to resource descriptions, and thereby augment rather than replace LDP. So, I hereby apologize for my attacks on LDP; let’s go complementary.

Is the Controls Resource Needed?

In my first post, I argued that in the case where we expose a resource for reads by unauthenticated clients, but require authentication and authorization for writes, we need a separate resource to challenge the client. I admit that this is suboptimal, but I don’t see a lot of good alternatives. One alternative that was proposed to me is to add hypermedia controls to the response to an unauthenticated client, as it is not the controls that require auth* but the right to use them. I feel that would be more confusing: even an authenticated user may not be authorized to use certain controls, and so a client may authenticate and then see certain controls disappear because it is not authorized to use them.

At this point, I think that implementation experience is needed to find the best approach, and importantly, study how inexperienced but enthusiastic implementors interact with resources.

Read Write RDF Hypermedia

The beauty of “View source”

When I first came to the Web in 1994, one of the first things I did was to “View source”. Although I had been programming since I was a kid, I didn’t come into it with “proper training”. As I looked at other people’s HTML, I felt the thrill of thinking “I understand this! I can actually make things on this Web thing!” Then, I went on to do that. I also remember the introduction of CSS. I was skeptical: should I learn another thing, when I had something that worked? Eventually, I was won over by the fact that I could share styles between all my documents; that made life so much easier, as I had accumulated a bunch of them by that point.

I was won over to RDF in 1998 too, but since then, I have never seen what I first saw in HTML: the “View source” and we’re ready to go. In fact, when I look at modern Web pages, I don’t see it with JS and HTML either. Something seems to have been lost. It doesn’t have to be that way. An RDF triple can be seen as a simple sentence; it can be read and understood as easily as HTML was back in the day, if we just allow systems to do it. That’s what I’ve set out to do.


I’ve been thinking about read-write RDF hypermedia for a long time, and I started implementation experiments more than 5 years ago, but the solution I had in mind at the time I wrote the above presentation didn’t work out. Meanwhile, I focused on other things. The backstory is a long one; I’ll save it for later, as I think I’m onto something now and I’d like to share that. I also have to admit that I haven’t been following what others have done in this space, which may or may not be a good thing.

Hypermedia, the idea that everything you need to drive an application can be found in the messages that are passed, is an extremely good fit with my “View source” RDF. The problem was to make it work with Linked Data and common HTTP gotchas, and yet make it protocol independent.

What I present now is what I think is the essence of the interaction. Writes necessarily have to be backed by authentication and authorization (and that’s what I started coding, until I found I should focus on the messaging first). For now, I have a pre-alpha with a hardcoded Basic Auth username and password, and it only does the simplest Linked Data. However, I think it should be possible to bring it up to parity with the Linked Data Platform; that would be the goal, anyway. The pre-alpha is up and running on http://rwhyp.kjernsmo.net/

One way to do Linked Data is to have a URI for a thing, which may for example be a person. A person isn’t a document, so it has a different URI from the data about it. For example, http://rwhyp.kjernsmo.net/test/08 could be my URI, and if you click it in a browser, you get a flimsily formatted page with some data about me. With a different user agent, like curl, you get a 303 redirect to http://rwhyp.kjernsmo.net/test/08/data, and that’s where the fun starts. This is where developers are intended to go.

If you do that, you will see stuff like (the prefixes and base http://rwhyp.kjernsmo.net are abbreviated for readability):

</test/08> a foaf:Person ;
 foaf:name "Kjetil Kjernsmo" ;
 foaf:mbox <mailto:kjetil@kjernsmo.net> .
</test/08/data> hm:toEditGoTo </test/08/controls> ;
 void:inDataset <http://rwhyp.kjernsmo.net/#dataset-test> .

In there, you find the same information, augmented with hm:toEditGoTo standing between the URL of the data and something that looks similar, but has controls at the end. The idea is that it should be fairly obvious that if you want to edit the resource, you should go there and see what it says. For machines, the definition of the hm:toEditGoTo predicate should also be clear. Note that my choice of data and controls in the URIs is not significant; you can use anything, as long as you use that predicate to link them.
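A client acting on that link just has to look up the hm:toEditGoTo statement for the data document. A sketch, with the triples from the example above modelled as plain Python tuples rather than a real RDF graph:

```python
# Triples from the data document, as (subject, predicate, object) tuples;
# prefixes are kept abbreviated, as in the example above.
triples = [
    ("</test/08>", "a", "foaf:Person"),
    ("</test/08>", "foaf:name", '"Kjetil Kjernsmo"'),
    ("</test/08/data>", "hm:toEditGoTo", "</test/08/controls>"),
]

def controls_resource(triples, data_uri):
    """Find where to go to edit the given data resource."""
    for s, p, o in triples:
        if s == data_uri and p == "hm:toEditGoTo":
            return o
    return None

print(controls_resource(triples, "</test/08/data>"))  # </test/08/controls>
```

A real client would of course parse the Turtle with an RDF library and query the graph, but the lookup itself is no more complicated than this.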

If you do, you will be challenged to provide username and password. Try it out with testuser and sikrit. Then, you’ll see something like this:

</test/08/controls> hm:for </test/08/data> ;
 a hm:AffordancesDocument ;
 rdfs:comment "This document describes what you can do in terms of write operations on http://rwhyp.kjernsmo.net/test/08/data"@en .
</test/08/data> hm:canBe hm:replaced, hm:deleted, hm:mergedInto .

See, the last sentence tells you that this data document can be replaced, deleted, and merged into. Now, the idea is to define what these operations mean in the context of a certain protocol, with an RDF vocabulary. I haven’t done that yet, but perhaps we can guess that in HTTP, replaced means using PUT, deleted means using DELETE, and merging into means POST?
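Reading off the advertised operations is again a simple lookup. A sketch, with the controls document modelled as tuples and the hm:canBe object list unrolled into one triple per control:

```python
# Controls document as (subject, predicate, object) tuples; the
# "hm:canBe a, b, c" shorthand becomes one triple per object.
controls = [
    ("</test/08/controls>", "hm:for", "</test/08/data>"),
    ("</test/08/data>", "hm:canBe", "hm:replaced"),
    ("</test/08/data>", "hm:canBe", "hm:deleted"),
    ("</test/08/data>", "hm:canBe", "hm:mergedInto"),
]

def allowed(controls, resource):
    """Collect the write operations advertised for a resource."""
    return {o for s, p, o in controls if s == resource and p == "hm:canBe"}

print(sorted(allowed(controls, "</test/08/data>")))
# ['hm:deleted', 'hm:mergedInto', 'hm:replaced']
```

Note that the client only learns *what* it may do here; *how* to do it over a given protocol is a separate question.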

The idea is also that these operations can be described in the message itself. For example, we can say that if you use HTTP for hm:mergedInto, you have to use POST, and we can reference the spec that contains the details of the actual merge operation, like this:

hm:mergedInto hm:httpMethod "POST" ;
 rdfs:comment "Perform an RDF merge of payload into resource"@en ;
 rdfs:seeAlso [ 
   rdfs:isDefinedBy <http://www.w3.org/TR/rdf-mt/#graphdefs> ;
   rdfs:label "RDF Merge" ] .

It is an open question how much of this should go into what messages.
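One plausible client strategy is to prefer the method declared in the message, and fall back on built-in knowledge only when the message is silent. A sketch; the fallback table here is an assumption, since the vocabulary would be the authoritative source:

```python
# The operation description, as received in the message body.
description = [
    ("hm:mergedInto", "hm:httpMethod", '"POST"'),
]

# Built-in fallbacks a client might ship with (an assumption for
# illustration; the vocabulary would define these bindings).
DEFAULTS = {"hm:replaced": "PUT", "hm:deleted": "DELETE"}

def http_method(description, operation):
    """Prefer the method declared in the message over built-in defaults."""
    for s, p, o in description:
        if s == operation and p == "hm:httpMethod":
            return o.strip('"')
    return DEFAULTS[operation]

print(http_method(description, "hm:mergedInto"))  # POST
```

Putting the hm:httpMethod statement in the message keeps older clients working when new operations are introduced, at the cost of larger responses, which is exactly the open question above.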

At this point, I use RESTClient, but there are many similar tools that can be used. Just use the same credentials, set the Content-Type to text/turtle, and POST something like

<http://rwhyp.kjernsmo.net/test/08> foaf:nick "KKjernsmo" .

to http://rwhyp.kjernsmo.net/test/08/data, then retrieve it again and see that the triple has been added.
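The same request can be assembled with nothing but the Python standard library. Here the request is only built and inspected, not sent, since the pre-alpha server may not stay up; uncommenting the last step would perform the actual POST:

```python
import urllib.request

# A real request would also need the @prefix declaration for foaf:,
# which is abbreviated here as in the examples above.
body = '<http://rwhyp.kjernsmo.net/test/08> foaf:nick "KKjernsmo" .'

req = urllib.request.Request(
    "http://rwhyp.kjernsmo.net/test/08/data",
    data=body.encode("utf-8"),
    method="POST",
    headers={"Content-Type": "text/turtle"},
)
# The Basic Auth credentials from above (testuser / sikrit) would be
# supplied via an Authorization header or urllib's HTTPBasicAuthHandler
# before calling urllib.request.urlopen(req).

print(req.get_method(), req.get_header("Content-type"))  # POST text/turtle
```

Any HTTP tool works equally well; the only moving parts are the method, the Content-Type, and the credentials.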

So, it is fairly straightforward; it is basically the way it has always been done. So, what’s new? The new thing is that a beginner has had their hand held through the process, and once we have the vocabulary that tells man and machine alike how to use a specific protocol, they can both perform their tasks with very little prior knowledge.

At some point, we may want to hook this into the rest of the Semantic Web, and use automated means to execute certain tasks, but for now, I think it will be very useful for programmers to just look at the message and write for that.

Main pain point

The main pain point in this process was that clients don’t supply credentials without being challenged. My initial design postulated that they would (there’s nothing in the standard that discourages it), so that I could simply include the hm:canBe statements with the data. After thinking about it for some time, I decided to take a detour around a separate document, which challenges the client for reads and writes alike, while the data document only challenges for writes.

There are obviously other ways to do this, like using the HTTP OPTIONS method, but I felt it would be harder for a user to understand that from a message.

Feedback wanted

Please do play with it! I don’t mind if you break things, I’ll reload the original test data now and then anyway. There are some other examples to play with if you look for void:exampleResource at the root document.

I realize many cannot be bothered to create an account to comment, and I haven’t gotten around to configuring better login methods on my site, so please contact me by email. I also hang out on the #swig channel on Freenode IRC as KjetilK, and I will be posting this to a mailing list soon too.

BTW, the code is on CPAN. I have had the Linked Data library for 8 years; there are some minor updates there, but most of the new code is in a new addon module. Now, I have to admit that all these additions have exposed the architectural weaknesses of the core library; I need to refactor it quite a lot to achieve LDP parity.