
Monthly Archives: February 2009

Today I did a seven-question interview for Jason Salas on web hooks. Pretty standard stuff, but it allowed me to bring up some things I haven’t written about yet. His final question was about barriers going forward with the movement, which I also answered with a general “what’s coming” sort of response:

It seemed like the biggest hurdle originally was getting people to wrap their heads around this idea. I would always talk about it in the abstract and go on about all the implications of what was essentially one line of code. I think there are enough fairly well-known examples now that it’s easier for people to join the party. Even then, the general perception of what’s possible is going to be limited by the examples.

As with AJAX, you can’t just build a popular example of web hooks without it being a useful tool in its own right. I can build all the web hook prototypes I want, but it’s not until the Googles and Facebooks implement them in a useful way that people will really see the value. Until then, we get incremental boosts from smaller companies like Gnip, GitHub, and others. I’ve started working or talking with these companies to get them involved in a collective conversation around web hooks, so we can work out the issues standing in the way of adoption.

The issues people come up with are usually security and scalability related. As it turns out, some of these issues have been solved by these guys already doing it. So I’m trying to get more of them to share best practices and publicize their use of web hooks. This way people can start seeing the different ways they can be used. For example, the Facebook Platform, although pretty complicated and full of their own technology, is still at the core based on web hooks. They call out to a user-defined external web application and integrate that with their application. That’s quite a radically different use of web hooks compared to the way people think of them in relation to XMPP.

Moving forward, I think we’re going to see more libraries and tools that have solutions to scalability and security built-in. I’ve started one project called Hookah that I’m hoping to get released soon. It provides basic callback request queuing and management of callback URLs so you really can implement web hooks with a single line of code for each event. We’re also starting to see similar helper libraries for frameworks like Django and Rails.
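
The single-line ideal is easy to picture: the application hands an event name and payload to a dispatcher, which POSTs to every callback URL registered for that event. The sketch below is illustrative only; Hookah’s actual interface may differ, and the registry, event name, and URL here are made up:

```python
import json
import urllib.request

# Hypothetical registry mapping event names to callback URLs.
CALLBACKS = {
    "post.created": ["http://example.com/hooks/new-post"],
}

def fire(event, payload):
    """POST the JSON payload to every callback registered for the event."""
    data = json.dumps(payload).encode("utf-8")
    for url in CALLBACKS.get(event, []):
        req = urllib.request.Request(
            url, data=data, headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(req, timeout=5)
        except OSError:
            pass  # a real dispatcher like Hookah would queue and retry
```

With something like this sitting behind the scenes, each event in the application really is one line, e.g. `fire("post.created", {"id": 42})`.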

Eventually we’ll be seeing specs for doing specific things on top of web hooks. One of the first things on my list of standards to look into is the way in which you register and manage callbacks in a programmatic way. Many web hook providers use a web interface to manage your callback URLs. We’ll see some neat things happen when you can manage them via APIs so that tools can set callbacks with services on your behalf.
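
In the simplest case, programmatic callback management is just create/delete operations on a collection of (event, URL) pairs. A toy in-memory model of that idea (none of this is a published spec; the endpoint shapes in the comments are hypothetical):

```python
# Hypothetical API shape a provider might expose:
#   POST   /callbacks        -> register a URL for an event, returns an id
#   DELETE /callbacks/<id>   -> unregister it
import itertools

_ids = itertools.count(1)
_subscriptions = {}  # id -> (event, url)

def subscribe(event, url):
    """Register a callback URL for an event; return a subscription id."""
    sid = next(_ids)
    _subscriptions[sid] = (event, url)
    return sid

def unsubscribe(sid):
    """Remove a subscription; return whether it existed."""
    return _subscriptions.pop(sid, None) is not None

def callbacks_for(event):
    """All URLs currently subscribed to an event."""
    return [url for ev, url in _subscriptions.values() if ev == event]
```

Expose those operations over HTTP and a tool can subscribe to a service on your behalf without you ever touching a web interface.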

Anyway, one of the reasons I’m so attached to the idea of web hooks is that I see a lot of long-term potential. Especially when you integrate them into other visions of the future, like the Web of Things. When you combine the Programmable Web with the Web of Things, you get a world of Programmable Things.

That’s where I’d like to see this end up.

A few days ago at BarCamp Miami, software architect Ryan Teixeira gave a talk about web hooks loosely based on my Programmable World of Tomorrow talk. It would have been nice to see, but I guess the next best thing is to see the slides on SlideShare. Take a look:



Nice job, Ryan!

Okay, it’s not really a dispute. That was sarcasm in the title. Just to be clear, since there’s always been a tiny bit of useful friction between the idea of XMPP and web hooks, it’s important to remember they are not mutually exclusive. Anybody hyping a battle between the two is trying to create or is imagining a controversy that isn’t real. Many proponents of web hooks are XMPP proponents as well. In fact, the video some are pushing around to promote this supposed throw down is of two XMPP supporters (including djabberd author Brad Fitzpatrick) demoing a pubsub system based on web hook callbacks. And it takes place at an XMPP meetup, so of course there was going to be some proselytizing.

Nevertheless, Jabber/XMPP is a messaging protocol. Web hooks provide a model for functional extensibility, so they are a platform for many different things. A push-based pubsub messaging system is just one use. Even though I originally wrote about web hooks as a notification mechanism, my mind was nowhere near pubsub. I was more focused on the idea of web service integration and orchestration. With the commoditization of CGI-enabled web hosting, I was thinking about how the popularity of web programming combined with the easy invocation of HTTP requests could be used to make a more useful and functionally extensible web. A more programmable web.

With that said, there is nothing wrong with discussing what you can and can’t do with web hooks and XMPP, but there is not some angry fight between camps. Done. Over. Moving on.

As it turns out, the project demoed in that video, which is called pubsubhubbub, is something I had previously stumbled upon while browsing projects a friend of mine was involved in. There wasn’t much of a description, and I didn’t dive too deep into the code, so I bookmarked it to come back to later. After that demo, I heard (from several people) that it was some XMPP pubsub system, which totally confused me because… it’s not. It’s a neat, distributed pubsub implementation built on web hook callbacks, created by Brad Fitzpatrick and Brett Slatkin (a Google App Engine developer). Hopefully they’ll put up more documentation as it develops, but it’s just really neat to see some XMPP folks build an open pubsub system with web hooks. Cheers to them!

My involvement at NASA inadvertently got web hooks written about on O’Reilly Broadcast yesterday. Kurt Cagle did a nice write-up on his take on web hooks, and it’s possible there will be more coming from Cagle on the topic. Although the post at first seemed pretty framed around syndication and push, the fact that he says things like “server-side mashups” and sees web hooks as a means to “create orchestration of web services” shows he gets the greater significance of this simple mechanism.

I just wanted to cover a few things that were brought up by Cagle and a few others that have written about web hooks recently.

Replacing Syndication

It seems a lot of people see web hooks as an alternative to poll-based feeds and syndication. Although I’ve claimed before that “feeds are not the answer,” it was in context of the vision of pipes for the web. Feeds were not invented for pipelining. They were invented for simple content syndication, and I think they do a pretty good job at that use-case: answering the question, “Hey, what’s new from you?” That said, Cagle seems to be spot on about web hooks and syndication working together.

One of my original arguments for web hooks was that polling sucks. “Hey, what’s new from you?” becomes “Are we there yet? Are we there yet? Are we there yet?” The thing is, web hooks alone don’t let you ask, “What’s new from you?” Nor do they provide a persistent reference to data. Web hook payloads should be ephemeral.

What seems like an obvious solution in this use-case is to provide a feed and a web hook for notifications of updates to the feed. This way you have the feed, which is nice for people that like polling, and which also serves as a persistent resource on the web for that content stream. And then you have the optional web hook for getting notified of updates, potentially with the updated data, so if you don’t want to retrieve the feed, you don’t have to.
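
On the subscriber side, the feed-plus-hook combination might reduce to logic like this: use the pushed payload when the provider includes one, otherwise fetch the feed once. The feed URL and the convention that an empty body means “go pull the feed” are assumptions for illustration:

```python
import urllib.request

FEED_URL = "http://example.com/stream.atom"  # hypothetical feed location

def handle_notification(body, fetch=None):
    """Return the content to process for one hook notification.

    If the provider pushed the updated entries in the request body, use
    them directly; otherwise fall back to a single fetch of the feed.
    """
    if body:
        return body
    fetch = fetch or (lambda: urllib.request.urlopen(FEED_URL).read())
    return fetch()
```

Either way, no polling loop: the subscriber only ever touches the feed when the hook tells it something changed.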

There’s a slightly verbose standard spec (as many are) called GetPingd that shows a way you can do this, but I imagine there are simpler approaches. One thing GetPingd points out is that this is all very similar to the ping services for blogs to notify search engines of new content. The missing element of that system is the ability for anybody to subscribe to the notifications. That is part of the essence of web hooks, as Timothy Fitz recently tweeted:

Remember, HTTP callbacks are nothing new. It’s exposing them to the user that makes it a web hook, and that’s where the emergence is.

Anyway, I can understand asking the question, “Will web hooks replace feeds/syndication?” as a thought experiment in trying to understand this new paradigm. But I have to say, after thinking about this for a long time, they won’t. They might replace certain use-cases for feeds, but if feeds were broken enough that web hooks would replace them, it would have happened already.

Replacing XMPP

Now this is where we get into some interesting waters. A lot of people bring up good points on both sides. My stance is simply that web hooks are simpler and just as effective for the majority of use-cases, and therefore the obvious winner. There are fewer pieces, simpler APIs, and existing infrastructure, and it’s debatable whether XMPP is inherently any more performant.

You have to consider the use-cases, though. Part of the vision of web hooks is to have a standard HTTP event mechanism for the web. I just don’t see every web service throwing up an XMPP stack along with their HTTP stack. The two can and will work together when necessary, but as Cagle notes, “web hooks in general may be superior for orchestration.” Remember, web hooks are about more than message passing.

Standards

Cagle briefly touched on the standards issue. I’m sure that having a nice standards document would make for great adoption propaganda, and I know quite well the significance of agreed-upon conventions in technology. However, I’m not in a hurry to over-engineer anything, and I’m not going to assume we know so much about the implications of this mechanism that we can encode them in a document that will either be ignored or adopted by everybody, making it harder to adapt to change. The longer we can put off standardizing, the better.

In the article, Cagle compares it to AJAX, in that the community isn’t very standards-oriented. I’m not exactly sure how the AJAX community would benefit from standards. I’m quite happy that AJAX wasn’t limited by a standard to only use XML. There is nothing wrong with options. That’s kind of the whole point of technology: to provide new avenues, options, and choices for empowerment. Tools will always be used however the tool user finds useful, which is not always how the toolmaker intended.

I would much rather provide examples and rigorously defined patterns of usage and implementation than try to define a standard. When a globally accepted convention is necessary, then we can work one out (with a useful, ideally proven, implementation), but it will probably be about some aspect of web hooks, not the model as a whole. I think the aspects most ripe for standardization are a machine-friendly way to announce hooks and a mechanism for registering callbacks (aka subscribing). But the lack of a standard is not preventing web hooks from being useful; otherwise nobody would be using them already.

Email is one of our oldest and most used Internet systems. Frankly, though, I hate its dated implementation. Yet I understand how tested and universal it is, and as such, think it would be a good idea to allow it as a means of interaction with web services. I just don’t want to have to touch its crappy internals. This led me to develop the now defunct Mailhook (succeeded by smtp2web and Astrotrain) as a way to leverage web hooks to help ease the pain of accepting email in our web applications. It was a simple SMTP server that would pass parsed email to a URL you specify as if it were a form post.
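
The core of such a gateway is just flattening a MIME message into form fields and POSTing them to your URL. A rough sketch of that step (the field names are my guesses for illustration, not Mailhook’s documented interface):

```python
import email
import urllib.request
from urllib.parse import urlencode

def email_to_params(raw_message):
    """Flatten a raw RFC 2822 message into form-style parameters,
    roughly what a Mailhook-style gateway might POST to your URL."""
    msg = email.message_from_string(raw_message)
    body = b"" if msg.is_multipart() else msg.get_payload(decode=True)
    return {
        "from": msg.get("From", ""),
        "to": msg.get("To", ""),
        "subject": msg.get("Subject", ""),
        "body": (body or b"").decode("utf-8", "replace"),
    }

def post_to_hook(raw_message, callback_url):
    """Deliver the parsed message to the callback URL as a form post."""
    data = urlencode(email_to_params(raw_message)).encode("utf-8")
    urllib.request.urlopen(callback_url, data=data)
```

Your web app then handles incoming mail with the same request-handling code it already uses for form submissions.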

What Mailhook did not do was provide programmatic access to existing mailboxes. That’s okay, though. We’ve got IMAP with libraries in most languages, so that’s pretty taken care of, right?


In my ideal world, you could access mailboxes via web services. The advantage of this would be that you didn’t have to depend on local IMAP library support, nor would you have to deal with its slightly arcane semantics.

So I pondered this for a while and would occasionally talk to people about it. One day, while working at CommerceNet, I mentioned it to Lisa Dusseault, an IETF Application Area Director and standards architect. She expressed interest in the idea and we ended up scheduling a few hack dates to simultaneously work on a standard spec and implementation for accessing IMAP via HTTP.

The result was this IETF Internet-Draft and a rough implementation of an IMAP to HTTP gateway using this proposed interface. Lisa summarized this project on her blog. It was a very simple, RESTful approach using Atom for mailbox listings.

If this sort of interface were adopted by major email providers, not only would email finally have a nice API, but it would become better integrated with our web ecosystem. For example, email messages would have URLs, and mailboxes would have Atom feeds. Sure, Gmail might have that sort of thing, but as a standard it has the potential to be an implementation/provider-agnostic interface you can generally expect to have available.

Here’s a result of the prototype I built with her. My inbox renders just fine in a feed reader, much as it would in an email client.


One of the nice things about Mailhook that none of the successors ended up doing was parsing the email message for you into a user-friendly data structure of standard HTTP parameters. This way you wouldn’t have to parse a MIME document. Because smtp2web was designed for App Engine and Python, its authors expected you could parse the message yourself quite easily. My approach let you skip this and not become dependent on a local library. I also translated the semantics of file attachments to file uploads, which let you take advantage of the existing support there.

The last bit of thinking I did on this HTTPMail project was along those lines of convenience parsing. Perhaps as an optional interface, you could get URLs not only to different message representations, but to the insides of messages, including MIME parts and attachments. For example, separate URLs could:
- get the full rfc822 representation of the message
- get the text/plain MIME part
- get the text/html MIME part
- get the attachment
- possibly get a JSON representation of all the headers

Anyway, I just want to be clear that the point of this is that if you have this kind of interface… or even a gateway service like the one I implemented, it makes reading mailboxes from web scripts that much easier and more accessible. And as Timothy Fitz recently suggested, this would make email a team player in our web ecosystem.

Timothy Fitz recently wrote on What webhooks are and why you should care. It’s a very clear and straightforward description of just that. It helps to have people other than me talking about web hooks. Granted, Timothy is a good friend of mine, but it did spark a good discussion in his comments and on reddit.

Also, Jon Udell brought up web hooks in a recent post regarding Assembla’s usage of the model. It shouldn’t be a surprise that he considers the adoption of web hooks a game changer. I still quote him in my presentations for envisioning in 1997 “a new programming paradigm that takes the whole Internet as its platform.” It’s an idea I’m quite fond of that I believe requires more than web APIs.

While I’m at it, Joe Gregorio posted his initial reaction to web hooks last week. His major critique was the lack of rigorous text around the model. Certainly I write a decent amount about them, but I avoid specifics. My focus right now is sharing the big picture, getting people to implement them on their own, and documenting the different discoveries along the way. I personally don’t feel a need to standardize very much yet, but perhaps better specifics on “how to provide web hooks” would be a good thing to get documented.

I’m glad a wider discussion is starting to emerge. I’ve stumbled across several other blog posts on web hooks recently. Some of the issues people bring up are authentication, scalability, and reliability. I plan to cover these issues in upcoming posts since they’re pretty straightforward, but you’re all welcome to participate in this discussion. Feel free to join our discussion list, write a blog post, leave a comment, tell your friends, or best of all: try implementing the model yourselves.

Speaking of the recent discussion, it was pointed out that Uche Ogbuji started publicly thinking about this around the same time I did in 2006. He called them web triggers, inspired by the database world. I chose web hooks, coming from the programming world. There are a lot of words that describe different aspects of the same pattern: events, signals, callbacks, hooks, triggers, handlers, listeners… I stuck with hooks not just to keep the name simple to pronounce and differentiable from common code-level event terminology, but also because I liked the idea of “programmability” it implies.

Hopefully web hooks are actually approaching critical mass. Just to be sure, a couple weeks ago I submitted a proposal to OSCON to speak about them. Here’s the abstract I sent.

Web hooks are going to make the web programmable. Three years ago, I stumbled upon this simple architectural pattern in web applications and was struck with a vision of the real programmable web. I’ve since realized today’s programmable web is hardly programmable. It is programmatic. Web APIs give you the power to programmatically use web applications, but they do not let you program them. That is, extend them, customize them, or fully integrate them with each other.

Mashups, the poster child of the programmatic web, are useful, but they also show what happens when you can’t directly integrate web services: you get new ones. Mashups represent the aggregation of services, not the integration of services. However, this is not about some grand proposal for a standardized way for all web services to integrate with each other. This is really about something simpler, and as a result has even richer implications.

Web hooks are about applying the old, simple concept of the callback to web applications. This simple mechanism is changing everything, perhaps even more than did the web API. By allowing users to hook into the logic and events of your systems, you go beyond user-generated content into user-generated functionality. Imagine users extending your web application with new features that can be shared with other users. Imagine discovering your web application suddenly fully integrated with complementary services. If there was a service equivalent of open source, this is pretty close.

After three years of letting just the mechanics of web hooks spread and develop, a number of solid case studies from tiny start-ups to giants like Google and Amazon have popped up, demonstrating the implementation. Sometimes it happened from my influence, sometimes it was just the practical way to solve a problem. Some people latch on to the notification use-case to eliminate polling. Only a few seem to see how the same mechanism can be used to create plugin architectures and platforms. But it also realizes a vision of pipes for the web, allowing you to conceptualize web applications as components that can be strung together to create something more than the sum of the parts.

The point of this talk is to spark public conversation about web hooks. To get us thinking about them. Possibly implementing them. The model has been cooking for a while, and there are lots of examples, demos and ideas to share after thinking and talking about it with other developers for the past three years. More recently, there’s been an influx of activity around the idea, so it’s getting hot. I want to show people just how far this simple, yet novel idea can take us…

When the web was started, it was about these hyperlinked HTML documents… just pages of information. The web as a whole was collectively just about content. Then, thanks to a bunch of hacks that led to CGI, webmasters had an optional tool to augment static content with dynamic content. Most immediately, this was used for search, a feature that the web did not come with built-in.

Skip ahead about ten years and this concept of running arbitrary code on web requests with CGI was used and understood enough to finally fix this mostly centralized, one-way flow of information. Finally, anybody could easily publish content on the web through blogs, wikis and comments. This caused such a change in the use of the web that we decided to call it Web 2.0 (and haven’t been able to avoid going meta since–sorry). This two-way flow of information turned it into a collective conversation that the marketing folks today call “social media.”

As a byproduct of using CGI more and more, the web also became generally more dynamic. For example, the commercialization of the web happened once we figured out how to securely use this CGI business to do payment processing. The first killer “application” on the web was the shopping cart. The web slowly started to provide functionality along with content. Today in the industry we talk less about web pages (providing information) and more about web applications (providing functionality).

Our web today is not just a social media platform, but an application platform. And applications do things. Applications represent the augmentation and automation power of computing. I know we’re social beings and communication is our primary means of interaction, but forgive me if I think the empowerment of computational utility is cooler than social media. I’d rather use computers to solve problems holding humanity back from self-actualization than to merely add more channels to the echo chamber. Ahem.

I think social media is important, but it’s stealing the spotlight from the functional potential of the web. Beyond sharing-information-with-people-we-know. We’re still caught up in content, yet it was functionality (e-commerce or “the shopping cart”) that allowed the web to be commercialized. It seems like we underappreciate this aspect of the web.

So what’s the point? This whole time I’ve been trying to set up for an assertion about the future of the web. Here it is:

The web was originally about content (web pages). Then it got functionality (CGI, early web apps). It used this functionality to fully democratize its content (blogs, wikis, etc). Next it will democratize functionality. We’ll have user-contributed functionality just as we had user-contributed content.

What does it mean to have user-contributed functionality? Kind of what it sounds like. Just like you can “contribute” a photo to Flickr, you’d be able to “contribute” a feature (new functionality) to Gmail.

It’s kind of like open source, although a bit more consumer friendly. Like open source, if you want a program to do something different or work with another program, you can make it do that yourself. You can even share a patch so others can get that same functionality. The difference with web applications is that most will never give you their source. And if they do, it would be a nightmare to try and integrate everybody’s patches with the latest deployment. Open source just doesn’t quite translate to the world of web applications.

So if you can’t have access to the source, how can you contribute functionality? Gee, that ad-blocker you have in your browser is pretty slick. Did they need the browser source to make it? No? What’s that? Yes, a plugin system! How do plugin systems work? Right, they provide hooks for external code to run.

I think you see what I’m getting at. Web hooks open web applications up to functional extension and personalization. The plugin metaphor also holds about the ease of use. Not everybody can write a plugin, but anybody can install a plugin. User-contributed functionality will be just as easy to install (if at all), and even easier to write than most plugins. Plus, not only can it be shared between users, but potentially across web applications because the web is a common protocol.
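
Mechanically, a plugin-over-HTTP can be as simple as the application calling out to each user-registered URL at some extension point and splicing the responses back in. The event payload and response handling below are invented for illustration:

```python
import json
import urllib.request

def render_with_plugins(page_html, plugin_urls):
    """POST the page's data to each user-registered plugin URL and
    append whatever extra markup the plugins return. A broken plugin
    must never break the page, so failures are simply skipped."""
    payload = json.dumps({"event": "page.render", "page": page_html}).encode()
    extras = []
    for url in plugin_urls:
        req = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"})
        try:
            extras.append(urllib.request.urlopen(req, timeout=5).read().decode())
        except OSError:
            continue
    return page_html + "".join(extras)
```

The plugin itself is just a web script at a URL, which is exactly why anybody with cheap web hosting can write one.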

So is user-contributed functionality just plugins for web applications? Yes! I’ve been saying web hooks will enable push, pipes, and plugins for a while… but who knows what I mean by that. It’s taken several years just to get people to understand what web hooks are; hopefully it won’t take as long to convey what role they can play. User-contributed functionality seemed like a pretty good way to convey their power to customize and extend.

Anyway, there you have it. It’s already starting anyway. What are Facebook Applications but plugins over HTTP, submitted by users? How long before we see Gmail Labs go from just features by internal teams to features by users?

So, I invite you to imagine a world where what you can do with applications on the web is not limited by those that made them.

I’ve got a story for you. And it ends with a fairly unique take on the future of the web. I’m not about to call it Web 3.0 because I detest this long running meme of “let’s define an era before it happens,” but it is the future. And it’s two parts because it’s late and I’m tired.

So one of the key characteristics of Web 2.0 was this idea of “two-way media” or “conversational media” where users could easily publish their own content and comment on the content of others. This gave rise to the phrase “user-contributed content,” where we let the end-user create the content. It took over ten years for this idea to catch on and become a reality. It’s so great, and we’d never go back.

Some of us are privy to the fact that this idea of a two-way, read-write web was the original vision for the web. Tim Berners-Lee’s first web browser included an editor, but publishing didn’t work because the write verbs in HTTP weren’t implemented in early web servers…

Wait, come to think of it, even to this day web servers don’t implement write methods.

Does Apache itself handle PUT or POST requests as intended? No, Apache delegates the proper handling of these to CGI scripts or a module that essentially runs CGI scripts more efficiently. Most web servers still do not implement the write methods of HTTP. They don’t because before we got around to it, something happened.

In fact, I’m not sure many of us realize this fundamental change (other than the original developers of the web) because we’ve all just accepted the way things are as the way things are. Something big happened in 1993 that changed the entire conceptual model of the web.


Web of linked HTML documents

In the beginning, the web was conceptually about serving up these HTML files that would link to each other. If you recall, the path of the URL was just a subset of the server’s filesystem that was mounted to be served by CERN HTTPd or whichever web server. Apparently the killer feature of the web was rendering HTML (so you had inline hyperlinks, among other things), which you had to get somehow, and so GET was all that mattered from HTTP. If you wanted to put something online, you didn’t need a browser, screw HTTP, you just had to FTP a file to the server. Easy enough, right? It was, back then.

Nobody cared to do anything interesting with PUT or POST. However, they were interested in using this fancy new web protocol to access other protocols. So they started hacking the web servers to run scripts that would query WAIS or some other obscure protocol, usually for search because that was a big problem then.

Eventually Rob McCool drafted a spec for CGP, or Common Gateway Protocol, that would provide a standard way for these little scripts to be run by web servers. It was eventually renamed CGI (since it’s more of an interface than a protocol), implemented in Apache, and that was that. Now you could run scripts on web requests! Most people used CGI for search, which makes a lot of sense, but the rest of the content was still directly served up HTML files because, well, that’s how the web works, yeah?
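
The contract CGI standardized really is tiny: the server puts the request into environment variables, runs your program, and relays whatever you print (headers, a blank line, then the body). A minimal search-style script in the spirit of those early hacks:

```python
#!/usr/bin/env python3
# Minimal CGI: read the query string from the environment and print an
# HTTP-style response (headers, blank line, body) to stdout.
import os
from urllib.parse import parse_qs

def respond(query_string):
    params = parse_qs(query_string)
    term = params.get("q", ["nothing"])[0]
    return "Content-Type: text/plain\r\n\r\nYou searched for: " + term

if __name__ == "__main__":
    print(respond(os.environ.get("QUERY_STRING", "")), end="")
```

That small surface area is the whole interface; everything from search boxes to shopping carts was built on exactly this.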

Slowly, more and more people started doing clever things with CGI. A few people decided to respect the HTTP spec and utilize the proper verbs for write actions, but this CGI thing was kind of a hack (I don’t think Tim Berners-Lee ever seriously intended it), and so people abused the semantics, doing destructive things with GETs and ignoring DELETE and PUT. Well, even today people still do this, just less so… anyway…

Eventually our websites got complicated enough and ambitious enough with CGI that almost all requests would go through CGI instead of serving up static HTML documents. And instead of silly filename paths to files that aren’t really on the server, we could make up useful, descriptive paths with the date and title in them.

Today our web is not about serving up files on the server, but generating files to serve. Our web of pages is now generated by “higher-order” web applications that are no longer simple scripts, but complex software.


Apps are "higher-order" nodes

If you ask me, those little hacks got a little out of hand, but I guess it’s for the best. Otherwise, we wouldn’t have Gmail or Amazon or Wikipedia… we wouldn’t have web applications that did useful things, we’d just have a bunch of static HTML documents linked to each other managed by webmasters and central authorities. None of this democratized media business.

Thanks to CGI we got the read-write web, but we also made the web way more useful than it was intended. Suddenly browsing to a URL would run some code. And code… well, code can do anything.

Next we’ll build on this idea and see how web hooks can change the game again!