
Last month I talked about webhooks at Glue, a new conference on the “glue” of the web. One of the other speakers was Josh Elman of the Facebook Platform team. I ended up on the flight back with him, so we talked about webhooks. He seemed excited about them as a fan of the callback/hooking pattern in classical programming. In fact, he mentioned they were already experimenting with them for notifications within the Facebook Platform, but he brought up the issue of batching. It was something I hadn’t thought of before, but it’s important for large-scale implementations that are likely to be posting a lot of events to a target endpoint.

Later, when I read about Google Wave and how they use webhooks in their API for creating bots, I noticed they mention that they may batch events. I’ve yet to crack open my Google Wave developer account and play with their implementation, but I’ve since realized a very simple convention for batching: JSON lists as the outer structure.

This works because an event object is ALWAYS going to be a key/value object. So receiving code just has to check whether the payload JSON is an array or an object. If it’s an array, handle each object inside as a separate event. If it’s an object, handle it as you normally would.
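For example, a receiver might dispatch on that check like this (a minimal sketch in Python; handle_event is a hypothetical handler, not part of any spec):

    import json

    def handle_payload(body):
        payload = json.loads(body)
        if isinstance(payload, list):
            # Batched: the outer structure is a JSON array of event objects.
            for event in payload:
                handle_event(event)
        else:
            # Single event: a plain JSON object, handled as usual.
            handle_event(payload)

    def handle_event(event):
        # Hypothetical handler; a real receiver would act on the event here.
        print(event)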

This doesn’t work for POST variables, since form encoding makes arrays difficult in general, and particularly as the outer data structure. It just wasn’t designed to do that. So this convention requires at least JSON. It works in XML as well, but because there’s more variety in XML payloads, it’s probably harder to settle on a convention this simple.

Let me know what you think. Or if any of you already know how Google Wave batches events in their Robot API.

8 Comments

  1. Ruby frameworks fake arrays in POST vars with field names like event[0][name]. Kind of hackish, so I tend to prefer JSON as you suggested.

  2. Right, they put everything under a single value, which can then be an array (which, by the way, seems to be handled somewhat inconsistently across CGI implementations).
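    For illustration, here’s roughly what the two workarounds look like on the wire (hypothetical field names, sketched in Python):

        from urllib.parse import urlencode
        import json

        # Rails-style bracket naming, reassembled into an array by the framework:
        urlencode({"event[0][name]": "push", "event[1][name]": "pull"})
        # -> 'event%5B0%5D%5Bname%5D=push&event%5B1%5D%5Bname%5D=pull'

        # Everything under a single value whose content is a JSON array:
        urlencode({"events": json.dumps([{"name": "push"}, {"name": "pull"}])})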

  3. A question I have about webhooks is how will large sites handle having to send out potentially hundreds of thousands or millions of HTTP requests to everyone who requested a hook?
    Like if CNN had a hook people could sign up for to get the latest stories, they would probably get a lot of subscribers. Having to make a million HTTP requests for every new story seems like a heavy load for a content publisher to have to handle.
    Especially assuming that each request waits for an ack. What if a lot of the endpoints are slow or down? It just doesn’t seem to scale very well.
    What are some of the potential solutions to sending out a million HTTP requests?

  4. Sure. One solution is to delegate it to a service that specializes in queuing up those requests. Like blog ping services. But perhaps something like Hookah will grow into a highly scalable “enterprise” event dispatcher.

    Scaling is going to be different for different topologies though. If you have a large number of events, but not necessarily millions of consumers, perhaps something like the Twitter Stream API is better suited.

    But I think it’s not terribly likely there will be a service that will have a million webhook consumers. That would be nice though.

    It really depends on what you’re trying to accomplish. I think in the example of CNN, a simple RSS feed is fine. It will be timely enough and scales much more easily.

    Webhooks are not a silver bullet by any means.

    I guess I’m curious why you’re interested in scaling webhooks at that level? This question keeps making me think of premature optimization.
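    For what it’s worth, here’s a minimal sketch of the queue-and-worker approach (hypothetical code, not Hookah’s actual design): the publisher enqueues one delivery job per subscriber and returns immediately, and a pool of workers drains the queue, so slow or dead endpoints only stall a worker rather than the publisher.

        import json
        import queue
        import threading
        import urllib.request

        jobs = queue.Queue()

        def enqueue_event(event, subscriber_urls):
            # One delivery job per subscriber; the publisher doesn't block on delivery.
            for url in subscriber_urls:
                jobs.put((url, json.dumps(event).encode()))

        def worker():
            while True:
                url, body = jobs.get()
                req = urllib.request.Request(
                    url, data=body, headers={"Content-Type": "application/json"})
                try:
                    urllib.request.urlopen(req, timeout=5)
                except Exception:
                    pass  # A real dispatcher would retry with backoff here.
                jobs.task_done()

        for _ in range(10):  # size the worker pool to taste
            threading.Thread(target=worker, daemon=True).start()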

  5. No reason really. I’ve just been following its growth, and that has been the one major problem I see with it.
    CNN and Twitter were actually the kinds of services I was thinking would have a scaling problem with webhooks.
    But I like how you explained that it isn’t necessarily the proper answer for those types of services. There are so many services for which it is a superior answer, however, that it should definitely have its place in design discussions.

    It could still be a problem for smaller sites that only have thousands of requests to make, though. Most small services don’t want to use an “enterprise” solution because of costs, so they will have to solve the scaling problems themselves.
    For example, lots of people have great services running on shared hosting plans. If they start having to send thousands of requests, they could run into problems pretty quickly, maybe forcing them into higher hosting costs a little sooner than they would normally need to. (I don’t think shared hosting providers would look too kindly on lots of requests going out all day.)

    P.S. I just noticed the Google Groups link at the top. That’s probably a better place for my curiosities. ;)

    • Hi Oliver

      At the risk of pointing out the obvious, this sounds like the classic ‘does it scale’ question being applied here as it’s applied to everything. The point with webhooks is that the model is inverted: whereas the polling ‘million-to-single endpoint’ problem is hard to scale, with webhooks/push the ‘single-to-million endpoints’ problem is at least addressable (e.g. with queues).

      As Jeff says, webhooks are not a silver bullet, but in flipping the comms model they give us a weapon to resist the polling onslaught (groan).

      dj

  6. There is a draft spec for batched HTTP requests (http://www.snellspace.com/wp/?p=991)

    Regards,
    tamberg

  7. Yes, multipart messages seem like a great way to batch requests. However, they’re a little cumbersome to deal with on the receiving end and aren’t terribly well supported by many libraries.
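    To illustrate the cumbersome part, here’s a rough sketch of unpacking a multipart batch on the receiving end with Python’s standard email parser (assuming each part carries one JSON event; not necessarily the exact format from the spec linked above):

        import json
        from email import message_from_bytes

        def parse_batch(content_type, body):
            # The email parser wants the Content-Type header prepended to the body.
            raw = b"Content-Type: " + content_type.encode() + b"\r\n\r\n" + body
            message = message_from_bytes(raw)
            events = []
            for part in message.walk():
                if part.get_content_type() == "application/json":
                    events.append(json.loads(part.get_payload(decode=True)))
            return events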

