MistServer – Optimizing HTTP delivery with caching

In the previous post, we looked at the new features of MistServer (version 1.1).
Today, I explore an idea suggested by my friend Nicolas Weil. We talked about content caching for an architecture based on MistServer, especially for the HTTP-based formats. We thought about Varnish, an HTTP accelerator. The idea is to keep in cache the fragments generated by MistServer. This article will not cover the RTMP or TS outputs, since those protocols are not HTTP-based.

To run this test, I installed both MistServer and Varnish on the same server. Here is the architecture:
[Figure: varnish-mist – Varnish on port 80 in front of MistServer]

As you can see, MistServer's HTTP-based protocols listen on port 8081. The operating system is Ubuntu, and I followed this tutorial to install Varnish. Then I configured Varnish and told it which backend to use. Open /etc/varnish/default.vcl and update these lines:

backend default {
    .host = "127.0.0.1";
    .port = "8081";
}
sub vcl_recv {
    set req.grace = 30s;
    return (lookup);
}
sub vcl_pipe {
    return (pipe);
}
sub vcl_pass {
    return (pass);
}
sub vcl_init {
    return (ok);
}
sub vcl_fini {
    return (ok);
}
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "cached";
    } else {
        set resp.http.X-Cache = "uncached";
    }
    return (deliver);
}

To help with debugging, I add an X-Cache header in vcl_deliver with the values cached/uncached. HTTP objects stay in the cache for 30 seconds; you can change this value very easily.

You can optimize the configuration further, but that is not the purpose of this article, and you can find plenty of information on the web.
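As one example of such tuning, here is a sketch in Varnish 3 syntax (matching the configuration above) that pins the object lifetime explicitly in vcl_fetch, so caching does not depend on whatever Cache-Control header the backend sends. The 30-second value is an assumption matching the cache lifetime used in this article:

```vcl
sub vcl_fetch {
    # Force a fixed lifetime on every object, regardless of the
    # backend's own Cache-Control header.
    set beresp.ttl = 30s;
    return (deliver);
}
```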

To check that the configuration works, fetch some content with the curl command line. Be careful: we are not calling MistServer directly but the Varnish cache, so we point to port 80, not 8081.

$ curl -I http://ip_address/dynamic/mystream/manifest.f4m

Check the HTTP headers:

HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Type: text/xml
X-UID: 0f4405cc8b4b43312a8f35d7ba0b043e_sintel_dynamic
Content-Length: 815
Accept-Ranges: bytes
Date: Thu, 11 Apr 2013 14:30:19 GMT
Age: 0
Connection: keep-alive
X-Cache: uncached

You can see the uncached value on the last line.
If you make a new call before the 30 seconds of cache expire, the content is served from the Varnish cache and MistServer is not called:

HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Type: text/xml
X-UID: 0f4405cc8b4b43312a8f35d7ba0b043e_sintel_dynamic
Content-Length: 815
Accept-Ranges: bytes
Date: Thu, 11 Apr 2013 14:30:21 GMT
Age: 2
Connection: keep-alive
X-Cache: cached

This is the magic of HTTP caching! 😉 MistServer is not touched when the content is served from the Varnish cache.

You can also use the varnishlog tool to see whether Varnish calls the backend or not.
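Besides varnishlog, a quick way to watch hits and misses is to pull the X-Cache header we set above out of curl's output. A small sketch (ip_address is a placeholder, as in the curl example above):

```shell
# Extract the X-Cache value from HTTP response headers read on stdin.
# Works on the header dump produced by `curl -sI`.
extract_x_cache() {
  awk -F': ' 'tolower($1) == "x-cache" { sub(/\r$/, "", $2); print $2 }'
}

# Usage against the cache (ip_address is a placeholder):
#   curl -sI http://ip_address/dynamic/mystream/manifest.f4m | extract_x_cache
```

On the first call this prints uncached; repeat the call within the cache lifetime and it prints cached.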

If you want to use this kind of configuration in production, you can apply this kind of architecture: multiple Varnish caches (called edges), on separate physical servers, in front of MistServer (called the origin).

[Figure: varnish_mist_multiple – multiple Varnish edges in front of the MistServer origin]

You can divide the load on the origin server by a large factor, and Varnish was created for exactly this kind of usage.
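On each edge, the VCL stays the same as above except for the backend, which points at the origin's address instead of localhost. A sketch (192.0.2.10 is a placeholder for your origin's IP):

```vcl
backend origin {
    .host = "192.0.2.10";   # hypothetical address of the MistServer origin
    .port = "8081";         # MistServer's HTTP output port, as above
}
```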

Finally, you can apply this kind of configuration to both VoD and live. For VoD, the cache lifetime can be longer than 10 minutes, since the content never changes; for live, I advise a short cache lifetime, because manifests and fragments are updated continuously.
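As a sketch of this (Varnish 3 syntax; the /live/ URL prefix is an assumption, adapt it to your own stream naming), you can give each kind of content its own lifetime in vcl_fetch:

```vcl
sub vcl_fetch {
    if (req.url ~ "^/live/") {
        # Live manifests and fragments change constantly: keep the TTL short.
        set beresp.ttl = 5s;
    } else {
        # VoD content never changes: cache it for 10 minutes or more.
        set beresp.ttl = 600s;
    }
    return (deliver);
}
```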

Have fun!
