Recently I’ve had my responsive thinking cap on and have been experimenting beyond the CSS sheet. By shifting my focus onto performance, I’ve been able to get a feel for how improving it can transform the experience.
By definition, when something responds, it reacts quickly and positively. Performance is big business: a mere 100ms of extra page load time can cost companies millions. It came as a shock to me when Tim Kadlec measured a highly regarded responsive site taking a whopping 93 seconds to load over a 3G connection, made up of 105 requests weighing in at 5,942kB.
It’s this kind of thing that is the polar opposite of responsive.
Flexible images, percentage widths and media queries are all key ingredients of a responsive site, but it shouldn’t stop there. Performance needs to be embraced as a guiding principle rather than an afterthought. Snappy websites create positive experiences, and that’s good for business.
During the build process of this site, I’ve jumped between the front and back ends, with the aim of shaving as many milliseconds and kilobytes off the page load as possible. Although this website is small and lightweight, the principles and best practices are valuable and can be applied to something larger in scale.

Initial payload

The most important performance consideration for me was the initial kB payload. When a page loads, the bare essentials should be delivered, and the rest progressively enhanced. For example, my base is made up of:
  1. Basic CSS file (120 lines at 4.25kB minified)
  2. Basic JavaScript file (15.75kB consisting of Modernizr, Turbolinks, Typekit async and jump navigation)
  3. Low-res or compressive images
Beyond that, my web fonts are loaded asynchronously, and demo page assets kept separate.
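As a rough sketch of that last point, demo assets could be requested only on pages that actually need them, keeping them out of the initial payload everywhere else (the .demo selector and demo.js path here are hypothetical placeholders, not my actual setup):
// Hypothetical sketch: only fetch demo assets on pages that contain a demo
if (document.querySelector(".demo")) {
  // Modernizr.load (yepnope) fetches the script asynchronously
  Modernizr.load("/js/demo.js");
}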

Async the web fonts

I’ve used Typekit’s async snippet to load web fonts conditionally, based on a Modernizr.load (aka yepnope) @font-face test. Put simply, I’m only loading web fonts in browsers that can actually display them.
By loading asynchronously, initial overhead is cut down and potential JavaScript blocking avoided.
Here’s the code snippet:
Modernizr.load([
  {
    // @font-face feature test via Modernizr
    test: Modernizr.fontface,
    complete: function () {
      var TypekitConfig = {
        kitId: "xxxxxxx",
        scriptTimeout: 3000
      };
      (function () {
        // Safety timer, cleared as soon as the kit script finishes loading
        var t = setTimeout(function () {}, TypekitConfig.scriptTimeout);
        // Inject the Typekit script asynchronously, ahead of the first <script> on the page
        var tk = document.createElement("script");
        tk.src = "//use.typekit.com/" + TypekitConfig.kitId + ".js";
        tk.onload = tk.onreadystatechange = function () {
          var rs = this.readyState;
          if (rs && rs != "complete" && rs != "loaded") return;
          clearTimeout(t);
          try {
            Typekit.load(TypekitConfig);
          } catch (e) {}
        };
        var s = document.getElementsByTagName("script")[0];
        s.parentNode.insertBefore(tk, s);
      })();
    }
  }
]);
The downside to this is that the fonts are only downloaded once the main JavaScript file has been parsed, which causes a ~500ms delay (on desktop).

‘One page’ with pushState

I’m using Turbolinks (update 13th Dec 2016: I’m not actually any more) on this site, which takes advantage of the HTML5 history API’s pushState and replaceState methods. They allow you to replace the content of the <body> whilst keeping all the <head> assets intact. This creates the illusion of a page load by changing the URL and title dynamically, whilst pushing the new page onto the browser history stack. I like to think of it as legitimate AJAX.
As a result of using pushState, page loads are noticeably faster; I’ve measured most of mine coming in at between 100 and 150ms. Battery life is also conserved by avoiding unnecessary HTTP requests, and the flash of unstyled text is eliminated.
Keeping the current page instance alive through pushState is good, but does have its caveats. A lot more testing is needed as pesky edge cases do emerge.
The Red Bull Music Academy Radio and SoundCloud are two of my favourite examples of this in the wild.
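To make the idea concrete, here’s a minimal sketch of pushState navigation. It’s a simplification, not Turbolinks’ actual source, and caching, error handling and script evaluation are all omitted:
// Fetch a page over XHR and swap its <title> and <body> into the current document
function load(url) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url);
  xhr.onload = function () {
    var doc = document.implementation.createHTMLDocument("");
    doc.documentElement.innerHTML = xhr.responseText;
    document.title = doc.title;
    document.body.innerHTML = doc.body.innerHTML;
  };
  xhr.send();
}

// Push the new URL onto the history stack, then render it
function visit(url) {
  history.pushState({ url: url }, "", url);
  load(url);
}

// Re-render the correct page when the user navigates back or forward
window.addEventListener("popstate", function (event) {
  if (event.state) load(event.state.url);
});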

Using Sass for legacy IE

I used to use Respond.js as an IE media query polyfill. The trouble is that it has a JavaScript dependency and XHRs your style sheets, and the performance hit is undesirable. However, using Sass @includes, I’m able to compile a media-queried version and a non-media-queried version of the same stylesheet. These are referenced in IE conditional comments:
<!--[if (gt IE 8) | (IEMobile)]><!-->
<link rel="stylesheet" href="style.css" />
<!--<![endif]-->

<!--[if (lt IE 9) & (!IEMobile)]>
  <link rel="stylesheet" href="ie.css" />
<![endif]-->
Nicolas Gallagher deserves the credit for popularising this method in his excellent article.
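As a rough sketch of the Sass side, along the lines of Nicolas’s approach (the respond-min mixin name and $old-ie flag are my own placeholders, not his exact code), the same partials can be compiled twice, once with media queries and once without:
// _mixins.scss (hypothetical): wrap rules in a media query by default,
// or flatten them for the legacy IE build
$old-ie: false !default;

@mixin respond-min($width) {
  @if $old-ie {
    // Legacy IE: output the rules with no media query around them
    @content;
  } @else {
    @media all and (min-width: $width) {
      @content;
    }
  }
}

// style.scss compiles with $old-ie left as false; ie.scss sets
// $old-ie: true before importing the same partials
.masthead {
  @include respond-min(45em) {
    width: 50%;
  }
}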

Vanilla JavaScript

jQuery is an absolute godsend, but it comes at a price: 32kB and a bundle of code to parse. I found that everything I wanted to achieve could be done in vanilla JavaScript, and doing so eliminated an unnecessary burden on less capable devices.
For any larger scale site or application, I’d probably use a combination of Jeremy Keith’s ‘Conditional CSS’ method and yepnope to load jQuery based on screen width or features:

The CSS

@media all and (min-width: 45em) {
  body:after {
    content: "widescreen";
    display: none;
  }
}

The JavaScript

var size = window
  .getComputedStyle(document.body, ":after")
  .getPropertyValue("content");

// Some browsers include the surrounding quotes in the returned value,
// so check with indexOf rather than a strict equality
if (size.indexOf("widescreen") !== -1) {
  yepnope([
    {
      load: "jquery.js",
      complete: function () {
        // jQuery-dependent code can safely run from here
      }
    }
  ]);
}
RequireJS could also be bundled into the file for including modules and increasing speed, delivering only what is needed, when it is needed.
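As a rough sketch of that idea, the same width test could gate a require() call instead of yepnope (the carousel module here is purely hypothetical):
// Load jQuery and a widescreen-only module with RequireJS, but only
// when the layout is wide enough to need them
if (size.indexOf("widescreen") !== -1) {
  require(["jquery", "carousel"], function ($, carousel) {
    carousel.init();
  });
}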

Caching

It’s important to let your website’s underlying codebase do as little work as possible. Cached pages are like fast food that can be served up in an instant, whereas non-cached pages have to be cooked, warmed, packaged and sent to the front-end.
Caching on the server side is an essential performance step for any website or application. In the case of this site, I’m using Memcachier: a memcache add-on for Heroku. Memory-based cache storage is preferable to disk-based because it’s faster (think RAM vs. HDD).
Google is currently developing mod_pagespeed, an Apache module that performs a lot of useful HTTP caching and asset optimisation at server level. Ilya Grigorik has an excellent write-up on the many things it can do. I anticipate many will adopt this into their responsive toolkit.

Summary

Performance and page ‘responsiveness’ should be part of the process, not an afterthought. The tools are readily available; we just need to choose to use them, and to use them wisely.
I completely agree with Tim Kadlec’s call to action to create a culture of performance:
If you understand just how important performance is to the success of a project, the natural next step is to start creating a culture where high performance is a key consideration.