I’ve just returned from Web Directions 2014 in Sydney. Overall, it was a great experience, with some valuable take-aways. I’ve jotted down a few of my thoughts around the conference — and a few specific sessions — below. These are my “notes from notes” — I’ve drawn ideas from the notes I took during the conference to consolidate my understanding of the presentations.
This is very much non-exhaustive — I’ve elided a lot of stuff, skipped over whole talks (either because I want to ponder them further, or because I didn’t take notes or my notes are useless). I’ve focussed primarily on the technical talks, as I’ll use this post as a reference for myself on these areas at work. If I get a chance, I’ll write up my thoughts on the keynotes as well.
Emily Nakashima: The Operable front-end
Emily, who works at GitHub, explained exactly what “operability” is, and outlined some ways of achieving it.
Operability requires strategies for deployment, logging, monitoring, debugging, alerting, and scaling in production. That is, it’s the ability to detect, diagnose, and rectify problems, with the ability to push fixes into production. It’s important, therefore, to use dashboards and automation to ensure that everything that happens after a fix is ready to ship is as smooth and streamlined as possible.
Aside: if you fix a bug, you should look for similar error reports from other users. You’re now the expert in that bug, and are best-placed to respond. This ties in well with SEQTA’s, and my own, philosophy that developers should always always always be in a position to talk to clients, and doing so should be a routine part of their job.
JavaScript error monitoring is one of the most important pieces of the operability puzzle. A great idea is to add a listener for the `error` event on `window`, which collects up various bits and pieces (URL, stack trace, time since load, event target — including an XPath or CSS selector) and ships them off to the server via AJAX. (Of course, you want to be careful not to try to send errors about your inability to send errors…) To make this really shine, stop swallowing errors (even though many frameworks advertise this as a “feature”) — in fact, make your own error classes and throw them!
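For concreteness, here is a minimal sketch of what such a listener might look like; the payload fields and the /client-errors endpoint are my own illustrative choices rather than anything from the talk:

```js
// Minimal global error reporter: a sketch, not production code.
window.addEventListener('error', function (event) {
  var error = event.error || {};
  var payload = {
    message: event.message,
    stack: error.stack,                                   // may be undefined in older browsers
    url: window.location.href,
    sinceLoad: Date.now() - performance.timing.navigationStart,
    target: event.target && event.target.tagName          // set for resource errors
  };

  try {
    // Fire-and-forget; failures are swallowed so we never report errors about error reporting.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/client-errors');                   // hypothetical collection endpoint
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(payload));
  } catch (e) { /* deliberately ignored */ }
});
```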
Performance metrics are another part of the jigsaw. They can be synthetic (that is, measured in a carefully-controlled environment), or RUM (real-user metrics). Synthetic metrics should be part of the CI infrastructure; their primary utility is to catch performance regressions. RUM are arguably more useful, and should be tied to monitoring and alerts. RUM can be collected using the shiny new navigation timing API — but for single page applications (like the SEQTA Suite), you’ll need to record your own metrics, and the `window.performance.mark` and `window.performance.measure` APIs come in useful here. An API which hasn’t yet landed — but is coming soon! — is the frame timing API, which will help to catch jank issues (jank is where the framerate drops below 60fps).
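As an illustration, here is roughly how marks and measures can be used to time a route change in a single-page app; the mark names and the stubbed-out renderView() and reportMetric() helpers are mine, not from the talk:

```js
// Sketch: timing a client-side route change with User Timing marks and measures.
function renderView(route) {
  // Stand-in for the real single-page-app rendering work.
  document.querySelector('#app').textContent = 'Now viewing ' + route;
}

function reportMetric(name, value) {
  // Hypothetical RUM hook; in reality this would ship the value to a metrics endpoint.
  console.log('metric:', name, value);
}

function navigateTo(route) {
  performance.mark('route-change-start');

  renderView(route);

  performance.mark('route-change-end');
  performance.measure('route-change', 'route-change-start', 'route-change-end');

  var entries = performance.getEntriesByName('route-change');
  reportMetric('route-change', entries[entries.length - 1].duration);
}

navigateTo('/dashboard');
```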
Accessibility is often the “poor cousin”, largely because it’s really hard to audit in tests or post-commit hooks etc. Emily has found that scanning the live DOM for accessibility issues, and throwing exceptions (to be caught in `window.onerror`), is a great way to check for these issues in live, dynamically-generated content. You’re looking for things like missing alt attributes, or buttons without text content.
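A rough sketch of the idea, with the specific checks and the accessibilityError helper being my own inventions to illustrate the pattern:

```js
// Sketch: scan the live DOM for common issues and throw, so window.onerror reports them.
function accessibilityError(message) {
  var err = new Error(message);
  err.name = 'AccessibilityError';     // a custom error class, so reports are easy to filter
  return err;
}

function auditAccessibility() {
  var images = document.querySelectorAll('img:not([alt])');
  if (images.length > 0) {
    throw accessibilityError(images.length + ' image(s) missing an alt attribute');
  }

  var buttons = document.querySelectorAll('button');
  for (var i = 0; i < buttons.length; i++) {
    if (!buttons[i].textContent.trim() && !buttons[i].getAttribute('aria-label')) {
      throw accessibilityError('Button without text content or aria-label');
    }
  }
}

// Re-run the audit whenever dynamically-generated content changes; scheduling it
// asynchronously means a throw surfaces as an uncaught error for window.onerror.
setTimeout(auditAccessibility, 0);
```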
Useful tools:
- New Relic
- Errorception
- Raygun
- LogNormal
- Google Analytics
- Circonus
- Whatever ops is using 🙂
Sarah Mei: Unpacking technical decisions
organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations
Eric Raymond’s retelling of Conway’s Law:
If you have four groups working on a compiler, you’ll get a four-pass compiler.
Takeaway: most of the data we look at when considering a project is social, not technical.
Mark Dalgleish: A state of change
Mark outlined the proposed `Object.observe` API (then slated for ES7), which lets you track mutations on an object. The main thrust of his talk, though, was about mutability vs. immutability. Traditionally, functional languages have favoured immutable data-structures, as they’re easier to reason about and work with. Lots of JS frameworks are now moving towards representing the state of the application in a series of (immutable) objects; theoretically, this allows you to treat the DOM as a renderer doing its thing somewhere else — and therefore never actually pull anything out of it.
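To make the contrast concrete, here is a small sketch (entirely my own, not from the talk) of the immutable-state approach: every change produces a new frozen state object, and the DOM is only ever written to from that state, never read back:

```js
// Sketch: application state as a series of immutable snapshots.
var state = Object.freeze({ count: 0, user: null });

function update(changes) {
  // Build a brand-new state object rather than mutating the old one.
  var next = {};
  Object.keys(state).forEach(function (key) { next[key] = state[key]; });
  Object.keys(changes).forEach(function (key) { next[key] = changes[key]; });
  state = Object.freeze(next);
  render(state);
}

function render(s) {
  // The DOM is treated purely as output; we never read state back out of it.
  document.querySelector('#count').textContent = s.count;
}

update({ count: state.count + 1 });
```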
I sent Mark a couple of tweets about using `Object.observe` to “fake” immutability (and avoid using massive chunks of memory and causing GC pauses), and caught up with him after the talk to discuss this a bit further. Mark noted that he’d also wondered this, but hadn’t yet had a road-to-Damascus epiphany. We chatted about the fact that if the browser (or the DOM) were to introduce truly immutable data-structures, it could then become the engine’s problem to optimise memory — and the engine could do this much more easily than could be done in JS, and more efficiently, too.
Sarah Maddox: Bitrot in the documentation
Technical documentation inevitably suffers from broken links, outdated information, pure fiction, and information overload — caused by changes to the environment, updates to the documentation platform, last-minute changes to the software being documented, and plain-old human error (which, strictly speaking, encompasses all of the above). Sarah outlined some ways of overcoming this:
- Automated testing of code samples, to detect breakages in the samples, but also breaking changes in the API (there’s a rough sketch of this after the list).
- Doc reviews in the engineering team’s procedures — that is, making the definition of “done” include documentation. To facilitate this, use the same issue tracker, same review tools, and include the technical writers in the code reviews.
- Collaborative spot testing sessions.
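For the first point, here is a hedged sketch of what automated sample testing might look like; the docs/samples layout and the use of Node’s vm module are assumptions of mine, not anything Sarah prescribed:

```js
// Sketch: run every documentation code sample as part of CI.
var fs = require('fs');
var path = require('path');
var vm = require('vm');

var samplesDir = 'docs/samples';                 // hypothetical: one .js file per sample

fs.readdirSync(samplesDir)
  .filter(function (name) { return /\.js$/.test(name); })
  .forEach(function (name) {
    var code = fs.readFileSync(path.join(samplesDir, name), 'utf8');
    try {
      // Each sample runs in its own sandbox; a throw means the sample (or the API) broke.
      vm.runInNewContext(code, { console: console, require: require });
    } catch (err) {
      console.error('Documentation sample ' + name + ' failed: ' + err.message);
      process.exitCode = 1;                      // fail the build
    }
  });
```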
Jeremiah Lee: Elements of API excellence
The single most important point that Jeremiah made (in my view) was that APIs are built for human beings, and that they should therefore be treated as a UX problem. In other words, if you want to build good APIs, you need to understand the people who will use those APIs.
An excellent API should therefore be:
- Functional: it should do what it says on the tin;
- Reliable: it should be available, scalable, stable, and secure;
- Usable: it should be intuitive, testable, and provide corrective guidance; and
- Pleasurable: the API should become the means by which great things can be done.
To build excellent APIs that have, at their foundation, a really solid UX, we should make use of the same tools used by graphical UX work. One of these tools is the concept of “personas”. A persona is a descriptive representation of the people who will use the product — and the context in which they operate. By constructing personas, we make visible our assumptions about our users, and provide a frame of reference for the entire team. Personas should, of course, be validated and built around user interviews and surveys — that is, real people.
There are some key aspects of the people who will use our APIs that need to play into the personas we build:
- Their relationship with the product;
- The platform and programming language they’re working in;
- Their experience and skill level;
- Their English proficiency;
- The motivation behind the integration work they’re doing;
- The resources they have available; and
- Their role within their organisation.
To ensure that the APIs we build actually work, and are usable, we need to undertake testing. We should test both passively and actively.
Passive testing involves looking at pieces of data such as support requests, and answering questions like “where are users asking for help?”, “what concepts are frequently misunderstood?”, and “what errors are hit often?”. We should look at API usage during an integration, extracting data such as the time between app registration and first request, the first requests and errors encountered, and the time between start of integration and production. We should also look at API usage after integration to find out information about which endpoints are actually used, and how — this will help to answer questions around what the integration is trying to achieve. It also makes it easier to track down antipattern usage of the API.
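As an illustration of the “usage after integration” point, here is a tiny sketch (my own, using Express purely as an example) of recording which endpoints each API key actually hits; the X-Api-Key header and the /v1/widgets endpoint are made up for the example:

```js
// Sketch: passively record per-endpoint usage so it can be analysed later.
var express = require('express');
var app = express();

app.use(function (req, res, next) {
  // In practice this would go to an analytics pipeline, not the console.
  console.log(JSON.stringify({
    apiKey: req.get('X-Api-Key'),        // hypothetical authentication header
    method: req.method,
    endpoint: req.path,
    timestamp: Date.now()
  }));
  next();
});

app.get('/v1/widgets', function (req, res) {
  res.json([]);                           // placeholder endpoint
});

app.listen(3000);
```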
Active testing is, by its nature, going to be used more infrequently. For existing APIs, you can take the “dumb pair programmer” approach — basically, go sit with someone working against the API and silently observe what they’re doing. Look for things like interactions with their team (how do they talk about us?), how we fit into their application, how they approach the integration, what problems they encounter (and how they go about solving those), and how they test the integration.
For new APIs, we can build throw-away prototypes, or a mock API (a simple façade over an existing API, or a fully-mocked API). In either case, there should be just enough functionality to be useful, and it should be fully-documented. Put together a well-defined project, ready for an integration, and get an outsider (who lacks insider assumptions) to build the project. You can go to the extent of recording their face and the screen as they work to see how they react; alternatively, or additionally, have them commit their work regularly (every 10 minutes or so) so that it can be tracked. This allows you to then track emotional responses, and see how long tasks took to complete. It also helps to see where the process can be made more affirming, and how errors can better be handled.
The secret to machines talking to machines is to speak human first.
API versioning was brought up in the questions afterwards, and Jeremiah noted his repugnance for such schemes. A recommendation he did have, though, was to maintain both the old and new versions in parallel for a period of time, and allow developers to “opt in” to the new version when they’re ready: this is the approach taken by Facebook.
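To illustrate the opt-in idea, here is a sketch of my own; the header name, the date-based version strings, and the Express routing are assumptions rather than anything Jeremiah specified:

```js
// Sketch: let integrators opt in to a newer API version via a request header.
var express = require('express');
var app = express();

var DEFAULT_VERSION = '2014-10-01';

app.use(function (req, res, next) {
  // Clients that send no version header keep getting the behaviour they built against.
  req.apiVersion = req.get('X-Api-Version') || DEFAULT_VERSION;
  next();
});

app.get('/widgets', function (req, res) {
  if (req.apiVersion >= '2015-01-01') {
    res.json({ widgets: [], paging: { next: null } });   // newer response shape
  } else {
    res.json([]);                                        // legacy response shape
  }
});

app.listen(3000);
```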