Who Should Pay?

In more than a handful of conversations lately, it’s become quite clear that we, the web development community, are prioritizing our own convenience and our own time over those of our users. With our industry’s focus on “user-centered design”, you might find that hard to believe, but it’s true.

Here’s one example. In reaction to my post on why I think CSS variables are a bad idea, SASS core team member Chris Eppstein pushed back.

Fundamentally, I agree with his sentiment: A preprocessor should not be a requirement for authoring CSS. Thankfully, it never was; you can build amazing things using only hand-authored CSS. And if you find a preprocessor helpful to your process for one reason or another, great. But using a preprocessor never has been (nor should it ever be) a requirement.

But Chris was not railing against preprocessors. Instead, he was echoing a sentiment held by many people in the preprocessor community: he feels CSS is not as powerful as it could (and should) be, and he hopes that one day soon preprocessors won’t need to exist because CSS will have all of the features they offer. Like variables.

I used to feel that way. I used to want variables… and mixins… and functions… and loops… and declaration block-level inheritance. But I’ve changed my mind.

Don’t get me wrong, I love these constructs. I use them nearly every day in the SASS I write and I am incredibly thankful for the hard work that has gone into their creation and maintenance. Chris alone has probably saved me several weeks’ worth of work over the last four years through his contributions to SASS and Compass. I definitely owe him a beer (or three).

OK, so if my issue is not with the idea of programmatically generating styles, why would I not want these to be part of CSS, the lingua franca for design on the Web? Well, it’s pretty simple: converting all of these constructs into something that is actionable by the browser takes time and processing power. Someone has to pay that cost, and I wouldn’t feel right passing it on to my end users when there are better options.
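
To make that cost concrete, here is a minimal sketch; the variable, mixin, and selector names are mine, purely for illustration. All of the interesting work happens once, at build time, on my machine:

    // SCSS source: processed at build time, never shipped to the browser
    $brand-color: #b289ef;

    @mixin focus-ring($color) {
      outline: 2px solid $color;
      outline-offset: 2px;
    }

    a:focus {
      @include focus-ring($brand-color);
    }

The browser only ever receives the static output:

    /* Compiled CSS: the only thing users download and parse */
    a:focus {
      outline: 2px solid #b289ef;
      outline-offset: 2px;
    }

Bake those constructs into CSS itself and that substitution work moves from my build step into every visitor’s browser.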

This is a topic I bring up often in my conference talks and workshops: Every decision we make affects the user experience in some way.

When we add another JavaScript library or plugin, it’s no big deal from our perspective. We tend to have fast connections and faster processors. For our users it’s another story: It’s one more thing to request. One more thing to download. One more script to parse. One more thing holding up page rendering. One more reason to leave our site and seek out a competitor who actually values their time.

When we hide an img in the small screen version of our responsive design using display: none, the cost to us is quite minimal. It’s just one little declaration. What’s the harm? But the cost to our end users is quite significant: longer load times, slower performance, and (in some cases) real money if they are on a metered data connection. And they don’t even get to see the image they paid for!
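
Here is a rough sketch of the difference; the class name and file name are placeholders. Hiding the img only hides its pixels, while an image referenced solely inside a media query the browser never matches won’t be requested at all (at least in modern browsers):

    /* Hidden, but still downloaded: display: none only hides the pixels */
    @media (max-width: 40em) {
      .hero img {
        display: none;
      }
    }

    /* An alternative: small screens never request the file because its URL
       only appears inside a media query they don't match */
    @media (min-width: 40em) {
      .hero {
        background-image: url("hero-large.jpg");
      }
    }

The second approach costs us a little more thought up front; the first makes our users pay instead.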

When we decide to build a site using a front-end JavaScript MVC framework, it can make the development process go so much faster for us and we can reduce our need for a robust back-end infrastructure. I mean everyone has JavaScript these days… the browser is the new VM. But when we do this, our users suffer because we don’t give their browsers real HTML. Instead we force them to download a hefty framework so we can move all of the processing we would normally handle on a much faster, dedicated server to their questionably-capable machine instead. Oh, and if the browser encounters an error while parsing or executing the JavaScript, they don’t get anything at all. Welcome to the Modern Web™!


When I look around, I see our community spending a lot of time coming up with new tools and techniques to make our jobs easier. To ship faster. And it’s not that I’m against efficiency, but I think we need to consider the implications of our decisions. And if one of those implications is making our users suffer—or potentially suffer—in order to make our lives easier, I think we need to consider their needs above our own.

So yes, I would love a world where preprocessors are unnecessary, but I would much rather spend a few seconds (or even a few minutes) transcompiling my SASS into CSS in order to save my users even a few milliseconds. It’s the same reason I optimize my images, minify my JavaScript, use Gzip, and lazy load design and experience enhancements only in contexts where they provide a real benefit.

Our users should never foot the bill for our convenience. We need to put their needs above our own.


Webmentions

  1. RT shawncampbell "users should never foot the bill for our convenience" aaron-gustafson.com/notebook/who-s… #webdev #ux
  2. 100% agree. Don't inflict your pain on your users. That only ends one way.

    twitter.com/AaronGustafson…

  3. Our users should never foot the bill for our convenience. We need to put their needs above ours aaron-gustafson.com/notebook/who-s… by @AaronGustafson
  4. “Our users should never foot the bill for our convenience. We need to put their needs above our own.” Old gold from @AaronGustafson. aaron-gustafson.com/notebook/who-s…
  5. AMEN!
    This post from @AaronGustafson more than just resonates with me, so should with you.

    Fun fact, he is the one to blame for my obsession with web forms ever since I first saw him speak back in 2006 at my very first web conference. 😁 aaron-gustafson.com/notebook/who-s…
  6. Apologies for missing your piece.

    On the Variables (etc.) thing, there's a question of evidence. E.g., while new features add some cost to style recalc and layout, those phases aren't what's slowing down the traces I see. JS, however...
  7. No apologies needed. It was a long time ago and I included it merely to exemplify that the tradeoffs between developer convenience and our end users’ experiences continue to be an issue.
  8. Also, on the Web Components front, IE/Edge are scheduled to be the only browsers without them by EOY. As those are mostly desktop-oriented, you can more easily afford some small polyfill overhead there. Mobile browsers nearly all support WC.
  9. Of course there is that pesky JS dependency…

Shares

  1. Jeff L
  2. Michael Spellacy
  3. Ashley Joost
  4. mallory, alice & bob
  5. Data Value Strategy

Comments

Note: These are comments exported from my old blog. Going forward, replies to my posts are only possible via webmentions.
  1. vvtim

    While I agree with the premise of the article, I don't agree with the example of CSS variables / mixins / etc. It's likely to be very minimal overhead on first download and the browsers could store the processed version of the CSS in their cache the same way they do now -- you wouldn't be adding processing to every page load. If you're not passing proper cache headers on your CSS, then you've got bigger issues.

    1. Aaron Gustafson

      I would have to defer to people implementing the spec on the caching side of things. I don’t know that things like calc() and variables would be cached since they are dynamic (which is the whole point).

      1. vvtim

        I'm not sure why variables would be dynamic -- CSS doesn't have logic constructs or a program loop. It would essentially just be moving the preprocessor to the browser instead of the development stack.

        As far as calc() goes, yes, that's dynamic, but it would most likely IMPROVE user experience. All the current JavaScript implementations of calculating and resetting of widths would be replaced with a native implementation. It presents real value, not replacing something a developer can already do with a preprocessor.

        1. Aaron Gustafson

          Variables are not just resolved once at parse time. They are dynamic: they can be updated via JavaScript, and they are reinterpreted when class names change.

          1. vvtim

            So you're swapping a cached token's value rather than having to do a lot of DOM manipulation in JavaScript in today's reality. DOM manipulation is incredibly expensive, good chance the browser's CSS implementation will be leagues faster.

            1. Aaron Gustafson

              I’m sure it will be faster, but the fundamental question still stands: even if the cost is only a few milliseconds, if I don’t really need it and I am simply saving myself time (essentially doing the same thing I do in a preprocessor today, not taking advantage of the dynamic stuff), it’s not a tradeoff I think we should be making.

              1. vvtim

                I agree completely: if there is a way to enhance end user performance MEANINGFULLY, by all means do so, but don't hold back standards that can improve performance for other use cases simply because they won't improve yours.

                If you're really arguing over a few milliseconds the first time someone ever visits your website, I think you're missing a lot of existing low hanging fruit in your performance-for-users crusade.

                1. Aaron Gustafson

                  “If you're really arguing over a few milliseconds the first time someone ever visits your website, I think you're missing a lot of existing low hanging fruit in your performance-for-users crusade.”

                  Certainly, which is why I mentioned minification and such. But it all adds up. A few hundred milliseconds here or there…

                  As Ovid said: “dripping hollows out the rock”.

  2. Ivan Wilson

    Ironically, referring to the initial point of the argument, your current employer (Microsoft) went down that road a couple of years (last decade?) ago. Apart from the "dislike" of Microsoft, no one wanted it due to the high performance hit.

    Go figure.

    1. Aaron Gustafson

      Everything goes in cycles.