A Fundamental Disconnect

Yesterday at BlendConf, Scott Hanselman gave a fantastically entertaining keynote entitled “JavaScript, The Cloud, and the rise of the New Virtual Machine.” In it, he chronicled all of the ways Web development and deployment have changed, for the better, over the years. He also boldly declared that JavaScript is now, effectively, a virtual machine in the browser.

This is a topic that has been weighing on my mind for quite some time now. I’ll start by saying that I’m a big fan of JavaScript. I write a lot of it and I find it incredibly useful, both as a programming language and as a way to improve the usability and accessibility of content on the Web. That said, I know its limitations. But I’ll get to that in a minute.

In the early days of the Web, “proper” software developers shied away from JavaScript. Many viewed it as a “toy” language (and felt similarly about HTML and CSS). It wasn’t as powerful as Java or Perl or C in their minds, so it wasn’t really worth learning. In the intervening years, however, JavaScript has changed a lot.

Most of these developers first began taking JavaScript seriously in the mid ’00s when Ajax became popular. And with the rise of JavaScript MVC frameworks and their ilk—Angular, Ember, etc.—many of these developers made their way onto the Web. I would argue that this, overall, is a good thing: We need more people working on the Web to make it better.

The one problem I’ve seen, however, is the fundamental disconnect many of these developers seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.

I’ll explain.

If we’re writing server-side software in Python or Rails or even PHP, one of two things is true:

  1. We control the server environment: operating system, language versions, packages, etc.; or
  2. We don’t control the server environment, but we have knowledge of it and can author our program accordingly so it will execute as anticipated.

In the more traditional installed-software world, we can similarly control the environment by restricting which operating systems our code runs on and by stating what it requires in terms of hard drive space and RAM. We provide that information up front, and users can choose our software or a competing product based on what will work for them.

On the Web, however, all bets are off. The Web is ubiquitous. The Web is messy. And, as much as we might like to control a user’s experience down to the very pixel, those of us who have been working on the Web for a while understand that it’s a fool’s errand and have adjusted our expectations accordingly. Unfortunately, this new crop of Web developers doesn’t seem to have gotten that memo.

We do not control the environment executing our JavaScript code, interpreting our HTML, or applying our CSS. Our users control the device (and, thereby, its processor speed, RAM, etc.). Our users choose the operating system. Our users pick the browser and which version they use. Our users can decide which add-ons they put in the browser. Our users can shrink or enlarge the fonts used to display our Web pages and apps. And the Internet providers that sit between us and our users dictate the network speed and latency, ultimately controlling how (and what part of) our content makes it to our users.

All we can do is author a compelling, adaptive experience, cross our fingers, and hope for the best.

The fundamental problem with viewing JavaScript as the new VM is that it creates the illusion of control. Sure, if we are building an internal Web app, we might be able to dictate the OS/browser combination for all of our users and lock down their machines to prevent them from modifying those settings, but that’s not the reality on the open Web.

The fact is that we can’t absolutely rely on the availability of any specific technology when it comes to delivering a Web experience. Instead, we must look at how we construct that experience and make smarter decisions about how we use specific technologies in order to take advantage of their benefits while simultaneously understanding that their availability is not guaranteed. This is why progressive enhancement is such a useful philosophy.

The history of the Web is littered with JavaScript disaster stories. That doesn’t mean we shouldn’t use JavaScript or that it’s inherently bad. It simply means we need to be smarter about our approach to JavaScript and build robust experiences that allow users to do what they need to do quickly and easily, even if our carefully crafted, incredibly well-designed, JavaScript-driven interface won’t run.
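
By way of illustration, here is a rough sketch of what that looks like in practice, assuming a hypothetical server-rendered search form with the id search, a results container, and a /search endpoint. The form works entirely on its own; JavaScript only layers an improvement on top when the browser can support it, and falls back to a normal submission if anything goes wrong.

    // A minimal progressive-enhancement sketch (the form, container, and endpoint are hypothetical).
    // The page is assumed to contain a normal, server-handled search form:
    //   <form id="search" action="/search" method="get"> ... </form>
    // Without JavaScript, the form still submits and the server returns a results page.
    (function () {
      // Only enhance in browsers that support everything we need ("cutting the mustard").
      if (!('querySelector' in document &&
            'fetch' in window &&
            'URLSearchParams' in window &&
            'addEventListener' in window)) {
        return; // everyone else keeps the working, non-enhanced form
      }

      var form = document.querySelector('#search');     // hypothetical form id
      var results = document.querySelector('#results'); // hypothetical results container
      if (!form || !results) { return; }

      form.addEventListener('submit', function (event) {
        event.preventDefault();
        var params = new URLSearchParams(new FormData(form));
        fetch(form.getAttribute('action') + '?' + params.toString())
          .then(function (response) { return response.text(); })
          .then(function (html) { results.innerHTML = html; })
          .catch(function () {
            // If the enhancement fails for any reason, fall back to a normal submission.
            form.submit();
          });
      });
    }());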


Webmentions

  1. Yeah, I cribbed from the blog post for the book ;-)
  2. Also, I agree, but enjoyed the notion of tossing one of your articles back at you.
  3. But what actually happened? It wasn't "the zeitgeist", nothing so VAGUE, I'll tell you that much. The answer is there was a fight and advocates for progressive enhancement lost. You can see the portents of doom in this post from Aaron Gustafson in 2014: aaron-gustafson.com/notebook/a-fun…
  4. @matthiasott @Aaron Thanks for putting this post back in front of my eyes Matthias.

    "The web is messy" and "illusion of control" - just 7 words from Aaron's article, but they convey *so* much.

Shares

  1. Adam Spelbring
  2. Sander van Dragt
  3. Zach Leatherman :11ty:

Comments

Note: These are comments exported from my old blog. Going forward, replies to my posts are only possible via webmentions.
  1. Ivan Wilson

    I have been thinking about the same thing, and I can give you my own answer: the new crop of developers are great at programming but not necessarily great at making content.

    The "making content" point is really important because this is were we create the experience. HTML is not an application language but used well can give more information than just the visual. Using CSS well and do more than just make a page/screen pretty. Those of us that have been working at this more than 5yrs(?) have adjusted and really now how to work this. And in some ways, developers are (again) slowly learning those lessons.

  2. Grant Swertfeger

    It makes me long for the days of wondering if the user had the latest version of Flash Player, but I agree that it's all for the better. The more we think about how the content "might" be delivered, the more we realize all that we need to do in order to write better code. Of course, the injection of Comcast's JavaScript might just kill the application altogether.

    1. Aaron Gustafson

      Absolutely! In fact, I noticed that the free wifi at the Atlanta Airport completely hosed this site. I am working on some ideas for how to harden web pages against injection. I welcome input.

      1. Grant Swertfeger

        You mean short of breaking the javascript engine on purpose?

        1. Aaron Gustafson

          Of course. I don’t think there’s much we can do about that, but my thinking is this: look for injected CSS files and then remove any nodes that match their selectors. That *should* kill most of the content injection stuff and may sideline most event handling. Anything they directly screw up via JavaScript is tougher to fix.

          1. Grant Swertfeger

            I'd love to explore this further. I think some simple listeners for scripts and stylesheets added to the DOM would go a long way. I can see this being quite useful.
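
A rough sketch of the ideas in this thread, assuming a hypothetical allowlist of the stylesheets the page actually shipped: audit document.styleSheets and remove nodes matching the selectors of anything unexpected (Aaron's suggestion), and watch the DOM for scripts and stylesheets added later (Grant's suggestion). Cross-origin stylesheets will not expose their rules, and a real implementation would need to guard carefully against false positives, so treat this as a starting point only.

    // Sketch of the injection-hardening idea discussed above; paths and behavior are hypothetical.
    (function () {
      // Stylesheets we know we shipped; anything else is suspect.
      var allowed = ['/css/site.css', '/css/print.css'];

      function isAllowed(href) {
        if (!href) { return true; } // inline <style> blocks we authored ourselves
        return allowed.some(function (path) { return href.indexOf(path) !== -1; });
      }

      // Find injected stylesheets and remove any nodes matching their selectors.
      function auditStyleSheets() {
        Array.prototype.forEach.call(document.styleSheets, function (sheet) {
          if (isAllowed(sheet.href)) { return; }
          try {
            Array.prototype.forEach.call(sheet.cssRules, function (rule) {
              if (!rule.selectorText) { return; }
              Array.prototype.forEach.call(
                document.querySelectorAll(rule.selectorText),
                function (node) {
                  // Never remove the document itself, even if an injected rule targets it.
                  if (node === document.documentElement || node === document.body) { return; }
                  node.parentNode.removeChild(node);
                }
              );
            });
          } catch (e) {
            // Cross-origin rules are unreadable; we can still drop the stylesheet itself below.
          }
          if (sheet.ownerNode && sheet.ownerNode.parentNode) {
            sheet.ownerNode.parentNode.removeChild(sheet.ownerNode);
          }
        });
      }

      // Listen for scripts and stylesheets added to the DOM after the page loads.
      new MutationObserver(function (mutations) {
        mutations.forEach(function (mutation) {
          Array.prototype.forEach.call(mutation.addedNodes, function (node) {
            if (/^(SCRIPT|LINK|STYLE)$/.test(node.nodeName)) {
              console.warn('Unexpected element injected into the page:', node);
              // Removing it here is possible, but risks breaking legitimate third-party code.
            }
          });
        });
      }).observe(document.documentElement, { childList: true, subtree: true });

      if (document.readyState !== 'loading') {
        auditStyleSheets();
      } else {
        document.addEventListener('DOMContentLoaded', auditStyleSheets);
      }
    }());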

  3. Ashley Chapokas

    How do Frameworks fit into the conversation? I am trying to steer away from them for now, as I'd like to focus on raw JS. Thoughts?

    1. Aaron Gustafson

      Goodonya for focusing on “vanilla JS”!

      Frameworks and libraries can be great, provided you understand how they fit into your process and how their guiding philosophy aligns with yours or your team’s. jQuery, for instance, can be used to progressively enhance pages with relative ease, so it aligns pretty well with what we do at Easy. AngularJS, however, tries to control everything on the page and most people who work with it (or similar frameworks) assume JavaScript is available (and functional) and, consequently, their pages will not load without it. That doesn’t align with progressive enhancement, so it doesn’t work for us.

      It is possible to make a JavaScript framework play well with progressive enhancement. This can be done by having the framework take over a page after it has been loaded from the server (classic “Hijax”). Some folks have gone so far as to run the same (or nearly the same) JavaScript on the back end via Node.js on the off chance JavaScript is not available on the client. Both of these approaches make for a more robust website or application than simply using AngularJS, etc. right out of the box. But they require more work because you are running counter to the philosophy of the framework.
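
A bare-bones sketch of that “Hijax” pattern, assuming a hypothetical main content container and a site whose server renders full pages at real URLs: every link works without JavaScript, and when JavaScript is available the script intercepts same-origin navigation, swaps in the new content, and falls back to a full page load if anything fails. A production version would also need to handle popstate, focus, and loading states.

    // Bare-bones "Hijax" sketch: enhance server-rendered navigation when the browser allows it.
    (function () {
      if (!('fetch' in window && 'pushState' in history)) { return; }

      var main = document.querySelector('main'); // assumed content container
      if (!main) { return; }

      document.addEventListener('click', function (event) {
        var target = event.target;
        var link = target.closest ? target.closest('a[href]') : null;
        // Leave external links (and anything we can't identify) to behave normally.
        if (!link || link.origin !== location.origin) { return; }

        event.preventDefault();
        fetch(link.href)
          .then(function (response) { return response.text(); })
          .then(function (html) {
            var doc = new DOMParser().parseFromString(html, 'text/html');
            var next = doc.querySelector('main');
            if (!next) { throw new Error('No main element in the response'); }
            main.innerHTML = next.innerHTML;
            history.pushState(null, '', link.href);
          })
          .catch(function () {
            // If the enhancement fails, fall back to a full page load.
            location.href = link.href;
          });
      });
    }());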