Missed Connections
Earlier today, Stuart Langridge—who I worked with on WaSP’s DOM Scripting Task Force and have the utmost respect for—published a blog response to my last post. In it, he made some good points I wanted to highlight, but he also misunderstood one thing I said and managed to avoid addressing the core of my argument. As comments aren’t enabled on his site, I thought I’d respond here.
Let’s start with the good stuff:
Now, nobody is arguing that the web environment is occasionally challengingly different across browsers and devices. But a lot of it isn’t. No browser ships with a JavaScript implementation in which 1 and 1 add to make 3, or in which Arrays don’t have a length property, or in which the for keyword doesn’t exist. If we ignore some of the Mozilla-specific stuff which is becoming ES6 (things such as array comprehensions, which nobody is actually using in actual live code out there in the universe), JavaScript is pretty stable and pretty unchanging across all its implementations. Of course, what we’re really talking about here is the DOM model, not JavaScript-the-language, and to claim that “JavaScript can be the virtual machine” and then say “aha I didn’t mean the DOM” is sophistry on a par with a child asking “can I not not not not not have an ice-cream?”. But the DOM model is pretty stable too, let’s be honest. In things I build, certainly I find myself in murky depths occasionally with JavaScript across different browsers and devices, but those depths are as the sparkling waters of Lake Treviso by comparison with CSS across different browsers. In fact, when CSS proves problematic across browsers, JavaScript is the bandage used to fix it and provide a consistent experience — your keyframed CSS animation might be unreliable, but jQuery plugins work everywhere. JavaScript is the glue that binds the other bits together.
To be honest, I could not agree more, nor could I put it more elegantly. JavaScript, as a language, is relatively stable in terms of its core API. Sure, there are some gaps that JavaScript libraries have always sought to even out, but by and large what works in one browser will work in another. Assuming, of course, JavaScript is available… but let’s come back to that.
In this passage Stuart also highlights the quagmire that is CSS support. This is a great point to hammer home: we have no assurance that the CSS we write will be understood by every browser, or interpreted the same way in each one that does understand it. This is why it is so important that we provide fallbacks, like a hex value for that RGBa color we want to use. It pays to have a solid understanding of how fault tolerance works because it helps us author the most robust code, and that ultimately leads to fewer browser headaches (and happier users). I devoted a huge portion of the CSS chapter in my book to the topic.
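As a quick illustration of that kind of fallback (the selector and color values here are placeholders for illustration, not anything from my post or Stuart's), a browser that doesn't recognize rgba() simply drops that declaration and keeps the hex value declared before it:

```css
/* A minimal sketch of CSS fault tolerance: browsers that don't
   understand rgba() ignore that declaration and fall back to the
   solid hex color that precedes it. */
.banner {
  background-color: #336699;                 /* fallback: opaque color */
  background-color: rgba(51, 102, 153, 0.8); /* enhancement: semi-transparent */
}
```

Declaring the fallback first matters: the last declaration a browser understands wins, so capable browsers get the semi-transparent version while older ones never move past the hex value.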
I also loved this passage:
Web developers are actually better than non-web developers. And Aaron explains precisely why. It is because to build a great web app is precisely to build a thing which can be meaningfully experienced by people on any class of browser and device and capability. The absolute tip-top very best “native” app can only be enjoyed by those to whom it is native. “Native apps” are poetry: undeniably beautiful when done well, but useless if you don’t speak the language. A great web app, on the other hand, is a painting: beautiful to experience and available to everybody. The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone. That’s why the Web’s good. The old software model, of something which only works in one place, isn’t the baseline against which the Web should be judged; it’s something that’s been surpassed. Software development is easiest if it only has to work on your own machine, but that doesn’t mean that that’s all we should aim for. We’re all still collaboratively working out exactly how to build apps this way. Do we always succeed? No. But by any measure the Web is the largest, most widely deployed, most popular and most ubiquitous computing platform the world has ever known. And its programming language is JavaScript.
I’ll admit I got a little teary-eyed when he said, “The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone.”
Stuart is bang on with this passage. Building the Web requires more of us than traditional software development does. In many ways, it asks us to be our best selves.
The one thing I take issue with is that last sentence, but again, I’ll come back to it.
In the middle, his post got a little off-track. Most likely it was because I was not as clear in my post as I could have been:
I am not at all sold that “we have knowledge of [the server environment] and can author your program accordingly so it will execute as anticipated” when doing server development. Or, at least, that’s possible, but nobody does. If you doubt this, I invite you to go file a bug on any server-side app you like and say “this thing doesn’t work right for me” and then add at the bottom “oh, and I’m running FreeBSD, not Ubuntu”. The response will occasionally be “oh really? we had better get that fixed then!” but is much more likely to be “we don’t support that. Use Ubuntu and this git repository.” Now, that’s a valid approach — we only support this specific known configuration! — but importantly, on the web Aaron sees requiring a specific browser/OS combination as an impractical impossibility and the wrong thing to do, whereas doing this on the server is positively virtuous. I believe that this is no virtue. Dismissing claims of failure with “well, you should be using the environment I demand” is just as large a sin on the server or the desktop as it is in the browser. You, the web developer, can’t require that I use your choice of browser, but equally you, the server developer, shouldn’t require that I use your particular exact combination of server packages either. Why do client users deserve more respect than server users? If a developer using your server software should be compelled to go and get a different server, how’s that different from asking someone to install a different web browser? Sure, I’m not expecting someone who built a server app running on Linux to necessarily also make it run on Windows (although wise developers will do so), but then I’m not really expecting someone who’s built a 3d game with WebGL to make the experience meaningful for someone browsing with Lynx, either.
Here’s what he was reacting to:
If we’re writing server-side software in Python or Rails or even PHP, one of two things is true:
- We control the server environment: operating system, language versions, packages, etc.; or
- We don’t control the server environment, but we have knowledge of it and can author your program accordingly so it will execute as anticipated.
In this passage, I was talking about software we write for ourselves, our companies, and our clients. In those cases we do—or at least we should—know the environment our code is running in and can customize our code or the server build if a particular package or feature is missing. In fact, this is such a consistent need that we now have umpteen tools that empower us to make recipes of server requirements so we can quickly build, configure, and deploy servers right from the command line. I would never write server-side code for a client running Windows without testing it on a carbon copy of their Windows server. That would be reckless and unprofessional.
If, however, I was writing code to sell or license to third parties, I’d fall into the second camp I outlined:
In the more traditional installed software world, we can similarly control the environment by placing certain restrictions on what operating systems our code can run on and what the dependencies for its use may be in terms of hard drive space and RAM required. We provide that information up front and users can choose to use our software or use a competing product based on what will work for them.
Lots of people who offer software in this way provide an overview of hardware and software requirements for using their product, and that’s fine. But I feel Stuart was incorrectly lumping the two camps together. He asks “Why do client users deserve more respect than server users?” I agree with the sentiment—the lack of requirements documentation for some third-party server utilities is certainly appalling—but if I choose to try installing a given utility or program without knowing whether it will work on my system, that’s my choice. And, moreover, a failed install of a server-side utility is a concern that I—a technically savvy software developer—can readily deal with (or at least one I am competent enough to solve with Google’s help). I don’t think we can expect the same of the people who read our content, check their bank balances on our systems, and whose experience and capabilities may not be the same as ours.
Stuart brings his response to a close with the gloriously uplifting statement that “[B]y any measure the Web is the largest, most widely deployed, most popular and most ubiquitous computing platform the world has ever known,” before declaring, unequivocally, that “[I]ts programming language is JavaScript.”
That sounds great, but it’s not entirely true.
The first part is dead-on: the Web absolutely is the “most popular and most ubiquitous computing platform the world has ever known,” but saying that the Web’s only programming language is JavaScript is a bit disingenuous. Yes, JavaScript is the de facto programming language in the browser, but that’s only half of the equation: PHP, Perl, C++, Ruby, Java, Python… these (and many others) are the languages that drive the vast majority of the server-side processing that makes the dynamic Web possible. (Yes, JavaScript has made it onto the server side of things, but I don’t think that was what he was trying to say. Stuart, please correct me if I’m wrong.) These languages provide a fallback when JavaScript fails us. We need them.
The fact is that you can’t build a robust Web experience that relies solely on client-side JavaScript. And that’s what disappointed me about Stuart’s post: he completely avoided addressing this, the main thrust of my argument. While JavaScript may technically be available and consistently implemented across most devices used to access our sites nowadays, we do not control how, when, or even if that JavaScript is ultimately executed. That’s the disconnect.
Any number of factors can bring our carefully crafted JavaScript application to its knees. I mentioned a few in my post, but I’ll reiterate them here, along with a few others:
- Bugs: None of us write buggy code, of course, but even if we did, we have numerous safeguards that would prohibit that buggy code from making it into production. Right? Right!? But what about third-party code? I have gotten a buggy version of jQuery from the Google Ajax CDN before. And I’ve certainly come across buggy jQuery plugins. And what about the JavaScript being injected by other third-party services? Advertising networks… social widgets… we test all of that code too, right? A single uncaught error or conflict can halt script execution, taking everything that depends on that code down with it (see the defensive sketch after this list).
- Browser Add-ons: We can’t control which add-ons or plugins our users have installed on their browser, but each and every one has the ability to manipulate the DOM, insert CSS, and inject scripts. If we don’t code defensively, we can spend hours trying to replicate a bug report only to ultimately discover the person reporting it had an add-on installed that was causing the issue. I’ve been there. It sucks.
- Man-in-the-Middle Attacks: Back in the olden days, we used to have to worry about JavaScript being blocked at the firewall as a security threat. That issue has largely gone away, but we still run into similar issues today: Sky accidentally blocked jQuery for all of their UK subscribers when they mistakenly flagged the hosted version of jQuery as malware and filtered it out. And routers are capable of injecting code that can break our pages too: I wrote about Comcast doing it the other day and then experienced a similar issue with the Atlanta airport’s free Wi-Fi while on my way home from BlendConf. Sadly, unless we send everything via SSL, we can’t even control what code ultimately gets delivered to our users.
- Underpowered Hardware: Some devices just don’t have the RAM to store or processing power to execute large JavaScript frameworks. If we’re using one, we could be dead in the water. Oh, and iOS sandboxes in-app browsers and they run really slowly compared to the Safari browser (which is already pretty slow compared to desktop browsers). If someone opens a link to our site in the Twitter app or if we are using an app wrapper around our Web experience, the whole thing may… just… crawl.
- Still Loading: Until our JavaScript has been downloaded, parsed, and executed, it isn’t doing anything. So, if JavaScript is a requirement for any interaction, the site could appear frozen until the browser finishes dealing with it.
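Picking up the Bugs item above, here is a minimal defensive sketch. The initAdUnits and initShareButtons names are hypothetical stand-ins for functions a vendor snippet might define; nothing here comes from my code or Stuart’s. The idea is simply to verify that third-party code actually arrived and to wrap each enhancement so one failure can’t halt everything else:

```javascript
// A hedged sketch: never assume a third-party script arrived intact, or at all.
// 'initAdUnits' and 'initShareButtons' are hypothetical stand-ins for functions
// that vendor snippets would normally define on the page.
['initAdUnits', 'initShareButtons'].forEach(function (name) {
  var init = window[name];
  if (typeof init !== 'function') {
    // The script was blocked, filtered, or never loaded; skip it and
    // leave the baseline, JavaScript-free experience in place.
    return;
  }
  try {
    init();
  } catch (err) {
    // One broken enhancement shouldn't stop everything else on the page.
    if (window.console && console.error) {
      console.error('Enhancement "' + name + '" failed:', err);
    }
  }
});
```

None of this removes the need for a JavaScript-free baseline; it just keeps someone else’s broken script from dragging ours down with it.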
All of this adds up to JavaScript being the biggest potential single point of failure in our Web experience.
Again, it’s not that JavaScript is a bad thing; I love JavaScript and write it every day—some days it’s all I do. But when we write JavaScript, it’s critical that we recognize we have no guarantee it will run. We need a backup plan, and that’s exactly what progressive enhancement asks of us. If we do that, our bases are covered and we can sleep soundly, knowing our users are happy because they can do what they need to do, no matter what.
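As a rough sketch of what that backup plan can look like in practice (the load-more class, the results container, and the XMLHttpRequest approach are assumptions for illustration, not code from my post or Stuart’s): the markup is assumed to contain an ordinary link pointing at a real page of results, so it keeps working when JavaScript never arrives or never runs; the script only upgrades it when the browser proves it can handle the enhancement.

```javascript
// A minimal progressive enhancement sketch. Without JavaScript (or if this
// script fails to run), clicking the link is still a normal page load.
(function () {
  // Only enhance if the browser supports what we need.
  if (!('querySelector' in document && 'addEventListener' in window)) {
    return;
  }
  var link = document.querySelector('a.load-more');  // hypothetical hook
  var results = document.querySelector('#results');  // hypothetical container
  if (!link || !results) {
    return;
  }
  link.addEventListener('click', function (event) {
    event.preventDefault();
    var xhr = new XMLHttpRequest();
    xhr.open('GET', link.href);
    xhr.onload = function () {
      if (xhr.status >= 200 && xhr.status < 300) {
        // Enhancement: append the new results in place.
        results.insertAdjacentHTML('beforeend', xhr.responseText);
      } else {
        // The enhancement failed; fall back to the full page load.
        window.location = link.href;
      }
    };
    xhr.onerror = function () {
      window.location = link.href;
    };
    xhr.send();
  }, false);
}());
```

The core task never depends on the script; the script only ever adds to it.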
And I, for one, enjoy sleeping.
Note: I no longer use “native” in this context, but it remains in quoted material.