
Excellent overview of how to progressively enhance address entry forms using postal codes. It’s both a time saver and a data quality guard. Great stuff!
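If you're curious what that pattern can look like, here's a quick sketch of my own (not code from the article, and the /api/postal-lookup endpoint is made up): the address form works as plain HTML, and JavaScript only layers the postal code lookup on top.

```ts
// Minimal sketch of the idea: if JavaScript is available, look up city/region
// from the postal code and pre-fill those fields; if anything fails, the
// plain form keeps working exactly as before.
// `/api/postal-lookup` is a placeholder endpoint, not a real service.
const postalInput = document.querySelector<HTMLInputElement>('#postal-code');
const cityInput = document.querySelector<HTMLInputElement>('#city');
const regionInput = document.querySelector<HTMLInputElement>('#region');

if (postalInput && cityInput && regionInput) {
  postalInput.addEventListener('change', async () => {
    try {
      const res = await fetch(`/api/postal-lookup?code=${encodeURIComponent(postalInput.value)}`);
      if (!res.ok) return; // no match: leave the fields for manual entry
      const { city, region } = await res.json();
      if (!cityInput.value) cityInput.value = city;
      if (!regionInput.value) regionInput.value = region;
    } catch {
      // Network failure: the unenhanced form still works.
    }
  });
}
```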
I wish I could have been at CSUN to see Jen Strickland deliver this talk. It’s about the potential and actual harms inherent in web design and how to address them in our work.
Lots of juicy stats to share in your team discussions!
This was an interesting (and exhaustive) survey on what automation and AI might mean for the future of human agency. Some of the verbatims were quite insightful.
This passage from Micah Altman of MIT’s Center for Research in Equitable and Open Scholarship really resonated with me (emphasis mine):
Decisions affecting our lives are increasingly governed by opaque algorithms, from the temperature of our office buildings to what interest rate we’re charged for a loan to whether we are offered bail after an arrest. More specifically, complex, opaque, dynamic and commercially developed algorithms are increasingly replacing complex, obscure, static and bureaucratically authored rules.
Over the next decade and a half, this trend is likely to accelerate. Most of the important decisions affecting us in the commercial and government sphere will be ‘made’ by automated evaluation processes. For the most high-profile decisions, people may continue to be ‘in the loop,’ or even have final authority. Nevertheless, most of the information that these human decision-makers will have access to will be based on automated analyses and summary scores – leaving little for nominal decision-makers to do but flag the most obvious anomalies or add some additional noise into the system.
This outcome is not all bad. Despite many automated decisions being outside of both our practical and legal (if nominal) control, there are often advantages from a shift to out-of-control automaticity. Algorithmic decisions often make mistakes, embed questionable policy assumptions, inherit bias, are gameable, and sometimes result in decisions that seem (and for practical purposes, are) capricious. But this is nothing new – other complex human decision systems behave this way as well, and algorithmic decisions often do better, at least in the ways we can most readily measure. Further, automated systems, in theory, can be instrumented, rerun, traced, verified, audited, and even prompted to explain themselves – all at a level of detail, frequency and interactivity that would be practically impossible to conduct on human decision systems: This affordance creates the potential for a substantial degree of meaningful control.
This piece offers a solid introduction to the Accessible Perceptual Contrast Algorithm (APCA). The algorithm is part of the draft WCAG 3.0 guidelines and is slated to replace the contrast formula used in WCAG 2.x. It prioritizes perceived contrast and readability.
Here is an excellent testing tool as well: https://www.color-contrast.dev/
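If you want a feel for the math, here’s a rough TypeScript sketch of the core Lc calculation based on the publicly documented base algorithm. The constants here may lag behind the latest APCA revision, so treat the article and the tool above as the source of truth.

```ts
// Rough sketch of the core APCA (Lightness Contrast, Lc) math.
// Constants are from the publicly documented base algorithm and may not
// match the latest revision — use the linked tools for real testing.

// Estimate screen luminance (Y) from an sRGB color.
function sRGBtoY([r, g, b]: [number, number, number]): number {
  const lin = (v: number) => Math.pow(v / 255, 2.4);
  return 0.2126729 * lin(r) + 0.7151522 * lin(g) + 0.072175 * lin(b);
}

// Lc value: positive for dark text on a light background, negative for the reverse.
function apcaContrast(textY: number, bgY: number): number {
  const blkThrs = 0.022, blkClmp = 1.414, deltaYmin = 0.0005;
  const scale = 1.14, loClip = 0.1, loOffset = 0.027;

  // Soft-clamp very dark colors toward a realistic black level.
  const clamp = (y: number) => (y > blkThrs ? y : y + Math.pow(blkThrs - y, blkClmp));
  const txt = clamp(textY);
  const bg = clamp(bgY);
  if (Math.abs(bg - txt) < deltaYmin) return 0;

  let lc: number;
  if (bg > txt) {
    lc = (Math.pow(bg, 0.56) - Math.pow(txt, 0.57)) * scale; // dark text on light bg
    lc = lc < loClip ? 0 : lc - loOffset;
  } else {
    lc = (Math.pow(bg, 0.65) - Math.pow(txt, 0.62)) * scale; // light text on dark bg
    lc = lc > -loClip ? 0 : lc + loOffset;
  }
  return lc * 100;
}

// Example: mid-grey (#888) text on white comes out around Lc 63 with these constants.
console.log(apcaContrast(sRGBtoY([136, 136, 136]), sRGBtoY([255, 255, 255])));
```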
I had the great pleasure of delivering a talk about career opportunities for accessibility devs at axe-con earlier today. You can view the slides or watch the recording of this talk, but what follows is an approximation of my talk’s content, taken from my notes and slides.
I love everything about this piece showcasing how people with disabilities are using open source to empower themselves and others.
GPT-4 is released. Really impressive improvements over GPT-3 and GPT-3.5.
The image description example with the VGA charger is particularly impressive. It will be interesting to see how this new LLM can improve accessibility.
If you auto-build with every push, did you know you can tell Netlify to skip building a given commit? Just add “[skip netlify]” to your commit message.
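For example (the commit message itself is just an illustration):

```
git commit -m "Fix typo in README [skip netlify]"
git push
```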
That’s gonna be super useful for something I have planned!
This piece does a great job of relaying the reality of JavaScript failures on the web and makes a strong case for why progressive enhancement is such an important approach. Jason’s illustrations of who is affected by JavaScript failures and how are particularly compelling.
I also love how succinctly he nails this section:
So, if progressive enhancement is no more expensive to create, future-proof, provides us with technical credit, and ensures that our users always receive the best possible experience under any conditions, why has it fallen by the wayside?
Because before, when you clicked on a link, the browser would go white for a moment.
JavaScript frameworks broke the browser to avoid that momentary loss of control. They then had to recreate everything that the browser had provided for free: routing, history, the back button, accessibility features, the ability for search engines to read the page, et cetera iterum ad infinitum. Coming up with solutions to these problems has been the fixation of the JavaScript community for years now, and we do have serviceable solutions for all of these — but all together, they create the incredibly complex ecosystem of modern-day JavaScript that so many JavaScript developers bemoan and lament.
All to avoid having a browser refresh for a moment.
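To make that concrete, here’s a minimal sketch of my own (not code from the piece): the form works with a normal page load, and the script only takes over when it can actually deliver the enhancement, falling back to a full refresh if anything goes wrong.

```ts
// Minimal progressive-enhancement sketch (my illustration, not the article's code).
// The form submits and navigates normally with zero JavaScript; if this script
// loads and runs, it upgrades the same form to an in-page fetch.
const searchForm = document.querySelector<HTMLFormElement>('#search');
const results = document.querySelector<HTMLElement>('#results');

if (searchForm && results) {
  searchForm.addEventListener('submit', async (event) => {
    event.preventDefault(); // only take over once we know we can enhance
    try {
      const query = new URLSearchParams(new FormData(searchForm) as any);
      const res = await fetch(`${searchForm.action}?${query}`);
      if (!res.ok) throw new Error(String(res.status));
      results.innerHTML = await res.text();
    } catch {
      searchForm.submit(); // enhancement failed: fall back to a full page load
    }
  });
}
```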
I love it when people share their approaches to progressive enhancement. I don’t know that I would take the same approach, but it was really interesting to read Jason’s rationale and to see the comparisons between his original React project and the progressively enhanced one.