Dispatches From The Internets

Default Isn’t Design

When one approach becomes “how things are done,” we unconsciously defend it even when standards would give us a healthier, more interoperable ecosystem. Psychologists call this reflex System Justification. Naming it helps us steer toward a standards-first future without turning the discussion into a framework war.

This whole piece is an excellent discussion about how tools can become an identity and why that’s a bad thing.


Identifying Accessibility Data Gaps in CodeGen Models

A pop-art style illustration of a wide chasm. On the left side of the chasm stands a small, cute, red robot, gazing to the right, across the abyss. On the right side of the chasm is his destination: a finish line flag. The flag reads “Accessible.”

Late last year, I probed an LLM’s responses to HTML code generation prompts to assess its adherence to accessibility best practices. The results were unsurprisingly disappointing — roughly what I’d expect from a developer aware of accessibility but unsure how to implement it. The study highlighted key areas where training data needs improvement.


Designing for Distress: Understanding Users in Crisis

In a distressing moment, it’s like you’re rushing to the airport — you’re just looking for help right now. When you aren’t distressed, it’s like you’re on vacation. You can take your time, you’re more open to exploring.

In a recent study, the VA learned a lot from users navigating acute distress — and why typical UX patterns fail. This is highly recommended reading for anyone working in the design space.


Why I’m Betting Against AI Agents in 2025 (Despite Building Them)

I think it’s really key to understand what AI is good for and where it falls short. Not just in terms of results, but in terms of externalities as well.

To that end, this is a piece worth reading. To me, the golden nugget is this (when discussing who will succeed with AI agents):

[T]he winners will be teams building constrained, domain-specific tools that use AI for the hard parts while maintaining human control or strict boundaries over critical decisions. Think less “autonomous everything” and more “extremely capable assistants with clear boundaries.”


Why AI Won’t Destroy Us with Microsoft’s Brad Smith

In this episode, Trevor Noah and Brad Smith talk about a lot of things, but I think the most prescient is their discussion of information bubbles and organizing around labels. Trevor astutely observes how the source of information often colors how we receive that information and whether we consider it or reject it out of hand. In today’s media ecosystem, the system of “in groups” and “out groups” creates deep division and makes us more susceptible to misinformation.


Passing Your CSS Theme to canvas

While working on a recent project, I noticed an issue with a canvas-based audio visualization when I toggled between light and dark modes. When I originally set it up, I was browsing in dark mode, and the light visualization stroke showed up perfectly against the dark background. It was invisible in the light theme, though, which I'd neglected to test. I searched around but didn't find any articles on easy ways to make canvas respond to user preferences, so I thought I'd share (in brief) how I solved it.
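The post walks through the full solution; as a rough sketch of the general idea, one common approach is to define the stroke color as a CSS custom property (set per theme in your stylesheet), read its computed value with getComputedStyle, and redraw whenever the user's color-scheme preference changes. The property name (--viz-stroke), the canvas id (#viz), and the helper names below are all hypothetical, not taken from the post:

```javascript
// Pure helper: use the computed custom-property value if present,
// otherwise fall back to a safe default color.
function resolveStrokeColor(computedValue, fallback) {
  const value = (computedValue || "").trim();
  return value.length > 0 ? value : fallback;
}

// Read the theme-appropriate stroke color off the canvas element.
// getComputedStyle resolves --viz-stroke for whichever theme is active.
function currentStrokeColor(el) {
  const raw = getComputedStyle(el).getPropertyValue("--viz-stroke");
  return resolveStrokeColor(raw, "#888");
}

function setUpThemeAwareCanvas() {
  const canvas = document.querySelector("#viz"); // hypothetical id
  const ctx = canvas.getContext("2d");

  function redraw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.strokeStyle = currentStrokeColor(canvas);
    // ...draw the visualization with the theme-appropriate stroke...
  }

  // Redraw whenever the user's color-scheme preference flips,
  // so the stroke never disappears into the background.
  matchMedia("(prefers-color-scheme: dark)").addEventListener("change", redraw);
  redraw();
}

// Only wire things up in a browser context.
if (typeof document !== "undefined") {
  setUpThemeAwareCanvas();
}
```

The corresponding CSS would set --viz-stroke to a dark color by default and override it inside a prefers-color-scheme: dark media query, so the canvas inherits the theme without any hard-coded colors in the drawing code.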


Symbol Creator AI

About a year ago, the folks at Global Symbols pitched me on their vision for using image generation models to create new AAC symbols that fit thematically within an existing set. It was a truly compelling use case for generative AI and I was thrilled to fund their project through the AI for Accessibility grant program.

Fast forward to today and their project has launched! Please check it out and share it with any AAC users in your life!