Dispatches From The Internets


Opportunities for AI in Accessibility

A child’s drawing of a cute red robot whose hands are hammers. The robot is centered.

In reading through Joe Dolson’s recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism he has for AI in general, as well as for the ways in which many have been using it. In fact, I am very skeptical of AI myself, despite my role at Microsoft being that of an Accessibility Innovation Strategist helping run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways, and it can be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.





What Google Should Really Be Worried About

The old computer programming adage “garbage in, garbage out” is going to ring even more true as search engine crawlers consume more and more empty calories in the form of AI-generated bullshit and misinformation.

The question is why: why do rings of fake websites like these even exist?

Part of the answer is, of course, money. Fake websites can be used to sell real advertisements.


ShatGPT

This is an excellent post from Steve Faulkner on some of the issues with Large Language Models like ChatGPT, especially when it comes to accessibility. He clearly outlines three key areas where we are failing:

  1. The base UI is inaccessible (or barely accessible); a hypothetical sketch of this follows the list.
  2. Features are hidden from people using Assistive Technologies.
  3. The advice it gives on making interfaces more accessible does more harm than good.
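
To make the first point concrete, here is a hypothetical example (not ChatGPT’s actual markup) of the kind of barely-accessible control Steve describes, next to an accessible equivalent:

    <!-- Inaccessible: a div "button" has no role, no accessible name,
         and can't be reached or activated with a keyboard -->
    <div class="send" onclick="sendMessage()">➤</div>

    <!-- Better: a real button is focusable and exposes the button role;
         the visually-hidden span (assumes a typical screen-reader-only
         utility class) supplies an accessible name -->
    <button type="button" onclick="sendMessage()">
      <span aria-hidden="true">➤</span>
      <span class="visually-hidden">Send message</span>
    </button>

The sendMessage() handler and class names are placeholders; the point is the difference in what Assistive Technologies can perceive and operate.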

Considering content warnings in HTML

A photo of a cute stuffed animal monkey with its hands over its eyes. Camera is tightly framing its head. Its eyes are not visible as they are completely covered by its hands.

One of the features I really love about Mastodon is their first-class Content Warning feature. With one additional step, you can add any warning of your choice to your post and it will be hidden by default, showing only the content warning text. It’s a super-simple idea, but so powerful when it comes to reducing the likelihood of causing our readers to experience the kinds of trauma that could have severe consequences.
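
HTML has no native content warning element yet, but here is a minimal sketch of how the interaction could be approximated today with the details and summary elements, which hide everything but the summary text until the reader opts in:

    <details>
      <summary>Content warning: food</summary>
      <p>The post content stays hidden until the reader
         chooses to expand it.</p>
    </details>

It’s an imperfect stand-in (details was designed for generic disclosure widgets, not warnings), but it demonstrates the hidden-by-default, reader-opt-in behavior a first-class feature would need.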



Google Bard AI Chatbot Raises Ethical Concerns From Employees

Why do companies release software before it’s safe? Chances are they actually consider their product to be their stock price rather than their software… yet another victim of the financialization of our economy.

One worker’s conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.” One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.”

Google launched Bard anyway. The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments, according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg. The Alphabet Inc.-owned company had pledged in 2021 to double its team studying the ethics of artificial intelligence and to pour more resources into assessing the technology’s potential harms. But the November 2022 debut of rival OpenAI’s popular chatbot sent Google scrambling to weave generative AI into all its most important products in a matter of months.