The Best Of The Internets

ShatGPT

This is an excellent post from Steve Faulkner on some of the issues with large language models (LLMs) like ChatGPT, especially when it comes to accessibility. He clearly outlines three key areas where we are failing:

  1. The base UI is inaccessible (or barely accessible).
  2. Features are hidden from people using Assistive Technologies.
  3. The advice they give on making interfaces more accessible does more harm than good.


Google Bard AI Chatbot Raises Ethical Concerns From Employees

Why do companies release software before it’s safe? Chances are they actually consider their product to be their stock price rather than their software… yet another victim of the financialization of our economy.

One worker’s conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.” One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.”

Google launched Bard anyway. The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments, according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg. The Alphabet Inc.-owned company had pledged in 2021 to double its team studying the ethics of artificial intelligence and to pour more resources into assessing the technology’s potential harms. But the November 2022 debut of rival OpenAI’s popular chatbot sent Google scrambling to weave generative AI into all its most important products in a matter of months.


Save Our World

I wish I could have been at CSUN to see Jen Strickland deliver this talk. It’s about the potential and actual harms inherent in web design and how to address them in our work.

Lots of juicy stats to share in your team discussions!


The Future of Human Agency

This was an interesting (and exhaustive) survey on what automation and AI might mean for the future of human agency. Some of the verbatim responses were quite insightful.

This passage from Micah Altman of MIT’s Center for Research in Equitable and Open Scholarship really resonated with me (emphasis mine):

Decisions affecting our lives are increasingly governed by opaque algorithms, from the temperature of our office buildings to what interest rate we’re charged for a loan to whether we are offered bail after an arrest. More specifically, complex, opaque, dynamic and commercially developed algorithms are increasingly replacing complex, obscure, static and bureaucratically authored rules.

Over the next decade and a half, this trend is likely to accelerate. Most of the important decisions affecting us in the commercial and government sphere will be ‘made’ by automated evaluation processes. For the most high-profile decisions, people may continue to be ‘in the loop,’ or even have final authority. Nevertheless, most of the information that these human decision-makers will have access to will be based on automated analyses and summary scores – leaving little for nominal decision-makers to do but flag the most obvious anomalies or add some additional noise into the system.

This outcome is not all bad. Despite many automated decisions being outside of both our practical and legal (if nominal) control, there are often advantages from a shift to out-of-control automaticity. Algorithmic decisions often make mistakes, embed questionable policy assumptions, inherit bias, are gameable, and sometimes result in decisions that seem (and for practical purposes, are) capricious. But this is nothing new – other complex human decision systems behave this way as well, and algorithmic decisions often do better, at least in the ways we can most readily measure. Further, automated systems, in theory, can be instrumented, rerun, traced, verified, audited, and even prompted to explain themselves – all at a level of detail, frequency and interactivity that would be practically impossible to conduct on human decision systems: This affordance creates the potential for a substantial degree of meaningful control.


GPT-4

GPT-4 has been released, and it shows really impressive improvements over GPT-3 and GPT-3.5.

The image description example with the VGA charger is particularly impressive, and it will be interesting to see how this new LLM can improve accessibility.