Block laid off 4,000 staff last week

Block can ship code with fewer engineers. Healthcare can't ship fewer clinicians. At least, not yet.

AI in healthcare · Health plan strategy
Originally on LinkedIn ↗

Block laid off 4,000 people last week, about 40% of its staff. The stock surged in after-hours trading: Wall Street likes AI-driven efficiency. CEO Jack Dorsey’s justification: “A significantly smaller team, using the tools we’re building, can do more and do it better.”

Can the same story play out in healthcare?

I spent the last few days digging into that question.

At a recent conference, a top CMS official, Chris Klomp, shared a bold vision: a future with “no human working on prior authorization, period.” His point wasn’t about replacing clinicians so much as eliminating administrative friction. His goal is a system where technology handles the paperwork in the background, freeing humans to make the judgments that matter.

On the ground, that nuance is critical. Payers like Elevance are already automating approvals, but they keep denials in human hands. Why? A 2026 KFF analysis of 2024 data found that a staggering 80.7% of appealed Medicare Advantage denials were overturned. The cost of a wrong denial isn’t just administrative — it’s a patient’s health.

Block can ship code with fewer engineers. Healthcare can’t ship fewer clinicians. At least, not yet.

Two new studies show precisely why the parallel breaks down:

1. When AI misses the nuance.

Last week, a Nature Medicine study of 960 scenarios found ChatGPT Health under-triaged 52% of true emergencies. It caught the textbook cases but faltered when nuanced clinical judgment was required — the very moments where a human expert shines.

2. When AI can’t see the full picture.

An npj Digital Medicine study on AI scribes, also published last week, found audio-only models had an 81% accuracy rate for medication histories. The errors came from missing visual cues — like the pill bottle a patient holds up. A vision-enabled model, however, jumped to 98% accuracy. The lesson: the quality of AI output is defined by the quality of its inputs.

AI performs brilliantly in structured, data-rich, “textbook” scenarios. It struggles where context is everything, where judgment is subtle, and where the full picture isn’t in the training data.

But these lines will shift, and shift fast.

And I don’t think tracking this evolution can be relegated to a new job function. I think it needs to be the job of every single leader in healthcare.

Written March 2, 2026.