{ I messed this one up, and it auto-posted without an actual post to go along with it. We learn … here’s the actual post on LI }
I saw this piece on AI in The Workplace in 2026 on Quartz, and then Judith Katz sent me the summary.
It’s a decent piece, nothing earth-shattering. The general point is that AI will help mid-level managers by handling routine tasks, freeing them up for more creative, human-centric activities. And that may even be true for some folks.
But it made me think of two points that are almost always absent from these discussions.
First, there is an under-appreciation of the amount of slop these systems will inevitably create. By slop, I am referring not just to hallucinations, but also to the mediocre, vaguely inaccurate, meaningless output that forms, say, the bottom 25% of what we get from LLMs. It’s someone’s job to improve / moderate / weed out this content, and, unfortunately, when that work is ignored, the problem only compounds.
Second, and this may be the more relevant insight, the article, like so much of the discourse I see, uses the term AI in a very inconsistent, very fluid way. Time and time again, the most successful, most compelling implementations covered by the AI umbrella are actually automation efforts, often with an LLM-fueled presentation layer at the end. These are not LLM interactions; they are algorithmic automation with a nice language layer to interact with.
Why does this matter?
Because automation challenges are solvable. Automation challenges are definable. Automation challenges are testable, reproducible, and measurable.
Automation challenges are also, usually, decidedly unsexy. They lack the glamor of the AI conversation, they don’t generate billions of dollars in angel investments, and they seem very distant from the sense of wonder so many feel at LLM-based interactions.
So, sure, we may need to position our automation efforts as AI as part of an organizational political strategy, and we may need that LLM-fueled interface to build stakeholder enthusiasm. And we can even add our successes to a variety of arcane ROI calculations as part of a larger AI initiative.
The distinction matters because the practical skills needed for success in automation efforts are not fully congruent with those needed for AI/LLM implementations. There is overlap, of course, but I wonder if this sits at the center of the struggles I see as organizations try to generate real-world traction in their AI (but really automation) efforts: the teams they assemble lack the right skills, partly because nobody is presenting, with extreme clarity, what the actual work in front of them is.